CN106851424B - Video broadcasting method and device - Google Patents
- Publication number: CN106851424B (application number CN201710223156.XA)
- Authority
- CN
- China
- Prior art keywords
- time point
- interaction
- video
- recording file
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- All classifications fall under H04N21/00 (H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › Selective content distribution, e.g. interactive television or video on demand [VOD]):
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests, involving handling client requests
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/4782—Web browsing, e.g. WebTV
- H04N21/8586—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot, by using a URL
Abstract
This application discloses a video playing method, comprising: receiving recording files captured while at least one user watches the video; determining the interaction time points of the video according to the recording files; configuring, for each interaction time point, corresponding interaction content and its download link; and sending the interaction time points and the download links of the interaction content of each interaction time point to a terminal, so that when the terminal plays the video to any interaction time point, it obtains and displays the interaction content of that interaction time point according to the download link of the interaction content. The application also proposes a corresponding video playing device.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a video playing method and device.
Background technique
Video playing has become an indispensable part of people's lives. Television viewers watch the various programs broadcast on television, and people can also watch videos played on various video websites. At present, video services mainly provide two business models: video on demand and live video streaming. Video on demand, also known as interactive television (VOD), plays the video program a user requests, while live video streaming is broadcasting in real time over the network. In either model, however, the viewer simply watches a video stream generated by the content provider and passively follows the content. Video interaction, which lets the audience participate in the video content, can give users a sense of participation, a sense of group identity and an emotional resonance, and has therefore attracted more and more attention.
Summary of the invention
An example of the present application proposes a video playing method, comprising:
receiving recording files captured while at least one user watches the video;
determining the interaction time points of the video according to the recording files;
configuring, for each interaction time point, corresponding interaction content and its download link; and
sending the interaction time points and the download links of the interaction content of each interaction time point to a terminal, so that when the terminal plays the video to any interaction time point, it obtains and displays the interaction content of that interaction time point according to the download link of the interaction content.
An example of the present application also proposes a video playing method, comprising:
sending a request to play the video to a server;
receiving the interaction time points of the video and the download link of the interaction content corresponding to each interaction time point sent by the server;
receiving the data information of the video sent by the server, and playing the video;
when the video is played to any interaction time point, obtaining the interaction content corresponding to that interaction time point according to the download link of the interaction content; and
displaying the acquired interaction content.
An example of the present application also proposes a video playing device, comprising:
a recording file receiving unit, configured to receive recording files captured while at least one user watches the video;
an interaction time point determination unit, configured to determine the interaction time points of the video according to the recording files;
an interaction configuration unit, configured to configure, for each interaction time point, corresponding interaction content and its download link; and
a sending unit, configured to send the interaction time points and the download links of the interaction content of each interaction time point to a terminal, so that when the terminal plays the video to any interaction time point, it obtains and displays the interaction content of that interaction time point according to the download link of the interaction content.
An example of the present application also proposes a video playing device, comprising:
a playing request sending unit, configured to send a request to play the video to a server;
an interaction time point and download link receiving unit, configured to receive the interaction time points of the video and the download link of the interaction content corresponding to each interaction time point sent by the server;
a data information receiving and playing unit, configured to receive the data information of the video sent by the server and play the video;
an interaction content acquiring unit, configured to, when the video is played to any interaction time point, obtain the interaction content corresponding to that interaction time point according to its download link; and
an interaction content displaying unit, configured to display the acquired interaction content.
With the above schemes proposed by the application, the effect of video playing can be enhanced and the viewing experience of users improved.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic diagram of the system architecture involved in the video playing method proposed by an example of the present application;
Fig. 2 is a schematic flowchart of the video playing method proposed by an example of the present application;
Fig. 3 is a schematic flowchart of determining interaction time points according to recording files, proposed by an example of the present application;
Fig. 4 is a schematic flowchart of determining interaction time points according to recording files, proposed by another example of the present application;
Fig. 5 is a schematic flowchart of determining interaction time points according to recording files, proposed by yet another example of the present application;
Fig. 6 is a schematic flowchart of the video playing method proposed by another example of the present application;
Fig. 7 is a schematic structural diagram of the video playing device proposed by an example of the present application;
Fig. 8 is a schematic structural diagram of the video playing device proposed by another example of the present application; and
Fig. 9 is a schematic structural diagram of the computer equipment in which the video playing device proposed by an example of the present application resides.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The application proposes a video playing method, which can be applied to the system architecture shown in Fig. 1. As shown in Fig. 1, the system architecture includes one or more terminals 101 and a video server 102, and the terminals 101 and the video server 102 communicate via the Internet 103.
The terminal 101 may be a smart phone, a tablet computer, a desktop computer, a personal digital assistant, or one of various other smart devices with Internet access capability, such as a smart television. The display screen of the terminal 101 may be a liquid crystal display, an electronic ink display screen, or the like. The input unit of the terminal 101 may be a touch layer covering the display screen; it may also be keys, a trackball, a trackpad or a sound input unit arranged on the shell of the terminal, or an external keyboard, trackpad, mouse or the like. A video client is installed on the terminal 101. The user sends a video playing request to the video server through the video client on the terminal 101; the playing request includes the file identifier of the video to be accessed. After receiving the video playing request, the video server 102 sends the data information of the video corresponding to the file identifier, i.e. the video stream and audio stream of the video, to the video client on the terminal 101, and the video client plays the video. The video server may be a streaming media server, and the video client may be a video player.
In some examples, the video played by the above video client is a video stream generated by the provider; the video user passively receives the video content, and interaction between the user and the video content is lacking.
In view of the above technical problem, the application proposes a video playing method, which can be applied to the video server 102. In one example, as shown in Fig. 2, the method includes the following steps:
Step 201: receive recording files captured while at least one user watches the video.
A recording device is arranged in the terminal 101, or the terminal 101 is externally connected with a recording device; for example, the terminal 101 is externally connected with a recording microphone or a headset. When the user watches the video played by the video client on the terminal 101, the recording device records the user, generates a recording file of the user watching the video, and uploads the recording file to the video server. In some examples, the recording device is arranged close to the user's mouth and away from the terminal 101, so that the recording file captures the sound made by the user and avoids interference from the sound of the playing video. For example, the user watches the video wearing a headset, and the microphone on the headset records the user, achieving a better recording effect. Furthermore, in order to obtain ideal recording files, several testers may be asked to watch the video, and the recording file of each tester is obtained through a microphone.
Step 202: determine the interaction time points of the video according to the recording files.
The recording files capture the users' reactions while watching the video, and can therefore reflect some content information of the video. For example, when the video is played to a highlight, users cheer; when it is played to a funny moment, users laugh heartily; when it is played to a frightening moment, users let out terrified screams. Through the users' recording files, the highlight time points, funny time points and frightening time points of the video can be obtained, so that interaction content can subsequently be added at those time points, giving the video a better playing effect and improving the user experience. The method for determining the interaction time points will be described in detail later herein.
Step 203: configure, for each interaction time point, corresponding interaction content and its download link.
After the interaction time points of the video are obtained, interaction content and a download link of the interaction content are configured for each interaction time point. Configuring interaction content includes, but is not limited to, configuring at least one of text, a picture, audio and video. For example, when the video is a horror video, frightening audio is configured at an interaction time point to heighten the horror; when the video is a funny video, hearty laughter audio is configured at an interaction time point to render a happy atmosphere. At the same time, a download link is configured for the interaction content of each interaction time point. The download link may be configured in the form of a uniform resource locator (URL), and may mainly include a protocol type, a host address, a path and a file name.
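As an illustration only, the configuration produced in this step could be represented as a simple mapping from interaction time points to content descriptors and their URL download links. The field names, host and file names below are hypothetical, not part of the invention:

```python
# Hypothetical configuration for one video: each interaction time point
# (in seconds) maps to an interaction content descriptor whose download
# link is a URL composed of protocol type, host address, path and file name.
interaction_config = {
    95: {
        "type": "audio",
        "description": "frightening audio for a horror highlight",
        "download_link": "https://media.example.com/interactions/95_scream.mp3",
    },
    210: {
        "type": "audio",
        "description": "hearty laughter for a funny moment",
        "download_link": "https://media.example.com/interactions/210_laughter.mp3",
    },
}

def download_link(config, point):
    """Look up the download link configured for one interaction time point."""
    return config[point]["download_link"]
```

A server would send this mapping (time points plus links) to the terminal in step 204.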
Step 204: send the interaction time points and the download links of the interaction content of each interaction time point to the terminal, so that when the terminal plays the video to any interaction time point, it obtains and displays the interaction content of that interaction time point according to the download link of the interaction content.
The server sends the obtained interaction time points of the video, together with the configured download link of the interaction content of each interaction time point, to the terminal 101. A timer is arranged in the terminal 101. When the terminal plays the video through its video client, the timer is started. When the video is played to any interaction time point, or to a predetermined time point before any interaction time point, the timer issues an alarm signal to the terminal; according to the signal and the download link of the interaction content corresponding to that interaction time point, the terminal obtains the interaction content corresponding to the interaction time point and plays the interaction content in synchronization with the video.
At the same time, an interaction button may also be provided on the video client interface on the terminal 101, with which the user can choose whether interaction is needed. If the user cancels interaction through the interaction button, then when the video is played to an interaction time point the video client neither obtains the interaction content from the video server nor displays the interaction content to the user.
With the video playing method provided by the application, the interaction time points of a video are obtained according to the recording files captured while multiple users watch the video, and the interaction content corresponding to each interaction time point is configured, so that when the video is played, the interaction content is played simultaneously at the interaction time points, enhancing the effect of video playing and improving the user experience.
In some examples, when the above step 202 of determining the interaction time points of the video according to the recording files is executed, as shown in Fig. 3, the following steps may be included:
Step 301: in each recording file, choose the top N time points when sorted by volume from high to low, as candidate time points.
The most basic time unit of a recording file is the second. For any recording file, the volume of each second of the recording file is obtained, the volumes are sorted from high to low, the time points corresponding to the top N volumes are chosen from the sorted result, and the chosen time points are taken as the candidate time points. In addition, when obtaining candidate time points according to the recording files, invalid recording files need to be rejected: a recording file may contain ambient noise such as chatting, and in some recording files the decibel level of the noise may exceed the user's own voice, interfering with the choice of candidate time points. After the N candidate time points of each recording file are obtained, it is judged whether the N candidate time points obtained from one recording file deviate too much from the N candidate time points obtained from most of the recording files; if so, that recording file may contain chatting or other noise, and the recording file is rejected. For example, for the N candidate time points chosen from one recording file, the frequency with which each of those candidate time points appears among the candidate time points chosen from all recording files can be counted; if many of the candidate time points chosen from a recording file have a low frequency, it is determined that the recording file contains noise, and the recording file and the candidate time points chosen from it are rejected.
Step 302: merge the candidate time points selected for each recording file to generate a candidate time point set.
The candidate time points selected from each recording file are de-duplicated and merged to generate the candidate time point set. For example, the candidate time points selected from recording file L1 are (t1, t2, t3, t4, t6, t9), the candidate time points selected from recording file L2 are (t1, t2, t4, t6, t9, t10), and the candidate time points selected from recording file L3 are (t1, t2, t3, t4, t7, t9); then the candidate time points chosen from recording files L1, L2 and L3 are de-duplicated and merged, and the generated candidate time point set is (t1, t2, t3, t4, t6, t7, t9, t10).
Step 303: for each candidate time point in the candidate time point set, calculate the ratio of the recording files whose volume at that candidate time point meets the first condition to all recording files.
The number of candidate time points in the candidate time point set chosen in step 302 may be large, while during video playing interaction may only be needed at a few representative time points, for example the time points where most users react more strongly. Therefore, for each candidate time point in the candidate time point set, the ratio of the recording files whose volume at that candidate time point meets the first condition to all recording files is calculated, and the candidate time points with a high ratio are taken as interaction time points. In some examples, the first condition is a preset first threshold condition: when the volume of a recording file exceeds the first threshold, the user is considered to have a strong reaction to the video content. For example, when the video is a horror video and the decibel level of a user's recording exceeds the first threshold, the user is considered to have screamed in terror; for each candidate time point in the candidate time point set, the ratio of the recording files in which the user screamed at that candidate time point to all recording files is calculated, so that each candidate time point obtains a ratio value.
Step 304: determine the time points whose ratio meets the second condition as the interaction time points of the video.
Among the candidate time points obtained in step 303, the time points whose ratio value meets the second condition are chosen as the interaction time points of the video. In some examples, the second condition is a preset second threshold condition: when the ratio value corresponding to a candidate time point exceeds the second threshold, most users are considered to have a fairly strong reaction at that candidate time point, and the candidate time point is taken as an interaction time point.
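Steps 301-304 can be sketched as follows. This is a minimal illustration only: it assumes one volume sample per second, ignores the noise-file rejection described under step 301, and the threshold values in the usage below are arbitrary:

```python
def candidate_points(volumes, n):
    """Step 301: the top-N loudest seconds of one recording file.

    volumes: list where volumes[t] is the volume at second t.
    """
    ranked = sorted(range(len(volumes)), key=lambda t: volumes[t], reverse=True)
    return set(ranked[:n])

def interaction_points(recordings, n, first_threshold, second_threshold):
    """Steps 302-304: merge per-file candidates, then keep the time
    points where the share of files whose volume exceeds first_threshold
    is at least second_threshold."""
    candidates = set()
    for vols in recordings:
        candidates |= candidate_points(vols, n)   # step 302: de-duplicated union
    points = []
    for t in sorted(candidates):
        loud = sum(1 for vols in recordings if vols[t] > first_threshold)
        ratio = loud / len(recordings)            # step 303: ratio per candidate
        if ratio >= second_threshold:             # step 304: second condition
            points.append(t)
    return points
```

For example, with three six-second recordings, N = 2, a first threshold of 60 and a second threshold of 0.6, only the seconds where at least two of the three files are loud survive.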
In some examples, when the above step 202 of determining the interaction time points of the video according to the recording files is executed, as shown in Fig. 4, the following steps may be included:
Step 401: obtain the volume at each time point in each recording file respectively.
The most basic time unit of a recording file is the second. For any recording file, the volume of each second of the recording file is obtained.
Step 402: choose the time points whose volume meets the first condition.
In some examples, the first condition is a preset first threshold condition: when the volume of a recording file exceeds the first threshold, the user is considered to have a strong reaction to the video content. For example, when the video is a horror video and the decibel level of a user's recording exceeds the first threshold, the user is considered to have screamed in terror.
Step 403: sort the volumes of the chosen time points from high to low.
Step 404: choose the top N time points as the candidate time points.
In the previous example, i.e. the example shown in Fig. 3, choosing the candidate time points in each recording file requires sorting the volume of every time point of the recording file from high to low and then choosing the top N time points as candidate time points. Because the basic time unit of a recording file is the second, sorting a volume value for every second is a computation-heavy task. In this example, a preliminary screening is first performed on the per-second recording volume: only the time points whose volume meets the first condition are chosen for the subsequent sorting. The time points meeting the first condition are the points where users react more strongly while watching the video, and their number is limited, which reduces the computation of the subsequent sorting and improves efficiency.
Step 405: merge the candidate time points selected for each recording file to generate a candidate time point set.
Step 406: for each candidate time point in the candidate time point set, calculate the ratio of the recording files whose volume at that candidate time point meets the first condition to all recording files.
Step 407: determine the time points whose ratio meets the second condition as the interaction time points of the video.
The operations executed in steps 405-407 are consistent with the operations executed in the above steps 302-304, and will not be described again here.
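The pre-filtering of steps 402-404 can be sketched as follows, under the same illustrative assumptions (one volume sample per second, arbitrary threshold values):

```python
def candidate_points_prefiltered(volumes, n, first_threshold):
    """Steps 402-404: keep only the seconds whose volume exceeds the
    first threshold, sort just those few, and take the top N."""
    loud = [t for t, v in enumerate(volumes) if v > first_threshold]  # step 402
    loud.sort(key=lambda t: volumes[t], reverse=True)                 # step 403
    return set(loud[:n])                                              # step 404
```

Only the time points that pass the first threshold are sorted, so the sort operates on a short list rather than on every second of the recording.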
In some examples, when the above step 202 of determining the interaction time points of the video according to the recording files is executed, as shown in Fig. 5, the following steps may be included:
Step 501: obtain the volume at each time point in each recording file respectively.
The most basic time unit of a recording file is the second. For any recording file, the volume of each second of the recording file is obtained.
Step 502: choose the time points whose volume meets the first condition as the candidate time points.
In some examples, the first condition is a preset first threshold condition: when the volume of a recording file exceeds the first threshold, the user is considered to have a strong reaction to the video content. For example, when the video is a horror video and the decibel level of a user's recording exceeds the first threshold, the user is considered to have screamed in terror.
In the previous example, i.e. the example shown in Fig. 4, a preliminary screening is first performed on the per-second recording volume, and only the time points whose volume meets the first condition are chosen for subsequent sorting; since the time points meeting the first condition are the points where users react more strongly while watching the video and their number is limited, the computation of the subsequent sorting is reduced and efficiency is improved. However, since the time points meeting the first condition are already few, and the time points are further screened later by the second condition, in this example the selected time points meeting the first condition are taken directly as candidate time points without sorting. The operation of sorting the selected time points is eliminated, further improving efficiency.
Step 503: merge the candidate time points selected for each recording file to generate a candidate time point set.
Step 504: for each candidate time point in the candidate time point set, calculate the ratio of the recording files whose volume at that candidate time point meets the first condition to all recording files.
Step 505: determine the time points whose ratio meets the second condition as the interaction time points of the video.
The operations executed in steps 503-505 are consistent with the operations executed in the above steps 405-407 and steps 302-304, and will not be described again here.
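Steps 501-502 can be sketched as follows; a minimal illustration assuming one volume sample per second, with an arbitrary threshold value:

```python
def candidate_points_threshold_only(volumes, first_threshold):
    """Steps 501-502: every second whose volume exceeds the first
    threshold becomes a candidate; no sorting and no top-N cut."""
    return {t for t, v in enumerate(volumes) if v > first_threshold}
```

Compared with the Fig. 4 variant, the sort and the top-N cut are dropped entirely, since the later second-condition screening already narrows the set.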
In some examples, the interaction content is a dynamic countdown reminder picture. The countdown reminder picture reminds the user that the following video content may be somewhat frightening, giving the user advance psychological preparation and avoiding excessive psychological stimulation; it is suitable for groups such as the elderly and children. The dynamic countdown reminder picture may be displayed in the form of a floating layer on the playing picture of the video. Since the countdown reminder picture begins to be displayed before the interaction time point starts, in the above step 202 each determined interaction time point can additionally be advanced by a predetermined amount of time, so that in step 204 the countdown reminder picture is downloaded and displayed in advance. Alternatively, as an alternative to the above scheme, in the above step 204 the terminal may, at a predetermined time point before the video is played to any interaction time point, obtain the interaction content of that interaction time point according to its download link and start displaying it.
In some examples, the interaction content is the merged audio, at a given interaction time point, of the recording files whose volume at that time point meets the first condition. The recordings of other users at that time point are merged and then played to the user at the interaction time point, enhancing the playing effect of the video. For example, for a horror video, at an interaction time point, namely a time point with a high horror index, the terrified screams of other users watching that time point are merged and played to the user, further raising the horror index of the horror video and improving the user experience. The merged audio at the interaction time point is superimposed on the audio in the original video and played.
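As an illustration, merging the loud users' clips and superimposing the result on the original audio track could look like the following sketch, which treats audio as plain lists of samples; a real implementation would decode and encode actual audio formats:

```python
def merge_clips(clips):
    """Average several users' recorded clips for one interaction time
    point, sample by sample (shorter clips are padded with silence)."""
    length = max(len(c) for c in clips)
    padded = [c + [0.0] * (length - len(c)) for c in clips]
    return [sum(samples) / len(clips) for samples in zip(*padded)]

def overlay(video_audio, merged, offset):
    """Superimpose the merged clip onto the original audio track
    starting at sample index `offset` (simple additive mix)."""
    out = list(video_audio)
    for i, s in enumerate(merged):
        if offset + i < len(out):
            out[offset + i] += s
    return out
```

Averaging keeps the merged clip at a comparable level to any single recording, while the additive overlay plays it on top of the original soundtrack.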
The application also proposes a video playing method, which can be applied to the terminal 101. In one example, as shown in Fig. 6, the method includes the following steps:
Step 601: send a request to play the video to the server.
Before the terminal sends the playing request of the video to the server, the retrieval of the video is first completed. For example, the user issues a request to search for the video to a web server through the web browser on the terminal; after locating the video on the video server, the web server returns the attribute information of the video and the address information of the video server where the video resides. The web browser sends the attribute information of the video and the address information of the video server to the video client on the terminal 101, and the video client on the terminal 101 then sends the playing request of the video to the video server. The video client may be a video player, and the video server 102 may be a streaming media server.
Step 602: receive the data information of the video sent by the server, and play the video.
The video playing request initiated by the video client on the terminal 101 to the video server includes the file identifier of the video. The video server obtains the data information of the video according to the file identifier and sends the data information of the video, i.e. the video stream and audio stream of the video, to the video client on the terminal 101, and the video client on the terminal 101 plays the video.
Step 603: receive the interaction time points of the video sent by the server and the download link of the interaction content corresponding to each interaction time point.
In this step, the server obtains the interaction time points of the video and the interaction content of each interaction time point using the video playing method applied to the server described above, and sends the obtained interaction time points of the video and the download links of the interaction content corresponding to each time point to the video client on the terminal.
Step 604: when the video plays to any interaction time point, obtain the interaction content corresponding to that interaction time point according to the download link of the interaction content.
A timing device is provided in the terminal 101. When the terminal 101 plays the video through its video client, the timing device is started. When playback reaches any interaction time point, or a predetermined time point before any interaction time point, the timing device issues an alarm signal to the terminal. In response to the signal, the terminal obtains the interaction content according to the download link of the interaction content corresponding to the interaction time point, and plays the interaction content in synchronization with the video.
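The timer-driven fetch described in this step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the `fetch` callback, and the five-second lead time are assumptions; `fetch(point)` stands in for downloading the interaction content via its link.

```python
import threading

def schedule_interaction_fetches(interaction_points, fetch, lead_time=5.0, position=0.0):
    """Arm one timer per interaction time point so that fetch(point) fires
    lead_time seconds before playback reaches the point.

    interaction_points: playback positions (seconds) of the interaction points.
    fetch: callback taking one interaction time point (e.g. a downloader).
    position: the current playback position in seconds.
    """
    timers = []
    for point in interaction_points:
        # Fire the alarm a fixed lead time before the point, clamped to "now".
        delay = max(0.0, (point - lead_time) - position)
        timer = threading.Timer(delay, fetch, args=(point,))
        timer.start()
        timers.append(timer)
    return timers
```

In a real player the callback would start the download and, once the content arrives, hand it to the rendering pipeline for synchronized playback.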
Step 605: display the obtained interaction content.
After the video client on the terminal 101 obtains the interaction content, it plays the interaction content together with the video at the interaction time point. The video client interface on the terminal 101 also provides an interaction button, with which the user can choose whether horror interaction is wanted. If the user cancels interaction through the button, then when playback reaches an interaction time point, the video client neither fetches the interaction content from the video server nor displays it to the user.
With the video playing method provided by the present application, the terminal receives the interaction time points sent by the server and the interaction content corresponding to each interaction time point, so that the interaction content is played at the interaction time points while the video plays, which enriches the playback effect and improves the user experience.
In some examples, the interaction content is a dynamic countdown reminder picture. The countdown reminder picture warns the user that the following video content may be somewhat startling, giving the user time to prepare psychologically and avoiding excessive stimulation; this is suitable for groups such as the elderly and children. Obtaining the interaction content corresponding to the interaction time point according to its download link includes: at the interaction time point, or at a preset time point before the interaction time point, obtaining the dynamic countdown reminder picture according to the download link of the interaction content corresponding to the interaction time point. Displaying the obtained interaction content includes: from the interaction time point, or from the preset time point before the interaction time point, until the interaction time point, displaying the dynamic countdown reminder picture as a floating layer over the playback picture of the video.
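The timing of such a countdown can be sketched as follows. This only computes when each countdown label should appear; rendering the floating layer is left to the player, and the function name and five-second default are illustrative assumptions.

```python
def countdown_schedule(interaction_point, lead_seconds=5):
    """Compute (playback_time, label) pairs for a countdown shown from
    lead_seconds before an interaction time point until the point itself.

    interaction_point: playback position of the interaction point, in seconds.
    Returns one entry per second, counting down from lead_seconds to 1.
    """
    start = interaction_point - lead_seconds
    return [(start + i, str(lead_seconds - i)) for i in range(lead_seconds)]
```

For an interaction point at 100 s with a 3-second lead, this yields labels "3", "2", "1" at 97 s, 98 s, and 99 s of playback.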
In some examples, the interaction content is an audio file. The audio file is the merged audio, at the corresponding interaction time point, of the recording files whose volume at that time point meets the first condition. The recordings other users made at the interaction time point are merged and played back to the current user, enhancing the playback atmosphere of the video. For example, for a horror video, at an interaction time point, i.e., a time point where the horror index is relatively high, the screams recorded while other users watched that time point are merged and played back to the user, further raising the horror index of the horror video and improving the user experience. Obtaining the interaction content corresponding to the interaction time point according to its download link includes: at a preset time point before the interaction time point, obtaining the audio file according to the download link of the interaction content corresponding to the interaction time point. Displaying the obtained interaction content includes: from the preset time point before the interaction time point until the interaction time point, playing the audio file superimposed on the video's audio.
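The superimposed playback can be sketched as simple sample-wise mixing of two PCM streams. This is a minimal illustration under stated assumptions (signed 16-bit samples, matching sample rates, a sample offset marking the preset time point); a real player would mix inside its audio pipeline rather than on raw lists.

```python
def superimpose(video_samples, interaction_samples, offset):
    """Mix the downloaded interaction audio into the video's audio track.

    video_samples: signed 16-bit PCM samples of the video's audio.
    interaction_samples: PCM samples of the merged interaction audio.
    offset: sample index at which the interaction audio starts playing.
    Sums overlapping samples and clips to the 16-bit range.
    """
    out = list(video_samples)
    for i, s in enumerate(interaction_samples):
        j = offset + i
        if j >= len(out):
            break
        # Clip the mixed sample to the signed 16-bit PCM range.
        out[j] = max(-32768, min(32767, out[j] + s))
    return out
```

The clipping step matters: summing two loud streams can exceed the 16-bit range, and without it the mixed audio would wrap around and distort.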
In some examples, the method further includes: during video playback, recording the user as they watch the video, generating a recording file of the user watching the video, and uploading the recording file to the server. This facilitates subsequent updates of the interaction time points and of the interaction content. A recording device is provided in the terminal 101, or the terminal 101 is externally connected to one; for example, the terminal 101 may be connected to a recording microphone or a headset. While the user watches the video played by the video client on the terminal 101, the recording device records the user, generates the recording file of the user watching the video, and the recording file is uploaded to the video server. In some examples, the recording device is placed close to the user's mouth and away from the terminal 101, so that the captured recording is the sound the user makes, avoiding interference from the sound of the playing video.
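The upload step might look like the following sketch, which builds (without sending) an HTTP request carrying the recording. The URL layout, the `video` query parameter, and the content type are illustrative assumptions; the patent does not specify the upload protocol.

```python
import urllib.request

def build_upload_request(server_url, video_id, recording_bytes):
    """Build the HTTP request that uploads a viewer's recording file
    to the video server (hypothetical endpoint and field names).

    server_url: base URL of the video server.
    video_id: file identifier of the watched video.
    recording_bytes: raw bytes of the recording file.
    """
    return urllib.request.Request(
        url=f"{server_url}/recordings?video={video_id}",
        data=recording_bytes,
        method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )
```

The request would then be sent with `urllib.request.urlopen(req)` once the recording is finalized.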
The present application also proposes a video playing device 700, which can be applied to the video server 102. In one example, as shown in Fig. 7, the device includes:
a recording file receiving unit 701, configured to receive recording files of at least one user watching the video;
an interaction time point determination unit 702, configured to determine the interaction time points of the video according to the recording files;
an interaction configuration unit 703, configured to configure the interaction content corresponding to each interaction time point and its download link; and
a sending unit 704, configured to send the interaction time points and the download links of the interaction content of each interaction time point to the terminal, so that when the terminal plays the video to any interaction time point, the terminal obtains and displays the interaction content of that interaction time point according to the download link of the interaction content.
With the video playing device provided by the present application, the interaction time points of the video are obtained from the recording files of multiple users watching the video, and the interaction content corresponding to each interaction time point is configured, so that the interaction content is played at the interaction time points during playback, which enriches the playback effect and improves the user experience.
In some examples, the interaction time point determination unit 702 is configured to:
select, in each recording file respectively, the top N time points when sorted by volume from high to low, as candidate time points;
merge the candidate time points selected for each recording file to generate a candidate time point set;
calculate, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that time point meets the first condition; and
determine the time points whose proportion meets the second condition as the interaction time points of the video.
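The steps above can be sketched end to end as follows. The concrete thresholds are assumptions for illustration: a volume of at least `volume_threshold` stands in for the "first condition", and a proportion of at least `ratio_threshold` for the "second condition"; the patent leaves both conditions abstract.

```python
def find_interaction_points(recordings, n=3, volume_threshold=0.6, ratio_threshold=0.5):
    """Determine interaction time points from per-user volume traces.

    recordings: list of dicts mapping time point -> normalized volume (0..1),
    one dict per recording file.
    """
    # Step 1: for each recording file, take the top-N time points by volume,
    # then merge them into the candidate time point set.
    candidates = set()
    for rec in recordings:
        top_n = sorted(rec, key=rec.get, reverse=True)[:n]
        candidates.update(top_n)

    # Step 2: for each candidate, compute the proportion of recording files
    # whose volume at that point meets the first condition, and keep the
    # points whose proportion meets the second condition.
    points = []
    for t in sorted(candidates):
        hits = sum(1 for rec in recordings if rec.get(t, 0.0) >= volume_threshold)
        if hits / len(recordings) >= ratio_threshold:
            points.append(t)
    return points
```

Intuitively, a time point qualifies only when many viewers were loud at the same moment, which filters out moments where a single recording happens to be loud.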
In some examples, the interaction time point determination unit 702 is configured to:
obtain the volume at each time point in each recording file respectively;
select the time points whose volume meets the first condition;
sort the volumes of the selected time points from high to low;
select the top N time points as the candidate time points;
merge the candidate time points selected for each recording file to generate a candidate time point set;
calculate, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that time point meets the first condition; and
determine the time points whose proportion meets the second condition as the interaction time points of the video.
In some examples, the interaction time point determination unit 702 is configured to:
obtain the volume at each time point in each recording file respectively;
select the time points whose volume meets the first condition as the candidate time points;
merge the candidate time points selected for each recording file to generate a candidate time point set;
calculate, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that time point meets the first condition; and
determine the time points whose proportion meets the second condition as the interaction time points of the video.
The present application also proposes a video playing device 800, which can be applied to the terminal 101. In one example, as shown in Fig. 8, the device includes:
a playing request sending unit 801, configured to send a request to the server to play the video;
an interaction time point and download link receiving unit 802, configured to receive the interaction time points of the video sent by the server and the download link of the interaction content corresponding to each interaction time point;
a data information receiving and playing unit 803, configured to receive the data information of the video sent by the server and play the video;
an interaction content obtaining unit 804, configured to, when the video plays to any interaction time point, obtain the interaction content corresponding to that interaction time point according to the download link of the interaction content; and
an interaction content displaying unit 805, configured to display the obtained interaction content.
With the video playing device provided by the present application, the terminal receives the interaction time points sent by the server and the interaction content corresponding to each interaction time point, so that the interaction content is played at the interaction time points while the video plays, which enriches the playback effect and improves the user experience.
In some examples, the device further includes:
a recording unit 806, configured to record the user watching the video during playback and generate a recording file of the user watching the video; and
an uploading unit 807, configured to upload the recording file to the server.
A recording device is provided in the terminal 101, or the terminal 101 is externally connected to one; for example, the terminal 101 may be connected to a recording microphone or a headset. While the user watches the video played by the video client on the terminal 101, the recording device records the user and generates the recording file of the user watching the video, which is uploaded to the video server. In some examples, the recording device is placed close to the user's mouth and away from the terminal 101, so that the captured recording is the sound the user makes, avoiding interference from the sound of the playing video.
Fig. 9 shows the structure of the computing device on which the video playing device 700 and the video playing device 800 reside. As shown in Fig. 9, the computing device includes one or more processors (CPUs) 902, a communication module 904, a memory 906, a user interface 910, and a communication bus 908 interconnecting these components.
The processor 902 can send and receive data through the communication module 904 to realize network communication and/or local communication.
The user interface 910 includes one or more output devices 912, including one or more speakers and/or one or more visual displays. The user interface 910 also includes one or more input devices 914, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch-screen display, a touch-sensitive tablet, a gesture-capture camera, or other input buttons or controls.
The memory 906 may be high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random-access solid-state storage devices; or non-volatile memory such as one or more magnetic disk storage devices, optical disc storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 906 stores an instruction set executable by the processor 902, including:
an operating system 916, including programs for handling various basic system services and for performing hardware-dependent tasks; and
an application 918, including various application programs for video playing, which can realize the processing flows in each of the above examples and may include, for example, some or all of the units in the video playing device 700 shown in Fig. 7 or in the video playing device 800 shown in Fig. 8. At least one of the units or modules 701-704 may store machine-executable instructions, and at least one of the units or modules 801-807 may store machine-executable instructions. By executing the machine-executable instructions of at least one of the units 701-704 or of at least one of the units 801-807 in the memory 906, the processor 902 can realize the functions of at least one of the modules 701-704 or of at least one of the modules 801-807 described above.
It should be noted that not all of the steps and modules in the above flows and structural diagrams are necessary; certain steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and may be adjusted as needed. The division into modules is merely a functional division for ease of description; in actual implementation, one module may be realized by several modules, the functions of several modules may be realized by the same module, and these modules may reside in the same device or in different devices.
The hardware modules in the embodiments may be implemented in hardware or on a hardware platform with software. The software includes machine-readable instructions stored in a non-volatile storage medium, so each embodiment may also be embodied as a software product. In the examples, the hardware may be realized by dedicated hardware or by hardware executing machine-readable instructions. For example, the hardware may be permanent circuits or logic devices specially designed to complete specific operations (such as dedicated processors, e.g., FPGAs or ASICs). The hardware may also include programmable logic devices or circuits temporarily configured by software (such as general-purpose processors or other programmable processors) to perform specific operations.
In addition, each example of the present application may be realized by data processing programs executed by a data processing device such as a computer. Obviously, the data processing programs constitute the present application. In addition, a data processing program, usually stored in one storage medium, is executed by reading the program directly out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or memory) of the data processing device. Therefore, such a storage medium also constitutes the present application, and the present application also provides a non-volatile storage medium storing a data processing program that can be used to execute any one of the above method examples of the present application.
The machine-readable instructions corresponding to the modules in Fig. 9 can cause an operating system or the like running on a computer to complete some or all of the operations described here. The non-volatile computer-readable storage medium may be a memory provided in an expansion board inserted into the computer, or a memory provided in an expansion unit connected to the computer. A CPU or the like mounted on the expansion board or expansion unit can perform some or all of the actual operations according to the instructions.
The above are merely preferred embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, improvement, etc., made within the spirit and principles of the present application shall be included within its scope of protection.
Claims (12)
1. A video playing method, characterized in that it is executed by a server and comprises:
receiving recording files of a plurality of users watching the video;
for each recording file: selecting a plurality of candidate time points; counting, for each candidate time point selected from the recording file, the frequency with which it occurs among the candidate time points selected from all recording files; determining, according to the frequency, whether the recording file contains noise; and if it does, rejecting the recording file and its selected candidate time points;
merging the candidate time points selected for each recording file to generate a candidate time point set;
calculating, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that candidate time point meets a first condition;
determining the time points whose proportion meets a second condition as interaction time points of the video;
configuring interaction content corresponding to each interaction time point and its download link; and
sending the interaction time points and the download links of the interaction content of each interaction time point to a terminal, so that when the terminal plays the video to any interaction time point, the terminal obtains and displays the interaction content of that interaction time point according to the download link of the interaction content, wherein the interaction content is the merged audio of the recording files whose volume at the interaction time point meets the first condition.
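The noise-rejection step recited in this claim can be sketched as follows. The claim only says the decision is made "according to the frequency", so the concrete rule below is an assumption: a recording file is treated as noisy when fewer than half of its candidate time points are also selected from at least `min_support` recording files.

```python
from collections import Counter

def reject_noisy_recordings(candidates_per_file, min_support=2):
    """Drop recording files whose candidate time points rarely coincide
    with those chosen from other recording files.

    candidates_per_file: list of sets of candidate time points, one set
    per recording file. Returns the kept candidate sets.
    """
    # Count how often each candidate time point was selected across all files.
    counts = Counter(t for cands in candidates_per_file for t in cands)
    kept = []
    for cands in candidates_per_file:
        supported = sum(1 for t in cands if counts[t] >= min_support)
        # Heuristic threshold (an assumption, not the claim's stated rule):
        # keep the file only if at least half of its candidates are shared.
        if cands and supported >= len(cands) / 2:
            kept.append(cands)
    return kept
```

The intuition matches the claim: loud moments caused by ambient noise in one viewer's room will not recur at the same time points in other viewers' recordings, so their candidate points show a low cross-file frequency.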
2. The method according to claim 1, wherein selecting a plurality of candidate time points comprises:
selecting, in each recording file respectively, the top N time points when sorted by volume from high to low, as the candidate time points;
wherein N is a predetermined natural number.
3. The method according to claim 1, wherein selecting a plurality of candidate time points comprises:
obtaining the volume at each time point in each recording file respectively;
selecting the time points whose volume meets the first condition;
sorting the volumes of the selected time points from high to low; and
selecting the top N time points as the candidate time points, wherein N is a predetermined natural number.
4. The method according to claim 1, wherein selecting a plurality of candidate time points comprises:
obtaining the volume at each time point in each recording file respectively; and
selecting the time points whose volume meets the first condition as the candidate time points.
5. A video playing method, characterized in that it comprises:
sending a request to a server to play the video;
receiving data information of the video sent by the server, and playing the video;
during playback, recording the user as they watch the video, and generating a recording file of the user watching the video;
uploading the recording file to the server; wherein the server receives recording files of a plurality of users watching the video and, for each recording file: selects a plurality of candidate time points; counts, for each candidate time point selected from the recording file, the frequency with which it occurs among the candidate time points selected from all recording files; determines, according to the frequency, whether the recording file contains noise, and if it does, rejects the recording file and its selected candidate time points; merges the candidate time points selected for each recording file to generate a candidate time point set; calculates, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that candidate time point meets a first condition; and determines the time points whose proportion meets a second condition as interaction time points of the video;
receiving the interaction time points of the video sent by the server and the download link of the interaction content corresponding to each interaction time point;
when the video plays to any interaction time point, obtaining the interaction content corresponding to that interaction time point according to the download link of the interaction content, wherein the interaction content is the merged audio of the recording files whose volume at the interaction time point meets the first condition; and
displaying the obtained interaction content.
6. The method according to claim 5, wherein the interaction content is a dynamic countdown reminder picture;
obtaining the interaction content corresponding to the interaction time point according to its download link comprises: at a preset time point before the interaction time point, obtaining the dynamic countdown reminder picture according to the download link of the interaction content corresponding to the interaction time point; and
displaying the obtained interaction content comprises: from the preset time point before the interaction time point until the interaction time point, displaying the dynamic countdown reminder picture as a floating layer over the playback picture of the video.
7. The method according to claim 5, wherein the interaction content is an audio file;
obtaining the interaction content corresponding to the interaction time point according to its download link comprises: at a preset time point before the interaction time point, obtaining the audio file according to the download link of the interaction content corresponding to the interaction time point; and
displaying the obtained interaction content comprises: from the preset time point before the interaction time point until the interaction time point, playing the audio file superimposed on the video's audio.
8. A video playing device, characterized in that it is applied to a server and comprises:
a recording file receiving unit, configured to receive recording files of a plurality of users watching the video;
an interaction time point determination unit, configured to, for each recording file: select a plurality of candidate time points; count, for each candidate time point selected from the recording file, the frequency with which it occurs among the candidate time points selected from all recording files; determine, according to the frequency, whether the recording file contains noise, and if it does, reject the recording file and its selected candidate time points; merge the candidate time points selected for each recording file to generate a candidate time point set; calculate, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that candidate time point meets a first condition; and determine the time points whose proportion meets a second condition as interaction time points of the video;
an interaction configuration unit, configured to configure the interaction content corresponding to each interaction time point and its download link; and
a sending unit, configured to send the interaction time points and the download links of the interaction content of each interaction time point to a terminal, so that when the terminal plays the video to any interaction time point, the terminal obtains and displays the interaction content of that interaction time point according to the download link of the interaction content, wherein the interaction content is the merged audio of the recording files whose volume at the interaction time point meets the first condition.
9. The device according to claim 8, wherein the interaction time point determination unit is configured to:
select, in each recording file respectively, the top N time points when sorted by volume from high to low, as the candidate time points.
10. The device according to claim 8, wherein the interaction time point determination unit is configured to:
obtain the volume at each time point in each recording file respectively;
select the time points whose volume meets the first condition;
sort the volumes of the selected time points from high to low; and
select the top N time points as the candidate time points.
11. The device according to claim 8, wherein the interaction time point determination unit is configured to:
obtain the volume at each time point in each recording file respectively; and
select the time points whose volume meets the first condition as the candidate time points.
12. A video playing device, characterized in that it comprises:
a playing request sending unit, configured to send a request to a server to play the video;
an interaction time point and download link receiving unit, configured to receive the interaction time points of the video sent by the server and the download link of the interaction content corresponding to each interaction time point;
a data information receiving and playing unit, configured to receive the data information of the video sent by the server and play the video;
an interaction content obtaining unit, configured to, when the video plays to any interaction time point, obtain the interaction content corresponding to that interaction time point according to the download link of the interaction content, wherein the interaction content is the merged audio of the recording files whose volume at the interaction time point meets a first condition;
an interaction content displaying unit, configured to display the obtained interaction content;
a recording unit, configured to record the user watching the video during playback and generate a recording file of the user watching the video; and
an uploading unit, configured to upload the recording file to the server; wherein the server receives recording files of a plurality of users watching the video and, for each recording file: selects a plurality of candidate time points; counts, for each candidate time point selected from the recording file, the frequency with which it occurs among the candidate time points selected from all recording files; determines, according to the frequency, whether the recording file contains noise, and if it does, rejects the recording file and its selected candidate time points; merges the candidate time points selected for each recording file to generate a candidate time point set; calculates, for each candidate time point in the candidate time point set, the proportion of all recording files whose volume at that candidate time point meets the first condition; and determines the time points whose proportion meets a second condition as interaction time points of the video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710223156.XA CN106851424B (en) | 2017-04-07 | 2017-04-07 | Video broadcasting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106851424A CN106851424A (en) | 2017-06-13 |
CN106851424B true CN106851424B (en) | 2019-08-30 |
Family
ID=59146900
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710223156.XA Active CN106851424B (en) | 2017-04-07 | 2017-04-07 | Video broadcasting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106851424B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108391168B (en) * | 2018-02-02 | 2020-10-02 | 浙江吉博教育科技有限公司 | Education video control device based on internet |
CN113115117B (en) * | 2021-03-16 | 2022-12-23 | 新东方教育科技集团有限公司 | Interactive video playing method and device, storage medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101420579A (en) * | 2007-10-22 | 2009-04-29 | 皇家飞利浦电子股份有限公司 | Method, apparatus and system for detecting exciting part |
CN103997691A (en) * | 2014-06-02 | 2014-08-20 | 合一网络技术(北京)有限公司 | Method and system for video interaction |
CN104754419A (en) * | 2015-03-13 | 2015-07-01 | 腾讯科技(北京)有限公司 | Video-based interaction method and device |
CN105009599A (en) * | 2012-12-31 | 2015-10-28 | 谷歌公司 | Automatic identification of a notable moment |
CN105451086A (en) * | 2015-09-22 | 2016-03-30 | 合一网络技术(北京)有限公司 | Method and apparatus for realizing video interaction |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1859558A (en) * | 2006-03-06 | 2006-11-08 | 华为技术有限公司 | Digital TV telecast system and its method based on radio transmission |
CN103634681B (en) * | 2013-11-29 | 2017-10-10 | 腾讯科技(成都)有限公司 | Living broadcast interactive method, device, client, server and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108184144B (en) | Live broadcast method and device, storage medium and electronic equipment | |
RU2387013C1 (en) | System and method of generating interactive video images | |
CN105450642B (en) | It is a kind of based on the data processing method being broadcast live online, relevant apparatus and system | |
US8930817B2 (en) | Theme-based slideshows | |
US9015590B2 (en) | Multimedia comment system and multimedia comment method | |
KR101796005B1 (en) | Media processing methods and arrangements | |
KR101377235B1 (en) | System for sequential juxtaposition of separately recorded scenes | |
KR101983107B1 (en) | Method for inserting information push into live video streaming, server and terminal | |
CN103369367B (en) | Streamable content is used to improve the system and method for Consumer's Experience | |
US20050204287A1 (en) | Method and system for producing real-time interactive video and audio | |
US20190104325A1 (en) | Event streaming with added content and context | |
EP2834972B2 (en) | Multi-source video navigation | |
CN109168026A (en) | Instant video display methods, device, terminal device and storage medium | |
JP2003308328A (en) | Regenerator and method for regenerating content link, program therefor, and recording medium | |
CN102790922B (en) | Multimedia player and share multimedia method | |
CN105144739B (en) | Display system with media handling mechanism and its operating method | |
CN103929669A (en) | Interactive video generator, player, generating method and playing method | |
CN108650547A (en) | A kind of video sharing method, apparatus and equipment | |
Scheible et al. | MobiToss: a novel gesture based interface for creating and sharing mobile multimedia art on large public displays | |
CN110462595A (en) | Virtual processing server, the control method of virtual processing server, the application program of content distribution system and terminal installation | |
CN113596553A (en) | Video playing method and device, computer equipment and storage medium | |
US20160275108A1 (en) | Producing Multi-Author Animation and Multimedia Using Metadata | |
CN106851424B (en) | Video broadcasting method and device | |
EP2754112B1 (en) | System amd method for producing complex multimedia contents by an author and for using such complex multimedia contents by a user | |
CN109361954A (en) | Method for recording, device, storage medium and the electronic device of video resource |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||