CN106658167B - Video interaction method and device - Google Patents
Video interaction method and device
- Publication number
- CN106658167B (application number CN201611147714.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- video
- preset
- user
- playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4758—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for providing answers, e.g. voting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4781—Games
Abstract
The invention discloses a video interaction method and device. The video interaction method includes the following steps: receiving information input by a user, where the information includes any one or more of text information, picture information, audio information, and video information; analyzing semantic information of the information input by the user; playing a preset answer video when a preset playing condition is met; analyzing semantic information of the preset answer video; and judging whether the semantic information of the information input by the user matches the semantic information of the preset answer video. By analyzing the semantic information of both the user's input and the answer video, the invention verifies whether the user's input matches the answer, enriches the input formats available to the user and the output formats of the answer, and thereby improves the user experience.
Description
Technical Field
The invention relates to the technical field of intelligent videos, in particular to a video interaction method and device.
Background
In daily life, people often play guessing games in which a user selects a preferred option from several candidates. The answer to the guessing game is then revealed after a set time interval or after the user clicks an option to show the answer. However, in a conventional guessing game the format of the information a user can input is limited: generally only text can be entered, and it is difficult to supply picture, audio, or video information. Similarly, when verifying an answer, the prior art only checks whether the text entered by the user matches the text of the answer to decide whether the user's answer is correct. How to overcome these format limitations on both the user's input and the preset answer information has become an urgent problem in the industry.
Disclosure of Invention
The invention provides a video interaction method and a video interaction device, which verify whether information input by a user matches an answer by analyzing the semantic information of the user's input and the semantic information of an answer video.
According to a first aspect of the embodiments of the present invention, there is provided a method for video interaction, including:
receiving information input by a user, wherein the information comprises any one or more of text information, picture information, audio information and video information;
analyzing semantic information of the information input by the user;
when the preset playing condition is met, playing a preset answer video;
analyzing semantic information of the preset answer video;
and judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video.
In one embodiment, further comprising:
and before receiving information input by a user, playing a preset guide video, wherein the preset guide video is used for providing reference information for the user.
In one embodiment, the playing the preset answer video after the preset playing condition is met includes:
judging whether the preset playing condition is met in real time;
and when the preset playing condition is met, playing the preset answer video in real time.
In one embodiment, the analyzing semantic information of the preset answer video includes:
extracting any one or more of image frame information, voice information and subtitle information in the preset answer video;
and analyzing semantic information of any one or more of image frame information, voice information and subtitle information in the preset answer video.
In one embodiment, further comprising:
and sending the matching result and feedback information corresponding to the matching result to the client where the user is located.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for video interaction, including:
the receiving module is used for receiving information input by a user, wherein the information comprises any one or more of text information, picture information, audio information and video information;
the first analysis module is used for analyzing semantic information of the information input by the user;
the first playing module is used for playing a preset answer video when a preset playing condition is met;
the second analysis module is used for analyzing the semantic information of the preset answer video;
and the judging module is used for judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video.
In one embodiment, further comprising:
and the second playing module is used for playing a preset guide video before receiving the information input by the user, wherein the preset guide video is used for providing reference information for the user.
In one embodiment, the first playing module includes:
the judgment submodule is used for judging whether the preset playing condition is met in real time;
and the playing sub-module is used for playing the preset answer video in real time when the preset playing condition is met.
In one embodiment, the second analysis module comprises:
the extraction sub-module is used for extracting any one or more of image frame information, voice information and subtitle information in the preset answer video;
and the analysis sub-module is used for analyzing semantic information of any one or more of image frame information, voice information and subtitle information in the preset answer video.
In one embodiment, further comprising:
and the sending module is used for sending the matching result and the feedback information corresponding to the matching result to the client where the user is located.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart illustrating a method of video interaction in accordance with an exemplary embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method of video interaction in accordance with another exemplary embodiment of the present invention;
FIG. 3 is a flowchart illustrating a step S13 of a method for video interaction according to an exemplary embodiment of the invention;
FIG. 4 is a flowchart illustrating a step S14 of a method for video interaction according to an exemplary embodiment of the invention;
FIG. 5 is a flow chart illustrating a method of video interaction in accordance with yet another exemplary embodiment of the present invention;
FIG. 6 is a block diagram of an apparatus for video interaction, according to an exemplary embodiment of the present invention;
FIG. 7 is a block diagram of an apparatus for video interaction according to another exemplary embodiment of the present invention;
FIG. 8 is a block diagram of a first playing module 63 of an apparatus for video interaction according to an exemplary embodiment of the present invention;
FIG. 9 is a block diagram of a second analysis module 64 of an apparatus for video interaction according to an exemplary embodiment of the present invention;
FIG. 10 is a block diagram illustrating an apparatus for video interaction according to still another exemplary embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
FIG. 1 is a flowchart illustrating a method of video interaction according to an exemplary embodiment. As shown in FIG. 1, the method of video interaction includes the following steps S11-S15:
in step S11, receiving information input by a user, wherein the information includes any one or more of text information, picture information, audio information and video information;
in step S12, semantic information of the information input by the user is analyzed;
in step S13, when a preset playing condition is satisfied, playing a preset answer video;
in step S14, semantic information of the preset answer video is analyzed;
in step S15, it is determined whether semantic information of the information input by the user matches semantic information of the preset answer video.
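As a non-authoritative illustration only, steps S11-S15 above could be sketched as follows. All helper names (`UserInput`, `analyze_semantics`, `video_interaction`) and the keyword-overlap matching rule are assumptions for the sketch, not part of the disclosed embodiment:

```python
# Hypothetical sketch of steps S11-S15; names and matching rule are assumptions.
from dataclasses import dataclass


@dataclass
class UserInput:
    """S11: the user's input may carry any one or more modalities."""
    text: str = ""
    picture: bytes = b""
    audio: bytes = b""
    video: bytes = b""


def analyze_semantics(data) -> set:
    """Placeholder: map any modality to a set of semantic keywords.
    A real system would use image/speech recognition here."""
    if isinstance(data, UserInput):
        return set(data.text.lower().split())
    return set(str(data).lower().split())


def video_interaction(user_input: UserInput, answer_video,
                      playing_condition_met: bool) -> bool:
    user_semantics = analyze_semantics(user_input)        # S11 + S12
    if not playing_condition_met:                         # S13: gate on condition
        return False
    answer_semantics = analyze_semantics(answer_video)    # S14
    # S15: judge the match (here, simply: any keyword overlap).
    return bool(user_semantics & answer_semantics)
```

A call such as `video_interaction(UserInput(text="a dog catching a mouse"), answer_video, True)` would return whether the user's semantics overlap those of the answer video.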
In one embodiment, people often play guessing games in daily life, in which a user selects a preferred option from several candidates, and the answer is then revealed after a set time interval or after the user clicks an option to show the answer. However, in a conventional guessing game the format of the information a user can input is limited: generally only text can be entered, and it is difficult to supply picture, audio, or video information. Similarly, when verifying an answer, the prior art only checks whether the text entered by the user matches the text of the answer. The technical scheme of this embodiment can overcome these format limitations on both the user's input and the answer information.
The detailed steps are as follows. Information input by a user is received, where the information includes any one or more of text information, picture information, audio information, and video information. The semantic information of the information input by the user is then analyzed.
And when the preset playing condition is met, playing the preset answer video. Further, whether the preset playing condition is met or not is judged in real time. And when the preset playing condition is met, playing the preset answer video in real time.
And analyzing the semantic information of the preset answer video. Further, any one or more of image frame information, voice information and subtitle information in the preset answer video are extracted. And analyzing semantic information of any one or more of image frame information, voice information and subtitle information in the preset answer video.
And judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video.
In addition, a preset guide video is played before receiving information input by the user, and the preset guide video is used for providing reference information for the user.
And after judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video or not, sending the matching result and feedback information corresponding to the matching result to the client side where the user is located.
According to the technical scheme in the embodiment, the semantic information of the information input by the user and the semantic information of the answer video can be analyzed, the information input by the user is verified to be matched with the answer, the input mode of the information of the user and the output mode of the answer are enriched, and therefore the user experience is improved.
In one embodiment, as shown in fig. 2, the following step S21 is further included:
in step S21, before receiving the information input by the user, a preset guide video for providing the reference information to the user is played.
In one embodiment, the guide video may provide reference information to the user, where the reference information includes the question to be answered by the user and prompt information associated with that question. The user may choose to replay the preset guide video.
In one embodiment, as shown in FIG. 3, step S13 includes the following steps S31-S32:
in step S31, it is determined in real time whether the preset playing condition is satisfied;
in step S32, when the preset playing condition is satisfied, the preset answer video is played in real time.
In one embodiment, whether the preset playing condition is satisfied is determined in real time; the preset playing condition may be, for example, the expiry of a countdown, or the user actively submitting a play command. When the preset playing condition is met, the preset answer video is played in real time. From the preset answer video the user can obtain additional related knowledge, which improves the user experience.
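A minimal sketch of this real-time condition check, assuming the two example triggers mentioned above (a countdown expiry and an explicit play command from the user); the function name and signature are illustrative assumptions:

```python
import time


def should_play_answer(deadline: float, user_requested: bool) -> bool:
    """Return True when either assumed preset playing condition is met:
    the countdown deadline (a time.monotonic() timestamp) has expired,
    or the user actively submitted a play command."""
    return user_requested or time.monotonic() >= deadline
```

A playback loop could poll this function and start the preset answer video as soon as it returns True.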
In one embodiment, as shown in FIG. 4, step S14 includes the following steps S41-S42:
in step S41, extracting any one or more of image frame information, voice information, and subtitle information in the preset answer video;
in step S42, semantic information of any one or more of image frame information, voice information, and subtitle information in the preset answer video is analyzed.
In one embodiment, after any one or more of image frame information, voice information, and subtitle information are extracted from the preset answer video, the semantic information of the extracted information is analyzed. For example, if the preset answer video contains a sequence of consecutive image frames showing a dog catching a mouse, its voice track describes a dog catching a mouse, and its subtitles relate to a dog catching a mouse, the semantic information of the video is analyzed as 'a dog catching a mouse'.
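The extraction and fusion described in this embodiment could be sketched as below. In a real system the three channels would come from image recognition, speech recognition, and subtitle OCR; here they are assumed to be already decoded into keyword lists, and both function names are hypothetical:

```python
def extract_channels(answer_video: dict) -> dict:
    """Placeholder extraction of the three information channels.
    `answer_video` is assumed to be a dict with optional keys
    'frames', 'speech', 'subtitles', each a list of keywords."""
    return {channel: set(answer_video.get(channel, []))
            for channel in ("frames", "speech", "subtitles")}


def fuse_semantics(channels: dict) -> set:
    """Fuse per-channel keywords into one semantic description,
    e.g. frames, speech, and subtitles that all depict a dog
    catching a mouse yield combined semantics such as {'dog', 'mouse'}."""
    fused = set()
    for keywords in channels.values():
        fused |= keywords
    return fused
```

The fused keyword set would then stand in for the "semantic information of the preset answer video" used in the matching step.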
In one embodiment, as shown in fig. 5, the following step S51 is further included:
in step S51, the matching result and the feedback information corresponding to the matching result are sent to the client where the user is located.
In one embodiment, the result of matching the semantic information of the information input by the user against the semantic information of the preset answer video is reported to the client where the user is located. Further, when the matching result indicates a correct answer, the system may send reward information to the client where the user is located; when the matching result indicates a wrong answer, the system may send penalty information to the client where the user is located.
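The reward/penalty feedback described above might look like the following sketch. The payload shape and the `send` callback are assumptions; the patent does not specify a transport or message format:

```python
def build_feedback(matched: bool) -> dict:
    """Package the matching result with its corresponding feedback:
    reward information on a correct answer, penalty information
    on a wrong one (payload shape is an assumption)."""
    return {
        "matched": matched,
        "feedback": "reward" if matched else "penalty",
    }


def notify_client(matched: bool, send) -> None:
    """Send the result to the client where the user is located.
    `send` is any callable that delivers a payload, e.g. a socket
    write or an HTTP-post wrapper."""
    send(build_feedback(matched))
```

For example, `notify_client(True, connection.send)` would deliver the reward message over a hypothetical `connection`.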
In one embodiment, FIG. 6 is a block diagram illustrating an apparatus for video interaction in accordance with an example embodiment. As shown in fig. 6, the apparatus includes a receiving module 61, a first analyzing module 62, a first playing module 63, a second analyzing module 64 and a judging module 65.
The receiving module 61 is configured to receive information input by a user, where the information includes any one or more of text information, picture information, audio information, and video information;
the first analysis module 62 is configured to analyze semantic information of the information input by the user;
the first playing module 63 is configured to play a preset answer video when a preset playing condition is met;
the second analysis module 64 is configured to analyze semantic information of the preset answer video;
the judging module 65 is configured to judge whether semantic information of the information input by the user matches semantic information of the preset answer video.
As shown in fig. 7, a second playing module 71 is further included.
The second playing module 71 is configured to play a preset guide video before receiving information input by the user, where the preset guide video is used to provide reference information for the user.
As shown in fig. 8, the first playing module 63 includes a judgment sub-module 81 and a playing sub-module 82.
The judgment submodule 81 is configured to judge whether the preset playing condition is met in real time;
the playing sub-module 82 is configured to play the preset answer video in real time when the preset playing condition is met.
As shown in fig. 9, the second analysis module 64 includes an extraction sub-module 91 and an analysis sub-module 92.
The extraction sub-module 91 is configured to extract any one or more of image frame information, voice information, and subtitle information in the preset answer video;
the analysis sub-module 92 is configured to analyze semantic information of any one or more of image frame information, voice information, and subtitle information in the preset answer video.
As shown in fig. 10, a sending module 101 is further included.
The sending module 101 is configured to send the matching result and the feedback information corresponding to the matching result to the client where the user is located.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (2)
1. A method for video interaction, comprising:
receiving information input by a user, wherein the information comprises any one or more of text information, picture information, audio information and video information;
analyzing semantic information of the information input by the user;
when the preset playing condition is met, playing a preset answer video;
analyzing semantic information of the preset answer video;
judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video;
further comprising:
before receiving information input by a user, playing a preset guide video, wherein the preset guide video is used for providing reference information for the user;
after meeting the preset playing condition, playing the preset answer video, including:
judging whether the preset playing condition is met in real time;
when the preset playing condition is met, playing the preset answer video in real time;
the analyzing semantic information of the preset answer video includes:
extracting any one or more of image frame information, voice information and subtitle information in the preset answer video;
analyzing semantic information of any one or more of image frame information, voice information and subtitle information in the preset answer video;
further comprising:
and sending the matching result and feedback information corresponding to the matching result to the client where the user is located.
2. An apparatus for video interaction, comprising:
the receiving module is used for receiving information input by a user, wherein the information comprises any one or more of text information, picture information, audio information and video information;
the first analysis module is used for analyzing semantic information of the information input by the user;
the first playing module is used for playing a preset answer video when a preset playing condition is met;
the second analysis module is used for analyzing the semantic information of the preset answer video;
the judging module is used for judging whether the semantic information of the information input by the user is matched with the semantic information of the preset answer video;
further comprising:
the second playing module is used for playing a preset guide video before receiving information input by a user, wherein the preset guide video is used for providing reference information for the user;
the first playing module comprises:
the judgment submodule is used for judging whether the preset playing condition is met in real time;
the playing sub-module is used for playing the preset answer video in real time when the preset playing condition is met;
the second analysis module comprises:
the extraction sub-module is used for extracting any one or more of image frame information, voice information and subtitle information in the preset answer video;
the analysis submodule is used for analyzing semantic information of any one or more of image frame information, voice information and subtitle information in the preset answer video;
further comprising:
and the sending module is used for sending the matching result and the feedback information corresponding to the matching result to the client where the user is located.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611147714.0A CN106658167B (en) | 2016-12-13 | 2016-12-13 | Video interaction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611147714.0A CN106658167B (en) | 2016-12-13 | 2016-12-13 | Video interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106658167A CN106658167A (en) | 2017-05-10 |
CN106658167B true CN106658167B (en) | 2020-03-17 |
Family
ID=58825872
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611147714.0A Expired - Fee Related CN106658167B (en) | 2016-12-13 | 2016-12-13 | Video interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106658167B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108391152A (en) * | 2018-01-15 | 2018-08-10 | 上海全土豆文化传播有限公司 | Display control method and display control unit |
CN109788367A (en) * | 2018-11-30 | 2019-05-21 | 北京达佳互联信息技术有限公司 | A kind of information cuing method, device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102354462A (en) * | 2011-10-14 | 2012-02-15 | 北京市莱科智多教育科技有限公司 | Childhood education system and childhood education method |
WO2013156828A1 (en) * | 2012-04-16 | 2013-10-24 | Talkalter Inc. | Method and system for creating and sharing interactive video portraits |
CN103970791A (en) * | 2013-02-01 | 2014-08-06 | 华为技术有限公司 | Method and device for recommending video from video database |
CN104142936A (en) * | 2013-05-07 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Audio and video match method and audio and video match device |
CN104216990A (en) * | 2014-09-09 | 2014-12-17 | 科大讯飞股份有限公司 | Method and system for playing video advertisement |
CN105095272A (en) * | 2014-05-12 | 2015-11-25 | 阿里巴巴集团控股有限公司 | Question and answer processing method, device and system based on image recognition |
CN105427696A (en) * | 2015-11-20 | 2016-03-23 | 江苏沁恒股份有限公司 | Method for distinguishing answer to target question |
Also Published As
Publication number | Publication date |
---|---|
CN106658167A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101377235B1 (en) | System for sequential juxtaposition of separately recorded scenes | |
CN107316520B (en) | Video teaching interaction method, device, equipment and storage medium | |
US20180070143A1 (en) | System and method for optimized and efficient interactive experience | |
CN111770356B (en) | Interaction method and device based on live game | |
CN114339285B (en) | Knowledge point processing method, video processing method, device and electronic equipment | |
US10864447B1 (en) | Highlight presentation interface in a game spectating system | |
CN111294606B (en) | Live broadcast processing method and device, live broadcast client and medium | |
US10363488B1 (en) | Determining highlights in a game spectating system | |
WO2014151352A1 (en) | Language learning environment | |
CN104952009A (en) | Resource management method, system and server and interactive teaching terminal | |
CN108664536B (en) | Interactive video and audio sharing method and system | |
CN111800668B (en) | Barrage processing method, barrage processing device, barrage processing equipment and storage medium | |
CN111654754A (en) | Video playing method and device, electronic equipment and readable storage medium | |
CN106658167B (en) | Video interaction method and device | |
CN108924651A (en) | Instructional video intelligent playing system based on training operation identification | |
CN111935551A (en) | Video processing method and device, electronic equipment and storage medium | |
CN110072140A (en) | A kind of video information reminding method, device, equipment and storage medium | |
CN104320682A (en) | Formulation task on-demand broadcasting method and system and associated equipment | |
CN112131361A (en) | Method and device for pushing answer content | |
CN111479124A (en) | Real-time playing method and device | |
CN113824983A (en) | Data matching method, device, equipment and computer readable storage medium | |
CN114339451A (en) | Video editing method and device, computing equipment and storage medium | |
US20170139933A1 (en) | Electronic Device, And Computer-Readable Storage Medium For Quickly Searching Video Segments | |
US10593366B2 (en) | Substitution method and device for replacing a part of a video sequence | |
KR101721231B1 | Method for producing 4D media based on the MPEG-V standard using a media platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
PE01 | Entry into force of the registration of the contract for pledge of patent right ||
Denomination of invention: A method and device for video interaction
Effective date of registration: 2021-01-04
Granted publication date: 2020-03-17
Pledgee: Inner Mongolia Huipu Energy Co.,Ltd.
Pledgor: TVMINING (BEIJING) MEDIA TECHNOLOGY Co.,Ltd.
Registration number: Y2020990001527
CF01 | Termination of patent right due to non-payment of annual fee ||
Granted publication date: 2020-03-17
Termination date: 2021-12-13