CN1946163A - Method for making and playing interactive video with hot spot regions
- Publication number: CN1946163A
- Authority: CN (China)
- Legal status: Granted
Abstract
This invention discloses a method for producing and playing interactive video with hot spot regions, comprising: 1) adding interactive hot spot regions to the video; 2) saving the interactive video; 3) playing the video. The invention adds interactive elements to conventional video: interactive hot spot information can be attached to any closed region in any frame of the video, changes in a region's position and size are computed from the camera motion parameters, and during playback a region receives the user's interaction in real time and responds accordingly. The interactive information in the video is saved in the form of a supplementary data file, and the original video file can be combined with the interaction data file to export a standard SMIL document, which runs normally in any player that supports the SMIL standard.
Description
Technical field
The present invention relates to computer video editing, and in particular to a method for making and playing interactive video with hot spot regions.
Background technology
Since the appearance of "Movie Map" in 1980 [1], hypertext has developed toward hypermedia, which establishes links between different types of media; this in turn led to hypervideo, which supports multimedia hyperlinks both in two-dimensional space and along the video timeline. Because of video's temporal dimension, hypervideo is closer to reality and more flexible than other, static media, and has become a research trend within hypermedia. With its interactivity and vividness, hypervideo has found successful practical applications in fields such as education, scientific research, and supplemental training [2][3], and it is also well suited to representing structured video and generating summaries [4][5]. Hypervideo research can be divided into three aspects: content description, presentation, and authoring. For content description and presentation, MPEG-7 [6] and SMIL [7] have become the corresponding common standards, and methods for automatically converting descriptions from MPEG-7 to SMIL have appeared [8]. Because video itself lacks structure and textual semantics, and unlike static media cannot present all of its content at fixed spatial positions on a two-dimensional screen, authoring hypervideo is comparatively difficult. Although several authoring-assistance systems have appeared (such as HyperCafe [9] and Hyper-Hitchcock [10]), the inefficiency of manual authoring remains the bottleneck limiting the further spread of hypervideo. Building on video analysis and structuring algorithms, automatically generating a multi-level, segmented video index from linear video is the goal researchers are working toward.
References
[1] A. Lippman. Movie-Maps: An Application of the Optical Videodisc to Computer Graphics. Proc. of ACM SIGGRAPH, ACM, pp. 32-42, 1980.
[2] T. Chambel, C. Zahn, and M. Finke. Hypervideo Design and Support for Contextualized Learning. IEEE International Conference on Advanced Learning Technologies, pp. 345-349, 2004.
[3] O. Aubert and Y. Prie. Advene: Active Reading through Hypervideo. Proc. of the Sixteenth ACM Conference on Hypertext and Hypermedia, pp. 235-244, 2005.
[4] A. Girgensohn, F. Shipman, and L. Wilcox. Hypervideo Summaries. SPIE Information Technologies and Communications, 2003.
[5] F. Shipman, A. Girgensohn, and L. Wilcox. Generation of Interactive Multi-Level Video Summaries. Proc. of ACM Multimedia, pp. 392-401, 2003.
[6] MPEG-7 Overview (version 10). ISO/IEC JTC1/SC29/WG11 N6828, Palma de Mallorca, 2004.
[7] D. Bulterman et al. Synchronized Multimedia Integration Language (SMIL 2.1). W3C Recommendation, 2005.
[8] T. Zhou, T. Gedeon, and J. Jin. Automatic Generating Detail-on-demand Hypervideo Using MPEG-7 and SMIL. Proc. of the 13th Annual ACM International Conference on Multimedia, pp. 379-382, 2005.
[9] N. Sawhney, D. Balcom, and I. Smith. HyperCafe: Narrative and Aesthetic Properties of Hypervideo. Proc. of the Seventh ACM Conference on Hypertext, pp. 1-10, 1996.
[10] F. Shipman III, A. Girgensohn, and L. Wilcox. Hypervideo Expression: Experiences with Hyper-Hitchcock. Proc. of the ACM Conference on Hypertext and Hypermedia, pp. 217-226, 2005.
Summary of the invention
The purpose of the invention is to provide a method for making and playing interactive video with hot spot regions.
The method for making and playing interactive video with hot spot regions comprises:
1) Adding interactive hot spot regions to the video
The user specifies the position and size of the interactive hot spot regions in certain frames, determines each region's activation time interval, and adds the corresponding additional content and response target information to each region; from the camera motion parameters computed for every frame, each region's position and size in the other frames of its activation interval are extrapolated automatically;
2) Saving the interactive video
The markup information of the interactive hot spot regions in the video is saved in the form of a supplementary data file, or the video and the interaction data are combined and exported as a standard SMIL document;
3) Playing the interactive video
During playback, the hot spot region information is read from the corresponding markup file. Whenever the current time falls within a region's activation interval, the region's additional annotation content is displayed in the video; if the user clicks the mouse inside the region, playback jumps directly to the region's corresponding response target.
Interactive hot spot regions in a frame: the shape of a region may be any closed geometric figure within the bounds of a single video frame; the regions are outlined freely by the user or generated automatically with system assistance.
A region's corresponding additional content and response target information: the content additionally displayed for a region is text or an image; when the user clicks the region, the response target file can be audio, video, or a web page.
A region's activation time interval: a hot spot region is in the active state only during its specified time interval, and only then can its additional content and response target information be shown.
The method for automatically extrapolating each region's position and size in the other frames of its activation interval comprises the following steps:
(1) Compute the camera motion parameters for each frame;
(2) From the region's position and size in the previous frame and the corresponding camera motion parameters, compute the region's changed position and size in the next frame.
The method for computing the camera motion parameters of each frame comprises the following steps:
(1) Read the frame's motion vectors from the compressed video file.
(2) Normalize the motion vectors, unifying the vectors arising under the different cases of I-frames, B-frames, and P-frames and of inter-frame, intra-frame, and hybrid coding.
(3) Remove part of the noise in the motion vectors, based mainly on each vector's neighborhood consistency and the smoothness of its variation.
(4) Further refine the motion vectors: first cluster the vectors within the frame, decide from the distribution of each cluster whether its vectors belong to camera motion or object motion, and then keep only the clusters belonging to camera motion.
(5) Build a camera parameter model and solve for the motion parameters: set up a hypothesized system of camera parameter equations, substitute the refined motion vectors into it, and solve for the unknown parameters with standard numerical linear algebra methods.
The markup information comprises: 1) the start time and end time of the region's activation; 2) the region's position and size in each frame while active; 3) the region's additional content information: for text, the color, size, and hyperlink; for an image, the information of all pixels, or, if the content is read directly from a file, the recorded file path; 4) the region's corresponding response target information: an image, audio, video, or web page, recorded as a file path.
The beneficial effects of the invention are as follows. Interactive elements are added to conventional video: interactive hot spot information can be attached to any closed region in any frame of the video, and the changes in a region's position and size during playback are computed automatically from the camera motion parameters. During playback, a hot spot region receives the user's interaction in real time and makes the corresponding response. The interactive information in the video is saved in the form of a supplementary data file, so it neither depends on the source video file's particular encoding nor requires re-encoding. The source video file and the interaction data file can also be combined and exported as a standard SMIL document, which runs normally in any player that supports the SMIL standard.
Description of drawings
Fig. 1(a) is a schematic diagram of judging smoothness of variation when removing noise from the motion vectors;
Fig. 1(b) is a schematic diagram of judging neighborhood consistency when removing noise from the motion vectors;
Fig. 2(a) shows a cluster of the camera-motion type after motion vector clustering;
Fig. 2(b) shows a cluster of the object-motion type after motion vector clustering;
Fig. 2(c) shows a cluster of the abnormal-motion type after motion vector clustering;
Fig. 3 is the workflow diagram for making video with interactive hot spot regions;
Fig. 4 is the system flow chart for playing video with interactive hot spot regions;
Fig. 5 shows the authoring interface for a scenic-spot interactive video;
Fig. 6 shows the interactive video applied in an immersive digital tourism project;
Fig. 7 shows the response after a hot spot region is clicked in the immersive digital tourism project;
Fig. 8 shows the authoring interface for a person-centered interactive video;
Fig. 9 shows the response after the user clicks a hot spot person in the video.
Embodiment
The steps for making an interactive video with hot spot regions according to the present invention are as follows:
1. The user adds interactive hot spot region information
The user first positions the video at a particular frame, outlines the position, shape, and size of the hot spot region with the mouse, and specifies the start time and end time of the region's activation. Corresponding additional content is then added to the hot spot region, comprising text and images, which may be read directly from a file, typed into a text box, or drawn with the mouse. Finally, the region's corresponding response target information is added: an image, audio, video, web page, or other file, normally recorded as a file path.
2. Automatically generate the position and size of the hot spot region in the other frames of the activation interval
Requiring the user to mark the region's position and size by hand in every frame of the activation interval would be very tedious, so the system assists by computing the camera motion parameters of each frame and completing this part of the work. The concrete steps are as follows:
(1) Compute the camera motion parameters for each frame.
A. Read the frame's motion vectors from the compressed video file.
B. Normalize the motion vectors, unifying the vectors arising under the different cases of I-frames, B-frames, and P-frames and of inter-frame, intra-frame, and hybrid coding.
C. Remove part of the noise in the motion vectors, based mainly on each vector's neighborhood consistency and the smoothness of its variation. As shown in Fig. 1, the left part of the figure illustrates the smoothness test: the motion vectors of the four diagonally adjacent macroblocks around the central macroblock are averaged; if fewer than a certain number of these averages differ from the central macroblock's motion vector by less than a given threshold, the macroblock is judged not to vary smoothly, is considered noisy, and is removed. The right part illustrates the neighborhood test: if fewer than a certain number of the eight surrounding macroblocks have motion vectors whose difference from the central macroblock's vector falls within a given threshold, the macroblock is judged not to be neighborhood-consistent, is considered noisy, and is removed.
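As an illustration of the neighborhood test, the following sketch keeps a macroblock's motion vector only when enough of its eight neighbors carry a similar vector. The function name and the default threshold values are my own, not taken from the patent:

```python
def neighborhood_consistent(mv, neighbor_mvs, thresh=2.0, min_agree=4):
    """Illustrative noise test: a motion vector survives only if at
    least `min_agree` of the surrounding macroblock vectors differ
    from it by at most `thresh` in each component."""
    agree = sum(
        1 for nu, nv in neighbor_mvs
        if abs(nu - mv[0]) <= thresh and abs(nv - mv[1]) <= thresh
    )
    return agree >= min_agree
```

Vectors that fail the test would simply be excluded from the later clustering and parameter-fitting stages.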
D. Further refine the motion vectors: first cluster the vectors within the frame, decide from the distribution of each cluster whether its vectors belong to camera motion or object motion, and then keep only the clusters belonging to camera motion. As shown in Fig. 2, the left part of the figure shows a macroblock cluster distribution belonging to camera motion, the middle part shows one belonging to object motion, and the right part shows an abnormal case.
E. Build a camera parameter model and solve for the motion parameters: set up a hypothesized system of camera parameter equations, substitute the refined motion vectors into it, and solve for the unknown parameters with standard numerical linear algebra methods. The more parameters the model has, the more accurate it is, but the slower it is to solve. The six-parameter affine model is usually adopted:
u = a0 + a1·x + a2·y
v = a3 + a4·x + a5·y
where a0 through a5 are the unknown parameters, (x, y) are the coordinates of the macroblock's center, and (u, v) are the two components of the macroblock's motion vector.
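Under the six-parameter model above, each filtered motion vector supplies two linear equations in a0 through a5, so the parameters can be recovered by ordinary least squares. A minimal sketch using NumPy (the function name is illustrative):

```python
import numpy as np

def estimate_affine_params(points, vectors):
    """Fit the six-parameter affine camera model to macroblock motion
    vectors by linear least squares.

    points  : (N, 2) array of macroblock centre coordinates (x, y)
    vectors : (N, 2) array of motion vector components (u, v)
    returns : [a0..a5] with u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y
    """
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    zeros3 = np.zeros((len(x), 3))
    # Each motion vector contributes one equation for u and one for v;
    # stack them so the six unknowns appear in a single linear system.
    A_u = np.column_stack([ones, x, y, zeros3])
    A_v = np.column_stack([zeros3, ones, x, y])
    A = np.vstack([A_u, A_v])
    b = np.concatenate([vectors[:, 0], vectors[:, 1]])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

With at least three non-degenerate motion vectors the system is overdetermined and the least-squares solution averages out the remaining noise.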
(2) From the region's position and size in the previous frame and the corresponding camera motion parameters, compute the region's changed position and size in the next frame. Substituting the coordinates of every point on the region boundary into the camera model equations yields the motion velocity at that point, and hence the point's position in the next frame.
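The boundary propagation step can be sketched as follows; this hypothetical helper assumes the six affine parameters already estimated for the frame and applies them to each boundary vertex:

```python
def propagate_region(boundary, params):
    """Move a hot spot region boundary into the next frame using the
    per-frame affine camera parameters.

    boundary : list of (x, y) vertices of the closed region
    params   : (a0, ..., a5); the model gives the motion (u, v) at
               (x, y), so the next-frame point is (x + u, y + v)
    """
    a0, a1, a2, a3, a4, a5 = params
    moved = []
    for x, y in boundary:
        u = a0 + a1 * x + a2 * y
        v = a3 + a4 * x + a5 * y
        moved.append((x + u, y + v))
    return moved
```

Repeating this frame by frame carries the region through the whole activation interval without any manual marking.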
3. Save the interactive information in the video
The relevant information of each interactive hot spot region is saved into a supplementary data file. The relevant information comprises:
1) The start time and end time of the region's activation.
2) The region's position and size in each frame while active.
3) The region's additional content information: for text, the color, size, hyperlink, and so on; for an image, the information of all pixels, or, if the content is read directly from a file, the recorded file path.
4) The region's corresponding response target information: an image, audio, video, web page, or other file, recorded as a file path.
4. Export a standard SMIL document
The SMIL language can arrange content such as audio and video in time sequence and in a set of spatial positions, and supports adding a layer of link areas, so a multimedia document expressing the same interactive effects can be written in SMIL and played on any player that supports the SMIL standard. In SMIL, the <region> tag specifies the display position and size of a hot spot region's additional content, the <text> tag displays text content, and the <img> tag displays image content. The <anchor> tag defines the hot spot region itself: its begin attribute specifies the start time of the region's activation, its end attribute specifies the end time, and its href attribute specifies the path of the response target file. With these language constructs, the system can generate the corresponding SMIL file from its own interaction data file.
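For illustration, a minimal SMIL document of the kind such an exporter might emit. All file names, coordinates, and times here are invented, and the attribute usage follows the SMIL 2.1 layout and linking modules rather than the patent's exact output:

```xml
<smil xmlns="http://www.w3.org/2005/SMIL21/Language">
  <head>
    <layout>
      <root-layout width="352" height="288"/>
      <region id="label" left="40" top="30" width="120" height="24"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="tour.mpg">
        <!-- hot spot active from 12s to 20s; clicking jumps to the target clip -->
        <anchor begin="12s" end="20s" coords="40,30,160,54" href="spot-intro.mpg"/>
      </video>
      <!-- the region's additional text, shown over the same interval -->
      <text src="label.txt" region="label" begin="12s" end="20s"/>
    </par>
  </body>
</smil>
```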
The steps for playing an interactive video with hot spot regions according to the present invention are as follows:
1. Read the video's corresponding interaction data file
From the interaction data file, read each interactive hot spot region's activation start time and end time, its position and size in each frame, its additional content information, the path of its response target file, and so on.
2. Display the interactive hot spot regions
When displaying each frame, find all hot spot regions currently in the active state and display each region's additional content in its particular form. To make the search for active regions more efficient, a linked list indexed by frame number can be built while the interaction data file is read in; the entry for each frame number records the hot spot regions active at that moment.
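The frame-number index can be sketched, for instance, as a plain mapping from frame number to the regions active at that frame. The record fields used here are illustrative, not the patent's data layout:

```python
from collections import defaultdict

def build_frame_index(regions):
    """Index hot spot regions by frame number so that playback can look
    up all regions active at the current frame in constant time.

    regions : list of dicts carrying 'start_frame' and 'end_frame'
              (inclusive) for each region's activation interval
    """
    index = defaultdict(list)
    for region in regions:
        for frame in range(region["start_frame"], region["end_frame"] + 1):
            index[frame].append(region)
    return index
```

During playback the renderer then consults `index[current_frame]` instead of scanning every region on every frame.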
3. Handle the user's interaction
When the user clicks the mouse on the playback picture, the player traverses all currently active hot spot regions to determine which region contains the click position, and triggers that region's corresponding response target. Whether a point lies inside a region is decided with the basic crossing-number algorithm from computer graphics: a horizontal ray is cast from the query point P, the number of intersections between the ray and the region boundary is counted, and the parity of that count determines whether the point is inside.
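The crossing-number test described above can be written in a few lines; this is the textbook even-odd rule for a polygonal boundary, not the patent's exact code:

```python
def point_in_region(p, polygon):
    """Even-odd (ray crossing) test: cast a horizontal ray from p and
    count crossings with the region boundary; an odd count means the
    point is inside."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # the edge crosses the ray's height exactly when the endpoints
        # straddle y
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

For a rectangular hot spot region a simple bounds check would suffice, but the even-odd rule handles the arbitrary closed shapes the invention allows.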
Embodiment 1
As shown in Fig. 6, this example applies the method and system in an immersive virtual tourism system. If the user is interested in a scenic spot seen during the virtual tour and wants a closer look, this can be achieved by interacting with the scenic spot's hot spot region on the video. The concrete implementation steps of this example are described in detail below:
(1) As shown in Fig. 5, interactive hot spot region information is added to the tour guide video. Playback is paused at the position where the scenic spot "Yue Wangmiao" first appears, a rectangular interactive hot spot region is outlined with the mouse, and the additional text content "Yue Wangmiao" is added to give a brief introduction to the spot. The region's response target file is then added, namely the file path of the detailed viewing video of "Yue Wangmiao". Finally, the start time and end time of the region's activation are specified.
(2) The position and size of the "Yue Wangmiao" hot spot region in the other frames of the activation interval are generated automatically.
(3) The relevant information of the hot spot region marked in step (1) is saved into a supplementary data file, comprising the start time and end time of the region's activation, the region's position and size in each frame while active, the additional text content "Yue Wangmiao", and the path of the corresponding response target file.
(4) As shown in Fig. 6, when the user browses to the road section at the "Yue Wangmiao" scenic spot, the system reads the interaction data file corresponding to that section.
(5) During playback, the system detects in real time which hot spot regions are currently active. When the user moves near the "Yue Wangmiao" scenic spot, the hot spot region is activated and additional content such as the text "Yue Wangmiao" and a brief introduction picture of the spot is displayed at a particular position on the screen.
(6) While the hot spot region is active, if the user clicks the region, the playback window at the lower right responds and begins to play the detailed introduction video of the "Yue Wangmiao" scenic spot.
Embodiment 2
As shown in Fig. 9, if the user is unfamiliar with a person appearing in the picture during video playback but wants to learn more, this can be achieved by interacting with the person's hot spot region on the video. The concrete implementation steps of this example are described in detail below:
(1) As shown in Fig. 8, interactive hot spot region information is added to the video. Playback is paused at the position where the person "Li Jizhu" first appears, a rectangular interactive hot spot region is outlined with the mouse, and the additional text content "Taiwan celebrity Li Jizhu" is added so that the user is prompted while watching the video. The region's response target file is then added, namely the file path of the pictorial profile of "Li Jizhu". Finally, the start time and end time of the region's activation are specified.
(2) The position and size of the "Li Jizhu" hot spot region in the other frames of the activation interval are generated automatically.
(3) The relevant information of the hot spot region marked in step (1) is saved into a supplementary data file, comprising the start time and end time of the region's activation, the region's position and size in each frame while active, the additional text content "Li Jizhu", and the path of the corresponding response target file.
(4) As shown in Fig. 9, during playback the system detects in real time which hot spot regions are currently active. When the person "Li Jizhu" appears, the hot spot region is activated and additional content such as the text "Taiwan celebrity Li Jizhu" is displayed at a particular position on the screen to identify the person.
(5) While the hot spot region is active, if the user wants to learn more about "Li Jizhu" and clicks the region, the profile of "Li Jizhu" is displayed in the playback picture.
The foregoing description serves only to illustrate and describe the method and system for making and playing interactive video with hot spot regions. It is not exhaustive, nor does it limit the invention to the forms shown and described; obviously, many modifications and variations are possible. Modifications and variations apparent to those skilled in the art are also included within the scope of the invention as defined by the appended claims.
Claims (7)
1. A method for making and playing interactive video with hot spot regions, characterized in that:
1) Adding interactive hot spot regions to the video
The user specifies the position and size of the interactive hot spot regions in certain frames, determines each region's activation time interval, and adds the corresponding additional content and response target information to each region; from the camera motion parameters computed for every frame, each region's position and size in the other frames of its activation interval are extrapolated automatically;
2) Saving the interactive video
The markup information of the interactive hot spot regions in the video is saved in the form of a supplementary data file, or the video and the interaction data are combined and exported as a standard SMIL document;
3) Playing the interactive video
During playback, the hot spot region information is read from the corresponding markup file; whenever the current time falls within a region's activation interval, the region's additional annotation content is displayed in the video, and if the user clicks the mouse inside the region, playback jumps directly to the region's corresponding response target.
2. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for the interactive hot spot regions in a frame: the shape of a region may be any closed geometric figure within the bounds of a single video frame, and the regions are outlined freely by the user or generated automatically with system assistance.
3. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for a region's corresponding additional content and response target information: the content additionally displayed for a region is text or an image, and when the user clicks the region, the response target file can be audio, video, or a web page.
4. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that, for a region's activation time interval: a hot spot region is in the active state only during its specified time interval, and only then can its additional content and response target information be shown.
5. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that the method for automatically extrapolating each region's position and size in the other frames of its activation interval comprises the following steps:
(1) Compute the camera motion parameters for each frame;
(2) From the region's position and size in the previous frame and the corresponding camera motion parameters, compute the region's changed position and size in the next frame.
6. The method for making and playing interactive video with hot spot regions according to claim 5, characterized in that the method for computing the camera motion parameters of each frame comprises the following steps:
(1) Read the frame's motion vectors from the compressed video file.
(2) Normalize the motion vectors, unifying the vectors arising under the different cases of I-frames, B-frames, and P-frames and of inter-frame, intra-frame, and hybrid coding.
(3) Remove part of the noise in the motion vectors, based mainly on each vector's neighborhood consistency and the smoothness of its variation.
(4) Further refine the motion vectors: first cluster the vectors within the frame, decide from the distribution of each cluster whether its vectors belong to camera motion or object motion, and then keep only the clusters belonging to camera motion.
(5) Build a camera parameter model and solve for the motion parameters: set up a hypothesized system of camera parameter equations, substitute the refined motion vectors into it, and solve for the unknown parameters with standard numerical linear algebra methods.
7. The method for making and playing interactive video with hot spot regions according to claim 1, characterized in that the markup information comprises: 1) the start time and end time of the region's activation; 2) the region's position and size in each frame while active; 3) the region's additional content information: for text, the color, size, and hyperlink; for an image, the information of all pixels, or, if the content is read directly from a file, the recorded file path; 4) the region's corresponding response target information: an image, audio, video, or web page, recorded as a file path.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200610053953 CN100471255C (en) | 2006-10-25 | 2006-10-25 | Method for making and playing interactive video frequency with heat spot zone |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1946163A true CN1946163A (en) | 2007-04-11 |
CN100471255C CN100471255C (en) | 2009-03-18 |
Family
ID=38045352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200610053953 Expired - Fee Related CN100471255C (en) | 2006-10-25 | 2006-10-25 | Method for making and playing interactive video frequency with heat spot zone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100471255C (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950578A (en) * | 2010-09-21 | 2011-01-19 | 北京奇艺世纪科技有限公司 | Method and device for adding video information and method and device for displaying video information |
CN102057400A (en) * | 2009-04-08 | 2011-05-11 | 索尼公司 | Image processing device, image processing method, and computer program |
CN101753913B (en) * | 2008-12-17 | 2012-04-25 | 华为技术有限公司 | Method and device for inserting hyperlinks in video, and processor |
CN102523512A (en) * | 2011-11-30 | 2012-06-27 | 江苏奇异点网络有限公司 | Video output method with operable implicit content |
CN102572601A (en) * | 2010-09-21 | 2012-07-11 | 北京奇艺世纪科技有限公司 | Display method and device for video information |
CN101571812B (en) * | 2008-04-30 | 2012-08-29 | 国际商业机器公司 | Visualization method and device for dynamic transition of objects |
CN102868919A (en) * | 2012-09-19 | 2013-01-09 | 上海高越文化传媒股份有限公司 | Interactive play equipment and play method |
CN103428539A (en) * | 2012-05-15 | 2013-12-04 | 腾讯科技(深圳)有限公司 | Pushed information publishing method and device |
CN103702222A (en) * | 2013-12-20 | 2014-04-02 | 惠州Tcl移动通信有限公司 | Interactive information generation method and video file playing method for mobile terminal |
CN103780973A (en) * | 2012-10-17 | 2014-05-07 | 三星电子(中国)研发中心 | Video label adding method and video label adding device |
CN103797783A (en) * | 2012-07-17 | 2014-05-14 | 松下电器产业株式会社 | Comment information generation device and comment information generation method |
CN103856824A (en) * | 2012-12-08 | 2014-06-11 | 周成 | Method for popping up video of tracked objects in video |
CN103986980A (en) * | 2014-05-30 | 2014-08-13 | 中国传媒大学 | Hypermedia editing and producing method and system |
CN104967908A (en) * | 2014-09-05 | 2015-10-07 | 腾讯科技(深圳)有限公司 | Video hot spot marking method and apparatus |
CN105657564A (en) * | 2015-12-30 | 2016-06-08 | 广东欧珀移动通信有限公司 | Video processing method and video processing system for browser |
CN106792157A (en) * | 2016-12-13 | 2017-05-31 | 广东中星电子有限公司 | A kind of information labeling based on video and display methods and system |
CN110909037A (en) * | 2019-10-09 | 2020-03-24 | 中国人民解放军战略支援部队信息工程大学 | Frequent track mode mining method and device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10204417B2 (en) | 2016-05-10 | 2019-02-12 | International Business Machines Corporation | Interactive video generation |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101571812B (en) * | 2008-04-30 | 2012-08-29 | 国际商业机器公司 | Visualization method and device for dynamic transition of objects |
CN101753913B (en) * | 2008-12-17 | 2012-04-25 | 华为技术有限公司 | Method and device for inserting hyperlinks in video, and processor |
CN102057400A (en) * | 2009-04-08 | 2011-05-11 | 索尼公司 | Image processing device, image processing method, and computer program |
CN101950578A (en) * | 2010-09-21 | 2011-01-19 | 北京奇艺世纪科技有限公司 | Method and device for adding video information and method and device for displaying video information |
WO2012037813A1 (en) * | 2010-09-21 | 2012-03-29 | 北京奇艺世纪科技有限公司 | Method and device for adding video information, method and device for displaying video information |
CN102572601A (en) * | 2010-09-21 | 2012-07-11 | 北京奇艺世纪科技有限公司 | Display method and device for video information |
CN101950578B (en) * | 2010-09-21 | 2012-11-07 | 北京奇艺世纪科技有限公司 | Method and device for adding video information |
CN102572601B (en) * | 2010-09-21 | 2014-07-16 | 北京奇艺世纪科技有限公司 | Display method and device for video information |
CN102523512A (en) * | 2011-11-30 | 2012-06-27 | 江苏奇异点网络有限公司 | Video output method with operable implicit content |
CN103428539A (en) * | 2012-05-15 | 2013-12-04 | 腾讯科技(深圳)有限公司 | Pushed information publishing method and device |
CN103797783A (en) * | 2012-07-17 | 2014-05-14 | 松下电器产业株式会社 | Comment information generation device and comment information generation method |
CN103797783B (en) * | 2012-07-17 | 2017-09-29 | 松下知识产权经营株式会社 | Comment information generation device and comment information generation method |
CN102868919A (en) * | 2012-09-19 | 2013-01-09 | 上海高越文化传媒股份有限公司 | Interactive play equipment and play method |
CN102868919B (en) * | 2012-09-19 | 2016-03-30 | 上海基美文化传媒股份有限公司 | Interactive playback device and playback method |
CN103780973A (en) * | 2012-10-17 | 2014-05-07 | 三星电子(中国)研发中心 | Video label adding method and video label adding device |
CN103856824A (en) * | 2012-12-08 | 2014-06-11 | 周成 | Method for popping up video of tracked objects in video |
CN107995533B (en) * | 2012-12-08 | 2020-09-18 | 周成 | Method for popping out video of tracking object in video |
CN107995533A (en) * | 2012-12-08 | 2018-05-04 | 周成 | Method for popping up video of a tracked object in video |
CN103702222A (en) * | 2013-12-20 | 2014-04-02 | 惠州Tcl移动通信有限公司 | Interactive information generation method and video file playing method for mobile terminal |
CN103986980A (en) * | 2014-05-30 | 2014-08-13 | 中国传媒大学 | Hypermedia editing and producing method and system |
CN103986980B (en) * | 2014-05-30 | 2017-06-13 | 中国传媒大学 | Hypermedia editing and producing method and system |
CN104967908B (en) * | 2014-09-05 | 2018-07-24 | 腾讯科技(深圳)有限公司 | Video hotspot labeling method and device |
CN104967908A (en) * | 2014-09-05 | 2015-10-07 | 腾讯科技(深圳)有限公司 | Video hot spot marking method and apparatus |
CN105657564A (en) * | 2015-12-30 | 2016-06-08 | 广东欧珀移动通信有限公司 | Video processing method and video processing system for browser |
CN106792157A (en) * | 2016-12-13 | 2017-05-31 | 广东中星电子有限公司 | Video-based information labeling and display method and system |
CN110909037A (en) * | 2019-10-09 | 2020-03-24 | 中国人民解放军战略支援部队信息工程大学 | Frequent trajectory pattern mining method and device |
CN110909037B (en) * | 2019-10-09 | 2024-02-13 | 中国人民解放军战略支援部队信息工程大学 | Frequent trajectory pattern mining method and device |
Also Published As
Publication number | Publication date |
---|---|
CN100471255C (en) | 2009-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1946163A (en) | Method for making and playing interactive video frequency with heat spot zone | |
TWI768308B (en) | Methods and apparatus for track derivation for immersive media data tracks | |
US10210907B2 (en) | Systems and methods for adding content to video/multimedia based on metadata | |
TWI749483B (en) | Methods and apparatus for signaling spatial relationships for point cloud multimedia data tracks | |
CN105745938B (en) | Multi-view interactive audio and video playback |
CN101601286B (en) | Concurrent presentation of video segments enabling rapid video file comprehension | |
TW202106001A (en) | Methods and apparatus for spatial grouping and coordinate signaling for immersive media data tracks | |
JP2012114909A (en) | Method and system of encoding and decoding media content | |
CN101047795A (en) | Moving image division apparatus, caption extraction apparatus, method and program | |
CN1822651A (en) | Method for dynamically forming caption image data and caption data flow | |
CN101193250A (en) | System, method and medium generating frame information for moving images | |
CN102111672A (en) | Method, system and terminal for viewing panoramic images on digital television | |
Saeghe et al. | Augmented reality and television: Dimensions and themes | |
CN105245817A (en) | Video playback method and video playback device | |
CN1745424A (en) | Information storage medium storing a scenario, and apparatus and method for recording the scenario |
CN1357198A (en) | Systems and methods for enhanced visual presentation using interactive video streams | |
CN1909641A (en) | Edit system and method for multimedia synchronous broadcast | |
Chen et al. | Simplified carriage of MPEG immersive video in HEVC bitstream | |
CN115661420A (en) | Design and implementation method of POLY VR editor system | |
Van Rijsselbergen et al. | Semantic Mastering: content adaptation in the creative drama production workflow | |
CN102724425B (en) | Method for broadcasting a teletext template |
Jamil et al. | Overview of JPEG Snack: A Novel International Standard for the Snack Culture | |
Chen et al. | Application of VR virtual reality in film and television post-production | |
CN111726478A (en) | Method for making film and television | |
CN1529514A (en) | Layering coding and decoding method for video signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2009-03-18; Termination date: 2012-10-25 |