EP1366624A2 - System and method for providing an omnimedia package - Google Patents

System and method for providing an omnimedia package

Info

Publication number
EP1366624A2
Authority
EP
European Patent Office
Prior art keywords
content
metadata
audio
video
framework
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP01990739A
Other languages
German (de)
French (fr)
Inventor
Steven Reynolds
Joel Hassell
Thomas Lemmons
Ian Zenoni
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OpenTV Inc
Original Assignee
Intellocity USA Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intellocity USA Inc filed Critical Intellocity USA Inc
Priority claimed from PCT/US2001/044510 external-priority patent/WO2002043396A2/en
Publication of EP1366624A2 publication Critical patent/EP1366624A2/en
Ceased legal-status Critical Current

Definitions

  • the present invention relates to enhanced multimedia television and more particularly to a system and method for organization, combination, transmission and reception of media from a range of sources wherein the media may comprise a plurality of video streams, audio streams, and other information, such as may be accessed via the Internet.
  • VBI (vertical blanking interval)
  • the present invention overcomes the disadvantages and limitations of the prior art by providing a system and method that allows a transmission system to organize and transmit a related set of media and for a display platform to organize and render related media information in a manner that reflects the available media and the capabilities of the platform.
  • a framework definition identifies a set of associated content (media) for a broadcast program.
  • the present invention compares the format of the media with a transmission format and converts media of other formats to that of the transmission format.
  • An omnimenu describes the content. Media content and the omnimenu are combined into a broadcast stream and transmitted.
  • the present invention may therefore comprise a method for producing a broadcast stream that contains audio content, video content, and metadata content comprising: creating a framework definition that identifies the audio content, the video content and the metadata content associated with a broadcast and attributes thereof, comparing the audio format of the audio content with an audio transmission format and converting the audio content to the audio transmission format if the audio format and the audio transmission format differ, comparing the video format of the video content with a video transmission format and converting the video content to the video transmission format if the video format and the video transmission format differ, comparing the metadata format of the metadata content with a metadata transmission format and converting the metadata content to the metadata transmission format if the metadata format and the metadata transmission format differ, creating a menu describing the audio content, the video content, and the metadata content, combining the audio content, the video content, and the metadata content into a broadcast stream, transmitting the menu; and transmitting the broadcast stream.
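The claimed production steps (compare each content format against the transmission format, convert on mismatch, build a menu, combine, transmit) can be sketched as follows. All function and field names here are illustrative assumptions, not taken from the patent, and the "conversion" is a stand-in for a real codec or format converter.

```python
def produce_broadcast_stream(framework_def, tx_formats):
    """Sketch of the claimed method; structures are illustrative assumptions."""
    converted = {}
    for kind in ("audio", "video", "metadata"):
        content = framework_def[kind]
        # Convert only when the source format and the transmission format differ.
        if content["format"] != tx_formats[kind]:
            # Stand-in for a real conversion step (re-encoding, transcoding, etc.).
            content = dict(content, format=tx_formats[kind])
        converted[kind] = content
    # The menu (omnimenu) describes the content carried in the stream.
    menu = {kind: c["name"] for kind, c in converted.items()}
    # Combine the three content types into a single broadcast stream.
    stream = [converted["audio"], converted["video"], converted["metadata"]]
    return menu, stream
```

In this sketch the menu and the stream are returned separately, mirroring the claim's separate transmitting steps for the menu and the broadcast stream.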
  • a framework controller may utilize the framework definition to access media content, to process and format the media content and to control packaging and multiplexing of the content for broadcast.
  • the present invention may further comprise a system for combining multiple media and metadata streams having content into a framework for distribution of the content to a viewer comprising: at least one video source having an output, at least one audio source having an output, at least one metadata source having an output, a framework controller that receives the video source, audio source, and metadata source and produces an omnimedia package integrating the outputs into a framework, a framework definition module that interfaces with the framework controller and defines all content to be used in the omnimedia package, a delivery module that receives the omnimedia package from the framework controller and transmits the omnimedia package to a receiver, and a receiver that receives and distributes the content of the omnimedia package to display devices and audio outputs, the receiver further coupled to at least one user input device that provides interactivity between the viewer and the receiver.
  • the present invention may utilize pre-loaded content that is transferred to a receiver prior to the broadcast of a media stream with which the preloaded content is associated.
  • Pre-loaded content allows voluminous and complex content to be employed during a broadcast without requiring bandwidth to transfer the pre-loaded content at the time of broadcast and without latencies that may be incurred if the pre-loaded content were transferred at the time of broadcast.
  • the present invention may additionally comprise a method for rendering portions of a broadcast stream that contains audio content, video content, and metadata content and a menu indicating the contents of the audio content, video content, and metadata content comprising: transferring preloaded metadata associated with the broadcast stream to a receiver prior to transmission of the broadcast stream, receiving the broadcast stream, displaying the menu wherein the menu includes an icon representing the preloaded metadata, receiving a user input, and rendering the preloaded metadata in response to the user input.
  • the present invention allows a viewer to select among a plurality of audio, video and metadata sources to obtain a television presentation tailored to the viewer's preferences, offering increased viewing satisfaction.
  • the present invention may be employed utilizing the capabilities of emerging digital transmission formats to provide an enhanced viewing experience and to increase audience size, allowing increased advertising revenue.
  • Figure 1 is a high-level block diagram of the present invention.
  • Figure 2 depicts components of framework controller 108.
  • Figure 3 depicts a receiver that may be employed with the present invention.
  • Figure 4 is a flow chart 400 of the operation of the metadata processor 214.
  • Figure 5 is a flow chart 500 illustrating the operation of omnimenu generator 224.
  • a framework definition provides organization of media for transmission, and for rendering of media by a display platform that may comprise a television, interactive television, set- top box, satellite receiver, personal computer or other equipment operable to receive data across a network or airwave and process data according to the method of the present invention.
  • the framework definition allows a media stream or multiple media streams to be packaged together with other content into a single distinct program, hereinafter referred to as an omnimedia package.
  • a content provider may employ the framework definition to specify and deliver a package of related content, encapsulating the information necessary to build, format, transmit and display the content.
  • Content may comprise video, audio, and data information that may be streamed or cached.
  • a wide range of information types and formats may be employed, as illustrated by Table 1.
  • Table 1 lists information types and formats that may be employed with the present invention.
  • the present invention may be employed with broadcast systems that utilize terrestrial, cable, satellite, VDSL, or other transport methods.
  • the present invention may also be employed with systems in which content is requested via a reverse path and with systems that support local or client side storage of pre-delivered content.
  • Digital format transmission systems are well suited to the method of the present invention, but analog systems with VBI data and Internet connection may also be employed.
  • FIG. 1 is a high-level block diagram of the present invention.
  • a plurality of audio/visual and metadata sources, which provide the audio, video, metadata and other data services, are combined to produce an omnimedia package.
  • the term omnimedia describes the inclusive nature of the content being delivered by the present invention, as contrasted with the usual audio/video packaging associated with television events.
  • the omnimedia package represents all of the synchronous and asynchronous components from video source(s) 102, audio source(s) 104, and metadata sources 106, which may comprise and may be associated with a primary broadcast.
  • Video source(s) 102 may comprise standard television broadcast channels (analog or digital modulation and encoding), any form of analog video (for example, Beta and VHS format tapes), any form of stored digital video (for example, Laserdisc, DVD or any server-based digital video, such as may be used by video-on-demand systems) or may be any form of packet-based streaming media. This may include alternate camera sources, supporting video, alternate advertising video, localized or regionalized video. Audio source(s) 104 may comprise soundtracks that accompany one or more video source(s) 102, or may comprise an independent stream such as a radio signal or other audio-only broadcast. Framework definition 110 defines the relationships between different content streams.
  • Metadata source(s) 106 may contain data packaged with respect to an event or associated with a general subject and may include executable code, scripts, graphics, logos, or general data. Metadata may be time synchronized, content related, or ancillary to an event.
  • In Example One, an omnimedia program is provided for the French Open, a tennis tournament featuring several simultaneous events.
  • the primary broadcast may be from the main (or center) court with accompanying broadcasts from other venues.
  • Example Two shows the components of an omnimedia package relating to the fourth day of the Masters, a major golf tournament.
  • Example Three shows the components of an omnimedia package relating to a retransmission of the H. G. Wells "War of the Worlds" radio broadcast.
  • Framework definition 110 describes all content used in an omnimedia package.
  • Framework controller 108 employs framework definition 110 to create an omnimedia package.
  • Framework definition 110 defines the primary video and audio to be used by non-omnimedia aware platforms, plus initial audio and video for omnimedia package streams.
  • Framework definition 110 is also employed to generate a main menu/omnimenu.
  • An omnimenu is an interactive user interface that presents a menu of options available with an omnimedia package. Options may comprise different video streams, different television layouts, different audio sources, and other interactive data streams.
  • the definition of each stream may include a start/stop time to enable framework controller 108 to switch streams when creating the omnimedia package.
  • An omnimenu may be presented in a range of formats and may be customized by the system operator.
  • an omnimenu may scale current video into an upper right-hand corner and display to the left a list of options: alternate video sources, alternate audio sources, and links to other data.
  • Another embodiment of an omnimenu may employ a pre-defined TV layout comprising a ¾-screen primary video format and several secondary thumbnail videos and data indicators across the bottom of the screen.
  • Framework definition 110 may be created at a production facility or other site and may employ automated and manual methods of associating media.
  • the framework definition 110 may reflect limitations and constraints of source delivery systems, transmission systems, and display platforms. Such limitations may include available bandwidth, number of channels tuned by a display platform, cache size, supported formats or other parameters.
  • a framework definition may specify that images must be in GIF format and that HTML code must support version 3.2, for example.
  • a framework definition record may be produced that may comprise information defining the type, size, nature, pricing, scheduling, and resource requirements for each media offered. Table 2 lists a number of components that may comprise a framework definition record.
  • An omnimenu associated with the omnimedia package may be created by the framework controller.
  • a stream record is provided for each media source (audio, video, or metadata) to be offered in the omnimenu.
  • Each framework definition stream record may comprise:
  • stream IDs may be used by the set top box to determine if the stream is available, i.e., an audio stream may be dependent on the video stream so that only devices that can view/use the video stream can access the audio stream. Also, the stream IDs can be used by head end equipment that is trying to optimize bandwidth and may want to separate different streams on different transponders for the same event. An example of a framework definition is shown below:
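As an illustrative sketch only (the field names are assumptions based on the components listed above, not the patent's own example), a stream record and the dependency check it enables might be modeled as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StreamRecord:
    """Hypothetical stream record; fields are assumptions based on the
    components described above (stream ID, media type, dependency)."""
    stream_id: str
    media_type: str                   # "audio", "video", or "metadata"
    depends_on: Optional[str] = None  # e.g. an audio stream tied to a video stream

def usable_streams(records, supported_ids):
    """A set top box can use a stream only if it supports the stream itself
    and also supports any stream that the stream depends on."""
    return {
        r.stream_id
        for r in records
        if r.stream_id in supported_ids
        and (r.depends_on is None or r.depends_on in supported_ids)
    }
```

The `depends_on` field captures the rule above: an audio stream tied to a video stream is offered only to devices that can also use that video stream.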
  • the framework controller 108 is operable to format and organize media components into a stream or streams that may be delivered by the delivery system 112.
  • Delivery system 112 may comprise a headend system and transmission apparatus such as employed by cable, satellite, terrestrial, and other broadcast systems.
  • the organization and establishment of the stream(s) employs parameters provided in framework definition 110.
  • the stream(s) is (are) delivered to receiver 114 that processes the stream(s) and provides output to display device(s) 116, audio output(s) 118, and may send and receive signals to/from user input device(s) 120.
  • Figure 2 depicts components of framework controller 108.
  • Framework controller 108 includes framework control logic 216 that is operable to retrieve and interpret framework definition 110, and employ the parameters thereof to control operation of preprocessors 210, 212, and 214, plus packagers 218, 220, and 222 to format and encapsulate information (video, audio and metadata) for multiplexer 226.
  • Video preprocessor 210 and audio preprocessor 212 access media streams or stored data as specified by framework definition 110 and perform processing to prepare the media for the associated packager. Such processing may include rate adaptation, re-encoding, transcoding, format conversions, re-sampling, frame decimation, or other techniques and methods to generate a format and data rate suitable for packagers 218, 220, and 222 as may be specified by framework definition 110.
  • Processing may include MPEG encoding of analog video as may be supported by encoder equipment from Divicom Inc. of Milpitas, CA, which is a wholly owned subsidiary of C-Cube Microsystems, Inc.
  • framework controller 108 may provide a sequence of instructions to the encoder, selecting channels and controlling encoding.
  • Metadata preprocessor 214 accesses metadata elements specified by framework definition 110 and performs processing to prepare them for the metadata packager 222. Such processing may include script conversions, script generation, and image format conversions, for example. In operation, graphical metadata may be sent to metadata preprocessor 214 in a computer graphics format (Photoshop, for example) that then may be converted to a format that the display platform recognizes (GIF, for example).
  • FIG. 4 is a flow chart 400 of the operation of the metadata processor 214.
  • an image file is accessed by metadata preprocessor 214.
  • the metadata preprocessor accesses metadata and places it in a predetermined format as may be specified by framework definition 110.
  • Metadata may comprise graphics data, sound data, HTML data, video data, or any other type of data.
  • Metadata preprocessor 214 processes the data and outputs the processed data in a transmission/multiplexer format. As shown in figure 4, the flow diagram allows the image file to be converted and output by the metadata processor 214 in real time.
  • the graphic file conversion definitions are loaded into the metadata preprocessor 214 to perform the conversion of the image file.
  • the image file 402 is converted into the graphic file in accordance with the definitions.
  • the metadata preprocessor 214 outputs the converted image file to the metadata packager 222.
  • a high-speed processor may perform these functions in real time using definitions that can be quickly downloaded from a high-speed storage device.
  • Custom designed state machines may also be employed for format conversion.
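The figure 4 flow (load conversion definitions, convert the image file, hand the result to the packager) can be sketched as follows. The conversion-definition table and its entries are hypothetical stand-ins for the real conversion routines or hardware the patent contemplates.

```python
def preprocess_image(image, conversion_defs, target_format):
    """Sketch of the figure 4 flow: look up the loaded conversion
    definition for this source/target format pair, convert the image
    data, and return the result for hand-off to the metadata packager.
    The conversion table is a hypothetical stand-in for real routines."""
    convert = conversion_defs[(image["format"], target_format)]
    # Return a new record in the display platform's format.
    return dict(image, format=target_format, data=convert(image["data"]))
```

A real-time implementation, as the text notes, would back the same lookup with a high-speed processor or a custom state machine rather than a Python dictionary.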
  • Metadata preprocessor 214 may also be employed to convert the format of HTML metadata.
  • HTML metadata may be sent to metadata processor 214 in HTML 4.0 format that then may be converted to HTML 3.2 format such as may be required by a display platform.
  • the framework controller generates commands that are sent to metadata processor 214 that identify the metadata and specify the output format of the metadata.
  • video packager 218 and audio packager 220 are subsystems that package video and audio assets into a format that is compatible with the delivery system 112, including for example, packetization, serialization, rate control, null-packet insertion, and other functions required to prepare the video for compliant transport via MPEG, DVB-C/S/T, PacketVideo, or other transport formats.
  • General Instrument, now owned by Motorola
  • the framework controller generates commands that are sent to the QAM modulator specifying the frequency (channel) and PID (packet identifier) for the video based upon the framework definition that was provided for the omnimedia package.
  • Metadata packager 222 is a subsystem that performs packaging on all metadata elements that are to be included in the omnimedia package. Metadata packaging may comprise rate control, packetization, serialization, and synchronization to video and/or audio streams. The metadata is also prepared for transport across any compliant transport mechanism (MPEG, DVB-C/S/T, PacketCable, DVB-MHP, etc.). A commercially available product for performing these functions is the TES3 that is provided by Norpak Corp., Kanata, Ontario. The TES3 encoder encodes metadata into an NTSC signal with NABTS encoding. NABTS is the protocol that allows metadata to be sent in the VBI (vertical blanking interval) of an NTSC signal. The framework controller commands the TES3 encoder as to what lines of the VBI are employed for transmitting metadata.
  • omnimenu generator 224 may be implemented as a rules-based subsystem employing framework definition 208 to generate a user interface that presents program options and allows a viewer to select from these options. Rules may be employed to generate an omnimenu template in HTML page format.
  • the HTML page may comprise a full screen image containing buttons (active icons) that may be selected to activate a particular media stream.
  • Omnimenu generator 224 employs framework definition 208 to identify available streams and associates each stream with a button.
  • FIG. 5 is a flow chart 500 illustrating the operation of omnimenu generator 224.
  • omnimenu generator 224 accesses the framework definition 110.
  • PIDS (program IDs)
  • the omnimenu template described above is loaded into the omnimenu generator 224 at step 506 of figure 5.
  • each of the PIDS that have been extracted from the framework definition are assigned to a button such that each of the buttons is labeled with the PID name.
  • selection functions are assigned to each of the buttons in accordance with the labels that have been assigned to those buttons. In this manner, the video can be changed in accordance with the labeled functions of each of the buttons of the template.
  • the template may be assigned a company logo. In this manner, the content may be properly branded to correspond to the source of that content.
  • the omnimenu generator 224 saves or exports the new omnimenu. The omnimenu may be exported to the package multiplexer 226.
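The figure 5 steps (extract PIDs from the framework definition, label a button for each, attach a selection function, brand the template with a logo, export the menu) can be sketched as follows; the data shapes are illustrative assumptions, not the patent's HTML template.

```python
def generate_omnimenu(framework_def, template, logo):
    """Sketch of the figure 5 flow. Each PID from the framework
    definition becomes a labeled button whose selection function tunes
    the receiver to that stream; the template is branded with a logo."""
    buttons = []
    for pid in framework_def["pids"]:
        buttons.append({
            "label": pid["name"],
            # Default argument binds each pid at loop time, so every
            # button tunes to its own stream when selected.
            "on_select": lambda p=pid: ("tune", p["id"]),
        })
    return {"template": template, "logo": logo, "buttons": buttons}
```

An HTML implementation, as described above, would render each button as an active icon on a full-screen page rather than a dictionary entry.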
  • package multiplexer 226 combines the package elements into a stream or set of streams, in preparation for transmission.
  • the streams are coupled through delivery system interface(s) 228 to delivery system 112 and the physical and logical interconnects to the transmission system.
  • a Divicom, Inc. (Milpitas, CA) MUX is an example of a package multiplexer, handling the different frequencies and PIDs.
  • delivery system 112 then transports the streams to the receiver 114.
  • Delivery system 112 may be analog or digital. In general, system delivery of content is as open as needed for any particular package and topology. A simple movie with no added content may be delivered on a single broadcast multiplex, while a package for the Super Bowl would contain many different delivery mechanisms. Some may be available on a particular receiver and not on others. The omnimenu may provide a directory of available media for a broadcast event.
  • the omnimenu is transmitted to a plurality of receivers 114.
  • Receivers 114 may vary in capability and may include upstream (reverse path) communication from the receiver to the headend system, or may use other return systems such as an Internet connection, for example.
  • Receivers 114 that do not include upstream communications may employ the omnimenu to select audio, video and metadata information contained in a broadcast stream.
  • the bandwidth of an analog NTSC channel may be employed to carry several digital video streams, audio streams, and metadata.
  • the omnimenu includes tuning or packet information to identify streams that may be accessed by the receiver.
  • the receiver includes a software program operable to display the omnimenu and operable to tune and render selected streams.
  • receiver 114 supports upstream communications.
  • the headend system in response to upstream communications, may provide on-demand programming and data delivery.
  • the omnimenu initiates upstream communication in response to user selection of a displayed media button (icon).
  • the headend system may supply the requested stream in a broadcast channel, or if already broadcast, may provide tuning or packet decode information to the receiver.
  • the requested stream may be employed to update the framework definition 110.
  • Framework controller 108 uses the framework definition 110 to allocate bandwidth and PID information.
  • the framework controller provides the frequency, PID and bandwidth information from the framework definition and uses it to send/control video and audio packager 218/220.
  • Video packager 218 and audio packager 220 (figure 2) then allocate bandwidth for each of the respective video streams.
  • information from the framework controller 108 is encoded into the omnimenu so that the receiver will be able to tune to and decode the streams.
  • the framework controller 108 may also include URLs for demand data or streaming media. Locations based on alternate tuner systems may also be included. For example, a radio station frequency having local commentary may be simulcast with the video.
  • some data and/or programming may be loaded in advance into storage built into the receiver such that the content is available locally for viewing during the airing of the primary content package. For example, all NFL player statistics may be preloaded over a trickle feed before the Super Bowl. In this manner, an interactive fantasy football application may retrieve all needed statistics during the game in order to let the viewer play a fantasy game during the airing of the primary program.
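Pre-loading as described above amounts to a local store that is filled over a trickle feed before the broadcast and read during it. A minimal sketch, with method names that are assumptions rather than the patent's terminology:

```python
class ReceiverCache:
    """Sketch of pre-loaded content handling: data trickle-fed to the
    receiver before the broadcast is stored locally, so an application
    (e.g. fantasy football statistics) can read it during the program
    without consuming broadcast bandwidth or incurring transfer latency."""
    def __init__(self):
        self._store = {}

    def trickle_feed(self, key, data):
        """Called before the broadcast to pre-load content."""
        self._store[key] = data

    def lookup(self, key):
        """Called during the broadcast; returns None for missing content."""
        return self._store.get(key)
```

The benefit named in the text follows directly: `lookup` is a local read, so voluminous content costs no broadcast bandwidth at air time.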
  • FIG. 3 depicts a receiver that may be employed in accordance with the present invention.
  • Receiver 302 is used by the system end-user or viewer for use in viewing and controlling the omnimedia package.
  • the receiver 302 may comprise a decoder 304, parser 306, media controller 308, receiver cache 310 and receiver controller 312.
  • Decoder 304 may extract framework information from the delivery stream omnimenu. Information may be encoded in the vertical blanking interval.
  • Decoder 304 may comprise a NABTS VBI decoder from ATI Technologies, Inc., of Thornhill, Ontario.
  • the VBI decoder may extract data from the VBI and present it to framework parser 306.
  • the data may comprise an XML format file of the omnimenu.
  • Parser 306 extracts elements that comprise the framework.
  • Framework parser 306 receives framework data from decoder 304 and prepares the data for use by the receiver controller 312.
  • the receiver controller 312 may comprise a data interpretation module.
  • Media controller 308 selects the media streams (audio, video and/or data) that are described by the framework definition.
  • the media controller 308 comprises a tuner and video/audio decoder.
  • the media controller 308 receives stream data from decoder 304 and control signals from receiver controller 312.
  • Media controller 308 selects the proper media PIDS as needed and feeds them to display device(s) 116, audio device(s) 118, or user input device(s) 120.
  • Receiver controller 312 serves as the central processor for the other elements of receiver 302.
  • the receiver controller 312 may compare the capabilities of receiver 302 with the stream type to determine which streams may be received and used by receiver 302. Receiver controller 312 sends control signals to the other units in the system.
  • the functions of receiver controller 312 may be performed by software resident in the control CPU of the receiver 302. Functions include receiving data from framework parser 306 and signaling media controller 308. For example, selection of omnimenu items may alter the framework such that the receiver controller 312 receives information from framework parser 306 and signals media controller 308 to render the selected media stream.
  • Receiver cache 310 may be employed to store the framework definition and any parameters, code objects, data items or other software to control omnimenu display and response to user input.
  • Receiver 302 receives data associated with a content package and employs the data to access the package contents.
  • Advanced receivers may check system capabilities to determine which pieces of content may be rendered.
  • a digital set top box may be able to decode MPEG video and audio.
  • a digital set top box may also be able to decode MP3 audio, but not be able to decode HDTV signals.
  • a radio type receiver may only be able to decode audio formats and, possibly, only formats not related to a video signal.
  • the receiver may also be able to decode various data types from data services. These may take the form of application code that would be executed on the receiver if compatible.
  • the receiver of the present invention has the internal compatibility to be able to receive the omnimedia packaged signal and decode the parts that are relevant to its capabilities to give the fullest experience it can, regardless of the format.
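The capability check described above (a set top box that decodes MPEG video and MP3 audio but not HDTV keeps only the streams it can render) can be sketched as a simple filter; the format labels are illustrative assumptions.

```python
def decodable_streams(receiver_formats, package_streams):
    """Sketch of the receiver-side capability check: keep only the
    streams whose format this receiver can decode, so each device
    renders the fullest subset of the omnimedia package it supports."""
    return [s for s in package_streams if s["format"] in receiver_formats]
```

A radio-type receiver would pass an audio-only capability set to the same filter, and an advanced receiver a larger one, with no change to the package itself.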
  • display device(s) 116 provide(s) the visual presentation of the framework to the end-user/viewer, including video, streaming media, a graphical user interface, static graphics, animated graphics, text, logos and other displayable assets.
  • audio output(s) 118 present(s) the audio portions of the framework to the end-user, including primary audio, possible secondary audio tracks, streaming media, audio feedback from the receiver 114, and other audio assets.
  • User input device(s) 120 allow(s) the user to control the receiver and interact with the framework, providing the ability to choose components of the framework, select links within the framework, navigate the graphical user interface of the receiver, or perform other interactions with the receiver.

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Disclosed is a system that permits a variable number of disparate streams of data to be packaged together as content into a single distinct program referred to as an omnimedia package. A framework definition may be specified and created for the omnimedia package to allow a client set top box to decode the information and provide access to streams comprising video, audio and metadata information. The framework definition may be employed by a broadcast system to create a broadcast stream. A mechanism is described that permits a content provider to define a framework for delivering a package of related content. The framework definition encapsulates information necessary to build, format, transmit and display the disparate content streams. Data may be downloaded to a receiver prior to the broadcast of an associated program. The present invention may be implemented on terrestrial, cable, satellite, VDSL and other transport systems, including those that support upstream communication.

Description

SYSTEM AND METHOD FOR PROVIDING AN OMNIMEDIA PACKAGE
Cross Reference to Related Applications
This application claims the benefit of U.S. patent application number 60/253,168 entitled "OmniMedia Package", filed November 27, 2000 by Steven Reynolds, which is also specifically incorporated herein by reference for all that it discloses and teaches.
Background of the invention
a. Field of the Invention
The present invention relates to enhanced multimedia television and more particularly to a system and method for organization, combination, transmission and reception of media from a range of sources wherein the media may comprise a plurality of video streams, audio streams, and other information, such as may be accessed via the Internet.
b. Description of the Background
The format of television programs often conforms to NTSC, PAL, or SECAM standards wherein a predefined bandwidth, i.e. channel, is employed to carry a single television program. Additional information that may be provided with a program has often been encoded into the vertical blanking interval (VBI), such as closed captioning or alternate language support, for example. As television broadcast formats move to digital transmission, many programs continue to be presented in a manner similar to 'channel' based television, comprising one video stream, a primary audio stream, and possibly an alternate audio stream.
Continued expansion of the Internet and high bandwidth networks provides access to an increasing volume of information. Adoption of digital transmission formats allows media such as audio, video, and metadata content, to be associated, combined, and presented to provide viewers with a richer and more diverse media experience. Methods such as MPEG7 provide for relating content information, but do not provide a method by which content of various formats may be grouped, transmitted and displayed. Therefore a new method of organizing, transmitting, and presenting media from multiple sources is needed.
Summary of the Invention
The present invention overcomes the disadvantages and limitations of the prior art by providing a system and method that allows a transmission system to organize and transmit a related set of media and for a display platform to organize and render related media information in a manner that reflects the available media and the capabilities of the platform. A framework definition identifies a set of associated content (media) for a broadcast program. The present invention compares the format of the media with a transmission format and converts media of other formats to that of the transmission format. An omnimenu describes the content. Media content and the omnimenu are combined into a broadcast stream and transmitted.
The present invention may therefore comprise a method for producing a broadcast stream that contains audio content, video content, and metadata content comprising: creating a framework definition that identifies the audio content, the video content and the metadata content associated with a broadcast and attributes thereof; comparing the audio format of the audio content with an audio transmission format and converting the audio content to the audio transmission format if the audio format and the audio transmission format differ; comparing the video format of the video content with a video transmission format and converting the video content to the video transmission format if the video format and the video transmission format differ; comparing the metadata format of the metadata content with a metadata transmission format and converting the metadata content to the metadata transmission format if the metadata format and the metadata transmission format differ; creating a menu describing the audio content, the video content, and the metadata content; combining the audio content, the video content, and the metadata content into a broadcast stream; transmitting the menu; and transmitting the broadcast stream. A framework controller may utilize the framework definition to access media content, to process and format the media content, and to control packaging and multiplexing of the content for broadcast.
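By way of illustration only, the compare-and-convert step described above may be sketched as follows; the format names and the converter stub are hypothetical and do not form part of the claimed method.

```python
# Illustrative sketch of the compare-and-convert step applied to each
# content type. Format names and the convert() stub are hypothetical.

def normalize(content: bytes, content_format: str, transmission_format: str,
              convert) -> bytes:
    """Convert content to the transmission format only if the formats differ."""
    if content_format == transmission_format:
        return content  # already in the transmission format; pass through
    return convert(content, content_format, transmission_format)

# Trivial stand-in converter that just relabels the payload's format tag.
def convert(content, src, dst):
    return content.replace(src.encode(), dst.encode())

audio = normalize(b"pcm:raw-samples", "pcm", "mp2", convert)    # converted
video = normalize(b"mpeg2:frames", "mpeg2", "mpeg2", convert)   # unchanged
```

The same comparison is applied independently to audio, video, and metadata content before the streams are combined.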
The present invention may further comprise a system for combining multiple media and metadata streams having content into a framework for distribution of the content to a viewer comprising: at least one video source having an output; at least one audio source having an output; at least one metadata source having an output; a framework controller that receives the video source, audio source, and metadata source and produces an omnimedia package integrating the outputs into a framework; a framework definition module that interfaces with the framework controller and defines all content to be used in the omnimedia package; a delivery module that receives the omnimedia package from the framework controller and transmits the omnimedia package to a receiver; and a receiver that receives and distributes the content of the omnimedia package to display devices and audio outputs, the receiver further coupled to at least one user input device that provides interactivity between the viewer and the receiver.
The present invention may utilize pre-loaded content that is transferred to a receiver prior to the broadcast of a media stream with which the preloaded content is associated. Pre-loaded content allows voluminous and complex content to be employed during a broadcast without requiring bandwidth to transfer the pre-loaded content at the time of broadcast and without latencies that may be incurred if the pre-loaded content were transferred at the time of broadcast.
The present invention may additionally comprise a method for rendering portions of a broadcast stream that contains audio content, video content, and metadata content and a menu indicating the contents of the audio content, video content, and metadata content comprising: transferring preloaded metadata associated with the broadcast stream to a receiver prior to transmission of the broadcast stream; receiving the broadcast stream; displaying the menu wherein the menu includes an icon representing the preloaded metadata; receiving a user input; and rendering the preloaded metadata in response to the user input. Advantageously, the present invention allows a viewer to select among a plurality of audio, video and metadata sources to obtain a television presentation tailored to the viewer's preferences, offering increased viewing satisfaction. The present invention may be employed utilizing the capabilities of emerging digital transmission formats to provide an enhanced viewing experience and to increase audience size, allowing increased advertising revenue.
Description of the Figures
In the figures,
Figure 1 is a high-level block diagram of the present invention.
Figure 2 depicts components of framework controller 108.
Figure 3 depicts a receiver that may be employed with the present invention.
Figure 4 is a flow chart 400 of the operation of the metadata preprocessor 214.
Figure 5 is a flow chart 500 illustrating the operation of omnimenu generator 224.
Detailed Description of the Invention
The present invention is directed to allowing the creation of larger, more robust productions with support for interactivity and multiple video, audio, and data streams. A framework definition provides organization of media for transmission, and for rendering of media by a display platform that may comprise a television, interactive television, set-top box, satellite receiver, personal computer or other equipment operable to receive data across a network or airwave and process data according to the method of the present invention. The framework definition allows a media stream or multiple media streams to be packaged together with other content into a single distinct program, hereinafter referred to as an omnimedia package. A content provider may employ the framework definition to specify and deliver a package of related content, encapsulating the information necessary to build, format, transmit and display the content. Content may comprise video, audio, and data information that may be streamed or cached. A wide range of information types and formats may be employed as illustrated by Table 1. The present invention is not limited to any specific types of information, formats, or relationships between information. Table 1
The present invention may be employed with broadcast systems that utilize terrestrial, cable, satellite, VDSL, or other transport methods. The present invention may also be employed with systems in which content is requested via a reverse path and with systems that support local or client side storage of pre-delivered content. Digital format transmission systems are well suited to the method of the present invention, but analog systems with VBI data and Internet connection may also be employed.
Figure 1 is a high-level block diagram of the present invention. A plurality of audio/visual and metadata sources, which provide the audio, video, metadata and other data services, are combined to produce an omnimedia package. The term omnimedia describes the inclusive nature of the content being delivered by the present invention, as contrasted with the usual audio/video packaging associated with television events. Thus, for a given event, the omnimedia package represents all of the synchronous and asynchronous components from video source(s) 102, audio source(s) 104, and metadata sources 106, which may comprise and may be associated with a primary broadcast. Video source(s) 102 may comprise standard television broadcast channels (analog or digital modulation and encoding), any form of analog video (for example, Beta and VHS format tapes), any form of stored digital video (for example, Laserdisc, DVD or any server-based digital video, such as may be used by video-on-demand systems) or may be any form of packet-based streaming media. This may include alternate camera sources, supporting video, alternate advertising video, and localized or regionalized video. Audio source(s) 104 may comprise soundtracks that accompany one or more video source(s) 102, or may comprise an independent stream such as a radio signal or other audio-only broadcast. Framework definition 110 defines the relationships between different content streams. For video source(s) 102 there may be zero, one or several associated audio source(s) 104. Further, a given audio source 104 may be assigned to zero, one or more video source(s) 102. There is no requirement that any given video source 102 have an accompanying audio source 104. Audio may be encoded with a video source, such as may be practiced with packet-based streaming media.
Metadata source(s) 106 may contain data packaged with respect to an event or associated with a general subject and may include executable code, scripts, graphics, logos, or general data. Metadata may be time synchronized, content related, or ancillary to an event.
Following are three examples illustrating the types and relationships between video source(s) 102, audio source(s) 104, and metadata source(s) 106, and how these sources may be organized and associated to provide a richer media presentation. In Example One, an omnimedia program is provided for the French Open, a tennis tournament featuring several simultaneous events. The primary broadcast may be from the main (or center) court with accompanying broadcasts from other venues.
EXAMPLE ONE
Example Two shows the components of an omnimedia package relating to the fourth day of the Masters, a major golf tournament.
EXAMPLE TWO
Example Three shows the components of an omnimedia package relating to a retransmission of the H. G. Wells "War of the Worlds" radio broadcast.
EXAMPLE THREE
Framework definition 110 describes all content used in an omnimedia package.
Framework controller 108 employs framework definition 110 to create an omnimedia package. Framework definition 110 defines the primary video and audio to be used by non-omnimedia aware platforms, plus initial audio and video for omnimedia package streams. Framework definition 110 is also employed to generate a main menu/omnimenu. An omnimenu is an interactive user interface that presents a menu of options available with an omnimedia package. Options may comprise different video streams, different television layouts, different audio sources, and other interactive data streams. The definition of each stream may include a start/stop time to enable framework controller 108 to switch streams when creating the omnimedia package. An omnimenu may be presented in a range of formats and may be customized by the system operator. For example, one embodiment of an omnimenu may scale current video into an upper right-hand corner and display to the left a list of options for alternate video sources, alternate audio sources, and links to other data. Another embodiment of an omnimenu may employ a pre-defined TV layout comprising a ¾ screen primary video format and several secondary thumbnail videos and data indicators across the bottom of the screen.
Framework definition 110 may be created at a production facility or other site and may employ automated and manual methods of associating media. The framework definition 110 may reflect limitations and constraints of source delivery systems, transmission systems, and display platform limitations. Such limitations may include available bandwidth, number of channels tuned by a display platform, cache size, supported formats, or other parameters. A framework definition may specify that images must be in GIF format and that HTML code must support version 3.2, for example. For each event or presentation, a framework definition record may be produced that may comprise information defining the type, size, nature, pricing, scheduling, and resource requirements for each media offered. Table 2 lists a number of components that may comprise a framework definition record.
Table 2
Framework Definition Record Components
Version of the omnimedia package
Omnimenu associated with the omnimedia package; may be created by framework controller.
Parameters used by framework controller 108 to allocate, control and prioritize transmission of stream elements
Flags/Category, languages, feature set, including multiple ratings available (e.g. PG through R)
Parameters employed to control delivery (tuning, URL, etc.)
Price, if any: an associated cost for the omnimedia package
Time the omnimedia package expires
The number of framework definition stream records
A plurality of stream records
A stream record is provided for each media source (audio, video, or metadata) to be offered in the omnimenu. Each framework definition stream record may comprise:
• a unique ID;
• a primary Y/N selection indicator;
• the media type (audio/video/data/other);
• the bandwidth required for the media;
• a start date/time;
• the date and time the omnimedia package expires;
• a description having a paragraph or two describing the data stream (by way of example, an audio stream could include the person speaking/band playing, a video stream could include the camera angle or location, and a data stream could describe the interactive content and what it does); these are used by the framework controller 108 when building the omnimenu;
• a location PID (framework controller 108 would most likely fill this out);
• flags category, languages, feature set, including multiple ratings available (e.g. PG thru R);
• price, including an associated cost for the stream as well as an additional fee beyond the cost of the program, such that to see a certain rating version of a show or to see video instead of just hearing audio may require a premium payment;
• a unique Group ID within an Event that identifies a group of streams that are used by the framework controller 108 when building the omnimenu (note that only one stream from the group can be used at one time); and
• any Dependent Streams, each having a unique stream ID which is required to access/use the stream. In use, a stream may require another stream.
These stream IDs may be used by the set top box to determine if the stream is available, i.e., an audio stream may be dependent on the video stream so that only devices that can view/use the video stream can access the audio stream. Also, the stream IDs can be used by head end equipment that is trying to optimize bandwidth and may want to separate different streams on different transponders for the same event. An example of a framework definition is shown below:
<XML>
  <Version>4</Version>
  <Menu>Showme.html</Menu>
  <Flags>
    <Language>English</Language>
    <Rating>PG-13</Rating>
  </Flags>
  <Control> ???????? </Control>
  <Price>2.99</Price>
  <Expire>02/17/00 21:00</Expire>
  <StreamRecords>1</StreamRecords>
  <StreamRecord1>
    <ID>459812</ID>
    <Primary>True</Primary>
    <MediaType>Video</MediaType>
    <Bandwidth>256</Bandwidth>
    <Start>02/15/00 22:00</Start>
    <Expire>02/17/00 21:00</Expire>
    <Description></Description>
    <Location>PID:200</Location>
    <Flags>
      <Language>English</Language>
      <Rating>PG-13</Rating>
    </Flags>
    <Price>2.99</Price>
    <GroupID>002</GroupID>
    <Dependant>459810</Dependant>
  </StreamRecord1>
</XML>
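By way of illustration only, a framework definition record such as the example above may be parsed with a standard XML library; the field names mirror the example, and the dictionary layout is a hypothetical convenience, not part of the invention.

```python
# Sketch: extracting fields from a framework definition record like the
# example above, using Python's standard XML parser.
import xml.etree.ElementTree as ET

FRAMEWORK = """<XML>
  <Version>4</Version>
  <Menu>Showme.html</Menu>
  <Price>2.99</Price>
  <StreamRecords>1</StreamRecords>
  <StreamRecord1>
    <ID>459812</ID>
    <MediaType>Video</MediaType>
    <Bandwidth>256</Bandwidth>
    <Location>PID:200</Location>
  </StreamRecord1>
</XML>"""

root = ET.fromstring(FRAMEWORK)
version = root.findtext("Version")
streams = []
for i in range(1, int(root.findtext("StreamRecords")) + 1):
    rec = root.find(f"StreamRecord{i}")
    streams.append({
        "id": rec.findtext("ID"),
        "type": rec.findtext("MediaType"),
        "bandwidth": int(rec.findtext("Bandwidth")),   # kbps per the record
        "pid": rec.findtext("Location").split(":")[1],  # "PID:200" -> "200"
    })
```

A framework controller or receiver-side parser could use such a structure to allocate bandwidth and tune the listed PIDs.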
Employing framework definition 110, the framework controller 108 is operable to format and organize media components into a stream or streams that may be delivered by the delivery system 112. Delivery system 112 may comprise a headend system and transmission apparatus such as employed by cable, satellite, terrestrial, and other broadcast systems. The organization and establishment of the stream(s) employs parameters provided in framework definition 110. The stream(s) is (are) delivered to receiver 114 that processes the stream(s) and provides output to display device(s) 116, audio output(s) 118, and may send and receive signals to/from user input device(s) 120. Figure 2 depicts components of framework controller 108. Framework controller 108 includes framework control logic 216 that is operable to retrieve and interpret framework definition 110, and employ the parameters thereof to control the operation of preprocessors 210, 212, and 214, plus packagers 218, 220, and 222, to format and encapsulate information (video, audio and metadata) for multiplexer 226. Video preprocessor 210 and audio preprocessor 212 access media streams or stored data as specified by framework definition 110 and perform processing to prepare the media for the associated packager. Such processing may include rate adaptation, re-encoding, transcoding, format conversions, re-sampling, frame decimation, or other techniques and methods to generate a format and data rate suitable for packagers 218, 220, and 222 as may be specified by framework definition 110. Processing may include MPEG encoding of analog video as may be supported by encoder equipment from Divicom Inc. of Milpitas, CA, which is a wholly owned subsidiary of C-Cube Microsystems, Inc. When such processing is specified, framework controller 108 may provide a sequence of instructions to the encoder, selecting channels and controlling encoding.
Metadata preprocessor 214 accesses metadata elements specified by framework definition 110 and performs processing to prepare these elements for the metadata packager 222. Such processing may include script conversions, script generation, and image format conversions, for example. In operation, graphical metadata may be sent to metadata preprocessor 214 in a computer graphics format (Photoshop, for example) that then may be converted to a format that the display platform recognizes (GIF, for example).
Figure 4 is a flow chart 400 of the operation of the metadata preprocessor 214. At step 402 an image file is accessed by metadata preprocessor 214. The metadata preprocessor accesses metadata and places it in a predetermined format as may be specified by framework definition 110. Metadata may comprise graphics data, sound data, HTML data, video data, or any other type of data. Metadata preprocessor 214 processes the data and outputs the processed data in a transmission/multiplexer format. As shown in figure 4, the flow diagram allows the image file to be converted and output by the metadata preprocessor 214 in real time. At step 404 the graphic file conversion definitions are loaded into the metadata preprocessor 214 to perform the conversion of the image file. At step 406 the image file is converted into the graphic file in accordance with the definitions. At step 408 the metadata preprocessor 214 outputs the converted image file to the metadata packager 222. A high-speed processor may perform these functions in real time using definitions that can be quickly downloaded from a high-speed storage device. Custom designed state machines may also be employed for format conversion. Metadata preprocessor 214 may also be employed to convert the format of HTML metadata. HTML metadata may be sent to metadata preprocessor 214 in HTML 4.0 format that then may be converted to HTML 3.2 format such as may be required by a display platform. The framework controller generates commands that are sent to metadata preprocessor 214 that identify the metadata and specify the output format of the metadata.
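By way of illustration only, the flow of steps 402 through 408 may be sketched as follows; the conversion table and packager callback are hypothetical stand-ins for the components described above.

```python
# Sketch of flow chart 400: access an image, load a conversion definition,
# convert, and hand off to the packager. The conversion rule shown is a
# placeholder, not a real Photoshop-to-GIF transcoder.

CONVERSIONS = {
    ("psd", "gif"): lambda data: b"GIF87a" + data,  # placeholder rule
}

def preprocess_metadata(image: bytes, src_fmt: str, dst_fmt: str, packager):
    convert = CONVERSIONS[(src_fmt, dst_fmt)]   # step 404: load definition
    converted = convert(image)                  # step 406: convert image file
    packager(converted)                         # step 408: output to packager

out = []
preprocess_metadata(b"layers...", "psd", "gif", out.append)
```

In a real system the table of definitions would be downloaded from high-speed storage, or the conversion performed by a dedicated state machine, as the description notes.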
Referring again to figure 2, video packager 218 and audio packager 220 are subsystems that package video and audio assets into a format that is compatible with the delivery system 112, including for example, packetization, serialization, rate control, null-packet insertion, and other functions required to prepare the video for compliant transport via MPEG, DVB-C/S/T, PacketVideo, or other transport formats. For example, General Instruments (now owned by Motorola) produces a QAM modulator that modulates MPEG video that is encoded from video preprocessor 210 to digital video broadcast (DVB) format. The framework controller generates commands that are sent to the QAM modulator specifying the frequency (channel) and PID (packet identifier) for the video based upon the framework definition that was provided for the omnimedia package.
As also shown in figure 2, metadata packager 222 is a subsystem that performs packaging on all metadata elements that are to be included in the omnimedia package. Metadata packaging may comprise rate control, packetization, serialization, and synchronization to video and/or audio streams. The metadata is also prepared for transport across any compliant transport mechanism (MPEG, DVB-C/S/T, PacketCable, DVB-MHP, etc.). A commercially available product for performing these functions is the TES3 that is provided by Norpak Corp., Kanata, Ontario. The TES3 encoder encodes metadata into an NTSC signal with NABTS encoding. NABTS is the protocol that allows metadata to be sent in the VBI (vertical blanking interval) of an NTSC signal. The framework controller commands the TES3 encoder as to what lines of the VBI are employed for transmitting metadata.
Referring again to figure 2, omnimenu generator 224 may be implemented as a rules-based subsystem employing framework definition 208 to generate a user interface that presents program options and allows a viewer to select from these options. Rules may be employed to generate an omnimenu template in HTML page format. The HTML page may comprise a full screen image containing buttons (active icons) that may be selected to activate a particular media stream. Omnimenu generator 224 employs framework definition 208 to identify available streams and associates each stream with a button.
Figure 5 is a flow chart 500 illustrating the operation of omnimenu generator 224. At step 502, omnimenu generator 224 accesses the framework definition 110. At step 504, PIDs (program IDs) are extracted from the framework definition. The omnimenu template, described above, is loaded into the omnimenu generator 224 at step 506 of figure 5. At step 508, each of the PIDs that have been extracted from the framework definition is assigned to a button such that each of the buttons is labeled with the PID name. At step 510, selection functions are assigned to each of the buttons in accordance with the labels that have been assigned to those buttons. In this manner, the video can be changed in accordance with the labeled functions of each of the buttons of the template. At step 512 the template may be assigned a company logo. In this manner, the content may be properly branded to correspond to the source of that content. At step 514 the omnimenu generator 224 saves or exports the new omnimenu. The omnimenu may be exported to the package multiplexer 226.
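By way of illustration only, the flow of steps 502 through 514 may be sketched as follows; the dictionary shapes and the "tune" action are hypothetical conveniences, not part of the claimed method.

```python
# Sketch of flow chart 500: extract PIDs from a framework definition, bind
# each to a labeled menu button, brand the template, and export the omnimenu.

def build_omnimenu(framework: dict, template: dict, logo: str) -> dict:
    pids = [rec["pid"] for rec in framework["streams"]]        # step 504
    menu = dict(template)                                      # step 506: load template
    menu["buttons"] = [                                        # steps 508-510
        {"label": pid, "action": ("tune", pid)} for pid in pids
    ]
    menu["logo"] = logo                                        # step 512: branding
    return menu                                                # step 514: export

framework = {"streams": [{"pid": "200"}, {"pid": "201"}]}
menu = build_omnimenu(framework, {"layout": "grid"}, "acme.gif")
```

The exported structure would then be serialized (e.g. as the HTML page described above) and passed to the package multiplexer.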
As shown in figure 2, package multiplexer 226 combines the package elements into a stream or set of streams in preparation for transmission. The streams are coupled through delivery system interface(s) 228 to delivery system 112, and the physical and logical interconnects to the transmission system. A Divicom, Inc. (Milpitas, CA) MUX, handling the different frequencies and PIDs, is an example of a package multiplexer.
As indicated in figure 1, delivery system 112 then transports the streams to the receiver 114. Delivery system 112 may be analog or digital. In general, system delivery of content is as open as needed for any particular package and topology. A simple movie with no added content may be delivered on a single broadcast multiplex, while a package for the Super Bowl would contain many different delivery mechanisms. Some may be available on a particular receiver and not on others. The omnimenu may provide a directory of available media for a broadcast event.
The omnimenu is transmitted to a plurality of receivers 114. Receivers 114 may vary in capability and may include upstream (reverse path) communication from the receiver to the headend system, or may use other return systems such as an Internet connection, for example. Receivers 114 that do not include upstream communications may employ the omnimenu to select audio, video and metadata information contained in a broadcast stream. The bandwidth of an analog NTSC channel may be employed to carry several digital video streams, audio streams, and metadata. The omnimenu includes tuning or packet information to identify streams that may be accessed by the receiver. The receiver includes a software program operable to display the omnimenu and operable to tune and render selected streams.
In another embodiment of the present invention, receiver 114 supports upstream communications. The headend system, in response to upstream communications, may provide on-demand programming and data delivery. The omnimenu initiates upstream communication in response to user selection of a displayed media button (icon). The headend system may supply the requested stream in a broadcast channel, or if already broadcast, may provide tuning or packet decode information to the receiver. The requested stream may be employed to update the framework definition 110. Framework controller 108 uses the framework definition 110 to allocate bandwidth and PID information. The framework controller provides the frequency, PID and bandwidth information from the framework definition and uses it to send/control video and audio packagers 218/220. Video packager 218 and audio packager 220 (figure 2) then allocate bandwidth for each of the respective video streams.
As also shown in figure 1, information from the framework controller 108 is encoded into the omnimenu so that the receiver will be able to tune to and decode the streams. The framework controller 108 may also include URLs for demand data or streaming media. Locations based on alternate tuner systems may also be included. For example, a radio station frequency having local commentary may be simulcast with the video. Depending on receiver 114 capability, some data and/or programming may be loaded in advance into storage built into the receiver such that the content is available locally for viewing during the airing of the primary content package. For example, all NFL player statistics may be preloaded over a trickle feed before the Super Bowl. In this manner, an interactive fantasy football application may retrieve all needed statistics during the game in order to let the viewer play a fantasy game during the airing of the primary program.
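By way of illustration only, the pre-loading of content into receiver storage may be sketched as follows; the cache interface and keys are hypothetical.

```python
# Sketch: trickle-feed data into a receiver cache before the broadcast,
# then serve lookups locally during the program, avoiding broadcast-time
# bandwidth and latency.

class ReceiverCache:
    def __init__(self):
        self._store = {}

    def preload(self, key, value):
        # trickle feed prior to the broadcast
        self._store[key] = value

    def lookup(self, key):
        # local access during the broadcast; None if never preloaded
        return self._store.get(key)

cache = ReceiverCache()
cache.preload("player:12", {"name": "Q. Back", "yards": 3412})
stats = cache.lookup("player:12")
```

An interactive application running during the primary program would read from this local store rather than requesting data over the broadcast channel.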
Figure 3 depicts a receiver that may be employed in accordance with the present invention. Receiver 302 is used by the system end-user or viewer for viewing and controlling the omnimedia package. The receiver 302 may comprise a decoder 304, parser 306, media controller 308, receiver cache 310 and receiver controller 312. Decoder 304 may extract framework information from the delivery stream omnimenu. Information may be encoded in the vertical blanking interval. Decoder 304 may comprise a NABTS VBI decoder from ATI Technologies, Inc., of Thornhill, Ontario. The VBI decoder may extract data from the VBI and present it to framework parser 306. The data may comprise an XML format file of the omnimenu. Parser 306 extracts elements that comprise the framework. Framework parser 306 receives framework data from decoder 304 and prepares the data for use by the receiver controller 312. The receiver controller 312 may comprise a data interpretation module. Media controller 308 selects the media streams (audio, video and/or data) that are described by the framework definition. Media controller 308 comprises a tuner and video/audio decoder. The media controller 308 receives stream data from decoder 304 and control signals from receiver controller 312. Media controller 308 selects the proper media PIDs as needed and feeds them to display device(s) 116, audio device(s) 118, or user input device(s) 120. Receiver controller 312 serves as the central processor for the other elements of receiver 302. The receiver controller 312 may compare the capabilities of receiver 302 with the stream type to determine which streams may be received and used by receiver 302. Receiver controller 312 sends control signals to the other units in the system. The functions of receiver controller 312 may be performed by software resident in the control CPU of the receiver 302.
Functions include receiving data from framework parser 306 and signaling media controller 308. For example, selection of omnimenu items may alter the framework such that the receiver controller 312 receives information from framework parser 306 and signals media controller 308 to render the selected media stream. Receiver cache 310 may be employed to store the framework definition and any parameters, code objects, data items or other software to control omnimenu display and response to user input. Receiver 302 receives data associated with a content package and employs the data to access the package contents. Advanced receivers may check system capabilities to determine which pieces of content may be rendered. For example, a digital set top box may be able to decode MPEG video and audio. A digital set top box may also be able to decode MP3 audio, but not be able to decode HDTV signals. A radio type receiver may only be able to decode audio formats and, possibly, only formats not related to a video signal. The receiver may also be able to decode various data types from data services. These may take the form of application code that would be executed on the receiver if compatible. The receiver of the present invention has the internal compatibility to be able to receive the omnimedia packaged signal and decode the parts that are relevant to its capabilities to give the fullest experience it can, regardless of the format.
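By way of illustration only, the receiver-side capability check described above may be sketched as follows; the format labels and capability sets are hypothetical examples of the set top box and radio receivers mentioned in the text.

```python
# Sketch: a receiver keeps only the streams whose format it can render,
# giving the fullest experience its hardware allows.

def renderable(streams, capabilities):
    return [s for s in streams if s["format"] in capabilities]

streams = [
    {"pid": "200", "format": "mpeg2"},  # primary video
    {"pid": "201", "format": "hdtv"},   # high-definition alternate
    {"pid": "300", "format": "mp3"},    # audio-only stream
]

set_top = renderable(streams, {"mpeg2", "mp3"})  # decodes MPEG and MP3, not HDTV
radio = renderable(streams, {"mp3"})             # audio-only receiver
```

Dependent-stream IDs from the framework definition could extend this check, e.g. dropping an audio stream whose required video stream is not renderable.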
Referring again to Fig. 1, display device(s) 116 provide(s) the visual presentation of the framework to the end-user/viewer, including video, streaming media, a graphical user interface, static graphics, animated graphics, text, logos and other displayable assets. Similarly, audio output(s) 118 present(s) the audio portions of the framework to the end-user, including primary audio, possible secondary audio tracks, streaming media, audio feedback from the receiver 114, and other audio assets. User input device(s) 120 allow(s) the user to control the receiver and interact with the framework, providing the ability to choose components of the framework, select links within the framework, navigate the graphical user interface of the receiver, or perform other interactions with the receiver. The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiment was chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention except insofar as limited by the prior art.

Claims

What is claimed is:
1. A method for producing a broadcast stream that contains audio content, video content, and metadata content comprising: creating a framework definition that identifies said audio content, said video content and said metadata content associated with a broadcast and attributes thereof; comparing the audio format of said audio content with an audio transmission format and converting said audio content to said audio transmission format if said audio format and said audio transmission format differ; comparing the video format of said video content with a video transmission format and converting said video content to said video transmission format if said video format and said video transmission format differ; comparing the metadata format of said metadata content with a metadata transmission format and converting said metadata content to said metadata transmission format if said metadata format and said metadata transmission format differ; and combining said audio content, said video content, and said metadata content into a broadcast stream.
2. The method of claim 1 wherein said framework definition further comprises: a framework definition record for each element of said audio content, said video content, and said metadata content.
3. The method of claim 1 wherein said menu further comprises: an icon for each element of said audio content, said video content, and said metadata content.
4. The method of claim 1 wherein said metadata content comprises an image file.
5. The method of claim 4 wherein said converting said metadata content further comprises: loading said image file; loading a file conversion definition; converting said file using said conversion definition; and outputting a converted image file.
6. The method of any one of the preceding claims further comprising the steps of: creating a menu describing said audio content, said video content, and said metadata content; transmitting said menu; and transmitting said broadcast stream.
7. A method for rendering portions of a broadcast stream that contains audio content, video content, and metadata content and a menu indicating the contents of said audio content, video content, and metadata content comprising: transferring preloaded metadata associated with said broadcast stream to a receiver prior to transmission of said broadcast stream; receiving said broadcast stream; displaying said menu wherein said menu includes an icon representing said preloaded metadata; receiving a user input; and rendering said preloaded metadata in response to said user input.
8. A system for combining multiple media and metadata streams having content into a framework for distribution of the content to a viewer, comprising: a framework controller that receives said video source, audio source, and metadata source and produces an omnimedia package integrating said outputs into a framework; and a framework definition module that interfaces with said framework controller and defines all content to be used in the omnimedia package.
9. The system of claim 8 further comprising: at least one video source having an output; at least one audio source having an output; and at least one metadata source having an output.
10. The system of claim 8 or 9 further comprising:
a delivery module that receives said omnimedia package from said framework controller and transmits said omnimedia package to a receiver; and a receiver that receives and distributes the content of said omnimedia package to display devices and audio outputs, said receiver further coupled to at least one user input device for providing interactivity between said viewer and the receiver.
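The compare-and-convert pipeline recited in claim 1 can be sketched as follows. This is a minimal illustration, not an implementation from the patent: the function names, format identifiers, and dictionary layout are all hypothetical. Each content element is converted only when its source format differs from the transmission format, and the three elements are then combined into one stream object.

```python
# Hypothetical sketch of the claim 1 pipeline: compare each element's format
# with its transmission format, convert only on a mismatch, then combine the
# audio, video, and metadata elements into a single broadcast stream.

def convert(content, target_format):
    # Stand-in for a real transcoder: re-tag the payload with the new format.
    return {"format": target_format, "payload": content["payload"]}

def prepare(content, transmission_format):
    # Convert only if the source format and the transmission format differ.
    if content["format"] != transmission_format:
        return convert(content, transmission_format)
    return content

def build_broadcast_stream(audio, video, metadata, tx_formats):
    # Combine the (possibly converted) elements into one stream object.
    return {
        "audio": prepare(audio, tx_formats["audio"]),
        "video": prepare(video, tx_formats["video"]),
        "metadata": prepare(metadata, tx_formats["metadata"]),
    }

stream = build_broadcast_stream(
    audio={"format": "pcm", "payload": b"a"},
    video={"format": "mpeg2", "payload": b"v"},
    metadata={"format": "bmp", "payload": b"m"},
    tx_formats={"audio": "ac3", "video": "mpeg2", "metadata": "png"},
)
```

Note that the video element passes through unchanged because its format already matches the transmission format, while the audio and metadata elements are converted.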
EP01990739A 2000-11-27 2001-11-27 System and method for providing an omnimedia package Ceased EP1366624A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US25316800P 2000-11-27 2000-11-27
US253168P 2000-11-27
PCT/US2001/044510 WO2002043396A2 (en) 2000-11-27 2001-11-27 System and method for providing an omnimedia package

Publications (1)

Publication Number Publication Date
EP1366624A2 true EP1366624A2 (en) 2003-12-03

Family

ID=29420122

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01990739A Ceased EP1366624A2 (en) 2000-11-27 2001-11-27 System and method for providing an omnimedia package

Country Status (1)

Country Link
EP (1) EP1366624A2 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5953506A (en) * 1996-12-17 1999-09-14 Adaptive Media Technologies Method and apparatus that provides a scalable media delivery system
WO2000045536A1 (en) * 1999-01-29 2000-08-03 Sony Corporation Transmitter and receiver
EP1073223A1 (en) * 1999-01-29 2001-01-31 Sony Corporation Transmitter and receiver

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BENITEZ A B ET AL: "Object-based multimedia content description schemes and applications for MPEG-7", SIGNAL PROCESSING: IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 16, no. 1-2, 1 September 2000 (2000-09-01), pages 235 - 269, XP004216278, ISSN: 0923-5965, DOI: 10.1016/S0923-5965(00)00030-8 *
KATE TEN W ET AL: "PRESENTING MULTIMEDIA ON THE WEB AND IN TV BROADCAST", PROCEEDINGS OF THE EUROPEAN CONFERENCE ON MULTIMEDIA APPLICATIONS,SERVICES AND TECHNIQUES, XX, XX, 26 May 1998 (1998-05-26), pages 56 - 69, XP000984176 *
See also references of WO0243396A3 *

Similar Documents

Publication Publication Date Title
US7020888B2 (en) System and method for providing an omnimedia package
US10750241B2 (en) Browsing and viewing video assets using TV set-top box
US7200857B1 (en) Synchronized video-on-demand supplemental commentary
US9197938B2 (en) Contextual display of information with an interactive user interface for television
JP5703317B2 (en) System and method for generating custom video mosaic pages with local content
US5818441A (en) System and method for simulating two-way connectivity for one way data streams
US7373650B1 (en) Apparatuses and methods to enable the simultaneous viewing of multiple television channels and electronic program guide content
US8073862B2 (en) Methods and apparatuses for video on demand (VOD) metadata organization
EP1057338B1 (en) A system for forming, partitioning and processing electronic program guides objects
US20030023970A1 (en) Interactive television schema
US20090187950A1 (en) Audible menu system
US20080141325A1 (en) Systems and Methods for Dynamic Conversion of Web Content to an Interactive Walled Garden Program
JP2014064305A (en) Interactive media content delivery using separate backchannel communications network
JP2001520491A (en) System for formatting and processing multimedia program data and program guide information
EP2442581A1 (en) Video assets having associated graphical descriptor data
WO2000018114A1 (en) Interactive television program guide with passive content
EP1366624A2 (en) System and method for providing an omnimedia package
US10477283B2 (en) Carrier-based active text enhancement
Series Integrated broadcast-broadband systems
MXPA00008118A (en) A multimedia system for processing program guides and associated multimedia objects

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030623

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

17Q First examination report despatched

Effective date: 20061009

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: OPENTV, INC.

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20200604