WO2011064674A2 - Content management system and method of operation thereof - Google Patents

Content management system and method of operation thereof

Info

Publication number
WO2011064674A2
WO2011064674A2 (PCT/IB2010/003432)
Authority
WO
WIPO (PCT)
Prior art keywords
user
video content
information
content
attributes
Prior art date
Application number
PCT/IB2010/003432
Other languages
French (fr)
Other versions
WO2011064674A3 (en)
Inventor
Wencheng Li
Zihai Shi
Original Assignee
France Telecom
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom filed Critical France Telecom
Publication of WO2011064674A2 publication Critical patent/WO2011064674A2/en
Publication of WO2011064674A3 publication Critical patent/WO2011064674A3/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/44 - Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N 5/445 - Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/251 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866 - Management of end-user data
    • H04N 21/25891 - Management of end-user data being end-user preferences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/475 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N 21/4755 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for defining user preferences, e.g. favourite actors or genre
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/475 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N 21/4756 - End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4782 - Web browsing, e.g. WebTV
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/482 - End-user interface for program selection
    • H04N 21/4826 - End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 - Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440263 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Definitions

  • the present system relates to at least one of a system, method, user interface, and apparatus which can dynamically select content in accordance with one or more filtering criteria and, more particularly, to a video content distribution system which can filter and select video content or video channels to be output on a UI in accordance with one or more filtering criteria which may be selected by the system and/or one or more users.
  • Content such as digital audio visual content is pervasive in today's society. Parties are presented with a vast array of sources from which content may be selected, including optical media and network-provided sources, such as those available over the Internet.
  • One system which has been provided is a genre classification system in which, for example, audio visual content is classified in broad categories, such as drama, comedy, action, etc. While this system does provide some insight into what may be expected while watching the audio visual content, the typical classification is broadly applied to an entire audio visual presentation and, as such, does not provide much insight into different segments of the audio visual content. For example, while the entire audio visual presentation may be generally classified as belonging to an action genre, different portions of the audio visual content may be related to comedy, drama, etc. Accordingly, the broad classification of the audio visual content ignores these sub-genres that represent portions of the content and thereby may fail to attract the attention of a party that may have an interest in these sub-genres.
  • Recommendation systems have been provided that utilize a broader semantic description, which may be provided by the producers of the audio visual content and/or by an analysis of portions of the audio visual content directly. These systems typically compare the semantic description to a user profile to identify particular audio visual content that may be of interest. Other systems, such as U.S. Patent No. 6,173,287 to Eberman, incorporated herein as if set out in its entirety, utilize metadata to automatically and semantically annotate different portions of the audio visual content to enable retrieval of portions that may be of interest. Problems exist with this approach in that the analysis of audio and visual portions of the audio visual content is very complex and oftentimes produces less than satisfactory results.
  • Moreover, search results tend to be erratic depending on the particular terms utilized for annotation and search. For example, a sequence relating to and annotated with "automobile" may not be retrieved by a search term of "car" since searches tend to be literal.
  • None of these prior systems provides a system, method, user interface and/or device to build a video content channel in a simple and intuitive manner.
  • the method may include one or more acts of: populating a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions; determining attribute information for each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence; forming channel information based upon the attribute information; forming a query based upon the channel information; querying content information in accordance with the channel information; rendering content portions corresponding with results of the query; selecting a content portion; and rendering at least part of the selected content portion on the UI.
  • the method may further include acts of: selecting certain attribute information of the attribute information; and/or updating the channel information in accordance with the selected certain attribute information of the attribute information.
  • the method may further include an act of storing the query with corresponding channel information.
  • the method may further include an act of rendering the plurality of content portions before the act of populating the build list.
  • the method may also include an act of collecting emotion information in accordance with the content.
  • the method may further include an act of creating an attribute vector based upon the user emotion information.
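The sequence of acts above can be pictured with a minimal Python sketch; the application does not specify an implementation, so every function and field name here (`build_channel`, `annotations`, `interest`, the sample catalogue) is illustrative only.

```python
# Hypothetical sketch of the channel-building method: populate a build
# list, determine attribute information (a video annotation sequence
# plus a user interest sequence), form channel information from it,
# and query a content catalogue. All names/structures are assumptions.

def determine_attributes(obj):
    # Attribute information for one build-list object.
    return {
        "annotations": obj.get("annotations", []),
        "interest": obj.get("interest", []),
    }

def build_channel(build_list):
    # Channel information aggregated from every object's attributes.
    channel = {"annotations": [], "interest": []}
    for obj in build_list:
        attrs = determine_attributes(obj)
        channel["annotations"].extend(attrs["annotations"])
        channel["interest"].extend(attrs["interest"])
    return channel

def form_query(channel):
    # A simple OR-query over the collected annotation terms.
    return set(channel["annotations"])

def query_content(catalogue, query):
    # Content portions whose annotations overlap the query terms.
    return [c for c in catalogue if query & set(c["annotations"])]

build_list = [{"annotations": ["goal", "celebration"], "interest": [0.9]}]
catalogue = [
    {"title": "match highlights", "annotations": ["goal"]},
    {"title": "press conference", "annotations": ["interview"]},
]
channel = build_channel(build_list)
results = query_content(catalogue, form_query(channel))
```

In this sketch only the overlapping portion ("match highlights") would be rendered as a query result; a selected result could then be played back on the UI.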
  • a system which may provide content on a user interface (UI).
  • the system may include a controller which: populates a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions, determines attribute information for each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence, forms channel information based upon the attribute information, forms a query based upon the channel information, and queries content information in accordance with the channel information.
  • the system may further include a user interface which may: render content portions corresponding with results of the query, and may render at least part of a selected content portion on the UI.
  • the controller may also receive one or more selections, from the user, corresponding with certain attribute information of the attribute information. Then, the controller may update the channel information in accordance with the selected certain attribute information.
  • the system may also include a memory to store the query with corresponding channel information.
  • the controller may render the plurality of content portions on the UI before the controller populates the build list.
  • the system may include a user input device to receive, from the user, emotion information which corresponds with the content. According to the system, the controller may create an attribute vector based upon the user emotion information.
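One way to picture an attribute vector derived from user emotion information is as normalized reaction counts; the emotion categories and the normalization below are assumptions for illustration, not taken from the application.

```python
# Illustrative attribute vector built from collected emotion (reaction)
# input; the category list and weighting scheme are assumptions.
EMOTIONS = ["happy", "sad", "mad", "excited"]

def attribute_vector(reactions):
    # reactions: emotion labels collected while the content plays.
    total = max(len(reactions), 1)
    return [round(reactions.count(e) / total, 2) for e in EMOTIONS]

vector = attribute_vector(["happy", "happy", "mad"])
# vector is [0.67, 0.0, 0.33, 0.0]: mostly "happy", some "mad"
```

Such a vector could then serve as one more attribute when forming channel information.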
  • the computer program configured to provide a user interface (UI) to accomplish a task
  • the computer program may include a program portion configured to: populate a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions; determine attribute information for each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence; form channel information based upon the attribute information; form a query based upon the channel information; query content information in accordance with the channel information; render content portions corresponding with results of the query; and/or render at least part of a selected content portion on the UI.
  • FIG. 1 shows a UI 100 in accordance with an embodiment of the present system
  • FIG. 2 shows a UI in accordance with an embodiment of the present system
  • FIG. 3 shows a UI in accordance with an embodiment of the present system
  • FIG. 4 shows a UI in accordance with an embodiment of the present system
  • FIG. 5 shows a UI in accordance with an embodiment of the present system
  • FIG. 6A shows a UI in accordance with an embodiment of the present system
  • FIG. 6B shows a UI in accordance with an embodiment of the present system
  • FIG. 7A shows a UI in accordance with an embodiment of the present system
  • FIG. 7B shows a UI in accordance with an embodiment of the present system
  • FIG. 7C shows a UI in accordance with an embodiment of the present system
  • FIG. 8 shows a UI in accordance with an embodiment of the present system
  • FIG. 9A shows a UI in accordance with an embodiment of the present system
  • FIG. 9B shows a UI in accordance with an embodiment of the present system
  • FIG. 9C shows a UI in accordance with an embodiment of the present system
  • FIG. 9D shows a UI in accordance with an embodiment of the present system
  • FIG. 10 shows a flow diagram that illustrates a process in accordance with an embodiment of the present system
  • FIG. 11 shows a block diagram of a communication system 1100 according to an embodiment of the present system.
  • FIG. 12 shows a system in accordance with a further embodiment of the present system.
  • an operative coupling may include one or more of a wired connection and/or a wireless connection between two or more devices that enables a one and/or two-way communication path between the devices and/or portions thereof.
  • an operative coupling may include a wired and/or wireless coupling to enable communication between a content server and one or more user devices.
  • a further operative coupling, in accordance with the present system, may include one or more couplings between two or more user devices, such as via a network source, such as the content server, in accordance with an embodiment of the present system.
  • rendering and formatives thereof as utilized herein refer to providing content, such as digital media, such that it may be perceived by at least one user sense, such as a sense of sight and/or a sense of hearing.
  • the present system may render a user interface on a display device so that it may be seen and interacted with by a user.
  • the present system may render audio visual content on both of a device that renders audible output (e.g., a speaker, such as a loudspeaker) and a device that renders visual output (e.g., a display).
  • the term content and formatives thereof will be utilized and should be understood to include audio content, visual content, audio visual content, textual content and/or other content types, unless a particular content type is specifically intended, as may be readily appreciated.
  • a system, method, device, computer program, and interface for rendering a UI for a user's convenience may include one or more applications which are necessary to complete an assigned task.
  • the present system may collect other statistics related to the user and/or user device (e.g., a MS) in accordance with the present system, such as a relative time of an action, geo-location, position, acceleration, speed, azimuth, network, detected content item, etc.
  • common interface devices for a user interface such as a graphical user interface (GUI) include a mouse, trackball, keyboard, touch-sensitive display, etc.
  • a mouse may be moved by a user in a planar workspace to move a visual object, such as a cursor, depicted on a two-dimensional display surface in a direct mapping between the position of the user manipulation and the depicted position of the cursor. This is typically known as position control, where the motion of the depicted object directly correlates to motion of the user manipulation.
  • a GUI in accordance with an embodiment of the present system is a GUI that may be provided by a computer program that may be user invoked, such as to enable a user to select and/or classify/annotate content as is described in U.S. Application No. 61/099,893 entitled "Content Classification Utilizing A Reduced Description Palette To Simplify Content Analysis," filed on September 24, 2008 (hereinafter the '893 application), incorporated herein as if set forth in its entirety.
  • the user may be enabled within a visual environment, such as the GUI, to classify content utilizing a reduced description palette to simplify content analysis, presentation, sharing, etc. of separate content portions in accordance with the present system.
  • the GUI may provide different views that are directed to different portions of the present process.
  • the GUI may present a typical UI including a windowing environment and, as such, may include menu items, pull-down menu items, pop-up windows, etc., that are typical of those provided in a windowing environment, such as may be represented within a Windows™ Operating System GUI as provided by Microsoft Corporation and/or an OS X™ Operating System GUI, such as provided on an iPhone™, MacBook™, iMac™, etc., as provided by Apple, Inc., and/or another operating system.
  • the objects and sections of the GUI may be navigated utilizing a user input device, such as a mouse, trackball, finger, virtual locator, and/or other suitable user input.
  • the user input may be utilized for making selections within the GUI such as by selection of menu items, window items, radio buttons, pop-up windows, containers, for example, in response to a mouse-over operation, and other common interaction paradigms as understood by a person of ordinary skill in the art.
  • Similar interfaces may be provided by a device having a touch sensitive screen that is operated on by an input device such as a finger of a user or other input device such as a stylus.
  • the present system may also incorporate a virtual display capability which can detect a virtual location of a user or of the device itself.
  • a cursor may or may not be provided since location of selection is directly determined by the location of interaction with the touch sensitive screen.
  • a GUI utilized for supporting touch sensitive inputs or virtual inputs may be somewhat different from a GUI that is utilized for supporting, for example, a computer mouse input; however, for purposes of the present system, the operation is similar. Accordingly, for purposes of simplifying the foregoing description, the interaction discussed is intended to apply to either of these systems or others that may be suitably applied.
  • FIG. 1 shows a UI 100 in accordance with an embodiment of the present system.
  • the UI 100 may be provided by an application such as, for example, a browser such as an Internet browser like, for example, Internet Explorer™, Mozilla™, Firefox™, etc., and may include one or more of windows, subwindows, frames, subframes, toolbars, widgets, instances, menu items, submenu items, containers, etc., as may be used in a windowing environment or proprietary UI (e.g., a mobile station (MS) display, etc.) and may be associated with one or more channels that may be formed in accordance with the present system.
  • the UI 100 may include a main window 102, one or more content portions 104, one or more menu items 106, a build portion 108, a value/attribute (VA) portion 110, and a scroll portion 111.
  • the main window 102 may include one or more subwindows, frames, subframes, containers, subcontainers, text boxes, selection boxes, pop-up menus, etc., which may be rendered for a user's convenience.
  • the one or more menu items 106 may include one or more menus, submenus, tabs, and/or other selections which may be selected by a user.
  • the one or more menu items 106 may include tabs such as, for example, an editor's choice menu item 106A, a community menu item 106B, a my channels menu item 106C, a recommended menu item 106D, and a search menu item 106E.
  • the user may select one or more of menu items 106A-106E to access corresponding user interfaces.
  • Each content portion 104 may display or otherwise communicate information or objects associated with content that may be accessed, played back, and/or downloaded using the UI 100.
  • the content portion 104 may include textual and/or graphic information such as, for example, graphical depictions of one or more portions, thumbnails, titles, ratings, duration, views, etc., which may be indicative of certain parts of the associated content and may include one or more selectable portions for selection by a user.
  • the content portion 104 may include a graphic representation of content such as a thumbnail portion 112 which may provide a graphical representation of one or more parts (e.g., frames or time periods) of the associated content.
  • the system may play back selected portions of the associated content and/or render one or more options which are available to a user. For example, upon determining that a user has scrolled over or otherwise selected (e.g., via a mouse click) a thumbnail portion 112, the system may select graphic representations of the associated content which may correspond with certain parts of the heat information and render thumbnails which correspond with the selected graphic representations.
  • the system may sequentially display one or more images (e.g., thumbnails, or larger images in frames) as thumbnails which correspond with predetermined sections or frames of the associated content.
  • the one or more images may be displayed in a separate frame.
  • the predetermined sections may be selected by the system based upon one or more criteria such as time (e.g., at 5 minute intervals of content during play), frame numbers (e.g., every 1000th frame), or heat information, which may correspond with heat map information and associated meta information as disclosed in the '893 application, such as reaction indicators (e.g., emoticons) provided by a user and associated with a portion of the content that is being rendered at the time of receiving the user selection.
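The time- and frame-based selection criteria can be sketched as below; the 5-minute interval and 1000-frame stride mirror the examples in the text, while the function names and defaults are illustrative assumptions.

```python
# Select thumbnail points at fixed time intervals or frame strides.

def thumbnails_by_time(duration_s, interval_s=300):
    # One thumbnail every interval_s seconds (e.g. 5-minute intervals).
    return list(range(0, duration_s, interval_s))

def thumbnails_by_frame(total_frames, stride=1000):
    # One thumbnail every stride frames (e.g. every 1000th frame).
    return list(range(0, total_frames, stride))

times = thumbnails_by_time(1260)    # 21-minute clip: [0, 300, 600, 900, 1200]
frames = thumbnails_by_frame(3500)  # [0, 1000, 2000, 3000]
```

Heat-information-driven selection would replace the fixed stride with the comment/reaction profile described below.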
  • the system may sequentially render thumbnails which correspond with portions of the associated content that have corresponding heat information, such as positive interest over the number of respondents that provided a reaction.
  • the heat information may correspond to an interest profile of a plurality of users that tracks points in the related content that attracted one or more comments and/or annotations.
  • the heat map is a graphical representation of numbers of comments per parts/portions of content, such as against a content running time.
  • the reduced set of reaction indicators provides an indication of what type of interest is provided (e.g., happy, mad, etc.) by the content at a given portion of the content. It is significant that, in accordance with an embodiment of the present system, the heat information may be provided as an attribute of a content portion and thereby utilized to build a channel as described further herein.
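A heat map of the kind described, comments counted per time segment of the running content, can be sketched as follows; the segment length and sample data are illustrative assumptions.

```python
from collections import Counter

def heat_map(comment_times_s, duration_s, bucket_s=60):
    # Count user comments/annotations per fixed-length time segment,
    # yielding an interest profile against the content running time.
    counts = Counter(int(t // bucket_s) for t in comment_times_s)
    return [counts.get(i, 0) for i in range(duration_s // bucket_s + 1)]

# Three comments in the first minute, one in the third minute.
profile = heat_map([5.0, 12.0, 40.0, 130.0], duration_s=180)
# profile is [3, 0, 1, 0]
```

The peaks of such a profile would mark the portions worth rendering as thumbnails or exposing as channel attributes.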
  • the predetermined section may also correspond with information such as, for example, genre, subgenre, metadata, etc. associated with the content.
  • frames which are associated with certain portions of the content which have a certain subgenre (e.g., action) in content having a main genre which is, for example, sports, or any other genre, may be shown.
  • Each content portion 104 may also include one or more of a title information portion 114, a time information portion 116, a view information portion 118, and a heat information portion 120.
  • the system may provide additional information in, for example, a container for the user's convenience. For example, upon detecting that the user has scrolled over the title information portion for two seconds, the system may render a container including information (e.g., year, date, actors, attributes, awards, genre, etc.) about the associated content.
  • Each of the associated content may have corresponding attribute information (Al).
  • the attribute value information may include, for example, attributes such as genre, title, duration, meta information, heat information, etc.
  • the title information portion 114 may include title information which corresponds with a title or other identifying feature of associated content information.
  • the time information portion 116 may include information related to duration of the associated content.
  • the view information portion 118 may include information related to a number of views of the associated content.
  • the heat information 120 may include information related to the associated content such as heat map information described in the '893 application.
  • the scroll portion 111 may include one or more scroll bars, or the like, which may function to scroll portions of the Ul 100. Accordingly, the scroll portion 111 may include a horizontal scroll bar to scroll horizontal portions of the Ul 100 and/or a vertical scroll bar to scroll vertical portions of the Ul 100.
  • a pop-up 203 may be rendered when a user selects the scroll portion 111 and may indicate current location status (e.g., 21-42 of 2000 results), where 2000 refers to the total number of results matching a search criterion as described further herein below.
  • a Ul 200 may be provided as a result of a search to build a channel as described herein based on the content portion 104.
  • the VA portion 110 may include channel information which may include for example, attribute value information (AVI) which may be added, deleted, set, selected, and/or deselected by the system, user, or community depending upon channel properties.
  • the channel information may also include identifying information which may be used to identify a channel of a plurality of channels.
  • the AVI may be added to the channel information automatically by, for example, adding attributes to the VA portion 110. For example, attributes may be added by dragging a content portion 104 to the build portion 108. In response to a drag/drop operation including the content portion, the system may determine attribute information of the associated content portion 104 and add this information to the AVI.
  • the AVI may correspond with the attribute information of content portions 104 in the Ul 100.
  • the AVI may have values (hereinafter AVI values) which correspond with information related to a video annotation sequence such as attribute type (e.g., emotion (see, emoticon 115), concept, speed, color, etc.) and attribute value (e.g., happy, soccer, fast, green, etc.); and/or a user interest sequence such as, for example, user preferences and/or associated weight information.
  • the user interest sequence may be based upon user inputs and/or selections. For example, user Joe may select a content portion of the "Dukes of Hazard" television show which may have associated attributes of "Surprising" "Comedy” "excited (emoticon)" and "Car Racing".
  • a user channel may be created for Joe of content that has the same or similar attributes as one or more selected (e.g., drag/drop operation) content portions.
  • a user Anny may select one content portion having associated attributes of "Sad” "Drama” and another content portion having associated attributes of "happy” “Hollywood film”. Selection of these two content portions may build a channel based on available content and the associated attributes.
  • multiple selections of content portions may assist a user in tuning a channel to the type of content that the user desires. Additional tuning of the attributes may always be performed thereafter.
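Joe's and Anny's examples above amount to merging the attributes of each selected (e.g., dragged/dropped) content portion into the channel's attribute set. A minimal sketch, where the dict layout is a hypothetical representation rather than the patented data model:

```python
def build_channel_attributes(selected_portions):
    """Union the attribute sets of every selected content portion into the
    channel's attribute value information (AVI)."""
    avi = set()
    for portion in selected_portions:
        avi |= set(portion["attributes"])
    return avi

# Anny selects two content portions with different attribute sets.
anny_avi = build_channel_attributes([
    {"title": "clip 1", "attributes": {"Sad", "Drama"}},
    {"title": "clip 2", "attributes": {"happy", "Hollywood film"}},
])
```

Each additional selection widens (or, with deselection, narrows) this set, which is how repeated selections "tune" the channel.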
  • the VA portion 110 may also include functionality which enables a user to add, delete, change, select, deselect etc., certain values in the AVI.
  • the VA portion 110 may include an add menu item 113 which may be used to insert and/or edit values in the AVI for later use by the system and/or user.
  • the system may enter an editing mode to edit the AVI in the VA portion 110.
  • each value of AVI may have an associated weight value which may be used to indicate a weight which is given to associated information.
  • Each weight value may be selected by the system and/or a user and may be used to weigh, for example, an importance or relevance, of associated information.
  • information that is assigned a weight of, for example, 10 may be determined to be more important or relevant than information that is assigned a weight of, for example, 1.
  • the system and/or the user may determine and/or weigh information, as desired.
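The weighting described above can be modelled as a weighted match score, so that content matching an attribute weighted 10 outranks content matching one weighted 1. The attribute names and weights are illustrative assumptions:

```python
def relevance(content_attributes, weighted_avi):
    """Score content by summing the weights of the AVI values it matches."""
    return sum(weight for attr, weight in weighted_avi.items()
               if attr in content_attributes)

weighted_avi = {"Car Racing": 10, "Comedy": 5, "Surprising": 1}
score_a = relevance({"Car Racing", "Comedy"}, weighted_avi)  # heavy match
score_b = relevance({"Surprising"}, weighted_avi)            # light match
```

Sorting candidate content by such a score is one straightforward way for a higher-weighted attribute to dominate channel selection.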
  • the system may use the channel information to generate corresponding attribute queries. Further, each channel may be used by the user and/or shared with one or more communities (e.g., work, sci-fi, drama, etc.) who may subscribe to the channel.
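Generating "corresponding attribute queries" from stored channel information might look like the following sketch, which serializes (attribute type, value) AVI pairs into a conjunctive query string. The query syntax is assumed, since the disclosure does not fix a query language:

```python
def channel_query(channel_info):
    """Serialize a channel's AVI pairs into a conjunctive attribute query."""
    terms = ['{}:"{}"'.format(attr_type, value)
             for attr_type, value in sorted(channel_info["avi"])]
    return " AND ".join(terms)

query = channel_query({
    "title": "Most Popular",
    "avi": [("emotion", "happy"), ("concept", "soccer")],
})
```

Storing the query (rather than the videos themselves) is what lets the system conserve resources and re-run the channel against new content.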
  • the system may also provide a cost system which may generate cost information and charge/refund monies to one or more users.
  • high-profile users such as, for example, actors, may form their own channels and share this information with a community who subscribes to this channel. The actor may be paid with revenues generated from subscription fees paid to join the community.
  • the VA portion 110 may also include associated title information 122 which may be used to identify a channel of a plurality of channels.
  • the title information 122 may be assigned a name such as "Most Popular" which may be stored in association with corresponding channel information.
  • the title information may correspond with a title which may be input by the user and/or may be automatically assigned by the system.
  • the system may assign a new channel with an arbitrary designation (e.g., "user channel") which may thereafter be changed by a user. Accordingly, the user may accept the arbitrary designation or may edit it, as desired.
  • System resources may be conserved by storing channel information and/or queries for each channel. Further, user convenience may be enhanced by not having to manage videos directly.
  • the system may also include functionality to push new content which matches the query to a user when the new content becomes available.
  • the build portion 108 may have an associated build list which may include build objects which correspond with content information of one or more of the content portions 104.
  • the build list may include build objects which are added or linked to the build list using any suitable method such as, for example, copy/paste, drag/drop, etc. commands. Additionally, build objects may be linked to the build list.
  • the user may add build objects to the build list, by for example, selecting a content portion 104 of a plurality of content portions 104 and inserting the selected content portion 104 in the build list.
  • the user may delete build objects from the build list using any suitable command such as, for example, delete, move, and/or cut commands.
  • the system may include information that is related to the build objects that the user has added to the build portion 108. This information may include, for example, content identification (e.g., code, time, address, pointer, links, identifying information such as, for example, title information, etc.) of corresponding content portions 104, or the like, that may be used to identify corresponding content.
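One hypothetical shape for a build list and the content-identification fields (code, title, optional time range, etc.) described above — the class and field names are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BuildObject:
    content_id: str              # code/address/pointer identifying the content
    title: str                   # identifying information such as a title
    start: float = 0.0           # optional time range, for when only a part
    end: Optional[float] = None  # of the content portion is selected

@dataclass
class BuildList:
    objects: List[BuildObject] = field(default_factory=list)

    def add(self, obj: BuildObject) -> None:
        """Insert a build object, e.g., after a drag/drop or copy/paste."""
        self.objects.append(obj)

    def remove(self, content_id: str) -> None:
        """Delete a build object, e.g., after a delete/move/cut command."""
        self.objects = [o for o in self.objects if o.content_id != content_id]

build = BuildList()
build.add(BuildObject("vid-104", "Lakers vs. Magic"))
build.add(BuildObject("vid-204", "Dukes of Hazard", start=120.0, end=180.0))
build.remove("vid-104")
```

Copying a build list between channels, as envisioned later in the text, would then simply copy this list of identifiers rather than the videos themselves.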
  • the system may edit the corresponding channel build list in accordance with the user's selection.
  • Each channel may have its own build portion 108 and corresponding build list. However, it is also envisioned that a user may copy/paste a build list from one channel to another channel to enable a user to build channels.
  • FIG. 2 shows a Ul 200 in accordance with an embodiment of the present system.
  • the system may render a play menu item 221 and/or a move menu item 223 in response to a mouse over operation of a content portion 204.
  • a user may drag a frame 217 corresponding to a build object of the associated content portion 204 in response to a user input.
  • the system may modify the build list in accordance with the corresponding build object. Accordingly, the system may populate the build list with build objects selected by the user.
  • the system may render menu items such as a play menu item 221 and/or a move menu item 223 for selection by the user when it is determined that the user has scrolled over a particular content portion such as the content portion 204. Accordingly, if the system determines that the user has selected the play menu item 221, the system may retrieve and/or play one or more portions of content associated with the selected content portion 204. Similarly, if the system determines that the user has selected the move menu item 223, the system may render an object such as the frame 217 for manipulation by a user.
  • the user may form and/or edit the build list by, for example, manually adding/deleting content identification associated with selected content to/from a corresponding build list (e.g., using an input device such as a keyboard, a mouse, etc.) or by using a copy/paste command.
  • the system may provide a user with one or more menus to add/remove information from a corresponding build list.
  • the system may provide a user with a pop-up menu to add content to, or delete content from, a corresponding channel build list.
  • FIG. 3 shows a Ul 300 in accordance with an embodiment of the present system.
  • a pop-up container 330 may be generated and rendered by the system in response to a user request or when a user scrolls over a content portion 304 for a predetermined period of time.
  • the pop-up container 330 may include additional information about a specific content portion 304.
  • FIG. 4 shows a Ul 400 in accordance with an embodiment of the present system.
  • the Ul 400 may include menu items such as tabs 406A - 406E which may correspond with tabs 106A - 106E of FIG. 1, respectively.
  • the Ul 400A illustrates a container which includes a plurality of content portions 403, each of which corresponds with content included in, for example, a channel such as an "Editor's Choice" channel.
  • This content may be selected by the system based upon, for example, content which is determined to be most watched over a certain period of time such as a week or content which is determined to have attributes which correspond with perceived user desires.
  • a user may add a content portion 403 to a build portion 408 of a desired channel.
  • the build portion 408 and a VA portion 410, which correspond with the build portion 108 and the VA portion 110, respectively, may be minimized to reduce clutter and may be maximized by the system in response to a user's selection.
  • FIG. 5 shows a Ul 500 in accordance with an embodiment of the present system.
  • the Ul 500 may include menu items such as tabs 506A - 506E which may correspond with the tabs 106A - 106E of FIG. 1, respectively.
  • the Ul 500 illustrates a search query 532 which may be generated and rendered in response to a user's selection of the search menu item 506E.
  • the user may enter a desired search term in the search query and the system may query one or more databases in response thereto.
  • the system may then return results of the query in a return frame for the user's consideration and/or selection.
  • The user may then view selected content and/or may add the selected content to a build portion 508.
  • a search term of the query may correspond with an identifying feature (e.g., a title) of the desired content.
  • the system may also provide recommended query terms for the user's convenience.
  • the search may be provided as a "live search" in that the search results are updated in live time as the search is entered, with the search being modified as each new search term is added.
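A "live search" of this kind can be sketched as re-filtering the candidate set on every keystroke, with each whitespace-separated term narrowing the results. The matching here is a naive case-insensitive substring test over titles, an assumption made purely for illustration:

```python
def live_search(titles, partial_query):
    """Return titles in which every typed term appears (case-insensitive)."""
    terms = partial_query.lower().split()
    return [t for t in titles if all(term in t.lower() for term in terms)]

titles = ["Lakers vs. Magic", "Dukes of Hazard", "Magic Tricks"]
after_first_term = live_search(titles, "magic")           # two hits
after_second_term = live_search(titles, "magic lakers")   # narrowed to one
```

In a deployed system the per-keystroke call would go to an indexed search backend rather than a linear scan, but the narrowing behavior is the same.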
  • the system may generate and/or display content which corresponds with a community channel or channels or a recommended channel or channels, respectively.
  • the system may display information which corresponds with one or more of the user's channels for the user's selection. Accordingly, in response to the user's selection, the system may then render a Ul which may correspond with a selected channel (e.g., Ul 100).
  • FIG. 6A shows a Ul 600A in accordance with an embodiment of the present system.
  • the Ul 600A may include a VA portion 610 which may be similar to the VA portion 110; however, the VA portion 610 may correspond with a channel such as, for example, a "Lakers vs. Magic" channel that was built by the user and/or may include graphic attribute values 636 which may correspond with a build list of the present channel.
  • upon selection of an AVI value such as, for example, "Concept (4): Winning", the system may generate an expanded information set associated with one or more AVI values as will be shown with reference to VA portion 610B of FIG. 6B, which shows a Ul 600B in accordance with an embodiment of the present system.
  • the Ul 600B is similar to the Ul 600A; however, the VA portion 610B has been expanded to show detailed information which corresponds with the VA portion 610.
  • AVI values 636 and 638 are expanded to 636B and 638B, respectively.
  • the user may also change/edit AVI values by selecting, deselecting, adding and/or deleting attribute values of the attribute information in VA portions 610 and/or 610B so as to customize a channel.
  • the system may then use the changed/edited attribute information to dynamically select corresponding content to be displayed in a corresponding channel.
  • the portions indicated as "value" in portion 638B are provided to indicate that more than one attribute may be provided within a given group, such as "concept", although more than one need not be provided.
  • a grouping of attributes may be removed from the Ul.
  • FIG. 7A shows a Ul 700A in accordance with an embodiment of the present system.
  • a user may select one or more AVI values for the corresponding channel and add these AVI values to an attribute set for the current channel.
  • the user may select the "court" AVI value 751 using any suitable method (e.g., double clicking, etc.).
  • the system may then modify the AVI information to reflect the user's selection. For example, the system may fade the selected AVI value from green to black for a value that has been added to the attribute set and may also move the one or more selected values to a top portion of the attribute list so that the selected one or more AVI values may be grouped with the other selected values (e.g., see, the highlighted values in the corresponding figure).
  • the user may also adjust corresponding weighting for an attribute value such as increasing or decreasing a weighting to reflect a corresponding increased or decreased interest for the attribute to the user.
  • the user may also deselect attribute values and thereafter the system may remove the deselected values from an attribute set and may update the attribute information accordingly.
  • the attribute set comprises selected AVI values for a channel.
  • FIG. 7B shows a Ul 700B in accordance with an embodiment of the present system.
  • the Ul 700B may be similar to the Ul 700A; however, selection menu items 744 may be provided for a user to perform one or more desired actions on AVI information 740 in a VA portion 710. Accordingly, a user may go back to a previous VA portion which may include previous AVI values by selecting a "back" menu item; undo current attribute values (e.g., selections/deselections) by selecting an "undo" menu item; and/or save current AVI values (e.g., including selected/deselected attribute values) in a current VA area 710 by selecting a "save" menu item.
  • FIG. 7C shows a Ul 700C in accordance with an embodiment of the present system.
  • the Ul 700C may be similar to the Uls 700A and 700B; however, the Ul 700C illustrates a user entering a search query in the query box 742 of a VA portion 744.
  • the system may provide corresponding results such as results 750 (e.g., see, 750A and 750B) for a user's consideration and/or selection.
  • the search may be limited to current AVI values in the current VA portion 744 or may include other data which may be retrieved by the system (e.g., an Internet search).
  • the results of the query may have a suitable order (e.g., alphabetic, title, genre, most searched, etc.) and may be placed in a suitable area such as, for example, the VA portion 744, as desired.
  • FIG. 8 shows a Ul 800 in accordance with an embodiment of the present system.
  • the Ul 800 may be similar to the Ul 700A; however, the Ul 800 illustrates a title editing process to change a title 846 of a corresponding channel.
  • the user may edit the title at any time.
  • While the title 846 of a channel is shown in a VA portion 850, it may be rendered at any suitable location, as desired.
  • "Save" and "Undo" menu items 852 and 848 may be provided for a user to undo or save selections, respectively.
  • a save as menu item may be provided to save the title and associated information as a new channel. Accordingly, when a user selects the save as menu item, the system may save the title as a new title with the associated AVI.
  • FIG. 9A shows a Ul 900A in accordance with an embodiment of the present system.
  • the Ul 900A illustrates a build portion 908A including a plurality of build objects that were added (i.e., inserted) into the build portion 908A.
  • the build portion 908A is shown in an expanded state which may be generated by the system when the user selects the build portion 908A, for example, by scrolling over the build portion 908A for a predetermined amount of time.
  • the system may resize and collapse the build portion 908A upon detecting a user request such as may occur when a user no longer scrolls over the build portion 908A and/or when a user selects a minimization element (e.g., see, downward arrow 971).
  • the system may represent each build object using a suitable representation such as, for example, a thumbnail 970 indicative of one or more portions of the associated content.
  • the system may render a window and/or display action selections such as, for example, a play selection, a move selection, etc., that the user may select.
  • the system may retrieve and render the content for the user's convenience.
  • FIG. 9B shows a Ul 900B in accordance with an embodiment of the present system.
  • in the Ul 900B, a thumbnail corresponding to selected content has been selected (e.g., see, highlighted thumbnail 956) by the system and/or the user in a build portion 908B.
  • AVI information 958 which is associated with the selected content 956 of the build area 908B is distinguished (e.g., using highlights, colors, etc.) for a user's convenience.
  • a user may conveniently determine and/or select/deselect one or more AVI values associated with the selected content so as to build a customized channel.
  • the user may also select another object in the build portion 908B and view AVI values which are associated with this object.
  • the user may then select some or all of the AVI values associated with one or more selected objects so as to customize a channel.
  • FIG. 9C shows a Ul 900C in accordance with an embodiment of the present system.
  • a VA area 910C includes AVI values with dark highlights which correspond with attributes that have been selected by a user and stored in association with a corresponding build area 908C of a current channel (e.g., "Lakers vs. Magic - Mine: Edit Concept (12)").
  • the VA area 910C may also include a query box 960 for receiving queries from a user and/or providing recommendations. In response to a query, the system may return query results which are limited to the current VA area, the current channel, multiple channels (e.g., selected channels), or one or more content databases or parts thereof.
  • FIG. 9D shows a Ul 900D in accordance with an embodiment of the present system.
  • the Ul 900D is similar to the Ul 900A; however, a build portion 908D is shown in a collapsed state and may display build objects such as, for example, thumbnails 970, in a sequential order. A number of build objects within the build portion 908D may be indicated by icon 972.
  • upon selection of a build object such as, for example, a thumbnail 970, the system may render a representation 970 which is associated with a currently displayed build object and may provide selections such as, a play selection 976 to play content corresponding with the build object.
  • Heat information 920 may also be rendered for the user's convenience.
  • FIG. 10 shows a flow diagram that illustrates a process 1000 in accordance with an embodiment of the present system.
  • the process 1000 may be performed using one or more computers communicating over a network.
  • the process 1000 can include one or more of the following acts. Further, one or more of these acts may be combined and/or separated into sub-acts, if desired.
  • the process may start during act 1001 and then proceed to act 1003.
  • the process may obtain channel information for a current channel.
  • the channel information may correspond with, for example, AVI information provided by a user.
  • the AVI for a given channel may be determined by a user selecting, such as by dragging and dropping one or more content portions from amongst a plurality of content portions, into a channel build area.
  • the present system enables attribute selection by selection of content portions that have associated attributes. For further granularity, a user may select only part of a content portion and only attributes associated with the part of the content portion are selected as attributes for a user channel. In this way, a group of attributes that are associated with a content portion or the selected part of the content portion may be selected to form a query that is utilized for a user channel.
  • the present system provides an intuitive system for building user channels by selection of content portions. A complex set of attributes may be readily added and edited to build one or more user channels of content. The user channels may be provided within a Ul such as discussed herein.
  • the attributes may include heat information relating to the content portion or the selected part of the content portion and/or may include a reduced set of reaction indicators such as disclosed in the '893 application, such as one or more associated emoticons.
  • the process may continue to act 1005.
  • the process may form a query in accordance with the channel information. Accordingly, the query may correspond with previously selected AVI.
  • the process may continue to act 1007. It is also envisioned that the process may determine whether new content is available and if it is determined that new content is available, repeat act 1005.
  • the attributes associated with a channel are utilized to search the new content to identify new content that corresponds to the user channel. In a case wherein new content is identified, the new content may be added to the channel and provided within the Ul as the user channel.
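The push step above — testing newly available content against each channel's attribute set and adding matches to the channel — could be sketched as follows; the overlap threshold and all names are assumptions for illustration:

```python
def matches_channel(content_attrs, channel_attrs, min_overlap=1):
    """New content belongs to a channel when it shares enough attributes."""
    return len(set(content_attrs) & set(channel_attrs)) >= min_overlap

def push_new_content(new_items, channel_attrs):
    """Filter newly available items down to those that fit the user channel."""
    return [item for item in new_items
            if matches_channel(item["attributes"], channel_attrs)]

channel = {"action", "soccer"}
fresh = [
    {"title": "Cup Final Highlights", "attributes": {"soccer", "happy"}},
    {"title": "Cooking Show", "attributes": {"food"}},
]
pushed = push_new_content(fresh, channel)
```

Because only the channel's attribute query is stored, this check can be re-run whenever new content becomes available, matching the flow described in acts 1005-1007.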
  • results of the query may be rendered on a Ul.
  • the process may continue to act 1009.
  • the process may render the requested content.
  • the process may continue to act 1013.
  • the process may update user profile information.
  • the user profile information may include user input information that may be related to attribute information, such as attribute weighting, etc. Further, the user profile information may include other information such as statistical information about the user, time, geolocation, network, etc., as desired.
  • the process may continue to act 1015.
  • attribute matrixes may be maintained for videos, user interests, etc. and also may be maintained within another set of matrixes mapping back to video attribute matrixes.
  • user interests as provided by selection of video content portions may be simply matched against available video content to identify content for the user channel.
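Matching a user interest sequence against video attribute matrixes is, in effect, a nearest-vector problem. One common choice — assumed here, not prescribed by the disclosure — is cosine similarity over weighted attribute vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse attribute->weight vectors."""
    dot = sum(w * v.get(a, 0.0) for a, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

interest = {"action": 10.0, "soccer": 5.0}     # user interest sequence
videos = {
    "v1": {"action": 1.0, "soccer": 1.0},      # video attribute vectors
    "v2": {"drama": 1.0},
}
best = max(videos, key=lambda vid: cosine(interest, videos[vid]))
```

The attribute weights discussed earlier feed directly into these vectors, so increasing a weight shifts which videos rank highest for the channel.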
  • a user may update genome information such as, for example, interest genome sequence information for a given channel, in accordance with the updated user profile information, updated attributes, etc. After completing act 1015, the process may return to act 1003.
  • the process may determine whether a "build list" edit is requested.
  • a build list edit may be requested when, for example, a user adds, deletes, or otherwise changes a build portion. Accordingly, if it is determined that the build list edit is requested, the process may continue to act 1019. However if it is determined that the build list edit is not requested, the process may continue to act 1023.
  • During act 1019, the user may edit and update the build list in accordance with the user's changes. After completing act 1019, the process may continue to act 1021. During act 1021, the process may update channel information in accordance with the updated build list. After completing act 1021, the process may repeat act 1003.
  • the process may determine whether to edit channel information. This may occur when a user edits and saves channel information in, for example, a channel portion. Accordingly, if the process determines to edit channel information, the process may continue to act 1025. However, if the process determines not to edit channel information, the process may repeat act 1009.
  • the process may update and store the channel information in accordance with user selections and save the edited channel information. After completing act 1025, the process may repeat act 1003.
  • FIG. 11 shows a block diagram of a system 1100 according to an embodiment of the present system.
  • the system 1100 may include one or more of a source of video content 1102, a content aggregator 1106, a genome analyzer 1108, a genome memory portion 1110 including one or more video genome sequences 1112 and interest genome sequences 1114.
  • the system further includes a recommendation portion 1116, a profile processor 1118, a vector portion 1120, a Ul 1122, and a selection portion 1124.
  • the video content 1102 may include video content available from any suitable source such as a local memory, a remote memory, distributed memories, a SAN, proprietary memories (e.g., belonging to a network), etc.
  • the content may include video content, audio/video content, genome information, and/or other forms of content and associated genome information.
  • the content aggregator 1106 may function to aggregate content from the content sources 1102 and provide this information to the genome analyzer 1108 in a desired manner such as in a serial and/or parallel fashion.
  • the genome analyzer 1108 may include a digital signal processing (DSP) portion 1111.
  • The DSP portion 1111 may use DSP algorithms to analyze the content information and generate genome information, such as attributes, etc., based on content from the content aggregator 1106.
  • the DSP portion 1111 may analyze video information (e.g., video frames, snapshots, etc.) and generate color and/or shape signatures for the purposes of matching content to known content and thereby facilitating association of attributes from known content with new content that is not yet known. Accordingly, the system may match videos based upon, for example, snapshots and/or shape signatures.
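A coarse color signature of the kind mentioned above can be sketched as a quantized RGB histogram compared with an L1 distance; the bin count and distance metric are illustrative assumptions, not the disclosed DSP algorithms:

```python
def color_signature(pixels, bins=4):
    """Quantize (r, g, b) pixels into a normalized coarse color histogram."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins + (g * bins // 256)) * bins + (b * bins // 256)
        hist[idx] += 1
    total = sum(hist) or 1
    return [count / total for count in hist]

def signature_distance(sig_a, sig_b):
    """L1 distance between signatures; 0.0 means identical histograms."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

frame = [(255, 0, 0), (250, 10, 5), (0, 0, 255)]  # mostly red, some blue
same = signature_distance(color_signature(frame), color_signature(frame))
```

A small distance between a new snapshot's signature and a known video's signature would then justify copying the known video's attributes onto the new content.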
  • the analysis information may include, for example, genome information (GI) such as a video genome sequence (VGS), an interest genome sequence (IGS), etc., which may be formed into a desired array and transmitted to the genome memory portion 1110 for storage and/or processing.
  • the IGS may correspond with input from one or more users such as, for example, user annotations, etc.
  • received content may include attributes, for example, as provided by the content provider. Content may also be manually annotated by a user, as may be readily appreciated.
  • the genome memory portion 1110 may include any suitable memory and may store the video content attribute information for later use.
  • the video genome sequences 1112 may include for example, attribute information such as attribute type (e.g., emotion, concept, speed, color, heat, etc.) and one or more attribute values (e.g., happy, soccer, fast, green, etc.).
  • the interest genome sequence 1114 may include information such as, for example, personal preferences for each attribute type, a weight of attention for each attribute type, etc. which may be determined by the system and/or the user.
  • the analysis information may include GI and IGS information.
  • the genome memory portion 1110 may include one or more memories and may be local and/or remote from a user device.
  • the genome memory portion 1110 may be accessible through a network, such as the Internet, or may be local to a device such as a user's device (UD).
  • content may be classified using a multi- dimensional attribute vector represented in a multidimensional space.
  • a discrete classification for each genome attribute type may be generated and/or provided as the video genome sequences 1112.
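The "discrete classification for each genome attribute type" could be produced by thresholding each dimension of the multi-dimensional attribute vector. The thresholds and labels below are invented for illustration, since the disclosure does not fix them:

```python
def discretize(attribute_vector, thresholds):
    """Map each continuous genome dimension onto a discrete class label.

    thresholds: {attr_type: [(upper_bound, label), ...]} in ascending order.
    """
    classes = {}
    for attr_type, score in attribute_vector.items():
        for upper_bound, label in thresholds[attr_type]:
            if score <= upper_bound:
                classes[attr_type] = label
                break
    return classes

thresholds = {
    "speed": [(0.33, "slow"), (0.66, "medium"), (1.0, "fast")],
    "mood":  [(0.5, "sad"), (1.0, "happy")],
}
classes = discretize({"speed": 0.8, "mood": 0.2}, thresholds)
```

The resulting labels per attribute type are what a video genome sequence 1112 would store alongside, or instead of, the raw vector coordinates.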
  • the present system may also receive an assignment of emoticons, change/edit attributes, etc., in accordance with an embodiment.
  • the present system may also receive annotations and/or annotation information (e.g., a change/assignment of weighting, etc.) from one or more users and base queries and/or results of the queries on the annotations.
  • the video genome analyzer 1108 may combine textual metadata and signal processing results (e.g., DSP results) and normalize this information so that it may be used in accordance with the present system. Accordingly, the system may provide an editor for receiving manual annotation (e.g., speed, style, etc.) information and may process information from a plurality of users to annotate content with, for example, corresponding emotion information.
  • the system may also include functionality to perform automated annotation extraction so as to determine concept (e.g., via automated concept extraction, for example by doing semantic processing against textual input) and/or color, shape, etc., of video content using signal processing methods as described further herein.
  • the present system provides a plurality of video content to the user as shown by the vector portion 1120.
  • the present system further provides the Ul 1122 to facilitate a review of the plurality of video content and to facilitate a selection 1124 of one or more of the plurality of video content for purposes of viewing/creating/editing a video content channel.
  • Attributes of selected video content are received by the profiling processor 1118 for purposes of adjusting a profile for a user channel.
  • the interest genome sequences 1114 for the user channel are stored in the genome memory portion 1110 by the profiling processor 1118.
  • the plurality of video content in response to the adjusted profile, is queried to find video content that corresponds to the adjusted interest genome sequences for a channel and the query results are provided to the recommendation portion 1 1 16 which provides the channel to the Ul 1 122.
  • This enables a user of the Ul 1 122 to further adjust the interest genome sequences for the channel, for example, if the provided video content for the channel is not as desired.
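The adjust-and-requery loop described above might be sketched as follows, with interest genome sequences modeled as simple keyword-weight dictionaries. The update rule (an exponential moving average) and all names are illustrative assumptions, not the patented implementation:

```python
def adjust_profile(profile, selected_attrs, rate=0.5):
    """Move each channel-profile weight toward the attributes of the
    video content the user just selected."""
    keys = set(profile) | set(selected_attrs)
    return {k: (1 - rate) * profile.get(k, 0.0) + rate * selected_attrs.get(k, 0.0)
            for k in keys}

def rank_content(catalog, profile):
    """Order available content by dot-product similarity to the profile."""
    def score(attrs):
        return sum(profile.get(k, 0.0) * v for k, v in attrs.items())
    return sorted(catalog, key=lambda item: score(item[1]), reverse=True)

# The user selects a comedy clip, so the profile shifts toward comedy...
profile = adjust_profile({"action": 0.2}, {"comedy": 1.0})
catalog = [("clip-a", {"comedy": 0.9}), ("clip-b", {"action": 0.8, "comedy": 0.1})]
# ...and on the next query, comedy-heavy clips rank first.
ranked = rank_content(catalog, profile)
```

Repeating these two steps each time the user accepts or rejects a recommendation gives the feedback loop the bullet describes.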
  • FIG. 12 shows a system 1200 in accordance with a further embodiment of the present system.
  • the system 1200 includes a user device 1290 that has a processor 1210 operationally coupled to a memory 1220, a rendering device 1230, such as one or more of a display, speaker, etc., a user input device 1270 and a content server 1280 operationally coupled to the user device 1290.
  • the memory 1220 may be any type of device for storing application data as well as other data, such as video content, video content attributes, (e.g., video genome sequences, interest genome sequences), etc.
  • the application data and other data are received by the processor 1210 for configuring the processor 1210 to perform operation acts in accordance with the present system.
  • the operation acts include controlling at least one of the rendering device 1230 to render one or more of the GUIs and/or to render content.
  • the user input 1270 may include a keyboard, mouse, trackball or other devices, including touch sensitive displays, which may be stand alone or be a part of a system, such as part of a personal computer, personal digital assistant, mobile phone, converged device, or other rendering device for communicating with the processor 1210 via any type of link, such as a wired or wireless link.
  • the user input device 1270 is operable for interacting with the processor 1210 including interaction within a paradigm of a GUI and/or other elements of the present system, such as to enable web browsing, video content selection, such as provided by left and right clicking on a device, a mouse-over, pop-up menu, drag and drop operation, etc., such as provided by user interaction with a computer mouse, etc., as may be readily appreciated by a person of ordinary skill in the art.
  • the rendering device 1230 may operate as a touch sensitive display for communicating with the processor 1210 (e.g., providing selection of a web browser, a Uniform Resource Locator (URL), portions of web pages, etc.) and thereby, the rendering device 1230 may also operate as a user input device. In this way, a user may interact with the processor 1210 including interaction within a paradigm of a UI, such as to support content selection, attribute editing, etc.
  • the user device 1290, the processor 1210, memory 1220, rendering device 1230 and/or user input device 1270 may all or partly be portions of a computer system or other device, and/or be embedded in a portable device, such as a mobile station (MS), mobile telephone, personal computer (PC), personal digital assistant (PDA), converged device such as a smart telephone, etc.
  • the user device 1290, corresponding user interfaces and other portions of the system 1200 are provided for browsing content, selecting content, creating/editing a user channel, etc., and for transferring the content and interest genome sequences, etc., between the user device 1290 and the content server 1280, which may operate as the genome memory portion 1110.
  • the methods of the present system are particularly suited to be carried out by a computer software program, such program containing modules corresponding to one or more of the individual steps or acts described and/or envisioned by the present system.
  • Such program may of course be embodied in a computer-readable medium, such as an integrated chip, a peripheral device or memory, such as the memory 1220 or other memory coupled to the processor 1210.
  • the computer-readable medium and/or memory 1220 may be any recordable medium (e.g., RAM, ROM, removable memory, CD-ROM, hard drives, DVD, floppy disks or memory cards) or may be a transmission medium utilizing one or more of radio frequency (RF) coupling, Bluetooth coupling, infrared coupling etc. Any medium known or developed that can store and/or transmit information suitable for use with a computer system may be used as the computer-readable medium and/or memory 1220.
  • Additional memories may also be used including as a portion of the content server 1280.
  • the computer-readable medium, the memory 1220, and/or any other memories may be long-term, short-term, or a combination of long-term and short-term memories. These memories configure processor 1210 to implement the methods, operational acts, and functions disclosed herein.
  • the operation acts may include controlling the rendering device 1230 to render elements in a form of a UI and/or controlling the rendering device 1230 to render other information in accordance with the present system.
  • the memories may be distributed (e.g., such as a portion of the content server 1280) or local and the processor 1210, where additional processors may be provided, may also be distributed or may be singular.
  • the memories may be implemented as electrical, magnetic or optical memory, or any combination of these or other types of storage devices.
  • the term "memory" should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by a processor. With this definition, information on a network is still within memory 1220, for instance, because the processor 1210 may retrieve the information from the network for operation in accordance with the present system.
  • a portion of the memory as understood herein may reside as a portion of the content server 1280.
  • the content server 1280 should be understood to include further network connections to other devices, systems (e.g., servers), etc. While not shown for purposes of simplifying the following description, it is readily appreciated that the content server 1280 may include processors, memories, displays and user inputs similar as shown for the user device 1290, as well as other networked servers, such as may host web sites, etc. Accordingly, while the description contained herein focuses on details of interaction within components of the user devices 1290, it should be understood to similarly apply to interactions of components of the content server 1280.
  • the processor 1210 is capable of providing control signals and/or performing operations in response to input signals from the user input device 1270 and executing instructions stored in the memory 1220.
  • the processor 1210 may be an application-specific or general-use integrated circuit(s). Further, the processor 1210 may be a dedicated processor for performing in accordance with the present system or may be a general-purpose processor wherein only one of many functions operates for performing in accordance with the present system.
  • the processor 1210 may operate utilizing a program portion, multiple program segments, or may be a hardware device utilizing a dedicated or multi-purpose integrated circuit.
  • a user may interact with the processor 1210 including interaction within a paradigm of a UI, such as to support content selection, input of reaction indications, comments, etc.
  • the user device 1290, the processor 1210, memory 1220, rendering device 1230 and/or user input device 1270 may all or partly be portions of a computer system or other device, and/or be embedded in a portable device, such as a mobile telephone, personal computer (PC), personal digital assistant (PDA), converged device such as a smart telephone, etc.
  • the user device 1290, corresponding user interfaces and other portions of the system 1200 are provided for rendering tasks, browsing content, selecting content, creating/editing user channels, etc., and for transferring the information related to the tasks, content and reaction indications, tallied reaction indications, etc., between the user device 1290 and the content server 1280.
  • the present system may dynamically determine content to include in a channel and/or render associated information.
  • the present system may incorporate wired and/or wireless communication methods and may provide a user with a personalized environment. Further benefits of the present system include low cost and scalability. Moreover, community collaboration may be used to recommend content and/or channels to a user so as to eliminate cold start issues.
  • the present system may be provided in a form of a content rendering device, such as a MS.
  • a further embodiment of the present system may provide a UI that operates as a browser extension, such as a rendered browser toolbar, that can build a content rendering playlist, such as a video playlist.
  • the present system may push predetermined content while a user is browsing the Internet.
  • any of the disclosed elements may be comprised of hardware portions (e.g., including discrete and integrated electronic circuitry), software portions (e.g., computer programming), and any combination thereof; hardware portions may be comprised of one or both of analog and digital portions;
  • any of the disclosed devices or portions thereof may be combined together or separated into further portions unless specifically stated otherwise;
  • the term "plurality of" an element includes two or more of the claimed element, and does not imply any particular range of number of elements; that is, a plurality of elements may be as few as two elements, and may include an immeasurable number of elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system, method, device and interface for classifying video content and providing selected video content on a user interface (UI). The method may include one or more acts of: providing a representation of video content items to a user within the UI of the user device; enabling a selection of one or more video content portions of a plurality of available video content portions, wherein each of the video content portions includes a plurality of attributes; collecting the plurality of attributes for each selected video content portion, the attribute information comprising information related to a video annotation sequence and a user interest sequence; forming channel information based upon the collected attributes; forming a query of available video content portions based upon the channel information; querying video content information in accordance with the channel information; and rendering within the UI of the device an indication of video content portions corresponding with results of the query.

Description

CONTENT MANAGEMENT SYSTEM AND METHOD OF OPERATION THEREOF
FIELD OF THE PRESENT SYSTEM:
The present system relates to at least one of a system, method, user interface, and apparatus which can dynamically select content in accordance with one or more filtering criteria and, more particularly, to a video content distribution system which can filter and select video content or video channels to be output on a UI in accordance with one or more filtering criteria which may be selected by the system and/or one or more users.
BACKGROUND OF THE PRESENT SYSTEM:
Content, such as digital audio visual content, is pervasive in today's society. Parties are presented with a vast array of sources from which content may be selected, including optical media and network-provided content, such as may be available over the Internet. A major problem exists in that, with the vast availability of video content, such as audio visual content, there are a limited number of ways in which the video content is classified and recommended to a user.
One system which has been provided is a genre classification system in which, for example, audio visual content is classified in broad categories, such as drama, comedy, action, etc. While this system does provide some insight into what may be expected while watching the audio visual content, the typical classification is broadly applied to an entire audio visual presentation and as such, does not provide much insight into different segments of the audio visual content. For example, while in general, the entire audio visual presentation may be generally classified as belonging in an action genre, different portions of the audio visual content may be related to comedy, drama, etc. Accordingly, the broad classification of the audio visual content ignores these sub-genres that represent portions of the content and thereby, may fail to attract the attention of a party that may have an interest in these sub-genres.
Recommendation systems have been provided that utilize a broader semantic description that may be provided by the producers of the audio visual content and/or may be provided by an analysis of the portions of the audio visual content directly. These systems typically compare the semantic description to a user profile to identify particular audio visual content that may be of interest. Other systems, such as U.S. Patent No. 6,173,287 to Eberman, incorporated herein as if set out in its entirety, utilize metadata to automatically and semantically annotate different portions of the audio visual content to enable retrieval of portions of the audio visual content that may be of interest. Problems exist with this system in that the analysis of audio and visual portions of the audio visual content is very complex and oftentimes produces less than satisfactory results. Generally, due to wide differences in terms applied to the semantic annotation, search results tend to be erratic depending on the particular terms utilized for annotation and search. For example, a sequence relating to and annotated with "automobile" may not be retrieved by a search term of "car" since searches tend to be literal.
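The "automobile" versus "car" failure just described can be mitigated by expanding the query term with a synonym set before matching it against annotations. The sketch below is purely illustrative, with an assumed synonym table; a production system might draw equivalences from a thesaurus or ontology instead:

```python
# Assumed synonym table; terms in each set are treated as equivalent.
SYNONYMS = {
    "car": {"car", "automobile", "auto"},
    "automobile": {"car", "automobile", "auto"},
}

def search_segments(annotated_segments, term):
    """Return segments whose annotation tags intersect the expanded query."""
    query = SYNONYMS.get(term, {term})
    return [name for name, tags in annotated_segments if query & set(tags)]

segments = [("chase scene", ["automobile", "night"]),
            ("picnic", ["park", "day"])]
# A literal search for "car" would miss the chase scene; expansion finds it.
hits = search_segments(segments, "car")
```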
Other systems have provided tools to annotate portions of audio visual content using elements such as timestamps, closed-captioned text, editor-supplied "most-important" portion indications, etc., but these systems have all suffered from the vast variety of descriptive terms associated with content (e.g., audio, audio visual, text, etc.) and also utilized for content retrieval. The music genome project has attempted to classify audio content by identifying over 400 attributes, termed genes, that may be applied to describe an entire song. A given number of genes represented as a vector are utilized for each song. Given a vector for a song utilized as a searching seed, similar songs are identified using a distance function from the seed song. While this system simplifies the elements (genes) that may be used to identify a song, the system still utilizes a complex classification system associated with songs that makes it impossible for users to participate in the classification. It is for this reason that the system utilizes professional technicians to apply genes to each song. Further, this system also applies genes to the entire song and thereby provides no ability to identify different portions of the song that may diverge from the general classification applied to the entire song. Further, although systems such as Pandora™ and Clerkdogs™ can recommend content to a user, these systems require professional technicians to apply genes to each song and therefore suffer from high operating costs and lack scalability. Further, these systems do not allow a user to fully customize their search criteria and may suffer from cold starts which can occur before professional technicians apply genes to a song or before the system is trained on a user.
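The seed-based, distance-function retrieval described for the music genome approach can be sketched in a few lines. The three-gene vectors and song names below are invented for illustration; real gene vectors have hundreds of dimensions:

```python
import math

def gene_distance(a, b):
    """Euclidean distance between two equal-length gene vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(seed, library, k=2):
    """Return the k songs whose gene vectors lie closest to the seed."""
    return sorted(library, key=lambda item: gene_distance(seed, item[1]))[:k]

library = [
    ("song-a", [0.9, 0.1, 0.4]),
    ("song-b", [0.1, 0.8, 0.9]),
    ("song-c", [0.8, 0.2, 0.5]),
]
# song-a's genes lie nearest the seed vector, song-c's next; song-b is far off
hits = most_similar([0.9, 0.15, 0.45], library)
```

Note that, as the paragraph above observes, a single vector per song cannot capture portions of the song that diverge from the whole-song classification; per-portion vectors would be needed for that.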
None of these prior systems provides a system, method, user interface and/or device to build a video content channel in a simple and intuitive manner.
SUMMARY OF THE PRESENT SYSTEM:
It is an object of the present system to overcome disadvantages and/or make improvements in the prior art.
According to the present system, there is disclosed a method of providing content on a user interface (UI). The method may include one or more acts of: populating a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions; determining attribute information for each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence; forming channel information based upon the attribute information; forming a query based upon the channel information; querying content information in accordance with the channel information; rendering content portions corresponding with results of the query; selecting a content portion; and rendering at least part of the selected content portion on the UI.
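One way to read this sequence of acts is as a small pipeline: average the attribute vectors of the build-list objects into channel information, then query the catalog against it. Everything below (names, the averaging rule, the threshold, the sample data) is a hypothetical sketch under assumed data structures, not the claimed implementation:

```python
def form_channel_info(build_list, attribute_db):
    """Average the attribute vectors of every object in the build list."""
    vectors = [attribute_db[obj] for obj in build_list]
    keys = {k for vec in vectors for k in vec}
    return {k: sum(vec.get(k, 0.0) for vec in vectors) / len(vectors) for k in keys}

def query_content(catalog, channel, threshold=0.3):
    """Return content whose attribute overlap with the channel profile
    (dot product) clears a threshold."""
    def score(attrs):
        return sum(channel.get(k, 0.0) * v for k, v in attrs.items())
    return [name for name, attrs in catalog if score(attrs) >= threshold]

attribute_db = {"obj-1": {"drama": 1.0}, "obj-2": {"drama": 0.6, "comedy": 0.4}}
channel = form_channel_info(["obj-1", "obj-2"], attribute_db)  # drama-leaning profile
catalog = [("clip-x", {"drama": 0.9}), ("clip-y", {"comedy": 0.2})]
results = query_content(catalog, channel)
```

The remaining acts of the claim (rendering the results and the selected portion) are UI operations that fall outside this sketch.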
The method may further include acts of: selecting certain attribute information of the attribute information; and/or updating the channel information in accordance with the selected certain attribute information of the attribute information.
The method may further include an act of storing the query with corresponding channel information. The method may further include an act of rendering the plurality of content portions before the act of populating the build list. According to the system, the method may also include an act of collecting emotion information in accordance with the content. The method may further include an act of creating an attribute vector based upon the user emotion information. According to the present system, there is also disclosed a system which may provide content on a user interface (Ul). The system may include a controller which: populates a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions, determines attribute information for 5 each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence, forms channel information based upon the attribute information, forms a query based upon the channel information, and queries content information in accordance with the channel information. The system may further include a user interface which may: render content l u portions corresponding with results of the query, and may render at least part of a selected content portion on the Ul. The controller may also receive one or more selections, from the user, corresponding with certain attribute information of the attribute information. Then, the controller may update the channel information in accordance with the selected certain attribute information.
The system may also include a memory to store the query with corresponding channel information. According to the system, the controller may render the plurality of content portions on the UI before the controller populates the build list. Further, the system may include a user input device to receive, from the user, emotion information which corresponds with the content. According to the system, the controller may create an attribute vector based upon the user emotion information.
There is also disclosed a computer program stored on a computer readable memory medium, the computer program configured to provide a user interface (UI) to accomplish a task, the computer program may include a program portion configured to: populate a build list comprising one or more objects each of which corresponds with a content portion of a plurality of content portions; determine attribute information for each object in the build list, the attribute information comprising information related to a video annotation sequence and a user interest sequence; form channel information based upon the attribute information; form a query based upon the channel information; query content information in accordance with the channel information; render content portions corresponding with results of the query; and/or render at least part of a selected content portion on the UI.
BRIEF DESCRIPTION OF THE DRAWINGS:
The invention is explained in further detail, and by way of example, with reference to the accompanying drawings wherein:
FIG. 1 shows a UI 100 in accordance with an embodiment of the present system;
FIG. 2 shows a UI in accordance with an embodiment of the present system;
FIG. 3 shows a UI in accordance with an embodiment of the present system;
FIG. 4 shows a UI in accordance with an embodiment of the present system;
FIG. 5 shows a UI in accordance with an embodiment of the present system;
FIG. 6A shows a UI in accordance with an embodiment of the present system;
FIG. 6B shows a UI in accordance with an embodiment of the present system;
FIG. 7A shows a UI in accordance with an embodiment of the present system;
FIG. 7B shows a UI in accordance with an embodiment of the present system;
FIG. 7C shows a UI in accordance with an embodiment of the present system;
FIG. 8 shows a UI in accordance with an embodiment of the present system;
FIG. 9A shows a UI in accordance with an embodiment of the present system;
FIG. 9B shows a UI in accordance with an embodiment of the present system;
FIG. 9C shows a UI in accordance with an embodiment of the present system;
FIG. 9D shows a UI in accordance with an embodiment of the present system;
FIG. 10 shows a flow diagram that illustrates a process in accordance with an embodiment of the present system;
FIG. 11 shows a block diagram of a communication system 1100 according to an embodiment of the present system; and
FIG. 12 shows a system in accordance with a further embodiment of the present system.
DETAILED DESCRIPTION OF THE PRESENT SYSTEM: The following are descriptions of illustrative embodiments that when taken in conjunction with the following drawings will demonstrate the above noted features and advantages, as well as further ones. In the following description, for purposes of explanation rather than limitation, illustrative details are set forth such as architecture, interfaces, techniques, element attributes, etc. However, it will be apparent to those of ordinary skill in the art that other embodiments that depart from these details would still be understood to be within the scope of the appended claims. Moreover, for the purpose of clarity, detailed descriptions of well known devices, circuits, tools, techniques and methods are omitted so as not to obscure the description of the present system. It should be expressly understood that the drawings are included for illustrative purposes and do not represent the scope of the present system. In the accompanying drawings, like reference numbers in different drawings may designate similar elements.
For purposes of simplifying a description of the present system, the terms "operatively coupled", "coupled" and formatives thereof as utilized herein refer to a connection between devices and/or portions thereof that enables operation in accordance with the present system. For example, an operative coupling may include one or more of a wired connection and/or a wireless connection between two or more devices that enables a one and/or two-way communication path between the devices and/or portions thereof. For example, an operative coupling may include a wired and/or wireless coupling to enable communication between a content server and one or more user devices. A further operative coupling, in accordance with the present system may include one or more couplings between two or more user devices, such as via a network source, such as the content server, in accordance with an embodiment of the present system.
The term rendering and formatives thereof as utilized herein refer to providing content, such as digital media, such that it may be perceived by at least one user sense, such as a sense of sight and/or a sense of hearing. For example, the present system may render a user interface on a display device so that it may be seen and interacted with by a user. Further, the present system may render audio visual content on both of a device that renders audible output (e.g., a speaker, such as a loudspeaker) and a device that renders visual output (e.g., a display). To simplify the following discussion, the term content and formatives thereof will be utilized and should be understood to include audio content, visual content, audio visual content, textual content and/or other content types, unless a particular content type is specifically intended, as may be readily appreciated.
The system, device(s), method, user interface, etc., described herein address problems in prior art systems. In accordance with an embodiment of the present system, a system, method, device, computer program, and interface are provided for rendering a UI for a user's convenience. The UI may include one or more applications which are necessary to complete an assigned task. Further, the present system may collect other statistics related to the user and/or user device (e.g., a MS) in accordance with the present system, such as a relative time of an action, geo-location, position, acceleration, speed, azimuth, network, detected content item, etc.
The user interaction with and manipulation of the computer environment is achieved using any of a variety of types of human-processor interface devices that are operationally coupled to the processor controlling the displayed environment. A common interface device for a user interface (UI), such as a graphical user interface (GUI), is a mouse, trackball, keyboard, touch-sensitive display, etc. For example, a mouse may be moved by a user in a planar workspace to move a visual object, such as a cursor, depicted on a two-dimensional display surface in a direct mapping between the position of the user manipulation and the depicted position of the cursor. This is typically known as position control, where the motion of the depicted object directly correlates to motion of the user manipulation.
An example of such a GUI in accordance with an embodiment of the present system is a GUI that may be provided by a computer program that may be user invoked, such as to enable a user to select and/or classify/annotate content as is described in U.S. Application No. 61/099,893 entitled "Content Classification Utilizing A Reduced Description Palette To Simplify Content Analysis," filed on September 24, 2008 (hereinafter the '893 application) incorporated herein as if set forth in its entirety. In accordance with a further embodiment, the user may be enabled within a visual environment, such as the GUI, to classify content utilizing a reduced description palette to simplify content analysis, presentation, sharing, etc. of separate content portions in accordance with the present system. To facilitate manipulation (e.g., content selection, annotation, sharing, etc.) of the content, the GUI may provide different views that are directed to different portions of the present process.
For example, the GUI may present a typical UI including a windowing environment and as such, may include menu items, pull-down menu items, pop-up windows, etc., that are typical of those provided in a windowing environment, such as may be represented within a Windows™ Operating System GUI as provided by Microsoft Corporation and/or an OS X™ Operating System GUI, such as provided on an iPhone™, MacBook™, iMac™, etc., as provided by Apple, Inc., and/or another operating system. The objects and sections of the GUI may be navigated utilizing a user input device, such as a mouse, trackball, finger, virtual locator, and/or other suitable user input. Further, the user input may be utilized for making selections within the GUI such as by selection of menu items, window items, radio buttons, pop-up windows, containers, for example, in response to a mouse-over operation, and other common interaction paradigms as understood by a person of ordinary skill in the art.
Similar interfaces may be provided by a device having a touch sensitive screen that is operated on by an input device such as a finger of a user or other input device such as a stylus. The present system may also incorporate a virtual display capability which can detect a virtual location of a user or of the device itself. In this environment, a cursor may or may not be provided since the location of selection is directly determined by the location of interaction with the touch sensitive screen. Although the GUI utilized for supporting touch sensitive inputs or virtual inputs may be somewhat different than a GUI that is utilized for supporting, for example, a computer mouse input, for purposes of the present system the operation is similar. Accordingly, for purposes of simplifying the foregoing description, the interaction discussed is intended to apply to either of these systems or others that may be suitably applied.
FIGs. 1 through 12 will be discussed below to facilitate a discussion of illustrative embodiments of the present system. FIG. 1 shows a UI 100 in accordance with an embodiment of the present system. The UI 100 may be provided by an application such as, for example, a browser such as an Internet browser like, for example, Internet Explorer™, Mozilla™, Firefox™, etc., and may include one or more of windows, subwindows, frames, subframes, toolbars, widgets, instances, menu items, submenu items, containers, etc., as may be used in a windowing environment or proprietary UI (e.g., a mobile station (MS) display, etc.) and may be associated with one or more channels that may be formed in accordance with the present system. Accordingly, the UI 100 may include a main window 102, one or more content portions 104, one or more menu items 106, a build portion 108, a value/attribute (VA) portion 110, and a scroll portion 111.
The main window 102 may include one or more subwindows, frames, subframes, containers, subcontainers, text boxes, selection boxes, pop-up menus, etc., which may be rendered for a user's convenience.
The one or more menu items 106 may include one or more menus, submenus, tabs, and/or other selections which may be selected by a user. For example, the one or more menu items 106 may include tabs such as, for example, an editor's choice menu item 106A, a community menu item 106B, a my channels menu item 106C, a recommended menu item 106D, and a search menu item 106E. The user may select one or more of menu items 106A-106E to access corresponding user interfaces.
Each content portion 104 may display or otherwise communicate information or objects associated with content that may be accessed, played back, and/or downloaded using the UI 100. The content portion 104 may include textual and/or graphic information such as, for example, graphical depictions of one or more portions, thumbnails, titles, ratings, duration, views, etc., which may be indicative of certain parts of the associated content and may include one or more selectable portions for selection by a user. For example, the content portion 104 may include a graphic representation of content such as a thumbnail portion 112 which may provide a graphical representation of one or more parts (e.g., frames or time periods) of the associated content. Upon detecting that a user has performed a mouse over operation (i.e., scrolled) over the content portion 104 (or parts thereof, e.g., the thumbnail portion 112), the system may play back selected portions of the associated content and/or render one or more options which are available to a user. For example, upon determining that a user has scrolled over or otherwise selected (e.g., via a mouse click) a thumbnail portion 112, the system may select graphic representations of the associated content which may correspond with certain parts of the heat information and render thumbnails which correspond with the selected graphic representations.
In an embodiment of the present system, upon detecting that a user scrolled/hovered over the thumbnail portion 112, the system may sequentially display one or more images (e.g., thumbnails, or larger images in frames) as thumbnails which correspond with predetermined sections or frames of the associated content. However, it is also envisioned that the one or more images may be displayed in a separate frame. The predetermined sections may be selected by the system based upon one or more criteria such as time (e.g., at 5 minute intervals of content during play), frame numbers (e.g., every 1000th frame), or heat information which may correspond with heat map information and associated meta information as disclosed in the '893 application, such as reaction indicators (e.g., emoticons) provided by a user and associated by a user with a portion of the content that is being rendered at the time of receiving the user selection. Thus, for example, when a user scrolls over a thumbnail portion 112, the system may sequentially render thumbnails which correspond with portions of the associated content that have corresponding heat information, such as positive interest over the number of respondents that provided a reaction. In accordance with one embodiment of the '893 application, the heat information may correspond to an interest profile of a plurality of users that tracks points in the related content that attracted one or more comments and/or annotations. The heat map is a graphical representation of numbers of comments per parts/portions of content, such as against a content running time. The reduced set of reaction indicators provides an indication of what type of interest is provided (e.g., happy, mad, etc.) by the content at a given portion of the content.
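As an illustration of the heat map described above, the following sketch counts user reactions per time bucket of a content item; the bucket size, the (timestamp, indicator) reaction format, and the function names are assumptions for illustration, not the actual implementation:

```python
from collections import Counter

def build_heat_map(reactions, duration_s, bucket_s=60):
    """reactions: list of (timestamp_s, indicator) pairs, e.g. (125, "happy").
    Returns per-bucket reaction counts covering the full running time."""
    n_buckets = (duration_s + bucket_s - 1) // bucket_s
    # clamp reactions at the very end of the content into the last bucket
    counts = Counter(min(int(t // bucket_s), n_buckets - 1) for t, _ in reactions)
    return [counts.get(i, 0) for i in range(n_buckets)]

def hottest_buckets(heat, k=3):
    """Indices of the k most-reacted-to buckets, hottest first -- e.g. to pick
    which thumbnails to cycle through when a user hovers a content portion."""
    return sorted(range(len(heat)), key=lambda i: heat[i], reverse=True)[:k]
```

A thumbnail sequence keyed to `hottest_buckets` would then show the portions that attracted the most comments/annotations first.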
It is significant that, in accordance with an embodiment of the present system, the heat information may be provided as an attribute of a content portion and thereby utilized to build a channel as described further herein.
The predetermined sections may also correspond with information such as, for example, genre, subgenre, metadata, etc., associated with the content. Thus, frames which are associated with certain portions of the content which have a certain subgenre (e.g., action) in content having a main genre which is, for example, sports, or any other genre may be shown.
Each content portion 104 may also include one or more of a title information portion 114, a time information portion 116, a view information portion 118, and a heat information portion 120. Upon detecting that a user has scrolled over any of these portions or scrolled over these portions for a predetermined period of time, the system may provide additional information in, for example, a container for the user's convenience. For example, upon detecting that the user has scrolled over the title information portion for two seconds, the system may render a container including information (e.g., year, date, actors, attributes, awards, genre, etc.) about the associated content.
Each of the associated content may have corresponding attribute information (AI). The attribute information may include, for example, attributes such as genre, title, duration, meta information, heat information, etc.
The title information portion 114 may include title information which corresponds with a title or other identifying feature of associated content information. The time information portion 116 may include information related to the duration of the associated content. The view information portion 118 may include information related to a number of views of the associated content. The heat information portion 120 may include information related to the associated content such as heat map information described in the '893 application.
The scroll portion 111 may include one or more scroll bars, or the like, which may function to scroll portions of the UI 100. Accordingly, the scroll portion 111 may include a horizontal scroll bar to scroll horizontal portions of the UI 100 and/or a vertical scroll bar to scroll vertical portions of the UI 100. A pop-up 203 may be rendered when a user selects the scroll portion 111 and may indicate current location status (e.g., 21-42 of 2000 results), where 2000 refers to the total number of results matching search criteria as described further herein below. For example, as shown in FIG. 2, a UI 200 may be provided as a result of a search to build a channel as described herein based on the content portion 104.
The VA portion 110 may include channel information which may include, for example, attribute value information (AVI) which may be added, deleted, set, selected, and/or deselected by the system, user, or community depending upon channel properties. The channel information may also include identifying information which may be used to identify a channel of a plurality of channels. The AVI may be added to the channel information automatically by, for example, adding attributes to the VA portion 110. For example, attributes may be added by dragging a content portion 104 to the build portion 108. In response to a drag/drop operation including the content portion, the system may determine attribute information of the associated content portion 104 and add this information to the AVI. Thus, the AVI may correspond with the attribute information of content portions 104 in the UI 100. Accordingly, the AVI may have values which correspond with information related to a video annotation sequence such as attribute type (e.g., emotion (see emoticon 115), concept, speed, color, etc.) and attribute value (e.g., happy, soccer, fast, green, etc.); and/or a user interest sequence such as, for example, user preferences and/or associated weight information. The user interest sequence may be based upon user inputs and/or selections. For example, user Joe may select a content portion of the "Dukes of Hazard" television show which may have associated attributes of "Surprising," "Comedy," "excited (emoticon)," and "Car Racing." In this way, in accordance with an embodiment of the present system, a user channel may be created for Joe of content that has the same or similar attributes as one or more selected (e.g., drag/drop operation) content portions.
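As a concrete sketch of the drag/drop mechanism described above, channel AVI may be pictured as the union of the attribute type/value pairs carried by each dropped content portion; the function name and the data shapes below are illustrative assumptions:

```python
def build_channel_avi(selected_portions):
    """Collect AVI for a channel from dropped content portions.

    selected_portions: list of dicts, each carrying an "attributes" list of
    (attribute_type, attribute_value) pairs (an assumed, simplified format).
    Returns {attribute_type: set_of_values}.
    """
    avi = {}
    for portion in selected_portions:
        for attr_type, value in portion["attributes"]:
            # each dropped portion contributes its attributes to the channel
            avi.setdefault(attr_type, set()).add(value)
    return avi
```

Dropping Joe's "Dukes of Hazard" portion, for instance, would seed the channel AVI with its genre, emotion, emoticon, and concept values in one operation.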
Similarly, a user Anny may select one content portion having associated attributes of "Sad" and "Drama" and another content portion having associated attributes of "happy" and "Hollywood film". Selection of these two content portions may build a channel based on available content and the associated attributes. In prior systems, oftentimes individual selection of content attributes leads to a loss of a real connection to the subtleties of what a given user may really appreciate. In accordance with the present system, multiple selections of content portions may assist a user in tuning a channel for the type of content that the user desires. Additional tuning of the attributes may always be performed thereafter.
The VA portion 110 may also include functionality which enables a user to add, delete, change, select, deselect, etc., certain values in the AVI. Accordingly, the VA portion 110 may include an add menu item 113 which may be used to insert and/or edit values in the AVI for later use by the system and/or user. For example, when the user selects the add menu item 113, the system may enter an editing mode to edit the AVI in the VA portion 110. Further, each value of the AVI may have an associated weight value which may be used to indicate a weight which is given to associated information. Each weight value may be selected by the system and/or a user and may be used to weigh, for example, an importance or relevance, of associated information. Thus, information that is assigned a weight of, for example, 10, may be determined to be more important or relevant than information that is assigned a weight of, for example, 1. Depending upon system configuration, the system and/or the user may determine and/or weigh information, as desired. The system may use the channel information to generate corresponding attribute queries. Further, each channel may be used by the user and/or shared with one or more communities (e.g., work, sci-fi, dramas, etc.) who may subscribe to the channel. The system may also provide a cost system which may generate cost information and charge/refund monies to one or more users. Thus, high-profile users such as, for example, actors, may form their own channels and share this information with a community who subscribes to this channel. The actor may be paid with revenues generated from subscription fees paid to join the community.
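The weighting scheme described above might be realized along the following lines; the 1-10 weight scale follows the example in the text, while the function names and the additive scoring rule are purely illustrative assumptions:

```python
def score_content(content_attrs, weighted_avi):
    """Sum the channel weights of every AVI value the content carries.

    content_attrs: set of attribute values describing one content item.
    weighted_avi: {attribute_value: weight}, e.g. weights from 1 to 10.
    """
    return sum(w for value, w in weighted_avi.items() if value in content_attrs)

def rank_for_channel(contents, weighted_avi):
    """Order content ids by descending relevance to the channel,
    so higher-weighted attributes dominate the channel's line-up."""
    return sorted(contents,
                  key=lambda cid: score_content(contents[cid], weighted_avi),
                  reverse=True)
```

Under this rule, content matching a weight-10 attribute outranks content matching only a weight-1 attribute, mirroring the importance/relevance behavior described in the text.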
The VA portion 110 may also include associated title information 122 which may be used to identify a channel of a plurality of channels. Thus, for example, the title information 122 may be assigned a name such as "Most Popular" which may be stored in association with corresponding channel information. The title information may correspond with a title which may be input by the user and/or may be automatically assigned by the system. Thus, for example, upon determining that the user desires a new channel, the system may assign a new channel an arbitrary designation (e.g., "user channel") which may thereafter be changed by a user. Accordingly, the user may accept the arbitrary designation or may edit it, as desired. System resources may be conserved by storing channel information and/or queries for each channel. Further, user convenience may be enhanced by not having to manage videos directly. The system may also include functionality to push new content which matches the query to a user when the new content becomes available.
The build portion 108 may have an associated build list which may include build objects which correspond with content information of one or more of the content portions 104. The build list may include build objects which are added or linked to the build list using any suitable method such as, for example, copy/paste, drag/drop, etc., commands. Additionally, build objects may be linked to the build list. The user may add build objects to the build list by, for example, selecting a content portion 104 of a plurality of content portions 104 and inserting the selected content portion 104 in the build list. The user may delete build objects from the build list using any suitable command such as, for example, delete, move, and/or cut commands.
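The add/delete build-list operations described above reduce to straightforward list bookkeeping. The sketch below is an assumed, simplified model; real build objects would also carry the content identification (codes, pointers, links, etc.) discussed in the text:

```python
class BuildList:
    """Per-channel list of build objects, kept in user-selected order."""

    def __init__(self):
        self.objects = []  # ids of the corresponding content portions

    def add(self, content_id):
        # copy/paste or drag/drop of a content portion adds its id once
        if content_id not in self.objects:
            self.objects.append(content_id)

    def delete(self, content_id):
        # delete/move/cut commands remove the build object
        if content_id in self.objects:
            self.objects.remove(content_id)

    def copy_to(self, other):
        # a build list may be copied from one channel to another
        for cid in self.objects:
            other.add(cid)
```

Each channel would own one such list, and `copy_to` mirrors the copy/paste-between-channels capability the text envisions.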
With regard to the build list, the system may include information that is related to the build objects that the user has added to the build portion 108. This information may include, for example, content identification (e.g., code, time, address, pointer, links, identifying information such as, for example, title information, etc.) of corresponding content portions 104 or the like that may be used to identify corresponding content. Upon detecting that the user has added build objects to, or deleted build objects from, the build list, the system may edit the corresponding channel build list in accordance with the user's selection. Each channel may have its own build portion 108 and corresponding build list. However, it is also envisioned that a user may copy/paste a build list from one channel to another channel to enable a user to build channels. A process of adding build objects using a drag/drop operation is better illustrated with reference to FIG. 2 which shows a UI 200 in accordance with an embodiment of the present system. In UI 200, the system may render a play menu item 221 and/or a move menu item 223 in response to a mouse over operation of a content portion 204.
In a drag/drop operation, a user may drag a frame 217 corresponding to a build object of the associated content portion 204 in response to a user input. Upon determining that the frame 217, or portions thereof, have been dropped in, or added to, the build portion 208, the system may modify the build list in accordance with the corresponding build object. Accordingly, the system may populate the build list with build objects selected by the user.
The system may render menu items such as a play menu item 221 and/or a move menu item 223 for selection by the user when it is determined that the user has scrolled over a particular content portion such as the content portion 204. Accordingly, if the system determines that the user has selected the play menu item 221, the system may retrieve and/or play one or more portions of content associated with the selected content portion 204. Similarly, if the system determines that the user has selected the move menu item 223, the system may render an object such as the frame 217 for manipulation by a user.
It is also envisioned that the user may form and/or edit the build list by, for example, manually adding/deleting content identification associated with selected content to/from a corresponding build list (e.g., using an input device such as a keyboard, a mouse, etc.) or by using a copy/paste command. Accordingly, the system may provide a user with one or more menus to add/remove information from a corresponding build list. For example, the system may provide a user with a pop-up menu to add content to, or delete content from, a corresponding channel build list.
FIG. 3 shows a UI 300 in accordance with an embodiment of the present system. A pop-up container 330 may be generated and rendered by the system in response to a user request or when a user scrolls over a content portion 304 for a predetermined period of time. The pop-up container 330 may include additional information about a specific content portion 304.

FIG. 4 shows a UI 400 in accordance with an embodiment of the present system. The UI 400 may include menu items such as tabs 406A-406E which may correspond with tabs 106A-106E of FIG. 1, respectively. The UI 400A illustrates a container which includes a plurality of content portions 403, each of which corresponds with content included in, for example, a channel such as an "Editor's Choice" channel. This content may be selected by the system based upon, for example, content which is determined to be most watched over a certain period of time such as a week or content which is determined to have attributes which correspond with perceived user desires. A user may add a content portion 403 to a build portion 408 of a desired channel. The build portion 408 and a VA portion 410, which correspond with the build portion 108 and the VA portion 110, respectively, may be minimized to minimize clutter and may be maximized by the system in response to a user's selection.
FIG. 5 shows a UI 500 in accordance with an embodiment of the present system. The UI 500 may include menu items such as tabs 506A-506E which may correspond with tabs 106A-106E, respectively. UI 500 illustrates a search query 532 which may be generated and rendered in response to a user's selection of the search menu item 506E. The user may enter a desired search term in the search query and the system may query one or more databases in response thereto. The system may then return results of the query in a return frame for the user's consideration and/or selection. The user may then view selected content and/or may add the selected content to a build portion 508. A search term of the query may correspond with an identifying feature (e.g., a title) of the desired content. The system may also provide recommended query terms for the user's convenience. In one embodiment of the present system, the search may be provided as a "live search" in that the search results are updated in real time as the search is entered, with the search being modified as each new search term is added.
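The "live search" behavior above can be pictured as re-filtering the candidate titles each time the query string changes, so each added term narrows the previous results. The following is a minimal, assumed sketch, not the actual search implementation:

```python
def live_search(titles, query):
    """Return titles containing every whitespace-separated query term
    (case-insensitive); re-invoked as each new search term is entered."""
    terms = query.lower().split()
    return [t for t in titles if all(term in t.lower() for term in terms)]
```

Each keystroke or added term re-runs the filter, so "lakers" followed by "lakers magic" progressively narrows the rendered result set.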
Upon determining that one of the community 506B or recommended 506D menu tabs is selected, the system may generate and/or display content which corresponds with a community channel or channels or a recommended channel or channels, respectively. Similarly, when the my channels menu tab 506C is selected, the system may display information which corresponds with one or more of the user's channels for the user's selection. Accordingly, in response to the user's selection, the system may then render a UI which may correspond with a selected channel (e.g., UI 100).
FIG. 6A shows a UI 600A in accordance with an embodiment of the present system. The UI 600A may include a VA portion 610 which may be similar to the VA portion 110; however, the VA portion 610 may correspond with a channel such as, for example, a "Lakers vs. Magic" channel that was built by the user and/or may include graphic attribute values 636 which may correspond with a build list of the present channel. When a user selects an AVI value such as, for example, "Concept (4): Winning," the system may generate an expanded information set associated with one or more AVI values as will be shown with reference to VA portion 610B of FIG. 6B which shows a UI 600B in accordance with an embodiment of the present system. The UI 600B is similar to the UI 600A; however, the VA portion 610B has been expanded to show detailed information which corresponds with the VA portion 610. For example, AVI values 636 and 638 are expanded to 636B and 638B, respectively.
The user may also change/edit AVI values by selecting, deselecting, adding, and/or deleting attribute values of the attribute information in VA portions 610 and/or 610B so as to customize a channel. The system may then use the changed/edited attribute information to dynamically select corresponding content to be displayed in a corresponding channel. The portions indicated as "value" in portion 638B are provided to indicate that more than one attribute may be provided within a given group, such as "concept," although more than one need not be provided. As may be readily appreciated, when an attribute is not provided or all attributes are deleted, a grouping of attributes may be removed from the UI.
FIG. 7A shows a UI 700A in accordance with an embodiment of the present system. A user may select one or more AVI values for the corresponding channel and add these AVI values to an attribute set for the current channel. For example, the user may select the "court" AVI value 751 using any suitable method (e.g., double clicking, etc.). Accordingly, the system may then modify the AVI information to reflect the user's selection. For example, the system may fade the selected AVI value from green to black for a value that has been added to the attribute set and may also move the one or more selected values to a top portion of the attribute list so that the selected one or more AVI values may be grouped with the other selected values (e.g., see highlighted values, FIG. 7A), thus shifting the rest of the values that are displayed horizontally and thereafter vertically. The user may also adjust a corresponding weighting for an attribute value, such as increasing or decreasing a weighting to reflect a corresponding increased or decreased interest in the attribute to the user. The user may also deselect attribute values, whereupon the system may remove the deselected values from an attribute set and may update the attribute information accordingly. Thus, the attribute set comprises the selected AVI values for a channel.
FIG. 7B shows a UI 700B in accordance with an embodiment of the present system. The UI 700B may be similar to the UI 700A; however, selection menu items 744 may be provided for a user to perform one or more desired actions on AVI information 740 in a VA portion 710. Accordingly, a user may go back to a previous VA portion which may include previous AVI values by selecting a "back" menu item; undo current attribute values (e.g., selections/deselections) by selecting an "undo" menu item; and/or save current AVI values (e.g., including selected/deselected attribute values) in a current VA area 710 by selecting a "save" menu item.
FIG. 7C shows a UI 700C in accordance with an embodiment of the present system. The UI 700C may be similar to the UIs 700A and 700B; however, UI 700C illustrates a user entering a search query in the query box 742 of a VA portion 744.
Upon detecting that a query is being entered in the search box, the system may provide corresponding results such as results 750 (e.g., see 750A and 750B) for a user's consideration and/or selection. The search may be limited to current AVI values in the current VA portion 744 or may include other data which may be retrieved by the system (e.g., an Internet search). The results of the query may have a suitable order (e.g., alphabetic, title, genre, most searched, etc.) and may be placed in a suitable area such as, for example, the VA portion 744, as desired.
FIG. 8 shows a UI 800 in accordance with an embodiment of the present system. The UI 800 may be similar to the UI 700A; however, UI 800 illustrates a title editing process to change a title 846 of a corresponding channel. The user may edit the title at any time. Although the title 846 of a channel is shown in a VA portion 850, it may be rendered at any suitable location as desired. "Save" and "Undo" menu items 852 and 848 may be provided for a user to save or undo selections, respectively. Further, a save as menu item may be provided to save the title and associated information as a new channel. Accordingly, when a user selects the save as menu item, the system may save the title as a new title with the associated AVI.
FIG. 9A shows a UI 900A in accordance with an embodiment of the present system. The UI 900A illustrates a build portion 908A including a plurality of build objects that were added (i.e., inserted) into the build portion 908A. The build portion 908A is shown in an expanded state which may be generated by the system when the user selects the build portion 908A, for example, by scrolling over the build portion 908A for a predetermined amount of time. The system may resize and collapse the build portion 908A upon detecting a user request such as may occur when a user no longer scrolls over the build portion 908A and/or when a user selects a minimization element (e.g., see downward arrow 971). The system may represent each build object using a suitable representation such as, for example, a thumbnail 970 indicative of one or more portions of the associated content. When the system detects that a user has selected a thumbnail (e.g., by scrolling over the thumbnail 970), the system may render a window and/or display action selections such as, for example, a play selection, a move selection, etc., that the user may select. Accordingly, upon determining that a user has selected to, for example, play content associated with the corresponding thumbnail 970, the system may retrieve and render the content for the user's convenience.
FIG. 9B shows a UI 900B in accordance with an embodiment of the present system. In UI 900B, a thumbnail corresponding to selected content has been selected (e.g., see highlighted thumbnail 956) by the system and/or the user in a build portion 908B. Accordingly, AVI information 958 which is associated with the selected content 956 of the build area 908B is distinguished (e.g., using highlights, colors, etc.) for a user's convenience. Accordingly, a user may conveniently determine and/or select/deselect one or more AVI values associated with the selected content so as to build a customized channel. The user may also select another object in the build portion 908B and view AVI values which are associated with this object. The user may then select some or all of the AVI values associated with one or more selected objects so as to customize a channel.
FIG. 9C shows a UI 900C in accordance with an embodiment of the present system. In UI 900C, a VA area 910C includes AVI values with dark highlights which correspond with attributes that have been selected by a user and stored in association with a corresponding build area 908C of a current channel (e.g., "Lakers vs. Magic - Mine: Edit Concept (12)"). The VA area 910C may also include a query box 960 for receiving queries from a user and/or providing recommendations. The queries may correspond with information of current AVI values in the current VA area 910C or may correspond with AVI values of other channels or databases (e.g., the content database). Accordingly, the system may return query results which are limited to the current VA area, the current channel, multiple channels (e.g., selected channels), or one or more content databases or parts thereof.
FIG. 9D shows a UI 900D in accordance with an embodiment of the present system. The UI 900D is similar to the UI 900A; however, a build portion 908D is shown in a collapsed state and may display build objects such as, for example, thumbnails 970, in a sequential order. A number of build objects within the build portion 908D may be indicated by icon 972. When a user selects (e.g., by scrolling over) a build object such as, for example, thumbnail 970, the system may render a representation 970 which is associated with a currently displayed build object and may provide selections such as a play selection 976 to play content corresponding with the build object. Heat information 920 may also be rendered for the user's convenience.
FIG. 10 shows a flow diagram that illustrates a process 1000 in accordance with an embodiment of the present system. The process 1000 may be performed using one or more computers communicating over a network. The process 1000 can include one or more of the following acts. Further, one or more of these acts may be combined and/or separated into sub-acts, if desired. In operation, the process may start during act 1001 and then proceed to act 1003. During act 1003, the process may obtain channel information for a current channel. The channel information may correspond with, for example, AVI information provided by a user. In accordance with the present system, the AVI for a given channel may be determined by a user selecting, such as by dragging and dropping, one or more content portions from amongst a plurality of content portions into a channel build area. In contrast with prior systems, the present system enables attribute selection by selection of content portions that have associated attributes. For further granularity, a user may select only part of a content portion, and only attributes associated with the part of the content portion are selected as attributes for a user channel. In this way, a group of attributes that are associated with a content portion or the selected part of the content portion may be selected to form a query that is utilized for a user channel. The present system provides an intuitive system for building user channels by selection of content portions. A complex set of attributes may be readily added and edited to build one or more user channels of content. The user channels may be provided within a UI such as discussed herein. The attributes may include heat information relating to the content portion or the selected part of the content portion and/or may include a reduced set of reaction indicators such as disclosed in the '893 application, such as one or more associated emoticons.
After completing act 1003, the process may continue to act 1005. During act 1005, the process may form a query in accordance with the channel information. Accordingly, the query may correspond with previously selected AVI. After completing act 1005, the process may continue to act 1007. It is also envisioned that the process may determine whether new content is available and, if it is determined that new content is available, repeat act 1005. In accordance with an embodiment of the present system, as new content is offered, the attributes associated with a channel are utilized to search the new content to identify new content that corresponds to the user channel. In a case wherein new content is identified, the new content may be added to the channel and provided within the UI as the user channel. In this way, a user does not need to repeatedly request a new search since the search is continuously provided as new content becomes available. During act 1007, results of the query may be rendered on a UI. After completing act 1007, the process may continue to act 1009. During act 1009, it may be determined whether a user requested to play content. Accordingly, if it is determined that the user has requested to play content, the process may continue to act 1011. However, if it is determined that the user has not requested to play content related to a current channel, the process may continue to act 1017.
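The continuous-query behavior in acts 1003-1007 can be pictured as evaluating the stored channel attribute query against each newly offered item and pushing matches into the channel. The matching rule and minimum-overlap threshold below are illustrative assumptions:

```python
def matches_channel(content_attrs, channel_attrs, min_overlap=1):
    """A new item matches when it shares enough attributes with the channel."""
    return len(content_attrs & channel_attrs) >= min_overlap

def on_new_content(channel, new_items):
    """new_items: {content_id: set_of_attributes}. Appends matching ids so the
    user need not re-run the search as content becomes available."""
    for cid, attrs in new_items.items():
        if matches_channel(attrs, channel["attrs"]):
            channel["content"].append(cid)
    return channel
```

Storing only the query (attribute set) rather than the content list itself reflects the resource-conservation point made earlier: the channel re-materializes as the catalog changes.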
During act 1011, the process may render the requested content. After completing act 1011, the process may continue to act 1013. During act 1013, the process may update user profile information. The user profile information may include user input information that may be related to attribute information, such as attribute weighting, etc. Further, the user profile information may include other information such as statistical information about the user, time, geolocation, network, etc., as desired. After completing act 1013, the process may continue to act 1015.
For example, attribute matrices may be maintained for videos, user interests, etc., and a further set of matrices may map user interests back to the video attribute matrices. When the system needs to find a recommendation for a user, in accordance with an embodiment of the present system, user interests as provided by selection of video content portions may simply be matched against available video content to identify content for the user channel.
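A minimal sketch of such matrix-style matching, with the user-interest weights treated as a sparse vector dotted against each video's attribute vector, might look as follows (the function name, attribute labels, and weights are illustrative assumptions, not from the application):

```python
def recommend(interest_vector, video_matrix, top_n=2):
    """Score each video by the dot product of its attribute vector with the
    user-interest vector; return the ids of the best-scoring videos."""
    scores = {
        vid: sum(w * interest_vector.get(attr, 0.0) for attr, w in attrs.items())
        for vid, attrs in video_matrix.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical interest weights derived from selected video content portions.
interest = {"soccer": 0.9, "fast": 0.5}
videos = {
    "v1": {"soccer": 1.0, "fast": 1.0},   # score 1.4
    "v2": {"cooking": 1.0},               # score 0.0
    "v3": {"soccer": 0.5},                # score 0.45
}
ranked = recommend(interest, videos)
```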
During act 1015, a user may update genome information such as, for example, interest genome sequence information for a given channel, in accordance with the updated user profile information, updated attributes, etc. After completing act 1015, the process may return to act 1003.
During act 1017, the process may determine whether a "build list" edit is requested. A build list edit may be requested when, for example, a user adds, deletes, or otherwise changes a build portion. Accordingly, if it is determined that the build list edit is requested, the process may continue to act 1019. However, if it is determined that the build list edit is not requested, the process may continue to act 1023.
During act 1019, the user may edit and update the build list in accordance with the user's changes. After completing act 1019, the process may continue to act 1021. During act 1021, the process may update channel information in accordance with the updated build list. After completing act 1021, the process may repeat act 1003.
During act 1023, the process may determine whether to edit channel information. This may occur when a user edits and saves channel information in, for example, a channel portion. Accordingly, if the process determines to edit channel information, the process may continue to act 1025. However, if the process determines not to edit channel information, the process may repeat act 1009.
During act 1025, the process may update and store the channel information in accordance with user selections and save the edited channel information. After completing act 1025, the process may repeat act 1003.
FIG. 11 shows a block diagram of a system 1100 according to an embodiment of the present system. The system 1100 may include one or more of a source of video content 1102, a content aggregator 1106, a genome analyzer 1108, a genome memory portion 1110 including one or more video genome sequences 1112 and interest genome sequences 1114. The system further includes a recommendation portion 1116, a profile processor 1118, a vector portion 1120, a UI 1122, and a selection portion 1124.
The video content 1102 may include video content available from any suitable source such as a local memory, a remote memory, distributed memories, a SAN, proprietary memories (e.g., belonging to a network), etc. The content may include video content, audio/video content, genome information, and/or other forms of content and associated genome information.
The content aggregator 1106 may function to aggregate content from the content sources 1102 and provide this information to the genome analyzer 1108 in a desired manner such as in a serial and/or parallel fashion.
The genome analyzer 1108 may include a digital signal processing (DSP) portion 1111 which may use DSP algorithms to analyze the content information and generate genome information, such as attributes, etc., based on content from the content aggregator 1106. The DSP portion 1111 may analyze video information (e.g., video frames, snapshots, etc.) and generate color and/or shape signatures for the purposes of matching content to known content and thereby facilitating association of attributes from known content with new content that is not yet known. Accordingly, the system may match videos based upon, for example, snapshots and/or shape signatures. The analysis information may include, for example, genome information (GI) such as a video genome sequence (VGS), an interest genome sequence (IGS), etc., which may be formed into a desired array and transmitted to the genome memory portion 1110 for storage and/or processing. The IGS may correspond with input from one or more users such as, for example, user annotations, etc. In addition, received content may include attributes, for example, as provided by the content provider. Content may also be manually annotated by a user as may be readily appreciated.
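By way of a hypothetical example, a coarse color signature of the kind the DSP portion 1111 might compute can be sketched as a normalized per-channel histogram, with a small distance between signatures indicating a likely match. The function names, bin count, and distance metric are illustrative assumptions, not from the application.

```python
def color_signature(pixels, bins=4):
    """Coarse per-channel color histogram used as a matching signature."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        hist[r * bins // 256] += 1             # red channel bins
        hist[bins + g * bins // 256] += 1      # green channel bins
        hist[2 * bins + b * bins // 256] += 1  # blue channel bins
    total = max(len(pixels), 1)
    return [h / total for h in hist]           # normalize by pixel count

def signature_distance(sig_a, sig_b):
    """L1 distance between two signatures; near-zero means a likely match."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

known = color_signature([(250, 10, 10)] * 10)      # mostly red frame
candidate = color_signature([(245, 20, 5)] * 10)   # visually similar frame
different = color_signature([(10, 10, 250)] * 10)  # blue frame
```

With such signatures, a new, unannotated video whose frames sit close to a known video's signature could inherit that known video's attributes.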
Regardless of where attribute information of content is acquired, the video genome sequences are provided to the genome memory portion 1110 as the video genome sequences 1112. The genome memory portion 1110 may include any suitable memory and may store the video content attribute information for later use. The video genome sequences 1112 may include, for example, attribute information such as attribute type (e.g., emotion, concept, speed, color, heat, etc.) and one or more attribute values (e.g., happy, soccer, fast, green, etc.). The interest genome sequences 1114 may include information such as, for example, personal preferences for each attribute type, a weight of attention for each attribute type, etc., which may be determined by the system and/or the user. As discussed above, the analysis information may include GI and IGS information. The genome memory portion 1110 may include one or more memories and may be local and/or remote from a user device. For example, the genome memory portion 1110 may be accessible through a network, such as the Internet, or may be local to a device such as a user's device (UD).
According to the present system, content may be classified using a multi-dimensional attribute vector represented in a multidimensional space. A discrete classification for each genome attribute type may be generated and/or provided as the video genome sequences 1112. The present system may also receive an assignment of emoticons, change/edit attributes, etc., in accordance with an embodiment. The present system may also receive annotations from one or more users and base queries and/or results of the queries on the annotations. Thus, for example, the present system may receive annotation information (e.g., a change/assignment of weighting, etc.) from one or more users and may store the annotation information corresponding with content.
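The multidimensional representation may be illustrated as follows: one discrete classification per attribute type, one-hot encoded into a flat vector positioning the video in attribute space. The attribute vocabulary and the name `genome_vector` are illustrative assumptions.

```python
ATTRIBUTE_TYPES = ["emotion", "concept", "speed", "color"]  # illustrative axes

def genome_vector(classifications, vocabulary):
    """One-hot encode one discrete classification per attribute type into a
    single flat vector representing the video in a multidimensional space."""
    vector = []
    for attr_type in ATTRIBUTE_TYPES:
        for value in vocabulary[attr_type]:
            vector.append(1.0 if classifications.get(attr_type) == value else 0.0)
    return vector

vocab = {
    "emotion": ["happy", "sad"],
    "concept": ["soccer", "news"],
    "speed": ["fast", "slow"],
    "color": ["green", "red"],
}
vgs = genome_vector({"emotion": "happy", "concept": "soccer",
                     "speed": "fast", "color": "green"}, vocab)
```

User annotations (e.g., re-weighting) would then act on the components of such a vector rather than on the raw content.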
The video genome analyzer 1108 may combine textual metadata and signal processing results (e.g., DSP results) and normalize this information so that it may be used in accordance with the present system. Accordingly, the system may provide an editor for receiving manual annotation (e.g., speed, style, etc.) information and may process information from a plurality of users to annotate content with, for example, corresponding emotion information.
The system may also include functionality to perform automated annotation extraction so as to determine concept (e.g., via automated concept extraction, for example by performing semantic processing on textual input) and/or color, shape, etc., of video content using signal processing methods as described further herein.
In operation, the present system provides a plurality of video content to the user as shown by the vector portion 1120. The present system further provides the UI 1122 to facilitate a review of the plurality of video content and to facilitate a selection 1124 of one or more of the plurality of video content for purposes of viewing/creating/editing a video content channel. Attributes of selected video content are received by the profile processor 1118 for purposes of adjusting a profile for a user channel. The interest genome sequences 1114 for the user channel are stored in the genome memory portion 1110 by the profile processor 1118. In one embodiment of the present system, in response to the adjusted profile, the plurality of video content is queried to find video content that corresponds to the adjusted interest genome sequences for a channel, and the query results are provided to the recommendation portion 1116, which provides the channel to the UI 1122. This enables a user of the UI 1122 to further adjust the interest genome sequences for the channel, for example, if the provided video content for the channel is not as desired.
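The profile-adjustment loop described above may be sketched as follows: attributes of the selected video content nudge the interest weights, and the adjusted weights drive the next query. The name `adjust_profile` and the step size are illustrative assumptions, not from the application.

```python
def adjust_profile(interest_seq, selected_attrs, step=0.1):
    """Nudge the interest weight of each attribute present in the user's
    selection; the adjusted profile then drives the next channel query."""
    for attr in selected_attrs:
        interest_seq[attr] = min(1.0, interest_seq.get(attr, 0.0) + step)
    return interest_seq

# A user selects a fast soccer clip; both attributes gain weight.
profile = adjust_profile({"soccer": 0.5}, ["soccer", "fast"])
```

Repeating the selection of similar content would saturate the corresponding weights toward 1.0, while unselected attributes stay at their prior values.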
FIG. 12 shows a system 1200 in accordance with a further embodiment of the present system. The system 1200 includes a user device 1290 that has a processor 1210 operationally coupled to a memory 1220, a rendering device 1230, such as one or more of a display, speaker, etc., and a user input device 1270, and a content server 1280 operationally coupled to the user device 1290. The memory 1220 may be any type of device for storing application data as well as other data, such as video content, video content attributes (e.g., video genome sequences, interest genome sequences), etc. The application data and other data are received by the processor 1210 for configuring the processor 1210 to perform operation acts in accordance with the present system. The operation acts include controlling at least one of the rendering device 1230 to render one or more of the GUIs and/or to render content. The user input device 1270 may include a keyboard, mouse, trackball or other devices, including touch sensitive displays, which may be stand-alone or be a part of a system, such as part of a personal computer, personal digital assistant, mobile phone, converged device, or other rendering device for communicating with the processor 1210 via any type of link, such as a wired or wireless link. The user input device 1270 is operable for interacting with the processor 1210 including interaction within a paradigm of a GUI and/or other elements of the present system, such as to enable web browsing and video content selection, such as provided by left and right clicking on a device, a mouse-over, pop-up menu, drag and drop operation, etc., such as provided by user interaction with a computer mouse, etc., as may be readily appreciated by a person of ordinary skill in the art.
In accordance with an embodiment of the present system, the rendering device 1230 may operate as a touch sensitive display for communicating with the processor 1210 (e.g., providing selection of a web browser, a Uniform Resource Locator (URL), portions of web pages, etc.) and thereby, the rendering device 1230 may also operate as a user input device. In this way, a user may interact with the processor 1210 including interaction within a paradigm of a UI, such as to support content selection, attribute editing, etc. Clearly the user device 1290, the processor 1210, memory 1220, rendering device 1230 and/or user input device 1270 may all or partly be portions of a computer system or other device, and/or be embedded in a portable device, such as a mobile station (MS), mobile telephone, personal computer (PC), personal digital assistant (PDA), converged device such as a smart telephone, etc. The system and method described herein address problems in prior art systems. In accordance with an embodiment of the present system, the user device 1290, corresponding user interfaces and other portions of the system 1200 are provided for browsing content, selecting content, creating/editing a user channel, etc., and for transferring the content and interest genome sequences, etc., between the user device 1290 and the content server 1280, which may operate as the genome memory portion 1110.
The methods of the present system are particularly suited to be carried out by a computer software program, such program containing modules corresponding to one or more of the individual steps or acts described and/or envisioned by the present system. Such program may of course be embodied in a computer-readable medium, such as an integrated chip, a peripheral device or memory, such as the memory 1220 or other memory coupled to the processor 1210.
The computer-readable medium and/or memory 1220 may be any recordable medium (e.g., RAM, ROM, removable memory, CD-ROM, hard drives, DVD, floppy disks or memory cards) or may be a transmission medium utilizing one or more of radio frequency (RF) coupling, Bluetooth coupling, infrared coupling etc. Any medium known or developed that can store and/or transmit information suitable for use with a computer system may be used as the computer-readable medium and/or memory 1220.
Additional memories may also be used including as a portion of the content server 1280. The computer-readable medium, the memory 1220, and/or any other memories may be long-term, short-term, or a combination of long-term and short-term memories. These memories configure processor 1210 to implement the methods, operational acts, and functions disclosed herein. The operation acts may include controlling the rendering device 1230 to render elements in a form of a Ul and/or controlling the rendering device 1230 to render other information in accordance with the present system.
The memories may be distributed (e.g., such as a portion of the content server 1280) or local and the processor 1210, where additional processors may be provided, may also be distributed or may be singular. The memories may be implemented as electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term "memory" should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by a processor. With this definition, information on a network is still within memory 1220, for instance, because the processor 1210 may retrieve the information from the network for operation in accordance with the present system. For example, a portion of the memory as understood herein may reside as a portion of the content server 1280. Further, the content server 1280 should be understood to include further network connections to other devices, systems (e.g., servers), etc. While not shown for purposes of simplifying the following description, it is readily appreciated that the content server 1280 may include processors, memories, displays and user inputs similar to those shown for the user device 1290, as well as other networked servers, such as may host web sites, etc. Accordingly, while the description contained herein focuses on details of interaction within components of the user device 1290, it should be understood to similarly apply to interactions of components of the content server 1280.
The processor 1210 is capable of providing control signals and/or performing operations in response to input signals from the user input device 1270 and executing instructions stored in the memory 1220. The processor 1210 may be an application-specific or general-use integrated circuit(s). Further, the processor 1210 may be a dedicated processor for performing in accordance with the present system or may be a general-purpose processor wherein only one of many functions operates for performing in accordance with the present system. The processor 1210 may operate utilizing a program portion, multiple program segments, or may be a hardware device utilizing a dedicated or multi-purpose integrated circuit.
In accordance with an embodiment of the present system, the rendering device 1230 may operate as a touch sensitive display for communicating with the processor 1210 (e.g., providing selection of a web browser, a Uniform Resource Locator (URL), portions of web pages, etc.) and thereby, the rendering device 1230 may also operate as a user input device. In this way, a user may interact with the processor 1210 including interaction within a paradigm of a UI, such as to support content selection, input of reaction indications, comments, etc. Clearly the user device 1290, the processor 1210, memory 1220, rendering device 1230 and/or user input device 1270 may all or partly be portions of a computer system or other device, and/or be embedded in a portable device, such as a mobile telephone, personal computer (PC), personal digital assistant (PDA), converged device such as a smart telephone, etc.
The system and method described herein address problems in prior art systems. In accordance with an embodiment of the present system, the user device 1290, corresponding user interfaces and other portions of the system 1200 are provided for rendering tasks, browsing content, selecting content, creating/editing user channels, etc., and for transferring the information related to the tasks, content and reaction indications, tallied reaction indications, etc., between the user device 1290 and the content server 1280.
Accordingly, the present system may dynamically determine content to include in a channel and/or render associated information. The present system may incorporate wired and/or wireless communication methods and may provide a user with a personalized environment. Further benefits of the present system include low cost and scalability. Moreover, community collaboration may be used to recommend content and/or channels to a user so as to eliminate cold start issues.
Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. For example, the present system may be provided in a form of a content rendering device, such as an MS. A further embodiment of the present system may provide a UI that operates as a browser extension, such as a rendered browser toolbar, that can build a content rendering playlist, such as a video playlist. In addition, the present system may push predetermined content while a user is browsing the Internet.
Thus, while the present system has been described with reference to exemplary embodiments, including user interfaces, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Further, while exemplary user interfaces are provided to facilitate an understanding of the present system, other user interfaces may be provided and/or elements of one user interface may be combined with another of the user interfaces in accordance with further embodiments of the present system.
The section headings included herein are intended to facilitate a review but are not intended to limit the scope of the present system. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
In interpreting the appended claims, it should be understood that:
a) the word "comprising" does not exclude the presence of other elements or acts than those listed in a given claim;
b) the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements;
c) any reference signs in the claims do not limit their scope;
d) several "means" may be represented by the same item or hardware or software implemented structure or function;
e) any of the disclosed elements may be comprised of hardware portions (e.g., including discrete and integrated electronic circuitry), software portions (e.g., computer programming), and any combination thereof;
f) hardware portions may be comprised of one or both of analog and digital portions;
g) any of the disclosed devices or portions thereof may be combined together or separated into further portions unless specifically stated otherwise;
h) no specific sequence of acts or steps is intended to be required unless specifically indicated; and
i) the term "plurality" of an element includes two or more of the claimed element, and does not imply any particular range of number of elements; that is, a plurality of elements may be as few as two elements, and may include an immeasurable number of elements.

Claims

What is claimed is:
1. A method of providing content on a user interface (UI) of a user device, the method comprising acts of:
providing a representation of video content items to a user within the UI of the user device;
enabling a selection of one or more video content portions of a plurality of available video content portions, wherein each of the video content portions includes a plurality of attributes;
collecting the plurality of attributes for each selected video content portion, the attribute information comprising information related to a video annotation sequence and a user interest sequence;
forming channel information based upon the collected attributes;
forming a query of available video content portions based upon the channel information;
querying video content information in accordance with the channel information; and
rendering within the UI of the device an indication of video content portions corresponding with results of the query.
2. The method of claim 1, further comprising an act of editing attributes of the collected plurality of attributes.
3. The method of claim 2, further comprising an act of updating the channel information in accordance with the collected plurality of attributes including the edited attributes.
4. The method of claim 1, further comprising acts of: storing the query with corresponding channel information; and
publishing the channel information for consumption of the video content portions corresponding with the results of the query by a further user.
5. The method of claim 1, wherein the act of enabling a selection comprises an act of enabling a drag/drop operation of the one or more video content portions within the UI of the user device.
6. The method of claim 1, wherein at least a portion of the collected plurality of attributes comprises emotion information associated with the selected one or more video content portions.
7. The method of claim 6, further comprising an act of creating an attribute vector based upon the emotion information.
8. A system which provides content on a user interface (UI) of a user device, the system comprising:
a controller which:
provides a representation of video content items to a user within the UI of the user device;
enables a selection of one or more video content portions of a plurality of available video content portions, wherein each of the video content portions includes a plurality of attributes;
collects the plurality of attributes for each selected video content portion, the attribute information comprising information related to a video annotation sequence and a user interest sequence;
forms channel information based upon the collected attributes; forms a query of available video content portions based upon the channel information;
queries video content information in accordance with the channel information; and
renders within the UI of the device an indication of video content portions corresponding with results of the query.
9. The system of claim 8, wherein the controller receives edits of attributes of the collected plurality of attributes.
10. The system of claim 9, wherein the controller updates the channel information in accordance with the collected plurality of attributes including the edited attributes.
11. The system of claim 8, further comprising a memory to store the query with corresponding channel information for the user and for use by third parties.
12. The system of claim 8, wherein the controller enables the selection of one or more video content portions by enabling a drag/drop operation of the one or more video content portions within the UI of the user device.
13. The system of claim 8, wherein at least a portion of the collected plurality of attributes comprises emotion information associated with the selected one or more video content portions.
14. The system of claim 13, wherein the controller creates an attribute vector based upon the emotion information.
15. A computer program stored on a computer readable memory medium, the computer program configured to provide a user interface (UI) to accomplish a task, the computer program comprising:
a program portion configured to provide a representation of video content items to a user within the UI of the user device;
a program portion configured to enable a selection of one or more video content portions of a plurality of available video content portions, wherein each of the video content portions includes a plurality of attributes;
a program portion configured to collect the plurality of attributes for each selected video content portion, the attribute information comprising information related to a video annotation sequence and a user interest sequence;
a program portion configured to form channel information based upon the collected attributes;
a program portion configured to form a query of available video content portions based upon the channel information;
a program portion configured to query video content information in accordance with the channel information; and
a program portion configured to render within the UI of the device an indication of video content portions corresponding with results of the query.
PCT/IB2010/003432 2009-11-30 2010-11-29 Content management system and method of operation thereof WO2011064674A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US26534509P 2009-11-30 2009-11-30
US61/265,345 2009-11-30

Publications (2)

Publication Number Publication Date
WO2011064674A2 true WO2011064674A2 (en) 2011-06-03
WO2011064674A3 WO2011064674A3 (en) 2011-07-21

Family

ID=43855993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/003432 WO2011064674A2 (en) 2009-11-30 2010-11-29 Content management system and method of operation thereof

Country Status (1)

Country Link
WO (1) WO2011064674A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012077000A1 (en) * 2010-12-10 2012-06-14 Nokia Corporation Method and apparatus for registering a content provider channel for recommendation of content segments
CN104391960A (en) * 2014-11-28 2015-03-04 北京奇艺世纪科技有限公司 Video annotation method and system
WO2016189072A1 (en) * 2015-05-28 2016-12-01 Thomson Licensing Selection method for at least one item, terminal, computer program product and corresponding storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173287B1 (en) 1998-03-11 2001-01-09 Digital Equipment Corporation Technique for ranking multimedia annotations of interest

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6505209B1 (en) * 1999-11-02 2003-01-07 Monkeymedia, Inc. Poly vectoral reverse navigation
US20050071736A1 (en) * 2003-09-26 2005-03-31 Fuji Xerox Co., Ltd. Comprehensive and intuitive media collection and management tool





Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 10816317

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 10816317

Country of ref document: EP

Kind code of ref document: A2