WO2014142758A1 - An interactive system for video customization and delivery - Google Patents


Info

Publication number
WO2014142758A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
metadata
content
data
visual patterns
Prior art date
Application number
PCT/SG2014/000126
Other languages
French (fr)
Inventor
Karel Paul Stephan
Jurjen SOHNE
Original Assignee
Rocks International Group Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rocks International Group Pte Ltd filed Critical Rocks International Group Pte Ltd
Publication of WO2014142758A1 publication Critical patent/WO2014142758A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4622Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N21/4725End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Definitions

  • the invention is related to methods and systems for inserting virtual video content into digital video data.
  • an interactive system of video customization and delivery may offer one or more of the following: (1) real-time opportunities for viewers to interact with video streams through responsive advertising, more information links, viewing options, and other means; (2) real-time opportunities for viewers to interact with one another through the integration of social media into the viewing experience; (3) ongoing social media interaction relating to video content post-viewing by a user, such as sharing clips, "liking" things or people from within the video stream, sharing via Facebook or Twitter, or some other social media platform, and commenting on aspects of the video; and (4) ongoing commercialization opportunities relating to video content.
  • a method of displaying content comprising the steps of defining a set of visual patterns and tagging metadata to the set of visual patterns.
  • the method further comprises the steps of detecting the set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion; aggregating the tagged metadata with third party metadata; and displaying content along with the digital image, wherein the content is based on the aggregated metadata.
  • the step of displaying the content along with the digital image comprises superimposing the content over the set of visual patterns on the digital image to augment the digital image.
  • the digital image is part of a sequence of digital images.
  • the content is a HyperText Markup Language layer which is clickable and responsive to user input.
  • the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
  • the method further comprises the step of generating a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
  • the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
  • the message of comments comprises comments from an audience at a sporting event.
  • the method further comprises the step of estimating a time for which the content is displayed along with the digital image.
  • the step of detecting the set of visual patterns within the digital image is performed by a web crawler.
  • the step of tagging metadata to the set of visual patterns is performed by manual or automatic means.
  • the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
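The tagging and aggregation steps described above can be sketched as a simple data flow. This is an illustrative sketch, not the claimed implementation; the function names, field names, and sample values are assumptions made for the example.

```python
# Sketch of tagging metadata to a detected visual pattern and aggregating
# it with third-party metadata. All names and fields are illustrative.

def tag_metadata(pattern_id, identity):
    """Attach identity metadata to a detected set of visual patterns."""
    return {"pattern_id": pattern_id, "identity": identity}

def aggregate_metadata(tagged, third_party):
    """Merge tagged metadata with third-party metadata; tagged fields win on conflict."""
    merged = dict(third_party)
    merged.update(tagged)
    return merged

tagged = tag_metadata("shirt_17", identity="Player No. 17")
third_party = {"past_performance": "12 goals this season", "source": "stats-provider"}
content = aggregate_metadata(tagged, third_party)
# `content` now carries both the tagged identity and the third-party
# historical data, ready to be rendered along with the digital image.
```

In the sporting-event embodiment, the aggregated record would drive the displayed message, e.g. the player's past-performance overlay.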
  • a system for displaying content comprising at least one processor programmed to implement a detection and recognition module to detect a set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion; and an automated metadata tagging module to automatically tag metadata to the set of visual patterns.
  • the at least one processor is further programmed to implement an administrated metadata tagging module to allow manual tagging of metadata to the set of visual patterns; a metadata aggregation module that aggregates the tagged metadata with third party metadata; and an image selection and generation module to display content along with the digital image, wherein the content is based on the aggregated metadata.
  • the image selection and generation module displays the content along with the digital image by superimposing the content over the set of visual patterns on the digital image to augment the digital image.
  • the digital image is part of a sequence of digital images.
  • the content is a HyperText Markup Language layer which is clickable and responsive to user input, and the at least one processor is further programmed to implement a feedback management module to interpret the user input.
  • the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
  • the at least one processor is further programmed to generate a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
  • the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
  • the message of comments comprises comments from an audience at a sporting event.
  • the at least one processor is further programmed to estimate a time for which the content is displayed along with the digital image.
  • the detection and recognition module is a web crawler.
  • the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
  • the present invention may provide a method and system for using three-dimensional simulation to quantify the spatial alteration of a region of a two-dimensional digital video image caused by movement of the region between a first and a second video frame for at least the purposes of inserting a virtual video content item into a digital video feed.
  • a virtual advertising platform may receive a two-dimensional digital video data feed and construct a three-dimensional simulation of the two-dimensional digital video data feed within a simulation environment based at least in part on applying geometric surfaces over a plurality of spatial regions within frames of the two-dimensional digital video data feed, wherein the plurality of spatial regions are defined at least in part by a coordinate mapping of the two-dimensional digital video data feed.
  • the virtual advertising platform may map a spatial region, among the plurality of spatial regions, within a first video frame to the spatial region's location within a second video frame, wherein the second video frame was captured at a time subsequent to the first frame, by performing the steps of: Step One: selecting the spatial region within the first video frame based at least in part on mapping coordinates of the spatial region within the two-dimensional video data feed; Step Two: identifying geometric changes to the spatial region within the second video frame by quantifying the differences between the applied geometric surfaces of the spatial region within the first video frame and the applied geometric surfaces in the second video frame; and Step Three: summarizing the quantified differences as a three-dimensional mapping metric.
  • the virtual advertising platform may iteratively process each of a plurality of video frames within the two-dimensional digital video feed by performing Steps One, Two, and Three to create a plurality of three-dimensional mapping metrics, and may summarize quantitative associations among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, wherein the three-dimensional mapping algorithm defines at least in part three-dimensional geometric position data that enables application of geometric changes to the spatial region inherent in the plurality of video frames to a virtual digital video image that is not present in the two-dimensional digital video data feed.
  • the virtual digital video image may be an advertisement that is inserted into the spatial region of the two-dimensional digital data feed, replacing the spatial region, and the two-dimensional digital video image is recomposited as a new virtual digital video feed.
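Steps One through Three above can be sketched numerically: select a region in one frame, quantify its geometric change in the next frame, and summarize the differences as a metric. Representing the region by its corner coordinates is an illustrative simplification of the applied geometric surfaces; the metric fields are assumptions for the example.

```python
import numpy as np

# Sketch of Steps One-Three: quantify the geometric change of a spatial
# region between two frames and summarize it as a mapping metric.

def region_metric(corners_a, corners_b):
    """Summarize geometric differences between a region's corners in two frames."""
    a = np.asarray(corners_a, dtype=float)
    b = np.asarray(corners_b, dtype=float)
    displacement = np.linalg.norm(b - a, axis=1).mean()           # mean corner motion
    scale = np.ptp(b, axis=0).prod() / np.ptp(a, axis=0).prod()   # bounding-box area ratio
    return {"displacement": displacement, "scale": scale}

# Region shifts right by 10 px between frames, size unchanged.
m = region_metric([(0, 0), (100, 0), (100, 50), (0, 50)],
                  [(10, 0), (110, 0), (110, 50), (10, 50)])
# m["displacement"] == 10.0, m["scale"] == 1.0
```

Iterating such a metric over many frames, and fitting the associations among the per-frame metrics, corresponds to the three-dimensional mapping algorithm the platform uses to re-project a virtual image.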
  • the digital video feed may derive from an infrared camera.
  • the digital video feed may be received from a live event.
  • the digital video feed may be received from a stored digital video medium, such as but not limited to a DVD.
  • the digital video feed may be received from the Internet.
  • the selection within the virtual advertising platform of the spatial region may be further based on a correlation between mapping coordinates of the spatial region with a known spatial characteristic that is stored within a data facility that is associated with the three-dimensional simulation environment.
  • the known spatial characteristic may be an advertising logo, an article of clothing, or some other type of spatial characteristic.
  • the virtual advertising platform may use a three-dimensional mapping algorithm to insert a virtual image within an internet-based video stream.
  • the virtual advertising platform may receive a request from a user to view a two-dimensional digital video data feed from the Internet, and select a virtual digital image.
  • the virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within the two-dimensional digital data feed in place of a spatial region within the two-dimensional data feed, and wherein the three-dimensional mapping algorithm enables application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and may send the recomposited digital data feed for display to the user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region.
  • the request is accompanied by at least one datum relating to a characteristic of the user and the selection of the virtual digital image is based at least in part on a relevance to the datum.
  • the virtual digital image may be an item of sponsored content, including but not limited to an advertisement.
  • the virtual digital image may be an advertising logo that is relevant to at least a portion of the two-dimensional digital video feed.
  • the relevance of the advertising logo may be based at least in part on a stored association between the advertising logo and a second logo that is recognized in the two-dimensional digital video feed, wherein detection of the second logo is based at least in part on a quantified match between an image recognized in the two-dimensional digital video feed and a logo that is stored in a database.
  • the relevance may be further based on a geographic location associated with the two-dimensional digital video feed.
  • the relevance may be further based on a geographic location associated with a client device to which the recomposited digital video feed will be transmitted.
  • the virtual advertising platform may use a three-dimensional mapping algorithm to interpolate video data to replace corrupted digital video data and insert a virtual image within a two-dimensional digital video feed.
  • the virtual advertising platform may receive a two-dimensional digital video data feed wherein a spatial region within the plurality of frames within the two-dimensional video data feed includes a partial depiction of an advertisement due to corrupted digital video data, and use an image metrics algorithm to compute a relevance of uncorrupted digital video data within the spatial region to a set of stored digital video images.
  • the virtual advertising platform may identify a stored digital video image based at least in part on the computed relevance, and select a virtual digital image based at least in part on the identified stored digital video image.
  • the virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within the two-dimensional digital data feed in place of the spatial region within the two-dimensional data feed, and wherein the three-dimensional mapping algorithm enables application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and the virtual advertising platform may send the recomposited digital data feed for display to a user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region.
  • the virtual digital image may be a completed version of the partial image, wherein the virtual digital image is created based at least in part on interpolated digital video data using the stored digital video image.
  • the corrupted digital video data may be caused at least in part by a physical deformation of an object depicted within the two-dimensional digital video data feed.
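The interpolation idea above — completing a partially corrupted advertisement from a stored prototype — can be sketched as a masked fill. The corruption mask and the synthetic images are stand-ins; a real system would detect corruption via the image metrics algorithm.

```python
import numpy as np

# Sketch of completing a partially corrupted region: pixels flagged as
# corrupted are filled from a stored prototype image of the advertisement.

def complete_region(region, corrupt_mask, stored_image):
    """Replace corrupted pixels with the corresponding stored-image pixels."""
    completed = region.copy()
    completed[corrupt_mask] = stored_image[corrupt_mask]
    return completed

stored = np.full((4, 4), 200, dtype=np.uint8)   # stored prototype advertisement
region = stored.copy()
region[1:3, 1:3] = 0                            # corrupted patch in the feed
mask = region != stored                         # detected corruption (synthetic here)
fixed = complete_region(region, mask, stored)
# `fixed` is the completed version of the partial image.
```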
  • FIG. 1 depicts a simplified architecture including a virtual advertising platform and related facilities.
  • Fig. 2 illustrates an embodiment of image capture and recognition that may be used by the virtual advertising platform.
  • Fig. 3 illustrates an embodiment of video image mapping within a three-dimensional environment that may be used by the virtual advertising platform.
  • Fig. 4 illustrates an augmentation process that may be used for recompositing video data to include virtual video content within the virtual advertising platform.
  • FIG. 5 illustrates a simplified method and system for developing and testing algorithms within the virtual advertising platform.
  • Fig. 6 depicts a simplified flowchart of the interactions among selected components in the process of tagging metadata and integrating content into a video feed.
  • Fig. 7 depicts the modules in a preferred embodiment of the invention.
  • FIG. 8 depicts a simplified flowchart of the process of tagging and aggregating metadata and placing content on the digital images.
  • Fig. 9 depicts an embodiment of inserting a live pop-up into a video stream based on pattern recognition of the number and text appearing on a player shirt.
  • Fig. 10 depicts an embodiment of a static logo being inserted onto a fixed surface using a fixed camera video feed.
  • Fig. 11 depicts an embodiment of a static logo being inserted in place of an existing logo from the source video feed generated using a moving camera.
  • Fig. 12 depicts an embodiment of a logo being integrated into a video feed on a moving target.
  • Fig. 13 depicts an embodiment of the insertion of a logo into a live video feed, such that the logo displays on a moving target using coded targets.
  • a virtual advertising platform 120 is provided in a simplified video broadcasting context in which the virtual advertising platform 120 may be used to insert virtual video content within a digital video data feed 118 that is received by a virtual advertising platform 120 to create a virtualized digital video feed 142.
  • a digital video feed 118 may originate with a camera 104 at a live event 102 that is recording the live event 102 in real time, or broadcasting the live event 102 with a broadcasting delay.
  • a digital video feed 118 may also originate from rebroadcast programming 108, such as that from a network affiliate rebroadcasting previously recorded studio recordings, such as a sitcom, or a previously recorded sports event, such as an international football match.
  • a digital video feed 118 may originate from a stored digital video medium 110, such as a DVD, camcorder, mobile device, computer, or some other medium that is capable of storing digital video.
  • a digital video feed 118 may originate from an internet-based video platform, such as a website, email attachment, live video streaming (e.g., a webcam or an internet telephony program, such as Skype), computer user upload to the internet (e.g., to a website such as www.YouTube.com), or some other means of internet-based video transmission.
  • the virtual advertising platform 120 may receive the digital video feed 118.
  • the receipt of the digital video feed 118 may be passive, as in the embodiment of a third party actively sending the digital video feed 118 to the virtual advertising platform 120, or the virtual advertising platform 120 may actively seek out and obtain a digital video feed 118 or a plurality of digital video feeds that meet a criterion.
  • the virtual advertising platform 120 may be programmed to compare a dataset against a datum or data relating to digital video feeds, such as keywords, locations, broadcast locations, or some other criteria.
  • the virtual advertising platform 120 may include a search and retrieval facility that is enabled to search among available digital video feeds 118 according to a criterion or criteria. For example, the virtual advertising platform 120 may search a website for a digital video feed 118 that is associated with a keyword of "music video," and retrieve the video, for example via download, for further rendering and recompositing within the virtual advertising platform 120.
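The search-and-retrieval facility can be sketched as a keyword filter over available feeds. The feed records and keyword fields are illustrative assumptions, not the platform's actual data model.

```python
# Sketch of the search-and-retrieval facility: filter available digital
# video feeds by a keyword criterion. Record structure is illustrative.

def search_feeds(feeds, keyword):
    """Return feeds whose keyword list matches the search criterion."""
    return [f for f in feeds if keyword in f["keywords"]]

feeds = [
    {"url": "feed-1", "keywords": ["music video", "pop"]},
    {"url": "feed-2", "keywords": ["football", "live"]},
]
matches = search_feeds(feeds, "music video")   # retrieves feed-1 only
```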
  • a digital video feed 118 may be received by the virtual advertising platform 120.
  • the virtual video content may be a wholly new element of video or it may be an improvement or enhancement of an item of video content found in the originally received digital video feed 118, such as a new video enhancement which corrects corrupted video data and/or video data in the original digital video feed 118 that was obscured in some manner.
  • an image-processing platform associated with or within the virtual advertising platform 120 may be responsible for analyzing an incoming digital video feed 118 (also referred to herein as a "video," "video content," "video stream," and the like) in real-time, performing the detection of logos or other video content, including but not limited to advertising content, recovering geometrical and appearance parameters for the detected logos or detected content, and transmitting encoded metadata required for later augmentation with replacement logos.
  • the process may begin with the virtual advertising platform 120 decoding an incoming digital video feed 118, using a current frame 204 or a frame previous 202 to a current frame, thus extracting raw color pixels for analysis.
  • the process may be used to select a prototype logo 208, herein referred to as Logo N, for detection in the current frame 204 and/or previous frame 202 based at least in part on accessing detection databases consisting of any number of prototype logos 212 for a particular event, as well as an optional database consisting of event-specific prototype images of objects 210 upon which logos are present (background targets).
  • a logo within the digital video feed 118 may be detected based at least in part on a partial match or recognition of a logo or other type of video content within the digital video feed 118.
  • the prototype images in the detection databases may undergo an image analysis step in which the information, including but not limited to the following, is extracted in order to form a unique representation of the logo (and optionally the background targets):
  • the virtual advertising platform 120 may enable detection of salient features
  • the salient regions consist of heterogeneous regions within an otherwise generally symmetrical or homogeneous image.
  • the virtual advertising platform 120 may enable detection of spatial patterns
  • the virtual advertising platform 120 may enable detection of spectral distribution 224, where the spectral distribution consists of a summary of color and intensity information.
  • the virtual advertising platform 120 may enable a zoom level comparison
  • An incoming video image may undergo a similar extraction of saliency, spatial patterns and spectral distribution, followed by a comparison between these characteristics and those of prototype logos (detection).
  • the detection phase may carry out comparisons between the various features at multiple scales of zoom, and may be able to detect multiple instances of the same logo.
  • the same process may be carried out for each logo in the database in order to determine a match 230 between the video image in the received video stream 118 and the stored image or logo (e.g., Logo N) in the prototype logos 212 and/or prototype objects 210 databases.
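One of the compared characteristics, the spectral distribution, can be sketched as a normalized intensity histogram with histogram intersection as the similarity score. This is a minimal sketch of only the spectral branch; the full pipeline also compares saliency and spatial patterns at multiple zoom levels.

```python
import numpy as np

# Sketch of matching via spectral distribution: compare normalized
# intensity histograms of a candidate region and a prototype logo.

def spectral_distribution(image, bins=8):
    """Normalized histogram summarizing intensity information."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def match_score(region, prototype):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    return np.minimum(spectral_distribution(region),
                      spectral_distribution(prototype)).sum()

proto = np.tile(np.arange(0, 256, 16, dtype=np.uint8), (16, 1))  # synthetic logo
score_same = match_score(proto, proto)                            # perfect match
score_diff = match_score(np.zeros((16, 16), dtype=np.uint8), proto)
```

In practice the score would be thresholded to declare a match 230 against each entry in the prototype logos 212 and prototype objects 210 databases.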
  • Temporal smoothing of detection results may be based on storage of detections from previous image frames, making use of physical constraints and predictive filtering to reduce jitter 232.
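The jitter-reduction step can be sketched with a simple exponential moving average over per-frame detection positions. This stands in for the predictive filtering described above; a production system might use a Kalman-style filter with physical motion constraints.

```python
# Sketch of temporal smoothing of detection results: blend each new
# detection with the running estimate to reduce frame-to-frame jitter.

def smooth_positions(positions, alpha=0.5):
    """Exponential moving average; alpha weights the newest detection."""
    estimate = positions[0]
    smoothed = [estimate]
    for p in positions[1:]:
        estimate = alpha * p + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed

# Jittery x-coordinates of a detected logo across consecutive frames.
raw = [100.0, 104.0, 98.0, 102.0]
smoothed = smooth_positions(raw)
# The smoothed track varies over a narrower range than the raw detections.
```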
  • the detection phase may indicate locations and identities of logos in the scene, which as shown in Fig. 3, may be followed by a pose-estimation algorithm process in which the geometrical positioning of the detected logo in the scene is ascertained within a 3-D environment 128. This accounts for detecting logo translations 302, logo scalings 308, logo rotations 304, shearing 312 and warping 310 of a detected logo as compared to a database prototype, resulting in metadata 320 encoding of these spatial parameters.
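The pose-estimation step can be sketched as a least-squares fit of an affine transform (covering translation 302, scaling 308, rotation 304, and shearing 312) from prototype-logo points to their detected positions. A full system would typically estimate a homography to also capture warping 310 and would encode the recovered parameters into the metadata 320; the point correspondences here are synthetic.

```python
import numpy as np

# Sketch of pose recovery: estimate a 2x3 affine matrix A mapping
# prototype coordinates to detected scene coordinates via least squares.

def fit_affine(src, dst):
    """Solve dst = A @ [x, y, 1] for a 2x3 affine matrix A."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords, (n, 3)
    A, _, _, _ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                      # (2, 3)

# Prototype corners vs. detected corners: shifted by (5, 2), doubled in size.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(5, 2), (7, 2), (5, 4), (7, 4)]
A = fit_affine(src, dst)
# A recovers scale 2 on both axes and translation (5, 2).
```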
  • a detected and geometry-corrected logo may undergo an alignment procedure in order to reconstruct a pixel to pixel mapping between the two logos, such as Logo N 218 from the digital video feed 118 and a Logo X 404 obtained from a replacement logos database 400 for the purpose of selecting the replacement logo 402 to insert as a virtual content item into the digital video feed 118 in place of Logo N (see Fig. 4).
  • the Logo X may be aligned 316 and major discrepancies between the aligned pair of Logo N and Logo X may be used to construct an occlusion mask 314, based at least in part on applying geometric features 410 of the logos to the alignment step, and thus accounting for partial occlusions that exist between the camera and target, and partial obscuration due to viewing angle, or some other type of viewing obstruction or occlusion (e.g., due to folds in fabric, such as a player's jersey, or light reflection from the side of an object).
  • the occlusion mask may be encoded into an outgoing metadata 320 structure for the augmented phase and applied 412 as part of the augmentation process.
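Occlusion-mask construction can be sketched as a thresholded difference between the aligned pair: pixels where the detected logo diverges strongly from the prototype are treated as occluded and excluded from replacement. The threshold and synthetic images are illustrative assumptions.

```python
import numpy as np

# Sketch of occlusion-mask construction from an aligned logo pair.

def occlusion_mask(detected, prototype, threshold=50):
    """True where the detected logo is likely occluded by a foreground object."""
    diff = np.abs(detected.astype(int) - prototype.astype(int))
    return diff > threshold

proto = np.full((4, 4), 180, dtype=np.uint8)   # aligned prototype (Logo N)
seen = proto.copy()
seen[2:, 2:] = 20                              # e.g. a hand partially covering the logo
mask = occlusion_mask(seen, proto)
# `mask` marks the occluded 2x2 corner; it would be encoded into the
# metadata 320 structure and applied 412 during augmentation.
```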
  • color-differences between a recovered logo and a prototype may be assessed, and encoded as a color transformation matrix 318 and applied 414 during the augmentation process for later correction of the augmented logo.
  • the transformation parameters may be added to the metadata 320 structure.
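The color transformation matrix 318 can be sketched as a least-squares fit of a 3x3 linear map from prototype RGB values to the colors observed in the scene, so a replacement logo can be recolored to match scene lighting. The sample colors are synthetic.

```python
import numpy as np

# Sketch of the color-correction step: fit a 3x3 matrix M such that
# scene_rgb ≈ proto_rgb @ M, for later application to the replacement logo.

def fit_color_transform(proto_rgb, scene_rgb):
    """Least-squares 3x3 color transformation matrix."""
    M, _, _, _ = np.linalg.lstsq(np.asarray(proto_rgb, float),
                                 np.asarray(scene_rgb, float), rcond=None)
    return M

# Scene appears at half brightness relative to the prototype.
proto = [(200, 100, 50), (40, 80, 120), (10, 250, 30)]
scene = [(100, 50, 25), (20, 40, 60), (5, 125, 15)]
M = fit_color_transform(proto, scene)
corrected = np.asarray(proto, float) @ M   # recolored to match the scene
```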
  • specular (as opposed to uniform) lighting effects may be accounted for by detecting anomalous lighting patterns in the aligned image pair.
  • This information may be encoded into the metadata structure for specular reflection compensation in the augmentation phase.
  • a blending algorithm 418 may involve extracting pixel properties in the vicinity of the detected logo. These properties may be encoded into the metadata 320 structure to allow for a natural blending at the augmentation phase, particularly at the edges of replacement logos 400.
  • the blending algorithm may be used to create an augmented video stream 420.
  • the augmented video stream may be a virtualized digital video stream 142 that may be transmitted to other entities and client devices 158 for viewing.
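The edge-blending idea behind the blending algorithm 418 can be sketched as a per-pixel alpha blend with a feathered ramp at the replacement logo's edges, so boundary pixels mix frame and logo naturally. The one-row image and ramp values are illustrative.

```python
import numpy as np

# Sketch of blending a replacement logo into a frame with feathered edges.

def blend(frame, logo, alpha):
    """Per-pixel alpha blend: alpha 1.0 = pure logo, 0.0 = pure frame."""
    return (alpha * logo + (1 - alpha) * frame).astype(np.uint8)

frame = np.full((1, 5), 100, dtype=np.uint8)       # scene pixels
logo = np.full((1, 5), 200, dtype=np.uint8)        # replacement logo pixels
alpha = np.array([[0.0, 0.5, 1.0, 0.5, 0.0]])      # feathered edge ramp
out = blend(frame, logo, alpha)
# out → [[100, 150, 200, 150, 100]]: pure frame at the edges,
# pure logo at the center, blended in between.
```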
  • the virtual advertising platform 120 may include an algorithm testing and learning facility that may rank, prioritize, and optimize the performance of the algorithms used by the virtual advertising platform 120 for the placement of virtual video content within a digital video feed.
  • a detection algorithm 500 may be tested against a criterion and its performance scored 502, ranked or otherwise evaluated for its value in recognizing and detecting a target logo 504.
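Scoring an algorithm's value in recognizing a target logo can be sketched as precision and recall over labelled frames. The ground-truth and detection sets are illustrative; the actual criterion used by the testing and learning facility is not specified here.

```python
# Sketch of scoring a detection algorithm against labelled frames.

def score(detections, ground_truth):
    """Precision/recall of detected frame IDs vs. frames containing the logo."""
    true_pos = len(detections & ground_truth)
    precision = true_pos / len(detections) if detections else 0.0
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    return precision, recall

detected = {1, 2, 3, 5}          # frames where the algorithm fired
truth = {1, 2, 3, 4}             # frames that actually contain the logo
precision, recall = score(detected, truth)
# precision == 0.75 (one false alarm), recall == 0.75 (one miss)
```

Such scores would feed the ranking and prioritization of competing algorithms within the platform.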
  • a virtualized digital video feed may be created and distributed by the virtual advertising platform 120.
  • the virtualized digital video feed may be distributed to entities such as, but not limited to, a master control booth 114, such as that associated with a network broadcaster, a regional broadcaster 152, such as a local affiliate of a network broadcaster, the internet 154, such as a website, or some other entity capable of receiving a video distribution.
  • Data and/or metadata may be associated with the virtualized digital video feed 142 including, but not limited to, tracking data 144, such as cookies 148 or pixel tracking 150 data, that permits the distribution of the virtualized digital video feed 142 to be tracked, recorded, and shared with parties, including the virtual advertising platform 120.
  • An entity such as a regional broadcaster 152, internet 154 website, or some other entity may receive the virtualized digital video feed 142 and transmit it to a client device 158 including, but not limited to, an internet-enabled device 160, TV 162, phone 164, or some other device capable of displaying a digital video.
  • a user of the client device 158 may then view an instance of the virtualized digital video feed 168, and data confirming this viewing instance may be further transmitted, for example on the basis of the tracking data, to an entity, such as the virtual advertising platform 120.
  • the virtual advertising platform 120 may receive and store this user viewing data, along with a plurality of users' viewing data, and use this information at least in part for the purposes of determining a relevancy for a type of virtual video content to insert within a digital video feed 118.
  • User data 170, such as demographic 172, economic 174, and usage history data relating to a user, may be associated with a client device 158; this data may also be received and stored by the virtual advertising platform 120, along with a plurality of users' data, and used at least in part for the purposes of determining a relevancy for a type of virtual video content to insert within a digital video feed 118.
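One simple (and entirely illustrative) way to turn stored viewing history into a relevancy determination is to score content categories by how often the user has watched them; the field names below are invented for the example.

```python
# Illustrative sketch: derive per-category relevancy from a user's
# stored viewing history, then pick the most relevant candidate content,
# as the platform is described as doing. Field names are hypothetical.
from collections import Counter

def category_relevancy(viewing_history):
    """Relative frequency of each viewed category, 0..1 per category."""
    counts = Counter(v["category"] for v in viewing_history)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def pick_content(candidates, viewing_history):
    """Choose the candidate whose category the user watched most."""
    rel = category_relevancy(viewing_history)
    return max(candidates, key=lambda c: rel.get(c["category"], 0.0))
```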
  • virtual video content that is used by the virtual advertising platform 120 to include within a virtualized digital video feed 168 may be sponsored content 180, such as an advertisement.
  • Sponsored content 180 may be further associated with an ad exchange 182 or ad network within which advertisers 188 may place bids using a bidding platform 184 for the right to have a given sponsored content 180 placed as a virtual content within a virtualized digital video feed 168.
  • the virtual advertising platform may be an ad exchange 182 or ad network within which advertisers 188 may place bids using a bidding platform 184 for the right to have a given sponsored content 180 placed as virtual content within a virtualized digital video feed 168.
  • the virtual advertising platform 120 may be used to insert virtual content other than advertisements or sponsored content, including, but not limited to, entertainment video, amateur video, special effects, or some other type of non-advertising content.
  • the virtual advertising platform 120 may be used to insert virtual video content into a three-dimensional digital video data feed.
  • [70] In embodiments of the present invention, the virtual advertising platform 120 may receive a digital video data feed.
  • a digital video data feed may derive from a 2D camera, a 3-D camera, an infrared camera, a stereoscopic camera, or some other type of camera.
  • the virtual advertising platform 120 may map a region within a first video frame of the digital video data feed to the region within a second video frame by performing the steps of: (i) selecting the region within the first video frame based at least in part on recognition of data (e.g., pixel data, steganographic data) within the region matching that of a video data criterion (e.g., indexed image/video segments of known advertisements); (ii) selecting the region from a second video frame within the digital video data feed, captured by the stereoscopic camera at a time subsequent to the first frame, and associating the first location of the region in three-dimensional video space in the first video frame with the second location of the region in three-dimensional video space in the second video frame, wherein the association is based at least in part
  • the virtual advertising platform 120 may segment the region into a plurality of region segments, and iteratively processing the plurality of region segments within the region by performing Steps i, ii, and iii for each region segment to create a plurality of three-dimensional mapping metrics, wherein each three-dimensional mapping metric summarizes a location within the three-dimensional space for each of the plurality of region segments across each of the frames within the digital video data feed.
  • the virtual advertising platform may summarize the association among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, and a replacement video region may be mapped to the region within the first video frame, wherein the mapping is a quantitative association of data (e.g., pixel data, steganographic data) within the replacement video region and the region within the first video frame.
  • the virtual advertising platform may manipulate video data of the replacement video region, based at least in part on the application of the three- dimensional mapping algorithm, to render a second version of the replacement video region suitable for placement within the second video frame, wherein the rendering of the replacement video region is visually and/or quantitatively equivalent to the alteration in three-dimensional space of the region in the first and second frames that is summarized by the three-dimensional mapping metric.
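The three-dimensional mapping idea can be sketched minimally. This assumed simplification reduces the mapping to a per-axis translation: the region's locations in two frames are associated, the displacement becomes the mapping metric, and the same displacement positions the replacement region in the second frame.

```python
# A minimal sketch of the 3-D mapping step, simplified (by assumption)
# to translation: summarize the region's motion between frames as a
# mapping metric, then apply it to place the replacement region.

def mapping_metric(loc_frame1, loc_frame2):
    """Per-axis displacement of the region between two frames."""
    return tuple(b - a for a, b in zip(loc_frame1, loc_frame2))

def apply_mapping(replacement_loc, metric):
    """Place the replacement region by applying the same displacement."""
    return tuple(p + d for p, d in zip(replacement_loc, metric))
```

A real mapping algorithm would also capture rotation, scale, and perspective changes of the region in three-dimensional video space, not just translation.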
  • [71] In embodiments of the present invention, the virtual advertising platform 120 may receive a digital video data feed.
  • a digital video data feed may derive from a 2D camera, a 3-D camera, an infrared camera, a stereoscopic camera, or some other type of camera.
  • the virtual advertising platform 120 may map a region within a first video frame of the digital video data feed to the region within a second video frame by performing the steps of: (i) selecting the region within the first video frame based at least in part on recognition of data (e.g., pixel data, steganographic data) within the region matching that of a video data criterion (e.g., indexed image/video segments of known advertisements); (ii) selecting the region from a second video frame within the digital video data feed, captured by the stereoscopic camera at a time subsequent to the first frame, and associating the first location of the region in three-dimensional video space in the first video frame with the second location of the region in three-dimensional video space in the second video frame, wherein the association is based at least in part
  • the virtual advertising platform 120 may segment the region into a plurality of region segments, and iteratively processing the plurality of region segments within the region by performing Steps i, ii, and iii for each region segment to create a plurality of three-dimensional mapping metrics, wherein each three-dimensional mapping metric summarizes a location within the three-dimensional space for each of the plurality of region segments across each of the frames within the digital video data feed.
  • the virtual advertising platform may summarize the association among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, and a replacement video region may be mapped to the region within the first video frame, wherein the mapping is a quantitative association of data (e.g., pixel data, steganographic data) within the replacement video region and the region within the first video frame.
  • the virtual advertising platform may manipulate video data of the replacement video region, based at least in part on the application of the three-dimensional mapping algorithm, to render a second version of the replacement video region suitable for placement within the second video frame, wherein the rendering of the replacement video region is visually and/or quantitatively equivalent to the alteration in three-dimensional space of the region in the first and second frames that is summarized by the three-dimensional mapping metric.
  • the virtual advertising platform 120 may iteratively manipulate video data of a plurality of replacement video regions, based at least in part on the application of the three-dimensional mapping algorithm, wherein the iterative manipulation produces a plurality of replacement video regions, each of which corresponds to one frame of a series of frames within the digital video data feed.
  • the virtual advertising platform 120 may aggregate each of the plurality of replacement video regions to create a plurality of composite replacement video images, wherein each of the plurality of composite replacement video images corresponds to each of the frames of the series of frames within the digital video data feed.
  • Each of the composite replacement video images may be validated against a criterion replacement image, wherein the validation is summarized as a quantitative validity metric, and the three-dimensional mapping algorithm may iteratively adjust to optimize the predictive validity of the quantitative validity metric.
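The validation loop above might look like the following sketch. Mean absolute pixel error is one plausible choice of quantitative validity metric (an assumption, not the specified one), and a single mapping parameter is nudged to improve the metric, standing in for the iterative adjustment of the mapping algorithm.

```python
# Hedged sketch of the validation step: compare a composite replacement
# image against a criterion image via a quantitative validity metric,
# then take one iterative-adjustment step on a mapping parameter.

def validity_metric(composite, criterion):
    """Mean absolute per-pixel difference; 0.0 means a perfect match."""
    diffs = [abs(a - b) for a, b in zip(composite, criterion)]
    return sum(diffs) / len(diffs)

def adjust_gain(gain, composite, criterion, step=0.1):
    """Keep whichever of (gain, gain + step) scores better on the metric."""
    base = validity_metric([p * gain for p in composite], criterion)
    up = validity_metric([p * (gain + step) for p in composite], criterion)
    return gain + step if up < base else gain
```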
  • [72] In embodiments of the present invention, the virtual advertising platform 120 may recomposite the digital data feed into a new digital data feed in which the placement of the composite replacement video images is substituted for content within the digital video feed, and rebroadcast the new digital data feed.
  • [73] In embodiments of the present invention, the virtual advertising platform 120 may enable data interpolation to fill in missing video imagery due to obfuscation from, for example, sun reflection or dimmed lighting, folded clothing, blocked images, obfuscated images, and the like.
  • [74] In embodiments of the present invention, tracking data may be inserted into a recomposited virtual video feed so that downstream usage may be tracked (e.g., internet-streamed content).
  • [75] In embodiments of the present invention, the virtual advertising platform 120 may use a distributed computing environment, receiving video data at a server from a digital video data feed (e.g., from a Master Control Booth), segmenting the video data into a plurality of video data segments, and distributing the plurality of video data segments to a plurality of servers within the distributed computing environment.
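The distributed arrangement can be illustrated with a simple sketch: the feed is split into fixed-size segments and assigned round-robin to worker servers. Both the fixed segment length and the round-robin policy are assumptions made for the example.

```python
# Illustrative only: segment an incoming feed into fixed-size chunks and
# assign them round-robin to a pool of worker servers, one simple way to
# realize the distributed-computing arrangement described.

def segment_feed(frames, segment_len):
    """Split the frame list into consecutive segments."""
    return [frames[i:i + segment_len]
            for i in range(0, len(frames), segment_len)]

def assign_segments(segments, servers):
    """Round-robin distribution of segments across servers."""
    plan = {s: [] for s in servers}
    for i, seg in enumerate(segments):
        plan[servers[i % len(servers)]].append(seg)
    return plan
```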
  • [76] In embodiments of the present invention, the virtual advertising platform 120 may select a virtual video content for placement within a digital video data feed (using the methods described herein), wherein the selection is based at least in part on information relating to at least one of (i) a broadcast affiliate, (ii) a regional code associated with a distribution destination, and/or (iii) a device on which the digital video feed will be displayed (e.g., a cable set-top box or cell phone).
  • [77] In embodiments of the present invention, the virtual advertising platform 120 may insert a virtual video content into a video data feed based at least in part on the selection of a virtual video content from a dictionary in which video content stored within the dictionary is associated with metadata that describes in part a mapping onto known advertisements for which the video in the dictionary may be substituted.
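A sketch of such a dictionary lookup, with invented entry names: each stored video carries metadata naming the known advertisements it may replace, and a lookup returns a substitutable entry for a detected advertisement.

```python
# Hypothetical sketch of the content dictionary: entries carry metadata
# mapping them onto the known advertisements they may replace.

REPLACEMENT_DICTIONARY = [
    {"video_id": "promo-001", "replaces": {"BrandA", "BrandB"}},
    {"video_id": "promo-002", "replaces": {"BrandC"}},
]

def find_replacement(detected_ad, dictionary=REPLACEMENT_DICTIONARY):
    """Return the first dictionary entry mapped onto the detected ad."""
    for entry in dictionary:
        if detected_ad in entry["replaces"]:
            return entry["video_id"]
    return None
```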
  • an ad exchange 182 that may be associated with the virtual advertising platform 120, as described herein, may present a mode of enabling advertisements through various online portals, such as websites, by creating a platform for integrating various entities involved in the preparation and delivery of sponsored content 180, such as advertisements. It may act as a single platform for enabling transactions between advertisers and publishers.
  • the integration of various services in a single platform may facilitate bidding of advertisements, for example using a bidding platform 184 in real-time, dynamic pricing, customizable reporting capabilities, identification of target advertisers and market niches, rich media trafficking, algorithms for scalability, yield management, data enablement, APIs for interfacing with other platforms, and the like.
  • An ad exchange 182 may be implemented through various electronic and communication devices that may support networking. Some examples of such devices may include but are not limited to desktops, palmtops, laptops, mobile phones, cell phones, and the like. It may be understood by a person ordinarily skilled in the art that various wired or wireless techniques may be employed to support networks of these devices with external communication platforms such as Cellular, WIFI, LAN, WAN, MAN, Internet and the like.
  • a complete system of an ad exchange 182, hereinafter referred to as an ad exchange 182 for descriptive purposes, may include entities such as ad exchange 182 servers, ad inventories, ad networks, ad agencies, advertisers, publishers, virtual advertising platform 120 facilities, and the like. The detailed description of some of these entities is provided separately herein for simplicity of the description.
  • An ad exchange 182 server may include one or more servers that may be configured to provide web services or other kinds of services for facilitating placement of sponsored content 180, such as insertion of sponsored content 180 on websites.
  • an ad exchange 182 server may be a computer server, such as a web server, that may perform the tasks of storing online advertisements and delivering the advertisements to website users or viewers, mobile network providers, other platforms such as a virtual advertising platform 120, and the like.
  • the ad exchange 182 server may facilitate display of relevant advertisements and information each time a visitor or a user visits a webpage using a web browser or refreshes the web page.
  • the advertisements may be in the form of virtual video content, banner ads, contextual ads, behavioral ads, interstitial ads and the like.
  • the ad exchange 182 server may perform the tasks of keeping a log of the number of impressions and clicks, and recording traffic data such as the number of users and the IP addresses of the users for identifying spam, and the like. Logs may be utilized for creating statistical graphs for analyzing traffic flow of packets, routing paths, and the like. Further, a database may be maintained by the ad exchange 182 server to store information related to the users of webpages and client devices 158 and to store their behavioral and contextual information. This behavioral and contextual information may be used by the ad exchange 182 server, and by the virtual advertising platform 120, to present relevant advertisements to the user in the form of virtual video content that is inserted into a digital video feed 118.
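The logging task above can be sketched as a small in-memory structure; the field names and in-memory counters are illustrative assumptions, not the server's actual schema.

```python
# A minimal sketch of the ad-server logging described: count impressions
# and clicks per advertisement and track distinct user IPs, from which
# spam checks and traffic statistics could later be derived.
from collections import defaultdict

class AdLog:
    def __init__(self):
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)
        self.ips = defaultdict(set)

    def record(self, ad_id, event, ip):
        if event == "impression":
            self.impressions[ad_id] += 1
        elif event == "click":
            self.clicks[ad_id] += 1
        self.ips[ad_id].add(ip)

    def ctr(self, ad_id):
        """Click-through rate: clicks per impression."""
        shown = self.impressions[ad_id]
        return self.clicks[ad_id] / shown if shown else 0.0
```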
  • contextual information relating to a client device 158 may indicate that a language setting on the device is set so that the default language is "English."
  • This contextual information may be used at least in part by the virtual advertising platform 120 to select virtual video content that is based on English for insertion within a digital video feed 118 in place of non-English elements present in the digital video feed 118.
  • the database may be updated by the ad exchange 182 server periodically or when triggered by an ad exchange 182 server owner.
  • the database may be a standalone database or may be a distributed database, and may be further associated with the virtual advertising platform 120.
  • a publisher may be an owner of an ad exchange 182 server.
  • Such a deployment may be called a local ad exchange 182 server since the ad exchange 182 server is controlled and maintained by the publisher and the ad exchange 182 server may serve only the publisher.
  • an ad exchange 182 server may also be deployed and hosted by a third party.
  • Such a deployment is called a third party server or a remote server since the owner of the ad exchange 182 server and the web server are different.
  • a direct link may be maintained between the ad exchange 182 server owner (third party) and the publisher to keep the publisher updated regarding online advertisements on the web page and any transaction therein.
  • the ad exchange 182 server may serve numerous domains owned by various publishers differently. [81] In accordance with various embodiments of the present invention, several other tasks may be performed by the ad exchange 182 server.
  • the ad exchange 182 server may assist in uploading advertisements or any other similar content on the web page, including loading content, such as digital video feeds 118 to the virtual advertising platform 120.
  • the ad exchange 182 server may also facilitate downloading of downloadable content of the advertisements, or a portion of the advertisements, as defined by the restrictions imposed by advertisers.
  • an ad exchange 182 server may also be utilized to avoid ad trafficking on a web page or web pages. The trafficking may be avoided based on defined criteria and parameters regarding business and commercial viability and importance.
  • an ad exchange 182 server may apply a cap or a limit to the number of times a sponsored content, such as virtual video content, is displayed, thereby setting a limit on the usage based on the money invested for online advertisements.
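The frequency cap just described is straightforward to sketch; the class name and in-memory counter are illustrative assumptions.

```python
# Sketch of the frequency-cap idea: refuse to serve a sponsored content
# item once it has been shown the purchased number of times.

class FrequencyCap:
    def __init__(self, cap):
        self.cap = cap
        self.shown = {}

    def may_display(self, content_id):
        """True and count the showing if under the cap, else False."""
        if self.shown.get(content_id, 0) >= self.cap:
            return False
        self.shown[content_id] = self.shown.get(content_id, 0) + 1
        return True
```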
  • an ad exchange 182 server may disable the display of certain advertisements based on the user's context and behavior.
  • the time period for displaying advertisements to users may be controlled by the ad exchange 182 server, and this information used by the virtual advertising platform 120 for the purposes of selecting the type of virtual content to include in a virtualized digital video feed 142.
  • the time period may be set uniform for all users or may vary for various users based on the behavioral, contextual, or other information gathered by the ad exchange 182 server or contextual information previously stored in a database, including a database that is associated with the virtual advertising platform 120.
  • the ad exchange 182 server may inform the sequence of advertisements used by the virtual advertising platform 120 for placing virtual content within a digital video feed, based at least in part on the interests and user data 170 of a user.
  • owners of ad inventories may be advertisers who desire to display content, such as virtual content that is placed by the virtual advertising platform 120 within a digital video feed 118, and the like.
  • the advertisers may purchase a portion of space within a digital video feed 118 for the display of their inventories and advertisements.
  • Ad inventories may be stored in or accessed by an ad exchange 182 server, or the virtual advertising platform 120, from where the inventories may be fetched for display within a digital video feed 118. These inventories may then be added to the allocated space within a digital video feed by the virtual advertising platform 120 and/or the owner of a digital video feed 118.
  • the allocation of space to the ad inventories and display of the content of the ad inventories may be governed by the ad exchange 182 Server and/or by parameters within the virtual advertising platform 120.
  • an advertiser is a buying entity in an ad exchange 182 that may provide sponsored content 180, advertisements and other similar content to parties that are capable of placing the content, including the virtual advertising platform 120 that is enabled to place video content within a digital video feed 118.
  • an ad exchange 182 may be connected to numerous publishers through ad exchange 182 servers. The advertisers might not be directly linked to an ad exchange 182 server; rather, an intermediate system such as an ad network or an ad agency may be provided where various advertisers may be linked or represented in the ad exchange 182.
  • Advertisers may place bids for purchasing a defined space within a virtualized digital video feed 142 for the placement of sponsored content 180, such as an advertisement, specifying the space required for the advertisements along with other relevant details.
  • the advertisements may be classified by the virtual advertising platform 120 according to the criteria defined in an intermediate system such as costs, contexts, relevancy of the content relative to digital video feed 118 content and the like.
  • Classified sponsored content 180 may then be sorted and prioritized by the virtual advertising platform 120 and the advertiser with the highest bid may be provided the required space to place sponsored content 180 within a virtualized digital video feed 142.
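The classify-sort-award sequence above can be sketched as a simple auction. The relevancy threshold and tuple layout are assumptions made for the example, not the platform's actual criteria.

```python
# Illustrative auction sketch: filter bids by the sponsored content's
# relevancy classification, then award the placement slot to the
# highest qualifying bid, mirroring the sort-and-prioritize step.

def award_slot(bids, min_relevancy=0.5):
    """bids: (advertiser, amount, relevancy) tuples. Winner or None."""
    qualifying = [b for b in bids if b[2] >= min_relevancy]
    if not qualifying:
        return None
    return max(qualifying, key=lambda b: b[1])[0]
```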
  • Advertisers may also opt for purchasing the space from several publishers of digital video feeds 118.
  • a publisher may be a seller who owns or operates display locations, such as websites, that are able to display digital video feeds 118 and virtualized digital video feeds 142, and to sell a defined space to the advertisers on, for example, a web page.
  • Advertisers may interact with publishers through the intermediate system, such as an ad network, an ad exchange 182, through the virtual advertising platform 120, and the like as described herein for buying or selling purposes.
  • Publishers may allocate the space within a digital video feed 118, and the virtual advertising platform 120 may add sponsored content, video inventory or advertisement content, in the form of video, within the allocated space.
  • Publishers and/or the virtual advertising platform 120 may forecast the number of impressions that may occur during a particular period of time such as a day or a month on a specific web page. With this forecasted information and the information related to the already allocated space, publishers and/or the virtual advertising platform 120 may predict the amount of space that may further be sold.
  • the publisher and/or the virtual advertising platform 120 may also classify the inventories and video media based on several criteria as defined herein.
  • the categorization may be performed either manually or using an automated system such as by a programmed algorithm.
  • the manual classification may involve persons that may review and analyze the video content of digital video feeds 118 and based on the review and analysis the video content may be classified into various categories.
  • the classification may also be performed through automated system such as through a virtual advertising platform 120 as described herein.
  • a programmed algorithm for automated classification may be stored in the virtual advertising platform 120 which is enabled to review and analyze digital video feeds 118 and classify the digital video feeds 118 into defined categories.
  • the classification technique may provide an additional advantage to the publishers and advertisers in several ways. For example, by estimating the level of relevancy of a video inventory, sponsored video content, or advertising content to a given digital video feed 118, the publisher may demand higher charges, since the probability of gaining interest in the advertising content is higher while the user views a virtualized digital video feed 142 in which a relevant sponsored video content 180 is inserted. Similarly, a digital video feed 118 may be prioritized as more relevant if it is more probable that a greater number of video viewings will be made per unit time. Data relating to the viewing history of digital video feeds 118 may be collected, stored, and analyzed by the virtual advertising platform 120.
  • an ad exchange 182 may implement algorithms which may allow the publishers to price ad impressions during bidding in real-time. Apart from selecting the bidder on predefined criteria, an ad exchange 182 may ensure that the bids submitted by the advertisers are neither undervalued nor overvalued. An ad exchange 182 may automatically generate maximum return on every impression. In addition, the reporting of sales data may be presented to the publishers in a simplified format for easy understanding. The publishers may be authorized to identify the brands and/or products preferred by them for placement of ad impressions. Likewise, an ad exchange 182 may allow the publishers to restrict certain brands, contents, formats, and the like based on their preferences.
  • ad networks may be a group of publishers and/or advertisers that are connected together.
  • An ad network may be an organization or an entity that may connect web sites that want to host advertisements with advertisers who want to run advertisements.
  • An ad network may be categorized as a representative network, a blind network, or a targeted network. Representative networks allow full transparency of content to the advertisers.
  • blind networks may provide a low price to advertisers at the cost of the freedom to decide the placement of advertisements on the publisher's web page.
  • Targeted networks may be directed to specific targeting technologies, including analyzing behavioral or contextual information of the users, as described herein, and as may be collected, stored, and analyzed by the virtual advertising platform 120.
  • the virtual video content that is available through the virtual advertising platform 120, created within the virtual advertising platform 120, or associated with the virtual advertising platform 120 may be syndicated to users' client devices 158 using a syndication facility based at least in part on using contextual information associated with the virtual video content, and/or content similar to the virtual video content and including sponsored content 180, in order to determine the relevancy, using a relevancy determination facility, of virtual video content to a criterion, such as a keyword associated with the virtual video content, usage history of the content, and/or metadata that is associated with the virtual video content. Relevancy may be based at least in part on a relevancy between contextual data associated with the virtual video content and user data 170.
  • a virtual video content selection criterion may be derived by the virtual advertising platform 120 based at least in part on a user's prior usage of, and behaviors within, a client device 158, including but not limited to prior video viewings made on the client device 158, such as an internet-enabled device 160.
  • a user viewing a virtualized digital video feed 142 on a client device 158 that derives from the virtual advertising platform 120 may have previously searched, retrieved, used, or interacted with virtual video content that is associated with metadata, such as a URL indicating an amateur video posting website like YouTube, or a keyword such as "The World Cup."
  • An automated program running within or in association with the virtual advertising platform 120, or affiliated with the virtual advertising platform 120 may recognize keywords, metadata, or other material within this prior-viewed video content that indicates a relevance to the virtual video content genre "Sports.” Based at least in part on this, the user may be associated with a user profile datum/data indicating that the user is interested in sports.
  • the selection of the virtual video content to syndicate to a user may be based at least in part on the user data and user profile data indicating an interest in sports, and may be automated, initiated individually by the virtual advertising platform 120, and/or initiated by the user or creator of virtual video content that is available within or associated with the virtual advertising platform 120, for example video content that is submitted to a website for manipulation, such as YouTube.
  • Contextual information that may be associated with video content may also include keywords, terms, or phrases located within or associated with the video content and/or virtual video content, the links to video content, the links from video content, click patterns and clickthroughs associated with prior use of the video content (including click patterns and clickthroughs associated with sponsored content appearing in association with the virtual video content), metadata, video content usage patterns including time, duration, depth and frequency of video content usage, the video content's origination host, genre(s) relating to the video content, and other indicia of video content context.
  • the relevancy of the contextual information associated with video content may be indicated through the use of a relevancy score.
  • the relevancy score may be a numerical summary of the statistical association between, for example, contextual video and virtual video content parameters (e.g., genre of video content) and user parameters (e.g., genres of video content previously downloaded by user).
  • the relevancy score may be a proprietary score assigned to a video content or virtual video content by the virtual advertising platform 120, or third party service provider that is associated with the virtual advertising platform 120.
  • the relevancy scores of a syndicated virtualized video content may be stored in a virtual video content relevance dictionary.
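One plausible (assumed) numerical form for such a relevancy score is the overlap between a content item's contextual parameters and the user's history, expressed as a Jaccard-style value between 0 and 1:

```python
# Hypothetical relevancy score: set overlap between a content item's
# contextual genres and the genres in the user's history.

def relevancy_score(content_genres, user_genres):
    """|intersection| / |union| of genre sets; 1.0 is a perfect match."""
    a, b = set(content_genres), set(user_genres)
    if not a | b:
        return 0.0
    return len(a & b) / len(a | b)
```

A proprietary score, as the text notes, could weight these parameters very differently; this sketch only shows the statistical-association idea.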
  • usage patterns may be obtained from a database of user data 170 and/or metadata relating to users of the virtual advertising platform 120.
  • a wide range of usage patterns may be used to assist with formation of queries (implicit and explicit) and with retrieval and organization of video content search results, such as that presented to a recipient of a virtualized video data feed 142 from the virtual advertising platform 120.
  • An algorithm facility may include one or more modules or engines suitable for analyzing usage patterns to assist with such functions in forming a query. For example, an algorithm facility may analyze usage patterns based on time of day, day of week, day of month, day of year, work day patterns, holiday patterns, time of hour, patterns surrounding transactions, patterns surrounding incoming and outgoing video content, patterns of clicks and clickthroughs, patterns of communications, and any other patterns that can be discerned from data that is used within, or in association with, the user data 170 and/or a client device 158 on which video content is viewed. Usage patterns may be analyzed using various predictive algorithms, such as regression techniques (least squares and the like), neural net algorithms, learning engines, random walks, Monte Carlo simulations, and others as described herein. [92] In embodiments, an API, or a plurality of APIs, may be provided to enable and facilitate the management and use of user data, such as user profiles, and the operation of syndication within or in association with the virtual advertising platform 120.
  • the determination of relevance, relevancy, associations, correspondence, and other measures of correlation and relationships between virtual video content, users, and metadata associated with both, as described herein, may be made based at least in part on statistical analysis.
  • Statistical analysis may include, but is not limited to, techniques such as linear regression, logistic regression, decision tree analysis, Bayes techniques (including naive Bayes), K-nearest neighbors analysis, collaborative filtering, data mining, and other statistical techniques.
  • linear regression analysis may be used to determine the relationship between one or more independent variables, such as user profile data, and another dependent variable, such as a datum associated with a virtual video content (e.g., author name, genre, and the like), and modeled by a least squares function, called a linear regression equation.
  • This function is a linear combination of one or more model parameters, called regression coefficients.
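A minimal sketch of the linear regression equation described above, with a single hypothetical independent variable (e.g., hours of viewing in a genre) fit to a dependent variable (e.g., an ad response score) by ordinary least squares; the data are invented for illustration:

```python
# Ordinary least-squares fit: dependent ≈ slope * independent + intercept.
# The variable names and sample values below are illustrative assumptions.

def least_squares(xs, ys):
    """Return (slope, intercept) of the least-squares regression line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: hours of genre viewing vs. ad response score.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
print(least_squares(xs, ys))  # (2.0, 0.0)
```

The returned slope and intercept are the regression coefficients of the linear combination the passage refers to.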
  • Bayes theorem may be used to analyze user profile and/or video content data and virtual video content, such as contextual data that is associated with video content, data relating to virtualized digital video feeds 142 that are created by the virtual advertising platform 120, or some other type of data used within the virtual advertising platform 120.
  • conditional probabilities may be assigned to, for example, user profile variables, where the probabilities estimate the likelihood of a video content or virtual video content viewing and may be based at least in part on observations of the user's prior interactions with video content.
  • Naive Bayes classifiers may also be used to analyze video content and virtual video content data.
  • a naive Bayes classifier is a probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions.
  • a naive Bayes classifier assumes that the presence (or lack of presence) of a particular feature of a class is unrelated to the presence (or lack of presence) of any other feature. For example, a user of video content may be classified in a user profile as interested in video content related to Singapore if he has previously searched for, retrieved, downloaded, used, and/or interacted with video content, or other type of content, related to Singapore, responded to virtual video content related to Singapore, and the like.
  • a Bayes classifier may consider properties, such as prior use of Singapore video content, prior downloads of Singapore-related video content, searches for Singapore video content and the like to independently contribute to the probability that this user is interested in Singapore-related video content and may respond well to virtual video content, such as advertisements, that are related to Singapore and organizations within Singapore.
  • the user's information may be stored and shared by the virtual advertising platform 120 (e.g., sending the data to.an ad server where the classification "Singapore Fan" may be used to select Singapore-related sponsored content, such as virtual video content to be used by the virtual advertising platform 120, to deliver to the user is association with the delivery and presentation of Singapore, or other subject-related content).
  • a single user profile may include a plurality of classifiers.
  • the Singapore fan's user profile may also include classifiers indicating that the user is a "native English speaker" or an "online poker player," and so forth, using the data that is associated with the user's actions and behaviors within the user data 170, and any other data sources as described herein.
  • An advantage of the naive Bayes classifier is that it may require only a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because the variables are assumed independent, only the variances of the variables for each class need to be determined, not the entire covariance matrix. This characteristic of naive Bayes may enable classification even when training data is limited.
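A toy illustration of the naive Bayes classification described above, applied to the "Singapore fan" scenario; the feature names ("searched", "downloaded"), the training samples, and the add-one smoothing are all assumptions made for the example:

```python
# Toy naive Bayes classifier over binary user-profile features.
# Each feature is treated as independent given the class, as described.

from collections import defaultdict

def train(samples):
    """samples: list of (feature_dict, label). Returns class counts and
    per-class feature/value counts."""
    counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for features, label in samples:
        counts[label] += 1
        for name, value in features.items():
            feature_counts[label][(name, value)] += 1
    return counts, feature_counts

def predict(counts, feature_counts, features):
    """Posterior probabilities per class, with add-one (Laplace)
    smoothing over the two possible values of each binary feature."""
    total = sum(counts.values())
    scores = {}
    for label, n in counts.items():
        p = n / total  # class prior
        for name, value in features.items():
            p *= (feature_counts[label][(name, value)] + 1) / (n + 2)
        scores[label] = p
    z = sum(scores.values())
    return {label: p / z for label, p in scores.items()}

# Hypothetical training data: past users and whether they engaged
# with Singapore-related virtual video content.
samples = [
    ({"searched": 1, "downloaded": 1}, "fan"),
    ({"searched": 1, "downloaded": 0}, "fan"),
    ({"searched": 0, "downloaded": 0}, "not_fan"),
    ({"searched": 0, "downloaded": 1}, "not_fan"),
]
counts, fc = train(samples)
probs = predict(counts, fc, {"searched": 1, "downloaded": 1})
print(probs)  # the "fan" class dominates for this user
```

Each feature contributes independently to the posterior, which is the defining "naive" assumption discussed in the passage.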
  • a behavioural data analysis algorithm may be used for developing behavioral profiles for use by the virtual advertising platform 120 for the selection of video content and virtual video content to be included in a virtualized digital video feed 142 that will be presented to a client device 158. Behavioral profiles may then be used for targeting advertisements and other virtual video content.
  • a behavioral profile may include a summary of a user's video viewing activity, including the types of content and applications accessed, and other behavioral properties.
  • the user's activity summary may include searches, browses, purchases, clicks, impressions with no response, or some other activity as described herein.
  • the behavioral properties may be summarized as continuous interest scores of a video content category, past responsiveness to virtual video content (e.g., transactions made following viewing of advertising virtual video content), or some other property.
  • Continuous frequency scores and continuous recency scores may be considered as behavioral properties for use in constructing a behavioral profile.
  • a user's activity summary and the behavioral properties may be categorized using the analytic techniques as described herein (e.g., naive Bayes classifiers).
  • Video content data and its characteristics, together with sponsored content (such as virtual video content) that may be associated with the presentation of video content to a user, may also be used for the generation of a behavioral profile. For example, data such as advertisement identity, ad tag, user identity, advertisement spot identity, date, and user response may be used.
  • content categories may be used for targeting virtualized digital video feeds 142 to users' client devices and/or advertisements based on a behavioral profile, or portion of a behavioural profile. Further, content categories may be associated with each search, browse, download, purchase, or other online behavioral activities and/or transactions.
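One plausible way to realize the continuous interest, frequency, and recency scores mentioned above is an exponentially decayed sum over past events in a content category; the half-life and event data below are assumptions for illustration:

```python
# Continuous behavioral score: each past event in a category contributes
# a weight that decays exponentially with its age, so both frequency
# (more events) and recency (newer events) raise the score.
# The 30-day half-life is an illustrative assumption.

import math

def interest_score(event_days_ago, half_life_days=30.0):
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * d) for d in event_days_ago)

# Hypothetical viewing events (in days ago) in a "football" category.
recent_viewer = interest_score([1, 2, 3])
lapsed_viewer = interest_score([60, 90, 120])
print(recent_viewer > lapsed_viewer)  # True
```

A behavioral profile could then store one such score per content category, to be categorized further by the classifiers described herein.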
  • a program of automatically syndicating virtual video content to a user may be based at least in part upon the relevance of contextual data associated with the video content and information known about a user, or group of users (e.g., user profile data, as described herein).
  • the automation of syndicating video content and virtualized digital video feeds 142 may be based at least in part on associating metadata with the virtual video content. Contained within the metadata may be information regarding the relevance of the virtual video content to various users and/or user groups.
  • As one example among many of how metadata may contain relevance information: metadata may indicate the relevance of a virtual video content to a user based at least in part on the user data 170, data related to the user's client device (e.g., a video playback capability), or some other type of data, and/or metadata may indicate the average relevancy score associated with a video content viewing by a user from a given user category, and the like.
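As a concrete illustration of relevance-bearing metadata of the kind just described, the sketch below stores average relevancy scores by user category and consults client-device playback capability; all field names, codec labels, and scores are hypothetical:

```python
# Hypothetical relevance metadata attached to a virtual video content:
# per-user-category average relevancy scores plus a minimum playback
# capability requirement for the client device.

def relevance_for(metadata, user_category, device_caps):
    """Return the stored relevancy score for a user category, or 0.0
    when the client device cannot play the content at all."""
    if metadata["min_playback"] not in device_caps:
        return 0.0
    return metadata["relevancy_by_category"].get(user_category, 0.0)

metadata = {
    "min_playback": "h264",
    "relevancy_by_category": {"sports_fan": 0.82, "casual": 0.31},
}
print(relevance_for(metadata, "sports_fan", {"h264", "vp8"}))  # 0.82
print(relevance_for(metadata, "sports_fan", {"mpeg2"}))        # 0.0
```

A syndication program could rank candidate virtual video content for a user by such scores before assembling a virtualized digital video feed.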
  • a server application, or plurality of server applications, designed for retrieving video content using the virtual advertising platform 120 may read search websites, syndication feeds, or other content and/or data looking for video content to use as part of creating a virtualized digital video feed 142 using the virtual advertising platform 120.
  • the virtual advertising platform 120 may be associated with a database, or plurality of databases, in which URLs or other data corresponding to and identifying video content, and the entities having stored video content, are maintained.
  • a tag may be provided by any number of different entities or sources.
  • the tag may be provided by the virtual advertising platform 120, a third party tagging service, or some other tagging provider.
  • an automated video syndication program and/or virtualized digital video feed 142 syndication program may derive revenue, for example through a flat fee, revenue sharing, or no-fee service program offered to an advertiser, website, ad exchange, ad network, publisher, television broadcaster, or some other entity.
  • parties such as a user of a client device 158 that is capable of playing video may be required to pay a usage fee to access, create, aggregate, and/or interact with virtual content and the creation, use, distribution, and viewing of virtualized digital video feeds 142.
  • sponsored content such as an advertisement may be presented to a user in conjunction with the presentation of virtual video content.
  • the owner of the sponsored content, or other interested party, may be required to pay a fee for the right to present the sponsored content to a user's client device in the form of a virtualized digital video feed 142 that is created by the virtual advertising platform 120.
  • This revenue may be shared among the virtual advertising platform 120 and a third party (e.g., website owner).
  • Revenue may be derived from sponsors of virtual video content participating in the automated syndication program.
  • Fees may be derived from the sponsors of virtual video content, a competitive bidding process, auction, flat fee service, or the like.
  • the fee structure and bidding may be based at least in part on a relevancy score associated with a virtual video content.
  • the disclosure concerns a system and method for content-triggered visual tagging of people and objects in live sports, in videos and in reality TV entertainment shows.
  • the method applies to predefined patterns occurring within cluttered scenes.
  • Automated tagging is based on an automated image-processing method in which specific operators are trained to detect a set of predefined visual patterns.
  • the method is invariant to minor pattern variations due to motion, perspective distortion, and non-rigid deformation.
  • Live tagging metadata streams are generated and may be amalgamated with 3rd party metadata streams/services to generate derived, higher-level metadata for numerous applications. For example, a video signal is acquired from a live broadcast via an internet-derived service, such as a streaming video.
  • an interactive system of video customization and delivery (the "interactive video system") is provided that is enabled to facilitate the streaming of interactive customized video where components of the video stream may be personalized to the needs and interests of viewers and where viewers may interact with content and with one another in relation to the viewing experience.
  • the interactive video system may enable video providers to include advertisements targeted to the interests of a given viewer in the video stream being watched by that viewer and such advertisements may be clickable, selectable, or otherwise responsive to the viewer and enabled for interaction by the viewer, such that the viewer may prompt the inclusion of additional advertising-related content by selecting the advertisement or otherwise expressing interest in it.
  • Such advertising-related content may include the ability to complete a transaction while watching the video, possibly in a pop-up window.
  • the interactive video system may offer viewers one or more of the following features: (1) the real-time insertion of interactive pop-ups, possibly based on video content-based triggers or a combination of viewer-specific data and content-based triggers; (2) the ability to click on or otherwise select an object in the video stream to generate custom or personalized content, such as pop-up boxes with more information, advertising links, comments from other viewers, related videos, and other related images and information; (3) virtual camera controls, allowing viewers to switch camera angles, to zoom in or out, and to replay portions of the video stream; (4) social media features that allow viewers to interact with one another, such as by liking, sharing, commenting, gaming (e.g., video games, contests, or the like), betting, and tagging; and/or (5) access to social-network-generated content, such as comments, video responses, and Twitter "tweets" related to the video, or some other type of social media interaction or content.
  • an embodiment of the disclosure may include one or more of the following components:
  • core technology 601 that may include image processing, pattern recognition, and real-time computation tasks
  • predefinition of sets of visual patterns 604, for example logos
  • capturing of logos using logo capture 605 and chameleonator 606 interfaces and storing them in logo database 607
  • automated generation of detection models that may be based on the nature of the predefined patterns established in a training phase
  • Coded-target player identity detector 614 feeds the metadata into metadata engine 615
  • triggering of metadata messages by metadata engine 615 from metadata database 616 based on the detection of predefined patterns, such metadata containing numerous data points, including time, location, scale, and identity information;
  • the transmission of metadata via metadata service 617 (for example, broadcast infrastructure, internet, or other data transmission method); the metadata can also be used for metadata statistics 620;
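The metadata messages triggered on detection might carry exactly the data points listed above (time, location, scale, and identity) in a serializable form suitable for transmission via metadata service 617; the schema below is an illustrative assumption, not the platform's actual format:

```python
# Illustrative metadata message for one detection event. Field names
# are assumptions; the content mirrors the data points named above.

import json

def make_metadata_message(pattern_id, frame_time, x, y, scale):
    return {
        "pattern_id": pattern_id,      # identity of the detected pattern
        "time": frame_time,            # position in the stream, in seconds
        "location": {"x": x, "y": y},  # pixel coordinates in the frame
        "scale": scale,                # detected size relative to the model
    }

msg = make_metadata_message("team_logo_07", 812.4, 640, 316, 1.25)
print(json.dumps(msg))  # ready for transmission or statistics 620
```

Serializing to JSON (or a comparable wire format) would let the same message feed the broadcast infrastructure, internet delivery, and metadata statistics paths.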
  • a preferred embodiment of the invention may include a number of modules and other functional elements, including, but not limited to: a detection and recognition module 701.
  • Detection and recognition module 701 may identify logos and other elements in a source video feed or video stream (which is a sequence of digital images) or static imagery (digital image), based at least in part on the methods and systems as described herein.
  • Detection and recognition module 701 can be a web crawler or web spider or the like, implemented with Node.js, PHP, Python, Ruby or any other suitable programming languages and platforms.
  • the invention may also include an automated metadata tagging module 702 that assigns metadata tags to such identified elements, such metadata tagging module enabled to cross-reference the identified video elements with external databases, and assign codes to image regions.
  • This form of Automated Content Recognition (ACR) using metadata tags works as an enhancement over currently available ACR methods, as these coded tags (meta-tags) act as digital markers to automatically recognise specific content in the video stream.
  • the invention may also include an administered metadata tagging module 703 enabling an administrator to designate elements of a video feed to be tagged with metadata codes, where such administered metadata tagging module may include automated features that may calculate the area of the image in each frame to be tagged and the frames that match the administrator's tagging instructions.
  • the invention may also include a metadata aggregation module 704 to aggregate the automatically tagged metadata and the administrated tagged metadata, with third party metadata.
  • Third party metadata means metadata obtained from other sources or third party services, non-exhaustively including social media, betting services, live statistics, apparel/product advertising services, contextualised advertising, historical data, and crowdsourced metadata.
  • the aggregation of metadata is important as the integration of 3rd party metadata services or sources combined with the tagged metadata would enable contextualised, interactive and social metadata.
  • This subsequent aggregated metadata would allow for the augmentation and live updating of broadcast and internet video for purposes of implementing (a) betting and/or gaming services 633, (b) advertising services, (c) interactive apparel and product placement for shopping 634, (d) social media services 635, and (e) live player statistics 636, as depicted in figure 6.
  • the metadata aggregation module 704 can also aggregate the tagged metadata with any inherent metadata from the digital image.
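A minimal sketch of what the metadata aggregation module 704 might do, merging automatically tagged, administrator-tagged, and third-party metadata for one detected element; all keys, services, and the precedence rule are illustrative assumptions:

```python
# Aggregating the three metadata sources described above into a single
# record for a detected element. Administrator tags are assumed to take
# precedence over automatic tags when keys collide.

def aggregate(auto_tags, admin_tags, third_party):
    merged = dict(auto_tags)
    merged.update(admin_tags)          # administrator tags win conflicts
    merged["third_party"] = third_party
    return merged

auto_tags = {"element": "jersey_logo", "time": 812.4, "scale": 1.25}
admin_tags = {"player": "Player 10"}
third_party = {"live_stats": {"goals": 2}, "betting": {"odds": 3.5}}
print(aggregate(auto_tags, admin_tags, third_party))
```

The aggregated record is what downstream services (betting, shopping, social media, live statistics) would consume when augmenting the feed.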
  • the invention may also include a static image selection and generation module 705 enabled to replace or superimpose a static image within video content, such as a logo, with another static image, such a module enabled to make use of viewer-specific data in this replacement process.
  • the invention may also include a dynamic image selection and replacement module 706 enabled to replace or superimpose dynamic images, such as a waving flag or a logo on a jersey, with another dynamic image, such a module enabled to make use of viewer-specific data in the replacement process.
  • the invention may also include a video stream integration module 707 enabled to integrate a logo, button, or other image into a source video, such that the integrated content is located in a logical and consistent place and does not obstruct other important elements of the video more than necessary.
  • the static image selection and generation module 705, dynamic image selection and replacement module 706 and video stream integration module 707 can be part of an image selection and generation module.
  • the invention may also include a feedback management module 708 enabled to interpret and respond to viewer input, for example, by changing database values in response to viewer inputs and then looping back to the image selection modules to reassess the need for customized content.
  • In step 801, the administrator defines and designates the set of visual patterns for recognition prior to the video stream broadcast.
  • This set of visual patterns can be a logo.
  • This video stream broadcast can be a live broadcast or a rebroadcast of a previously recorded video.
  • In step 802, the automated metadata tagging module 702 assigns metadata tags to the set of visual patterns.
  • In step 803, the administrator uses the administered metadata tagging module 703 to manually assign metadata tags to the set of visual patterns.
  • the metadata aggregation module 704 aggregates the automatically tagged metadata and the administrated tagged metadata, with third party metadata.
  • the detection and recognition module 701 detects and identifies the set of visual patterns in the video stream. This detection and identification may be accomplished (a) by recognition of distinct markings, including jersey numbers for athletes, (b) based on coded patterns placed on clothing, (c) based on object or face recognition algorithms, or (d) based on administrator or director designation.
  • the image selection and generation module displays and places content on the digital images in the video stream (a video stream is a sequence of digital images), utilizing geographic- and user-specific knowledge (i.e. metadata) to customize the nature of this augmented content.
  • the content being displayed is customized or based upon the aggregated metadata.
  • the image selection and generation module superimposes the content over the set of visual patterns. This can be done by overlaying the content over the geo-spatial coordinates (which is part of the tagged metadata) of the set of visual patterns.
  • the content is interactive.
  • the image selection and generation module can comprise static image selection and generation module 705, dynamic image selection and replacement module 706 and video stream integration module 707.
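The superimposition step above can be sketched as follows, overlaying replacement content at the geo-spatial coordinates carried in the tagged metadata; representing frames as nested lists of pixel values is purely for illustration:

```python
# Overlay `content` onto a frame at the (x, y) coordinates taken from
# the tagged metadata. A real pipeline would operate on decoded video
# frames; nested integer lists stand in for pixels here.

def superimpose(frame, content, x, y):
    """Return a copy of `frame` with `content` placed at top-left (x, y),
    clipped to the frame boundaries."""
    out = [row[:] for row in frame]
    for dy, row in enumerate(content):
        for dx, pixel in enumerate(row):
            if 0 <= y + dy < len(out) and 0 <= x + dx < len(out[0]):
                out[y + dy][x + dx] = pixel
    return out

frame = [[0] * 6 for _ in range(4)]   # blank 6x4 "frame"
logo = [[9, 9], [9, 9]]               # hypothetical 2x2 replacement logo
tagged = {"x": 2, "y": 1}             # coordinates from tagged metadata
result = superimpose(frame, logo, tagged["x"], tagged["y"])
print(result[1][2], result[2][3])  # 9 9
```

Performing this per frame of the stream, with coordinates updated by the detection module, yields the live augmentation described.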
  • the curated triggering of integrated content occurs when an administrator determines that tagging an object or event is appropriate or when such an administrator deems it appropriate to trigger the release of content based on existing tags.
  • athletes may wear jerseys with special markings that facilitate identification regardless of whether the player's face, jersey number, or name is visible.
  • a product logo may be pre- designated to appear on the jersey of a given viewer's favourite player.
  • a message may be designated to appear when a goal or point is scored, such message possibly including the option for the viewer to get more information on the player who scored the goal or point, to bet on that player, or to purchase a product associated with that player, such as a jersey.
  • a director or administrator may determine that a given moment in a match has a high level of excitement or emotion, and may then tag characteristics of the video stream as they exist at that moment or may trigger the release of messages that have been pre- designated or calculated to be appropriate for such moments.
  • the disclosure may include a video tagging process, which may include one or more of the following sub-processes: (1) designation by administrators of logos or other images for recognition prior to video stream broadcast, where video stream broadcast refers either to a live broadcast or to a rebroadcast of previously recorded video; (2) generation of models of logos or other images prior to video stream broadcast; (3) detection and identification of people and objects in the video stream, which detection may be accomplished (a) by recognition of distinct markings, including jersey numbers for athletes, (b) based on coded patterns placed on clothing, (c) based on object or face recognition algorithms, or (d) based on administrator or director designation; (4) curated triggering of integrated content when an administrator or director determines that tagging an object or event is appropriate or when such an administrator or director deems it appropriate to trigger the release of content based on existing tags.
  • image recognition may be accomplished through the use of automatically generated detection models for identifying pre-defined spatial patterns in colour images extracted from video sequences, as established in an independent training phase.
  • Different algorithmic approaches can be taken to identify the spatial patterns, utilizing approaches as discussed in Zitova and Flusser, 2003 [Zitova and Flusser, October 2003, Image registration methods: a survey, Image and Vision Computing, 21(11), 977-1000].
  • One approach is to use a prototypical example of a targeted pattern as the training model, comprising a 2-dimensional array of pixel values distributed across three colour channels. This may be achieved by cropping the prototype pattern from a video sequence.
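Under the prototype-based approach just described, detection can be sketched as template matching: the cropped prototype is slid over the frame and a match score computed at each position. The example below simplifies to a single channel and a sum-of-squared-differences score; the actual method would span three colour channels and tolerate the deformations noted earlier:

```python
# Exhaustive template matching with a sum-of-squared-differences (SSD)
# score. The prototype ("template") is assumed to have been cropped
# from a video sequence, as the passage describes.

def best_match(image, template):
    """Return the (x, y) offset minimizing SSD between template and image."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best = None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + dy][x + dx] - template[dy][dx]) ** 2
                for dy in range(th) for dx in range(tw)
            )
            if best is None or ssd < best[0]:
                best = (ssd, x, y)
    return best[1], best[2]

image = [
    [0, 0, 0, 0, 0],
    [0, 5, 7, 0, 0],
    [0, 6, 8, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[5, 7], [6, 8]]   # "cropped" prototype pattern
print(best_match(image, template))  # (1, 1)
```

Normalized cross-correlation, or the registration methods surveyed by Zitova and Flusser, would replace SSD in a production detector for robustness to lighting and scale.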
  • Metadata may be referred to as "TV metadata" and may include pieces of information and images that can be used to describe the content of video, such as subtitles, actors, characters, plot elements, duration, broadcast quality, reviews (for non-live video), and whether or not the video is being broadcast live.
  • The use of TV metadata to customize content delivery may have the effect of making the video viewing experience more engaging, more social, and more rewarding for viewers, and may create new business opportunities for content creators, broadcasters, and other players in the video production and distribution chain.
  • Such manipulation of TV metadata may take a number of forms, including one or more of the following: (1) creation of metadata through automated means, administrator tagging, or viewer input; (2) the aggregation of various types of metadata from various sources, including combining live video content-based metadata with metadata from third party services; (3) transmission and communication of metadata; (4) processing of metadata; (5) editing of metadata; and (6) other use of metadata that may help to facilitate the functionality of the disclosure.
  • viewer profile information may be combined with viewer inputs to create new metadata.
  • tagging of video may refer to linking of TV metadata with external sources of data through the use of interactive tags.
  • the web-streaming infrastructure may include a number of components and interfaces, including but not limited to (1) servers; (2) buffering technologies and resources; (3) communications interfaces; (4) processing arrays; and (5) gateways.
  • video tagging applications may tag video, enable administrator tagging, interpret user inputs, calculate appropriate content for insertion, and insert content into video streams for delivery to viewers.
  • Such applications may take a number of forms, which may include one or more of the following: (1) technology-demonstrator form designed to demonstrate functionality; (2) commercialized form to accomplish specific tasks; and (3) generalized form to accept third-party plug-ins, modules, and inputs.
  • clickable zones may have one or more of the following characteristics: (1) they may be generated based on metadata, including a combination of viewer profile metadata and video-specific metadata; (2) they may incorporate an augmented HTML layer onto the screen superimposed on the video feed, such that elements of the video become clickable or otherwise subject to triggering by user input; (3) they may be selectable by a viewer using a mouse, trackpad, touch screen, gesture, voice command, stylus, or other viewer input mechanism; (4) they may be adjusted in size, shape, and position in real time as objects in the video move and camera angles change; (5) they may involve simultaneous viewing of a video feed on a television and on an internet-enabled computer, tablet, or portable device in cases where the clickable area is on the computer, tablet or portable device rather than the television; (6) when clicked or otherwise selected, they may result in changes to metadata and initiate changes to the video stream, such as the insertion of pop-up messages, integration of new customized content, or other changes to the viewing experience.
  • team logos on the playing field may be made clickable, such that a viewer watching the sporting event on an iPad may touch one of the team logos to get additional information and user interface options.
  • additional information and options may include, but are not limited to, one or more of the following: (1) team statistics; (2) player profiles; (3) ecommerce interfaces enabling the viewer to purchase team-branded products; (4) online betting interfaces enabling the viewer to place wagers on team members or the entire team; (5) schedules and ticket sales for future matches by that team; and (6) any other information or options determined to be appropriate by algorithms processing relevant metadata.
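The clickable-zone behaviour described above can be sketched as a simple hit test against per-frame rectangles that track on-screen objects; the zone structure, tags, and coordinates are illustrative assumptions:

```python
# Hit-testing a viewer's touch/click against clickable zones. Each zone
# carries the metadata tag to trigger; rectangles would be updated per
# frame as tracked objects move and camera angles change.

def hit_test(zones, px, py):
    """Return the tag of the topmost zone containing (px, py), or None."""
    for zone in reversed(zones):   # later zones are drawn on top
        x, y, w, h = zone["rect"]
        if x <= px < x + w and y <= py < y + h:
            return zone["tag"]
    return None

zones = [
    {"tag": "team_logo", "rect": (100, 50, 80, 40)},
    {"tag": "player_10", "rect": (150, 60, 60, 120)},
]
print(hit_test(zones, 160, 70))  # player_10 (drawn on top of the logo)
print(hit_test(zones, 10, 10))   # None
```

The returned tag would then drive the metadata changes and content insertion described in characteristic (6) above, such as opening a pop-up or an ecommerce interface.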
  • the insertion and management of customized viewer selectable augmented advertising into video streams may be enabled.
  • Such customized advertising may have one or more of the following characteristics: (1) it may involve the automatic triggering of the annotation and augmentation of interactive, Internet Protocol- based advertisements through a predefined set of business rules; (2) it may be generated based on metadata; (3) it may involve the use of clickable zones, as defined herein; (4) it may be targeted to the needs and interests of individual viewers; (5) it may be viewable on a computer, tablet, or portable device, such as a mobile phone, iPad, iPod, laptop, desktop, or other internet-connected computing device; (6) it may incorporate an augmented HTML layer onto the screen superimposed on the video feed, such that elements of the video become clickable or otherwise subject to triggering by user input; and (7) it may involve simultaneous viewing of a video feed on a television and on an internet-enabled computer, tablet, or portable device.
  • clickable advertising may be integrated into the video stream on player jerseys, such that viewers may click on the jerseys to call up ecommerce interfaces enabling the viewer to purchase player jerseys and other team-branded products.
  • contextualized (localized and personalized) advertisements may be automatically triggered when a predefined logo on a player jersey is automatically detected above a designated scale range.
  • data may be transmitted wirelessly from a sensor built into an athlete's shoe to determine the speed at which the athlete is running and that information may be inserted into the video feed of viewers whose metadata settings indicate an interest in knowing the performance statistics of that athlete.
  • data on an athlete's pulse rate may be transmitted wirelessly from a heart rate monitor worn by the player and that information may be inserted into the video feed of viewers whose metadata settings indicate an interest in knowing the vital statistics of that athlete.
  • Such process of providing real-time data augmentation may also include integrated advertising that may feature brand placement, product ordering links, or other commerce-related features.
  • data on the speed that an athlete wearing a shoe sensor is running might include the logo of the shoe manufacturer or a link to purchase a pair of similar shoes.
  • social media integration may be enabled having at least one or more of the following features: (1) the aggregation of video content-based metadata with social media network APIs tagged to targeted elements of a video broadcast and made interactive through augmentation with clickable pop-ups; (2) integration into the video stream of links to other social media sites inserted in viewer-selectable images, such as Facebook "like" buttons or Twitter "tweet" buttons; (3) ability of viewers to enter comments on people, objects, events, video zones, video segments, and other identified content into the video stream; (4) ability of viewers to choose to view comments of other viewers generally, other viewers in their social networks only, or no comments at all; (5) ability of viewers to share thoughts, preferences, and reactions using a proprietary social network, such as by clicking on augmented "startags," where a startag is a clickable or otherwise selectable image integrated into the video stream whose selection by a viewer may make changes to that viewer's metadata tags; (6) integration of viewer input into live video broadcasts; and/or (7) tracking and reporting of social media activity.
  • viewers may click on their favorite players, tagging those players using startags on the proprietary network or using Facebook "like" buttons, may tweet about players or plays using a tweet button, and may make comments about plays with such comments being made available to certain other viewers in real time and possibly also to other people through various social networks. For example, a viewer could comment about a call made by a referee.
  • a contestant may be able to ask home viewers for their advice, with such advice being provided by viewers through user-interface tools, aggregated by servers, and transmitted back to the studio.
  • elements of video streams may be annotated with links for more information, which may be available for viewers to access by clicking or otherwise selecting the links.
  • Such annotations may include additional information on people, objects, or other elements of the video stream.
  • Such links may be associated with augmented logos or may be available by clicking on non-augmented elements of the video stream, such as people and objects.
  • a clickable button may be integrated into the video stream that allows a viewer to click for statistics on a certain player.
  • a viewer may click on a player for more information about that player, including statistics, without having been prompted by a button or other inserted content.
  • a viewer may click on a goal (net) to get information on all the goals scored during the game including links to view video segments of those goals being scored.
  • viewers may select from a menu of earlier segments of a video to replay.
  • a viewer may be able to click on a button labeled "top plays" to view a list of earlier segments of the match tagged as having the highest viewer interest with each segment on the list launching the replay of that segment of the video stream when clicked, possibly offering the viewer the option to view the segment from multiple camera angles or to zoom in while viewing the segment.
  • Such replayed video segments may also list advertising sponsors and may offer links to information about the advertisers or to ecommerce product order pages.
  • a range of advertising options may be enabled, including one or more of the following: (1) integration of localized advertising into video streams, where "localized advertising” refers to advertising for products and services that match preferences identified by a given viewer's metadata tags, such advertising either appearing on existing elements of the video stream— including people, objects, and logos— or appearing in the form of pop-up advertisements added to the video stream based on viewer metadata tags; (2) sponsorship of enhanced features, such as live statistics, supplemental content, or segment replays; (3) lead generation based on viewer metadata; (4) revenue sharing based on paid services solicited by advertisers; (5) cost-per-click advertising; (6) insertion of social media campaigns into broadcasts; and (7) direct marketing to viewers by email, text message, or other means based on viewer contact preferences and metadata.
  • Such localized advertising may include ecommerce capabilities consisting of contextualized and clickable advertisements offering viewers the choice of making instant purchases, saving a potential purchase in a shopping cart to buy later, or adding possible purchases to a wish list by clicking a "want" button.
  • ecommerce capabilities may be automatically triggered using the annotation and augmentation of interactive, Internet Protocol-based products through a predefined set of business rules.
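The predefined business rules mentioned above can be sketched as a small rule table consulted per product annotation; the rule fields, categories, and thresholds below are illustrative assumptions, not the disclosure's actual schema:

```python
# Sketch of a predefined business-rule set that decides which ecommerce
# controls augment a detected product annotation. Categories, thresholds,
# and action names are illustrative assumptions.

RULES = [
    {"category": "apparel",  "min_visible_s": 2.0, "actions": ["buy", "cart", "want"]},
    {"category": "beverage", "min_visible_s": 1.0, "actions": ["buy", "want"]},
]

def ecommerce_actions(annotation):
    """Return the ecommerce controls enabled for a product annotation."""
    for rule in RULES:
        if (annotation["category"] == rule["category"]
                and annotation["visible_s"] >= rule["min_visible_s"]):
            return rule["actions"]
    return []  # no rule matched: no ecommerce augmentation

ecommerce_actions({"category": "apparel", "visible_s": 3.5})
# ["buy", "cart", "want"]
```

A first-match rule order keeps the trigger logic deterministic when several rules could apply.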
  • an advertiser may have its logo inserted into a live video feed onto the uniforms of players, onto the field, or elsewhere in the video, such that its logo may be clicked by viewers to get more information about the advertiser or its products or to order its products through an ecommerce interface.
  • a betting module may be enabled that allows viewers to place bets on aspects of a video feed, such as a live video broadcast of a sporting event. Such betting module may involve the aggregation of video content-based metadata with real-time metadata streams from sports betting platforms and may be tagged to targeted viewers and objects and made interactive through the augmentation of clickable pop-ups.
  • Such bets may be placed through the use of clickable buttons that identify a predetermined betting option; or may include customized bets that may be entered through various user-interface options, such as typing or voice commands. Initiating the process of placing a bet may cause a pop-up betting screen to be integrated into the video stream.
  • a viewer may click a button that says, "Click here to bet $10 on the home team to win.”
  • a viewer may place a bet by voice command, such as, "I bet $10 that Smith will score a goal in the next ten minutes.”
  • the betting odds and potential return on the bet may be displayed in a pop-up box that is integrated into the video stream and which may include a confirm bet button, as well as options for setting betting preferences, editing credit card information, and disabling the betting interface.
  • the betting popup box may be used by viewers to track their bets live during the sporting event.
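The odds and potential-return figures shown in the betting pop-up reduce to simple arithmetic. A sketch assuming decimal (European) odds, a format the disclosure does not specify:

```python
# Sketch of the pop-up's return calculation, assuming decimal (European)
# odds: a winning bet pays stake * odds, of which stake * (odds - 1) is profit.

def potential_return(stake, decimal_odds):
    """Total payout if the bet wins, stake included."""
    return round(stake * decimal_odds, 2)

def potential_profit(stake, decimal_odds):
    """Winnings net of the returned stake."""
    return round(stake * (decimal_odds - 1.0), 2)

potential_return(10.0, 2.5)   # 25.0: a $10 bet at odds 2.5 pays $25
potential_profit(10.0, 2.5)   # 15.0
```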
  • real-time statistics may be integrated into live video broadcasts and may have one or more of the following characteristics: (1) statistics may be displayed to viewers based on their viewing preferences, as indicated by viewer- specific metadata tags, or may be made available through buttons or other viewer- selectable links; (2) statistics may be displayed in text, graphically, or both; (3) statistics may be aggregated and calculated within the interactive video system or may be acquired from third-party sources or through the use of third-party technology; (4) statistics may include data relating only to the video being broadcast or may include historical information on past events; and (5) statistics may include both retrospective data and prospective probability calculations.
  • the video stream may include augmented "more information" buttons on each player that, when clicked or otherwise selected, display statistical information on that player or the odds that the player will achieve a particular objective within a set time frame, such as scoring a goal within the next ten minutes or scoring the first goal of the match.
  • a live popup insertion 901 may be integrated into the video stream based on pattern recognition of the number and text appearing on a player shirt.
  • a static logo 1001 may be inserted onto a fixed surface in either a live or a recorded video.
  • a static logo 1101 may be inserted in place of a logo in the source video feed in either a live or a recorded video.
  • a logo may be integrated into a video feed on a moving target in the source video feed in either a live or a recorded video.
  • an animation tool may be used to facilitate the integration of an inserted graphical element onto the surface of a moving object.
  • such an animation tool may be used to make a logo appearing on a player jersey appear to flex and shift with the surface of the jersey, as the jersey moves in response to player motion, with changing camera angles, and in response to wind and other external factors.
  • a live popup insertion 1301 based on identification of on-player coded targets is shown.
  • a logo may be integrated into a live video feed, such that the logo displays on a moving target using coded targets.
  • targets may be indicated on the jersey of a player and these targets may be used to insert the image of a logo of a product of interest to a given viewer onto the jersey of that player in the video feed being transmitted to that viewer.
  • the disclosure may enable the automatic triggering of annotation and augmentation tagged to targeted elements of a video feed allowing the creation and transmission of crowdsourced metadata.
  • viewers may create their own annotations tagged to a detected player, where the viewer-created annotations may be shared with other viewers according to viewer preferences or other settings contained in metadata.
• live estimation of advertising statistics may be enabled. Such estimation may involve the tracking of predefined special patterns during a live sports or reality-TV event, the translation of those tracking data into duration statistics normalized to the event duration, and the calculation from those statistics of the percentage of the broadcast during which the advertisement was visible to viewers. In examples of these embodiments involving sporting events, the amount of time a logo on an athlete's shirt is visible and clickable during the course of a game may be calculated and transmitted to the advertiser.
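The visibility statistic described above normalizes tracked duration by the event duration. A sketch in terms of per-frame detections, with the frame rate and counts purely illustrative:

```python
# Sketch of the advertising visibility statistic: per-frame detections of a
# tracked pattern, normalized by the event's total frame count. Frame rate
# and counts are illustrative.

def visibility_percentage(detected_frames, total_frames):
    """Share of the broadcast during which the pattern was visible."""
    if total_frames == 0:
        return 0.0
    return 100.0 * detected_frames / total_frames

# e.g. a logo detected in 27,000 of a 90-minute match's 135,000 frames at 25 fps
visibility_percentage(27000, 90 * 60 * 25)  # 20.0
```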
  • data integration techniques and methods may be used as part of the virtual advertising platform 120, as described herein, to collect, join, merge, validate, analyze, and perform other data processing operations for digital video data, virtual video content data, user data, client device data (e.g., applications used to interact with virtual video content), and other data types as described herein.
• Data integration techniques and methods may be used to take the information collected from a plurality of digital video data sources, draw an inference from the collected information, identify a potential change to a database based on newly received information, and validate the change to the database based on the inference.
  • data integration techniques and methods may be used to extract information from a plurality of digital video data sources, and the like, the data sources having a plurality of distinct data types, transforming the data from the data sources into a data type that can be represented in, for example, a database to be used by a virtual advertising platform 120, the database thereby integrating information from the distinct data types.
  • the distinct data types may be selected from a group consisting of content data, user data, contextual information relating to video content and virtual video content, user behavioral information (including user profiles), demographic information, usage history, and other data sources and types as described herein.
  • data integration techniques and methods may be used to apply rules, such as by a rules engine, in connection with creation, updating and maintenance of a data set, such as one stored or used in association with a virtual advertising platform 120.
• a rules engine may be applied to secondary change data (that is, data that comes from one or more data sources and that indicates that a change may be required in a data set) or to inference data (that is, data derived by inferences from one or more data sets).
  • a rule may indicate that a change in a data set will be made if a secondary data source confirms an inference, or if an inference is consistent with data indicated by a data source.
  • a rule might require multiple confirmations, such as requiring more than one data source or more than one inference before confirming a change to a data set (or creation of a new feature or attribute in the data set).
• Rules may require any fixed number of confirmations, whether by other data sets or by inferences derived from those data sets.
  • Rules may also embody various processes or work flows, such as requiring a particular person or entity to approve a change of a given type or a change to a particular type of data.
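The multiple-confirmation rule described in the preceding bullets can be sketched as a check that enough independent sources agree before a change is applied; the field names and default threshold are assumptions:

```python
# Sketch of the confirmation rule: a proposed change is applied only when a
# required number of independent sources report the same value. Field names
# and the default threshold are assumptions.

def approve_change(change, confirmations, required=2):
    """True if enough independent confirmations agree with the change."""
    agreeing = [c for c in confirmations if c["value"] == change["value"]]
    return len(agreeing) >= required

change = {"field": "jersey_number", "value": 10}
sources = [
    {"source": "ocr_detector", "value": 10},
    {"source": "roster_api",   "value": 10},
]
approve_change(change, sources)      # True: two sources confirm
approve_change(change, sources[:1])  # False: only one confirmation
```

The workflow-style rules (e.g., requiring a particular person to approve a change) could be layered on the same check.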
  • data integration techniques and methods may be used to extract information from a plurality of digital video data sources, the data sources having a plurality of distinct data types, storing the data in a common data set, considering a change request associated with a database, such as a database that is associated with a virtual advertising platform 120, and using the common data set to validate the change request.
• data integration techniques and methods may be used to extract information from a plurality of digital video data sources, the data sources having a plurality of distinct data types, storing the data in a common data set, considering the common data set to identify potential changes to a database, such as a database that is associated with a virtual advertising platform 120, and initiating a change request based on the common data set.
• a data integration facility may be used to integrate data from a plurality of digital video data sources, the data sources including attributes relevant to a virtual advertising platform 120, wherein the data integration facility is selected from the group consisting of an extraction facility, a data transformation facility, a loading facility, a message broker, a connector, a service oriented architecture, a queue, a bridge, a spider, a filtering facility, a clustering facility, a syndication facility, and a search facility.
  • a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
  • a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule hierarchy to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
• a data integration facility may provide a rule hierarchy to determine a data type to use in a data set related to a system, such as a virtual advertising platform 120, the rule hierarchy applying a rule based on at least one of a data item, the richness of a data item, the reliability of a data item, the freshness of a data item, and the source of a data item and representing the rule hierarchy in a data integration rule matrix, wherein the matrix facilitates the application of a different rule hierarchy to a different type of data.
• a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule matrix to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
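A rule hierarchy of the kind described above might rank competing candidate values for one attribute by reliability and then freshness; the scoring fields and values below are illustrative, not a scoring scheme defined by the disclosure:

```python
# Sketch of a rule hierarchy choosing among competing values for one
# attribute: reliability outranks freshness. Scores and fields are
# illustrative assumptions.

def pick_candidate(candidates):
    """Apply the hierarchy: highest reliability wins; freshness breaks ties."""
    return max(candidates, key=lambda c: (c["reliability"], c["freshness"]))

candidates = [
    {"value": "Smith", "source": "roster_api", "reliability": 0.9, "freshness": 0.2},
    {"value": "Smyth", "source": "ocr",        "reliability": 0.6, "freshness": 0.9},
]
pick_candidate(candidates)["value"]  # "Smith": the more reliable source wins
```

A rule matrix would generalize this by selecting a different key ordering per data type.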
• a data integration facility may be used in association with a system, such as a virtual advertising platform 120, to iteratively collect and make inferences about data that is collected for use in the virtual advertising platform 120. Iteration may be performed a plurality of times, or continuously, as an on-going process to collect and make inferences about data attributes. Iteration may be a function of the entire data set (e.g., an entire virtual video content usage history of a user), or a function of specific data segments (e.g., virtual video content usage history less than 24 hours old). Data attributes may be stored for subsequent comparison to previously collected data inference attributes. In embodiments, this process may be continuous, and represent an ongoing comparison of inferred attributes for the purpose of detecting differences over time.
  • the data integration facility may include at least one of a bridge, a message broker, a queue and a connector. Therefore, a useful data source may be associated with a data integration facility via computer code, hardware, or both, that establishes a connection between the source and the data integration facility.
• the bridge may include code that takes data in a native data type (such as data in a mark-up language format), extracts the relevant portion of the data, and transforms the data into a different format, such as a format suitable for storing the data for use in a virtual advertising platform 120, or by users of the virtual advertising platform 120.
  • the message broker may extract data from a data source (e.g., website), place the data in a queue or storage location for delivery to a target location (e.g., virtual advertising platform 120 server), and deliver the data at an appropriate time and in an appropriate format for the target location (e.g., to a user of the virtual advertising platform 120).
  • the target location may be a virtual advertising platform 120 database, a data mart, a metadata facility, or a facility for storing or associating an attribute within the virtual advertising platform 120.
  • the connector may comprise an application programming interface or other code suitable to connect source and target data facilities, with or without an intermediate facility such as a data mart or a data bag.
  • the connector may, for example, include AJAX code, a SOAP connector, a Java connector, a WSDL connector, or the like.
  • the data integration facility may be used to integrate data from a plurality of digital video data sources, the data sources including attributes relevant to, for example the virtual advertising platform 120.
  • the data integration facility may include a syndication facility.
  • the syndication facility may publish information in a suitable format for further use by computers, services, or the like, such as in aid of creating, updating or maintaining a virtual advertising platform 120 database, such as one related to user behavioral profiles, publishers, or some other type of data used by the virtual advertising platform 120, as described herein.
  • the syndication facility may publish relevant data in RSS, XML, OPML or similar format, such as user data, wireless operator data, ad conversion data, publisher data, and many other types of information that may be used by the virtual advertising platform 120.
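The syndication step might, for instance, serialize platform records as RSS items; a sketch using only the Python standard library, with the record fields assumed for illustration:

```python
# Sketch of the syndication step: serializing a platform record as a minimal
# RSS <item> element using the standard library. The record fields are
# assumptions, not the platform's actual schema.
import xml.etree.ElementTree as ET

def to_rss_item(record):
    """Build an RSS <item> element from a platform record."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = record["title"]
    ET.SubElement(item, "description").text = record["description"]
    return ET.tostring(item, encoding="unicode")

to_rss_item({"title": "Ad conversion update",
             "description": "Conversions for campaign 42 rose 5%."})
```

The same record could equally be emitted as OPML or plain XML, as the bullet above notes.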
  • the syndication facility may be configured by the data integration facility to feed data directly to a virtual advertising platform 120 database, such as a user profile database, in order to populate relevant fields of the database with data, to populate attributes of the database, to populate metadata in the database, or the like.
  • the syndicated data may be used in conjunction with a rules engine, such as to assist in various inferencing processes, to assist in confirming other data, or the like.
  • the data integration facility may include a services oriented architecture facility.
• with the services oriented architecture facility, one or more data integration steps may be deployed as a service that is accessible to various computers and services, including services that assist in the development, updating and maintenance of a virtual advertising platform 120 database, such as a user profile database, or the like.
  • Services may include services to assist with inferences, such as by implementing rules, hierarchies of rules, or the like, such as to assist in confirmation of data from various sources. Services may be published in a registry with information about how to access the services, so that various data integration facilities may use the services.
• Access may be through APIs, connectors, or the like, such as using the Web Services Description Language, Enterprise JavaBeans, or various other code suitable for managing data integration in a services oriented architecture.
  • the data integration facility may include at least one of a spidering facility, a web crawler, a clustering facility, a scraping facility and a filtering facility.
  • the spidering facility, or other similar facility may thus search for data, such as available from various domains, services, operators, publishers, and sources, available on the Internet or other networks, extract the data (such as by scraping or clustering data that appears to be of a suitable type), filter the data based on various filters, and deliver the data, such as to a virtual advertising platform 120 database.
  • the data integration facility may find relevant data, such as user behavioral data, contextual data relating to content, publisher data, and many other types (of the types variously described herein) of information.
• the relevant data may be used to draw inferences, to support inferences, to contradict inferences, or the like, with the inference engine, such as to assist in creation, maintenance or updating of a virtual advertising platform 120 database.
  • the data may also be used to populate data fields directly, to populate attributes associated with data items, or provide metadata.
  • the methods and systems described herein may be deployed in part or in whole through network infrastructures.
  • the network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art.
• the computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like.
  • the processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements.
  • the methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells.
• the cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network.
  • the cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like.
• the cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other network type.
• the mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players, and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices.
  • the computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices.
  • the mobile devices may communicate with base stations interfaced with servers and configured to execute program codes.
  • the mobile devices may communicate on a peer to peer network, mesh network, or other communications network.
  • the program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server.
  • the base station may include a computing device and a storage medium.
  • the storage device may store program codes and instructions executed by the computing devices associated with the base station.
• the computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs and forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; and other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like.
• the methods and systems described herein may transform physical and/or intangible items from one state to another.
  • the methods and systems described herein may also transform data representing physical and/or intangible items from one state to another.
  • the methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application.
  • the hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device.
  • the processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory.
  • the processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
  • the computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions.
  • each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof.
  • the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware.
  • the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.

Abstract

A method of displaying content is described, the method comprising the steps of defining a set of visual patterns and tagging metadata to the set of visual patterns. The method further comprises the steps of detecting the set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion; aggregating the tagged metadata with third party metadata; and displaying content along with the digital image, wherein the content is based on the aggregated metadata.

Description

AN INTERACTIVE SYSTEM FOR VIDEO CUSTOMIZATION AND DELIVERY
FIELD OF INVENTION
[1] The invention relates to methods and systems for inserting virtual video content into digital video data.
BACKGROUND
[2] Digital advertising and video manipulation methods exist in various forms. Television systems have been developed where digital content is presented on static green screens in stadiums, for example. These techniques are limited in a variety of ways, and thus there exists a need for improving them.
[3] Distribution of video has expanded beyond television to computers and mobile devices, but has remained a largely one-way technology in which the viewer may change the station or the volume, but not interact with the content of the video. In recent years, this paradigm has shifted slightly in that viewers of videos on computers and mobile devices may now rate or comment on videos, but the prior art does not permit a viewer to interact with a video while the video is playing. The increasing interactivity of web sites and mobile applications has raised consumer expectations to the point where ordinary non-interactive viewing may no longer be satisfactory. Viewers may wish to control aspects of their viewing experience and to share that experience with others through interactive tools. Further, the economic model on which the production and distribution of television programming is generally based relies on advertisers paying to have their commercial messages delivered during television broadcasts, but advertisers now have choices that allow them to target their advertising based on demographics and interests.

[4] Therefore, a need exists for an interactive system of video customization and delivery that may offer one or more of the following: (1) real-time opportunities for viewers to interact with video streams through responsive advertising, more-information links, viewing options, and other means; (2) real-time opportunities for viewers to interact with one another through the integration of social media into the viewing experience; (3) ongoing social media interaction relating to video content post-viewing by a user, such as sharing clips, "liking" things or people from within the video stream, sharing via Facebook, Twitter, or some other social media platform, and commenting on aspects of the video; and (4) ongoing commercialization opportunities relating to video content.
[5] Currently, there exist methods and systems for projecting virtual video content within a fixed region of a video frame as a replacement for the content originally recorded in a digital video data feed. However, these methods and systems typically use a fixed field with a defined perimeter, such as an area of a sporting event's field of play, to determine the region of the digital video data feed in which to insert the virtual video content. Such methods and systems do not enable the insertion of virtual video content into a spatial region of a digital video feed that experiences area changes to that spatial region over a plurality of video frames within the digital video data feed due to, for example, the movement of a human athlete's sports jersey that is recorded in the digital video data feed.
[6] Accordingly, there exists a need for methods and systems for inserting virtual video content into a digital video data feed where the virtual video content is spatially altered in a manner analogous to the spatial region of the digital video data feed in which the virtual content is to be placed within a recomposited digital video data feed.
SUMMARY OF INVENTION
[7] According to a first aspect of the invention, a method of displaying content is described, the method comprising the steps of defining a set of visual patterns and tagging metadata to the set of visual patterns. The method further comprises the steps of detecting the set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion; aggregating the tagged metadata with third party metadata; and displaying content along with the digital image, wherein the content is based on the aggregated metadata.
[8] Preferably, the step of displaying the content along with the digital image comprises superimposing the content over the set of visual patterns on the digital image to augment the digital image.
[9] Preferably, the digital image is part of a sequence of digital images.
[10] Preferably, the content is a HyperText Markup Language layer which is clickable and responsive to user input.
[11] Preferably, the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
[12] Preferably, the method further comprises the step of generating a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
[13] Preferably, the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
[14] Preferably, the message of comments comprises comments from an audience at a sporting event.
[15] Preferably, the method further comprises the step of estimating a time for which the content is displayed along with the digital image.
[16] Preferably, the step of detecting the set of visual patterns within the digital image is performed by a web crawler.
[17] Preferably, the step of tagging metadata to the set of visual patterns is performed by manual or automatic means.
[18] Preferably, the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
[19] According to a second aspect of the invention, a system for displaying content is described, the system comprising at least one processor programmed to implement a detection and recognition module to detect a set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion; an automated metadata tagging module to automatically tag metadata to the set of visual patterns. The at least one processor is further programmed to implement an administrated metadata tagging module to allow manual tagging of metadata to the set of visual patterns; a metadata aggregation module that aggregates the tagged metadata with third party metadata; and an image selection and generation module to display content along with the digital image, wherein the content is based on the aggregated metadata.
[20] Preferably, the image selection and generation module displays the content along with the digital image by superimposing the content over the set of visual patterns on the digital image to augment the digital image.
[21] Preferably, the digital image is part of a sequence of digital images.
[22] Preferably, the content is a HyperText Markup Language layer which is clickable and responsive to user input, and the at least one processor is further programmed to implement a feedback management module to interpret the user input.
[23] Preferably, the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
[24] Preferably, the at least one processor is further programmed to generate a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
[25] Preferably, the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
[26] Preferably, the message of comments comprises comments from an audience at a sporting event.
[27] Preferably, the at least one processor is further programmed to estimate a time for which the content is displayed along with the digital image.
[28] Preferably, the detection and recognition module is a web crawler.
[29] Preferably, the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
[30] In embodiments, the present invention may provide a method and system for using three-dimensional simulation to quantify the spatial alteration of a region of a two-dimensional digital video image caused by movement of the region between a first and a second video frame for at least the purposes of inserting a virtual video content item into a digital video feed. In embodiments, a virtual advertising platform may receive a two-dimensional digital video data feed and construct a three-dimensional simulation of the two-dimensional digital video data feed within a simulation environment based at least in part on applying geometric surfaces over a plurality of spatial regions within frames of the two-dimensional digital video data feed, wherein the plurality of spatial regions are defined at least in part by a coordinate mapping of the two-dimensional digital video data feed. The virtual advertising platform may map a spatial region, among the plurality of spatial regions, within a first video frame to the spatial region's location within a second video frame, wherein the second video frame was captured at a time subsequent to the first frame, by performing the steps of: Step One: selecting the spatial region within the first video frame based at least in part on mapping coordinates of the spatial region within the two-dimensional video data feed; Step Two: identifying geometric changes to the spatial region within the second video frame by quantifying the differences between the applied geometric surfaces of the spatial region within the first video frame and the applied geometric surfaces in the second video frame; and Step Three: summarizing the quantified differences as a three-dimensional mapping metric.
The virtual advertising platform may iteratively process each of a plurality of video frames within the two-dimensional digital video feed by performing Steps One, Two, and Three to create a plurality of three-dimensional mapping metrics, and may summarize quantitative associations among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, wherein the three-dimensional mapping algorithm defines at least in part three-dimensional geometric position data that enables application of geometric changes to the spatial region inherent in the plurality of video frames to a virtual digital video image that is not present in the two-dimensional digital video data feed.
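By way of non-limiting illustration, Steps One through Three may be sketched as follows, assuming that the geometric surface fitted to a spatial region in each frame is represented as a 2x3 affine matrix and that the mapping metric is the magnitude of the frame-to-frame change; the specification leaves the surface model and metric open, so both are assumptions.

```python
import numpy as np

def mapping_metric(surface_a, surface_b):
    # Steps Two and Three: quantify geometric change between the
    # surfaces fitted to the same region in two consecutive frames,
    # summarized as a single scalar metric.
    return float(np.linalg.norm(np.asarray(surface_b, dtype=float)
                                - np.asarray(surface_a, dtype=float)))

def mapping_metrics(surfaces):
    # Iterate Steps One-Three over every consecutive frame pair.
    return [mapping_metric(a, b) for a, b in zip(surfaces, surfaces[1:])]

# Identity pose, then a 2-pixel horizontal translation, then no change.
frames = [
    [[1, 0, 0], [0, 1, 0]],
    [[1, 0, 2], [0, 1, 0]],
    [[1, 0, 2], [0, 1, 0]],
]
metrics = mapping_metrics(frames)
```

The resulting sequence of metrics would then be summarized as the three-dimensional mapping algorithm applied to the virtual digital video image.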
[31] In embodiments, the virtual digital video image may be an advertisement that is inserted into the spatial region of the two-dimensional digital data feed, replacing the spatial region, and the two-dimensional digital video image is recomposited as a new virtual digital video feed. The digital video feed may derive from an infrared camera. The digital video feed may be received from a live event. The digital video feed may be received from a stored digital video medium, such as but not limited to a DVD. The digital video feed may be received from the Internet.
[32] In embodiments, the selection within the virtual advertising platform of the spatial region may be further based on a correlation between mapping coordinates of the spatial region with a known spatial characteristic that is stored within a data facility that is associated with the three-dimensional simulation environment. The known spatial characteristic may be an advertising logo, an article of clothing, or some other type of spatial characteristic.
[33] In embodiments, the virtual advertising platform may use a three-dimensional mapping algorithm to insert a virtual image within an internet-based video stream. The virtual advertising platform may receive a request from a user to view a two-dimensional digital video data feed from the Internet, and select a virtual digital image. The virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within the two-dimensional digital data feed in place of a spatial region within the two-dimensional data feed, and wherein the three-dimensional mapping algorithm enables application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and may send the recomposited digital data feed for display to the user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region. In embodiments, the request is accompanied by at least one datum relating to a characteristic of the user and the selection of the virtual digital image is based at least in part on a relevance to the datum.
[34] The virtual digital image may be an item of sponsored content, including but not limited to an advertisement. The virtual digital image may be an advertising logo that is relevant to at least a portion of the two-dimensional digital video feed. The relevance of the advertising logo may be based at least in part on a stored association between the advertising logo and a second logo that is recognized in the two-dimensional digital video feed, wherein detection of the second logo is based at least in part on a quantified match between an image recognized in the two-dimensional digital video feed and a logo that is stored in a database. The relevance may be further based on a geographic location associated with the two-dimensional digital video feed. The relevance may be further based on a geographic location associated with a client device to which the recomposited digital video feed will be transmitted.
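By way of non-limiting illustration, the relevance factors described above may be combined as a simple weighted score; the particular weights and the flat record structure are assumptions made for the sketch.

```python
def logo_relevance(candidate, detected_logo, associations,
                   feed_region, device_region):
    """Score a candidate advertising logo for insertion."""
    score = 0.0
    # Stored association between the candidate and a second logo
    # recognized in the two-dimensional digital video feed.
    if detected_logo in associations.get(candidate["name"], ()):
        score += 0.5
    # Geographic location associated with the video feed itself.
    if candidate["region"] == feed_region:
        score += 0.3
    # Geographic location of the client device receiving the feed.
    if candidate["region"] == device_region:
        score += 0.2
    return score

associations = {"BrandA": ("StadiumSponsor",)}
candidate = {"name": "BrandA", "region": "SG"}
score = logo_relevance(candidate, "StadiumSponsor", associations,
                       "SG", "US")
```

The candidate with the highest score would be selected as the virtual digital image to recomposite into the feed.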
[35] In embodiments, the virtual advertising platform may use a three-dimensional mapping algorithm to interpolate video data to replace corrupted digital video data and insert a virtual image within a two-dimensional digital video feed. The virtual advertising platform may receive a two-dimensional digital video data feed wherein a spatial region within the plurality of frames within the two-dimensional video data feed includes a partial depiction of an advertisement due to corrupted digital video data, and use an image metrics algorithm to compute a relevance of uncorrupted digital video data within the spatial region to a set of stored digital video images. The virtual advertising platform may identify a stored digital video image based at least in part on the computed relevance, and select a virtual digital image based at least in part on the identified stored digital video image. The virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within the two-dimensional digital data feed in place of the spatial region within the two-dimensional data feed, and wherein the three-dimensional mapping algorithm enables application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and the virtual advertising platform may send the recomposited digital data feed for display to a user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region.
[36] In embodiments, the virtual digital image may be a completed version of the partial image, wherein the virtual digital image is created based at least in part on interpolated digital video data using the stored digital video image. The corrupted digital video data may be caused at least in part by a physical deformation of an object depicted within the two-dimensional digital video data feed.
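By way of non-limiting illustration, the completion of a partially corrupted region may be sketched as follows, modelling corrupted pixels as NaN values and using mean absolute difference over the uncorrupted pixels as the image metrics algorithm; both modelling choices are assumptions.

```python
import numpy as np

def complete_region(region, stored_images):
    # Relevance: compare only the uncorrupted (non-NaN) pixels of the
    # spatial region against each stored digital video image.
    valid = ~np.isnan(region)
    scores = [np.abs(img[valid] - region[valid]).mean()
              for img in stored_images]
    # Identify the most relevant stored image (lowest difference) and
    # interpolate the corrupted cells from it.
    best = stored_images[int(np.argmin(scores))]
    return np.where(valid, region, best)

region = np.array([[1.0, np.nan],
                   [3.0, 4.0]])
stored = [np.array([[9.0, 9.0], [9.0, 9.0]]),
          np.array([[1.0, 2.0], [3.0, 4.0]])]
completed = complete_region(region, stored)
```

The completed region then serves as the virtual digital image recomposited into the feed.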
[37] These and other systems, methods, objects, features, and advantages of the present invention will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. All documents mentioned herein are hereby incorporated in their entirety by reference.
BRIEF DESCRIPTION OF THE FIGURES
[39] The invention and the following detailed description of certain embodiments thereof may be understood by reference to the following figures:
[40] Fig. 1 depicts a simplified architecture including a virtual advertising platform and related facilities.
[41] Fig. 2 illustrates an embodiment of image capture and recognition that may be used by the virtual advertising platform.
[42] Fig. 3 illustrates an embodiment of video image mapping within a three-dimensional environment that may be used by the virtual advertising platform.
[43] Fig. 4 illustrates an augmentation process that may be used for recompositing a video data to include a virtual video content within the virtual advertising platform.
[44] Fig. 5 illustrates a simplified method and system for developing and testing algorithms within the virtual advertising platform.
[45] Fig. 6 depicts a simplified flowchart of the interactions among selected components in the process of tagging metadata and integrating content into a video feed.
[46] Fig. 7 depicts the modules in a preferred embodiment of the invention.
[47] Fig. 8 depicts a simplified flowchart of the process of tagging and aggregating metadata and placing content on the digital images.
[48] Fig. 9 depicts an embodiment of inserting a live pop-up into a video stream based on pattern recognition of the number and text appearing on a player shirt.
[49] Fig. 10 depicts an embodiment of a static logo being inserted onto a fixed surface using a fixed camera video feed.
[50] Fig. 11 depicts an embodiment of a static logo being inserted in place of an existing logo from the source video feed generated using a moving camera.
[51] Fig. 12 depicts an embodiment of a logo being integrated into a video feed on a moving target.
[52] Fig. 13 depicts an embodiment of the insertion of a logo into a live video feed, such that the logo displays on a moving target using coded targets.
DETAILED DESCRIPTION
[53] Detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting, but rather to provide an understandable description of the invention.
[54] The terms "a" or "an," as used herein, are defined as one or more than one. The term "another," as used herein, is defined as at least a second or more. The terms "including" and/or "having", as used herein, are defined as comprising (i.e., open transition). The term "coupled" or "operatively coupled," as used herein, is defined as connected, although not necessarily directly and not necessarily mechanically.
[55] Referring to Fig. 1, in embodiments of the present invention, a virtual advertising platform 120 is provided in a simplified video broadcasting context in which the virtual advertising platform 120 may be used to insert virtual video content within a digital video data feed 118 that is received by a virtual advertising platform 120 to create a virtualized digital video feed 142. A digital video feed 118 may originate with a camera at a live event 104 that is recording the live event 102 in real time, or broadcasting the live event 102 with a broadcasting delay. A digital video feed 118 may also originate from rebroadcast programming 108, such as that from a network affiliate rebroadcasting previously recorded studio recordings, such as a sitcom, or a previously recorded sports event, such as an international football match. In embodiments, a digital video feed 118 may originate from a stored digital video medium 110, such as a DVD, camcorder, mobile device, computer, or some other medium that is capable of storing digital video. In embodiments, a digital video feed 118 may originate from an internet-based video platform, such as a website, email attachment, live video streaming (e.g., a webcam or an internet telephony program, such as Skype), computer user upload to the internet (e.g., to a website such as www.YouTube.com), or some other means of internet-based video transmission.
[56] In embodiments, the virtual advertising platform 120 may receive the digital video feed 118. The receipt of the digital video feed 118 may be passive, as in the embodiment of a third party actively sending the digital video feed 118 to the virtual advertising platform 120 that passively receives the digital video feed 118, or the virtual advertising platform 120 may actively seek out and obtain a digital video feed 118, including actively seeking to obtain a digital video feed 118 or plurality of digital video feeds that meet a criterion. In an example, the virtual advertising platform 120 may be programmed to compare a dataset against a datum or data relating to digital video feeds, such as keywords, locations, broadcast locations, or some other criteria. The virtual advertising platform 120 may include a search and retrieval facility that is enabled to search among available digital video feeds 118 according to a criterion or criteria. For example, the virtual advertising platform 120 may search a website for a digital video feed 118 that is associated with a keyword of "music video," and retrieve the video, for example via download, for further rendering and recompositing within the virtual advertising platform 120.
[57] A digital video feed 118 may be received by the virtual advertising platform 120, which performs a series of steps to recomposite the digital video feed 118 as a virtualized digital video feed 142 in which the originally received digital video feed 118 is rendered to include at least one component of virtual video content (e.g., an advertisement) that was not present in the originally received digital video feed 118. In embodiments, the virtual video content may be a wholly new element of video or it may be an improvement or enhancement of an item of video content found in the originally received digital video feed 118, such as a new video enhancement which corrects corrupted video data and/or video data in the original digital video feed 118 that was obscured in some manner.
[58] Referring to Fig. 2, in embodiments, an image-processing platform associated with or within the virtual advertising platform 120 may be responsible for analyzing an incoming digital video feed 118 (also referred to herein as a "video," "video content," "video stream," and the like) in real-time, performing the detection of logos or other video content, including but not limited to advertising content, recovering geometrical and appearance parameters for the detected logos or detected content, and transmitting encoded metadata required for later augmentation with replacement logos.
[59] The process may begin with the virtual advertising platform 120 decoding an incoming digital video feed 118, using a current frame 204 or a frame previous 202 to a current frame, thus extracting raw color pixels for analysis. The process may be used to select a prototype logo 208, herein referred to as Logo N, for detection in the current frame 204 and/or previous frame 202 based at least in part on accessing detection databases consisting of any number of prototype logos 212 for a particular event, as well as an optional database consisting of event-specific prototype images of objects 210 upon which logos are present (background targets). In embodiments, a logo within the digital video feed 118 may be detected based at least in part on a partial match or recognition of a logo or other type of video content within the digital video feed 118. The prototype images in the detection databases may undergo an image analysis step in which the information, including but not limited to the following, is extracted in order to form a unique representation of the logo (and optionally the background targets):
[60] The virtual advertising platform 120 may enable detection of salient features 220, where the salient regions consist of heterogeneous regions within an otherwise generally symmetrical or homogenous image.
[61] The virtual advertising platform 120 may enable detection of spatial patterns 222, where the spatial patterns consist of the encoding of textures and uniform color/intensity regions together with their spatial relationships.
[62] The virtual advertising platform 120 may enable detection of spectral distribution 224, where the spectral distribution consists of a summary of color and intensity information.
[63] The virtual advertising platform 120 may enable a zoom level comparison 228, and the algorithm, as described herein, that is used as part of image recognition and detection may enable adaptation to a plurality of zoom levels that are present within the video feed.
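By way of non-limiting illustration, the representation described in paragraphs [60]-[63] may be sketched with a grey-level histogram as the spectral distribution, mean gradient magnitude as a crude spatial-pattern proxy, and comparison across zoom levels by downsampling; a production detector would use far richer features, so this is an assumed simplification.

```python
import numpy as np

def describe(image, bins=8):
    # Spectral distribution 224: normalized grey-level histogram.
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0),
                           density=True)
    # Spatial-pattern proxy 222: mean gradient magnitude (texture).
    gy, gx = np.gradient(image.astype(float))
    texture = float(np.hypot(gx, gy).mean())
    return hist, texture

def match_score(prototype, image):
    (ha, ta), (hb, tb) = describe(prototype), describe(image)
    return float(np.abs(ha - hb).sum() + abs(ta - tb))  # lower = closer

def multiscale_match(prototype, image, zooms=(1, 2)):
    # Zoom level comparison 228: compare the prototype at several
    # downsampled scales and keep the best (lowest) score.
    return min(match_score(prototype[::z, ::z], image) for z in zooms)

proto = np.zeros((8, 8))
proto[:, 4:] = 1.0  # half-dark, half-bright prototype logo
identical = multiscale_match(proto, proto.copy())
different = multiscale_match(proto, np.ones((8, 8)))
```

An identical frame region yields a score of zero, while a dissimilar one scores higher, supporting a match decision 230.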
[64] An incoming video image may undergo a similar extraction of saliency, spatial patterns and spectral distribution, followed by a comparison between these characteristics and those of prototype logos (detection). The detection phase may carry out comparisons between the various features at multiple scales of zoom, and may be able to detect multiple instances of the same logo. The same process may be carried out for each logo in the database in order to determine a match 230 between the video image in the received video stream 118 and the stored image or logo (e.g., Logo N) in the prototype logos 212 and/or prototype objects 210 databases. Temporal smoothing of detection results may be based on storage of detections from previous image frames, making use of physical constraints and predictive filtering to reduce jitter 232. The detection phase may indicate locations and identities of logos in the scene, which as shown in Fig. 3, may be followed by a pose-estimation algorithm process in which the geometrical positioning of the detected logo in the scene is ascertained within a 3-D environment 128. This accounts for detecting logo translations 302, logo scalings 308, logo rotations 304, shearing 312 and warping 310 of a detected logo as compared to a database prototype, resulting in metadata 320 encoding of these spatial parameters. A detected and geometry-corrected logo may undergo an alignment procedure in order to reconstruct a pixel to pixel mapping between the two logos, such as Logo N 218 from the digital video feed 118 and a Logo X 404 obtained from a replacement logos database 400 for the purpose of selecting the replacement logo 402 to insert as a virtual content item into the digital video feed 118 in place of Logo N (see Fig. 4). 
The Logo X may be aligned 316 and major discrepancies between the aligned pair of Logo N and Logo X may be used to construct an occlusion mask 314, based at least in part on applying geometric features 410 of the logos to the alignment step, and thus accounting for partial occlusions that exist between the camera and target, and partial obscuration due to viewing angle, or some other type of viewing obstruction or occlusion (e.g., due to folds in fabric, such as a player's jersey, or light reflection from the side of an object). The occlusion mask may be encoded into an outgoing metadata 320 structure for the augmented phase and applied 412 as part of the augmentation process. In the next phase, color-differences between a recovered logo and a prototype may be assessed, and encoded as a color transformation matrix 318 and applied 414 during the augmentation process for later correction of the augmented logo. The transformation parameters may be added to the metadata 320 structure. Similarly, specular (as opposed to uniform) lighting effects may be accounted for by detecting anomalous lighting patterns in the aligned image pair. This information may be encoded into the metadata structure for specular reflection compensation in the augmentation phase. A blending algorithm 418 may involve extracting pixel properties in the vicinity of the detected logo. These properties may be encoded into the metadata 320 structure to allow for a natural blending at the augmentation phase, particularly at the edges of replacement logos 400. The blending algorithm may be used to create an augmented video stream 420. The augmented video stream may be a virtualized digital video stream 142 that may be transmitted to other entities and client devices 158 for viewing.
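By way of non-limiting illustration, the occlusion mask 314 and color transformation described above may be sketched as follows, reducing the occlusion test to a per-pixel discrepancy threshold and the color transformation matrix 318 to a single global gain; both reductions are assumptions made for the sketch.

```python
import numpy as np

def occlusion_mask(detected, prototype, thresh=0.25):
    # Pixels where the aligned detected logo departs strongly from its
    # prototype are treated as occluded and preserved on output.
    return np.abs(detected - prototype) > thresh

def apply_replacement(frame_region, prototype, replacement, thresh=0.25):
    mask = occlusion_mask(frame_region, prototype, thresh)
    # Color transformation: match the replacement's level to the scene
    # so the lighting of the augmented logo looks natural.
    gain = frame_region[~mask].mean() / max(prototype[~mask].mean(), 1e-6)
    # Composite: keep occluders, insert the color-corrected replacement.
    return np.where(mask, frame_region, replacement * gain), mask

prototype = np.full((4, 4), 0.5)        # stored Logo N
frame_region = prototype.copy()
frame_region[0, 0] = 1.0                # simulated occluding object
replacement = np.full((4, 4), 0.8)      # Logo X
out, mask = apply_replacement(frame_region, prototype, replacement)
```

The mask and gain correspond to the metadata 320 fields that the augmentation phase applies 412, 414 before blending 418.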
[65] As depicted in Fig. 5, the virtual advertising platform 120 may include an algorithm testing and learning facility that may rank, prioritize, and optimize the performance of the algorithms used by the virtual advertising platform 120 for the placement of virtual video content within a digital video feed. Following detection by an algorithm 214, as described herein, the detected algorithm 500 may be tested against a criterion and its performance scored 502, ranked, or otherwise evaluated for its value in recognizing and detecting a target logo 504.
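By way of non-limiting illustration, the testing and ranking of candidate detection algorithms may be sketched as follows, using accuracy over a labelled frame set as the scoring criterion; the criterion and the string-based stand-in "frames" are assumptions.

```python
def score_algorithm(algorithm, labelled_frames):
    # Score 502: fraction of labelled frames classified correctly.
    hits = sum(1 for frame, truth in labelled_frames
               if algorithm(frame) == truth)
    return hits / len(labelled_frames)

def rank_algorithms(algorithms, labelled_frames):
    # Rank candidate detectors by score, best first.
    scored = [(name, score_algorithm(fn, labelled_frames))
              for name, fn in algorithms.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

frames = [("frame-with-logo", True), ("frame-without-logo", False)]
algorithms = {"always-yes": lambda f: True,
              "keyword": lambda f: "with-logo" in f}
ranking = rank_algorithms(algorithms, frames)
```

The top-ranked algorithm would then be preferred for recognizing and detecting the target logo 504.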
[66] Referring again to Fig. 1, in embodiments, following the compositing of the virtual video element within the originally received digital video feed 118, a virtualized digital video feed may be created and distributed by the virtual advertising platform 120. The virtualized digital video feed may be distributed to entities such as, but not limited to, a master control booth 114, such as that associated with a network broadcaster, a regional broadcaster 152, such as a local affiliate of a network broadcaster, the internet 154, such as a website, or some other entity capable of receiving a video distribution. Data and/or metadata may be associated with the virtualized digital video feed 142 including, but not limited to tracking data 144, such as cookies 148 or pixel tracking 150 data, that permits the distribution of the virtualized digital video feed 142 to be tracked, recorded, and shared with parties, including the virtual advertising platform 120. An entity, such as a regional broadcaster 152, internet 154 website, or some other entity may receive the virtualized digital video feed 142 and transmit it to a client device 158 including, but not limited to, an internet-enabled device 160, TV 162, phone 164, or some other device capable of displaying a digital video. A user of the client device 158 may then view an instance of the virtualized digital video feed 168, and data confirming this viewing instance may be further transmitted, for example on the basis of the tracking data, to an entity, such as the virtual advertising platform 120. The virtual advertising platform 120 may receive and store this user viewing data, along with a plurality of users' viewing data, and use this information at least in part for the purposes of determining a relevancy for a type of virtual video content to insert within a digital video feed 118. 
User data 170, such as demographic 172, economic 174, and usage history data, relating to a user may be associated with a client device 158; this data may also be received and stored by the virtual advertising platform 120, along with a plurality of users' data, and used at least in part for the purposes of determining a relevancy for a type of virtual video content to insert within a digital video feed 118.
[67] In embodiments, virtual video content that is used by the virtual advertising platform 120 to include within a virtualized digital video feed 168 may be sponsored content 180, such as an advertisement. Sponsored content 180 may be further associated with an ad exchange 182 or ad network within which advertisers 188 may place bids using a bidding platform 184 for the right to have a given sponsored content 180 placed as a virtual content within a virtualized digital video feed 168.
[68] In embodiments of the present invention, the virtual advertising platform 120 may be used to insert virtual content other than advertisements or sponsored content, including but not limited to, entertainment video, amateur video, special effects, or some other type of non-advertising content.
[69] In embodiments of the present invention, the virtual advertising platform 120 may be used to insert virtual video content into a three-dimensional digital video data feed.
[70] In embodiments of the present invention, the virtual advertising platform 120 may receive a digital video data feed. A digital video data feed may derive from a 2D camera, a 3-D camera, an infrared camera, a stereoscopic camera, or some other type of camera. The virtual advertising platform 120 may map a region within a first video frame of the digital video data feed to the region within a second video frame by performing the steps of: (i) selecting the region within the first video frame based at least in part on recognition of data (e.g., pixel data, steganographic data) within the region matching that of a video data criterion (e.g., indexed image/video segments of known advertisements); (ii) selecting the region from a second video frame within the digital video data feed, captured by the stereoscopic camera at a time subsequent to the first frame, and associating the first location of the region in three-dimensional video space in the first video frame with the second location of the region in three-dimensional video space in the second video frame, wherein the association is based at least in part on quantitative analysis, as described herein, of data within the region in the first and second frames; and (iii) summarizing and storing the association as a three-dimensional mapping metric. The virtual advertising platform 120 may segment the region into a plurality of region segments, and iteratively process the plurality of region segments within the region by performing Steps i, ii, and iii for each region segment to create a plurality of three-dimensional mapping metrics, wherein each three-dimensional mapping metric summarizes a location within the three-dimensional space for each of the plurality of region segments across each of the frames within the digital video data feed.
The virtual advertising platform may summarize the association among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, and a replacement video region may be mapped to the region within the first video frame, wherein the mapping is a quantitative association of data (e.g., pixel data, steganographic data) within the replacement video region and the region within the first video frame. The virtual advertising platform may manipulate video data of the replacement video region, based at least in part on the application of the three-dimensional mapping algorithm, to render a second version of the replacement video region suitable for placement within the second video frame, wherein the rendering of the replacement video region is visually and/or quantitatively equivalent to the alteration in three-dimensional space of the region in the first and second frames that is summarized by the three-dimensional mapping metric.
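By way of non-limiting illustration, the per-segment processing of Steps i-iii may be sketched as follows, splitting a region into quadrant segments and tracking each segment's centroid across frames; using centroid displacement vectors as the three-dimensional mapping metric is an assumed simplification.

```python
import numpy as np

def segment_centroids(region):
    # Split a region (H x W x 3 array of 3-D point samples) into four
    # quadrant segments and return each segment's mean 3-D position.
    h, w = region.shape[:2]
    quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
             region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
    return [q.reshape(-1, 3).mean(axis=0) for q in quads]

def segment_mapping_metrics(frame_a, frame_b):
    # Steps i-iii per segment: one displacement metric per segment
    # between the two frames.
    return [b - a for a, b in zip(segment_centroids(frame_a),
                                  segment_centroids(frame_b))]

# A flat 4x4 patch of 3-D points, then the same patch shifted +1 in z.
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
frame1 = np.dstack([xs, ys, np.zeros_like(xs)])
frame2 = frame1 + np.array([0.0, 0.0, 1.0])
metrics = segment_mapping_metrics(frame1, frame2)
```

Summarizing such per-segment metrics across all frames yields the three-dimensional mapping algorithm applied to the replacement video region.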
[71] In embodiments of the present invention, the virtual advertising platform
120 may receive a digital video data feed. A digital video data feed may derive from a 2-D camera, a 3-D camera, an infrared camera, a stereoscopic camera, or some other type of camera. The virtual advertising platform 120 may map a region within a first video frame of the digital video data feed to the region within a second video frame by performing the steps of: (i) selecting the region within the first video frame based at least in part on recognition of data (e.g., pixel data, steganographic data) within the region matching that of a video data criterion (e.g., indexed image/video segments of known advertisements); (ii) selecting the region from a second video frame within the digital video data feed, captured by the stereoscopic camera at a time subsequent to the first frame, and associating the first location of the region in three-dimensional video space in the first video frame with the second location of the region in three-dimensional video space in the second video frame, wherein the association is based at least in part on quantitative analysis of data within the region in the first and second frames; and (iii) summarizing and storing the association as a three-dimensional mapping metric. The virtual advertising platform 120 may segment the region into a plurality of region segments, and iteratively process the plurality of region segments within the region by performing Steps i, ii, and iii for each region segment to create a plurality of three-dimensional mapping metrics, wherein each three-dimensional mapping metric summarizes a location within the three-dimensional space for each of the plurality of region segments across each of the frames within the digital video data feed.
The virtual advertising platform may summarize the association among the plurality of three-dimensional mapping metrics as a three-dimensional mapping algorithm, and a replacement video region may be mapped to the region within the first video frame, wherein the mapping is a quantitative association of data (e.g., pixel data, steganographic data) within the replacement video region and the region within the first video frame. The virtual advertising platform may manipulate video data of the replacement video region, based at least in part on the application of the three-dimensional mapping algorithm, to render a second version of the replacement video region suitable for placement within the second video frame, wherein the rendering of the replacement video region is visually and/or quantitatively equivalent to the alteration in three-dimensional space of the region in the first and second frames that is summarized by the three-dimensional mapping metric. In embodiments, the virtual advertising platform 120 may iteratively manipulate video data of a plurality of replacement video regions, based at least in part on the application of the three-dimensional mapping algorithm, wherein the iterative manipulation produces a plurality of replacement video regions, each of which corresponds to one frame of a series of frames within the digital video data feed. The virtual advertising platform 120 may aggregate each of the plurality of replacement video regions to create a plurality of composite replacement video images, wherein each of the plurality of composite replacement video images corresponds to each of the frames of the series of frames within the digital video data feed.
Each of the composite replacement video images may be validated against a criterion replacement image, wherein the validation is summarized as a quantitative validity metric, and the three-dimensional mapping algorithm may iteratively adjust to optimize the predictive validity of the quantitative validity metric.
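The rendering and validation described above can be sketched as follows. The replacement region is modeled as a dict of (x, y) coordinates to pixel values, and the per-frame offset stands in for the full three-dimensional mapping algorithm; the quantitative validity metric here is a simple fraction of criterion pixels reproduced exactly. All of these modeling choices are illustrative assumptions.

```python
def render_replacement(region_pixels, offset):
    """Apply the per-frame offset (derived from a three-dimensional
    mapping metric) to a replacement region, modeled as a dict of
    (x, y) -> pixel value."""
    dx, dy = offset
    return {(x + dx, y + dy): v for (x, y), v in region_pixels.items()}

def validity_metric(composite, criterion):
    """Quantitative validity: fraction of criterion-image pixels
    reproduced exactly by the composite replacement image."""
    matches = sum(1 for k, v in criterion.items() if composite.get(k) == v)
    return matches / len(criterion)
```

A production system would iteratively adjust the mapping parameters to raise this metric, as the paragraph above describes.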
[72] In embodiments of the present invention, the virtual advertising platform
120 may recomposite the digital data feed into a new digital data feed in which the placement of the composite replacement video images is substituted for content within the digital video feed, and rebroadcast the new digital data feed.
[73] In embodiments of the present invention, the virtual advertising platform
120 may enable data interpolation to fill in missing video imagery due to obfuscation from, for example, sun reflection, dimmed lighting, folded clothing, blocked images, and the like.
[74] In embodiments of the present invention, the virtual advertising platform
120 may insert tracking data into a recomposited virtual video feed so that downstream usage may be tracked (e.g., internet-streamed content).
[75] In embodiments of the present invention, the virtual advertising platform
120 may use a distributed computing environment and receive video data at a server from a digital video data feed (e.g., from a Master Control Booth) and segment the video data into a plurality of video data segments, distributing the plurality of video data segments to a plurality of servers (wherein the plurality of servers are within a distributed computing environment).
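The segmentation step for distributed processing can be sketched as a simple split of a frame sequence into contiguous chunks; the chunk-per-server model is an illustrative assumption, not the platform's specified scheduling strategy.

```python
def segment_video(frames, n_servers):
    """Split a frame sequence into contiguous, roughly equal segments
    (at most n_servers of them) so they can be distributed to servers
    for parallel processing and re-joined in order afterwards."""
    size = -(-len(frames) // n_servers)  # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]
```

Because the segments are contiguous and ordered, concatenating the processed results reconstructs the original feed order.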
[76] In embodiments of the present invention, the virtual advertising platform
120 may select a virtual video content for placement within a digital video data feed (using the methods described herein), wherein the selection is based at least in part on information relating to at least one of (i) a broadcast affiliate, (ii) a regional code associated with a distribution destination, and/or (iii) a device on which the digital video feed will be displayed (e.g., a cable set-top box or cell phone).
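Selection against the affiliate, regional-code, and device criteria above can be sketched as a best-match lookup over an inventory of candidate content. The rule dictionaries and scoring-by-matched-criteria approach are illustrative assumptions.

```python
def select_content(inventory, affiliate=None, region=None, device=None):
    """Pick the inventory item whose targeting rules match the most of
    the supplied broadcast-affiliate, regional-code, and device criteria."""
    def score(item):
        rules = item["targets"]
        return sum(rules.get(key) == value
                   for key, value in (("affiliate", affiliate),
                                      ("region", region),
                                      ("device", device))
                   if value is not None)
    return max(inventory, key=score)
```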
[77] In embodiments of the present invention, the virtual advertising platform
120 may place a virtual video content into a video data feed based at least in part on the selection of a virtual video content from a dictionary, wherein the video content stored within the dictionary is associated with metadata that describes in part a mapping onto known advertisements for which the video in the dictionary may be substituted.
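The dictionary above can be sketched as an index from each known advertisement to the stored clips that may be substituted for it; the metadata field names used here are assumptions for illustration.

```python
def build_dictionary(entries):
    """Index stored virtual video content by the known advertisement(s)
    each clip may be substituted for, per the clip's mapping metadata."""
    index = {}
    for entry in entries:
        for known_ad in entry["metadata"]["replaces"]:
            index.setdefault(known_ad, []).append(entry["content_id"])
    return index
```

At placement time, recognizing a known advertisement in the feed reduces to a single dictionary lookup for candidate substitutes.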
[78] In embodiments, an ad exchange 182, which may be associated with the virtual advertising platform 120 as described herein, may present a mode of enabling advertisements through various online portals such as websites by creating a platform for integrating the various entities involved in the preparation and delivery of sponsored content 180, such as advertisements. It may act as a single platform for enabling transactions between advertisers and publishers. The integration of various services in a single platform may facilitate bidding on advertisements, for example using a bidding platform 184 in real-time, dynamic pricing, customizable reporting capabilities, identification of target advertisers and market niches, rich media trafficking, algorithms for scalability, yield management, data enablement, and the like. In addition, APIs for interfacing with other platforms (e.g., the virtual advertising platform 120), ad networks, brokers, and the like may be provided in order to build a globally distributed facility for seamless integration. An ad exchange 182 may be implemented through various electronic and communication devices that may support networking. Some examples of such devices may include, but are not limited to, desktops, palmtops, laptops, mobile phones, cell phones, and the like. It may be understood by a person ordinarily skilled in the art that various wired or wireless techniques may be employed to support networks of these devices with external communication platforms such as Cellular, Wi-Fi, LAN, WAN, MAN, Internet, and the like. A complete ad exchange system, hereinafter referred to as an ad exchange 182 for descriptive purposes, may include entities such as ad exchange 182 servers, ad inventories, ad networks, ad agencies, advertisers, publishers, virtual advertising platform 120 facilities, and the like. A detailed description of some of these entities is provided separately herein for simplicity of the description.
[79] An ad exchange 182 server may include one or more servers that may be configured to provide web services or other kinds of services for facilitating placement of sponsored content 180, such as insertion of sponsored content 180 on websites. Likewise, an ad exchange 182 server may be a computer server, such as a web server, that may perform the tasks of storing online advertisements and delivering the advertisements to website users or viewers, mobile network providers, other platforms such as a virtual advertising platform 120, and the like. The ad exchange 182 server may facilitate display of relevant advertisements and information each time a visitor or a user visits a webpage using a web browser or refreshes the web page. The advertisements may be in the form of virtual video content, banner ads, contextual ads, behavioral ads, interstitial ads, and the like. The ad exchange 182 server may perform the tasks of keeping a log of the number of impressions and clicks, recording traffic data, the number of users, the IP addresses of the users for identifying spam, and the like. Logs may be utilized for creating statistical graphs for analyzing the traffic flow of packets, routing paths, and the like. Further, a database may be maintained by the ad exchange 182 server to store information related to the users of webpages and client devices 158 and to store their behavioral and contextual information. This behavioral and contextual information may be used by the ad exchange 182 server, and by the virtual advertising platform 120, to present relevant advertisements to the user in the form of virtual video content that is inserted into a digital video feed 118. For example, contextual information relating to a client device 158 may indicate that a language setting on the device is set so that the default language is "English."
This contextual information may be used at least in part by the virtual advertising platform 120 to select virtual video content that is based on English for insertion within a digital video feed 118 in place of non-English elements present in the digital video feed 118. The database may be updated by the ad exchange 182 server periodically or when triggered by an ad exchange 182 server owner. The database may be a standalone database or may be a distributed database, and may be further associated with the virtual advertising platform 120.
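The language-based selection described above can be sketched as a filter over candidate virtual video content keyed on the client device's default language, with a fallback to language-neutral content; the field names and fallback rule are illustrative assumptions.

```python
def select_by_locale(candidates, device_context):
    """Filter virtual video content to the client device's default
    language, falling back to language-neutral content (language=None)
    when nothing matches."""
    lang = device_context.get("language", "en")
    matched = [c for c in candidates if c.get("language") == lang]
    return matched or [c for c in candidates if c.get("language") is None]
```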
[80] In embodiments, a publisher may be an owner of an ad exchange 182 server. Such a deployment may be called a local ad exchange 182 server, since the ad exchange 182 server is controlled and maintained by the publisher and the ad exchange 182 server may serve only the publisher. However, an ad exchange 182 server may also be deployed and hosted by a third party. Such a deployment is called a third-party server or a remote server, since the owner of the ad exchange 182 server and the web server are different. In this scenario, a direct link may be maintained between the ad exchange 182 server owner (third party) and the publisher to keep the publisher updated regarding online advertisements on the web page and any transactions therein. In a remote server mode of deployment of the ad exchange 182 server, the ad exchange 182 server may serve numerous domains owned by various publishers differently.
[81] In accordance with various embodiments of the present invention, several other tasks may be performed by the ad exchange 182 server. The ad exchange 182 server may assist in uploading advertisements or any other similar content to the web page, including loading content, such as digital video feeds 118, to the virtual advertising platform 120. The ad exchange 182 server may also facilitate downloading of downloadable content of the advertisements, or a portion of the advertisements, as defined by the restrictions imposed by advertisers. Further, an ad exchange 182 server may also be utilized in avoiding ad trafficking on a web page or web pages. The trafficking may be avoided based on defined criteria and parameters regarding business and commercial viabilities and importance.
[82] In embodiments, an ad exchange 182 server may apply a cap or a limit to the number of times a sponsored content, such as virtual video content, is displayed, thereby setting a limit on the usage based on the money invested for online advertisements. In other instances, an ad exchange 182 server may disable the display of certain advertisements based on the user's context and behavior. Furthermore, the time period for displaying advertisements to users may be controlled by the ad exchange 182 server, and this information may be used by the virtual advertising platform 120 for the purposes of selecting the type of virtual content to include in a virtualized digital video feed 142. In an example, the time period may be set uniformly for all users or may vary for various users based on the behavioral, contextual, or other information gathered by the ad exchange 182 server or contextual information previously stored in a database, including a database that is associated with the virtual advertising platform 120. In certain implementations of the present invention, the ad exchange 182 server may inform the sequence of advertisements used by the virtual advertising platform 120 for placing virtual content within a digital video feed, based at least in part on the interests and user data 170 of a user.
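The frequency cap described above can be sketched as a per-user, per-content counter; the in-memory counter is an illustrative assumption (a real exchange would persist these counts server-side).

```python
from collections import Counter

class FrequencyCap:
    """Caps how many times each sponsored content item is shown to a user."""

    def __init__(self, cap):
        self.cap = cap
        self.shown = Counter()  # (user_id, content_id) -> display count

    def may_display(self, user_id, content_id):
        """True while the (user, content) pair is still under the cap."""
        return self.shown[(user_id, content_id)] < self.cap

    def record_display(self, user_id, content_id):
        """Record one display of the content to the user."""
        self.shown[(user_id, content_id)] += 1
```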
[83] In embodiments, owners of ad inventories may be advertisers who desire to display content, such as virtual content that is placed by the virtual advertising platform 120 within a digital video feed 118, and the like. The advertisers may purchase a portion of space within a digital video feed 118 for the display of their inventories and advertisements. Ad inventories may be stored in or accessed by an ad exchange 182 server, or the virtual advertising platform 120, from where the inventories may be fetched for display within a digital video feed 118. These inventories may then be added to the allocated space within a digital video feed by the virtual advertising platform 120 and/or the owner of a digital video feed 118. The allocation of space to the ad inventories and the display of the content of the ad inventories may be governed by the ad exchange 182 server and/or by parameters within the virtual advertising platform 120.
[84] In embodiments, an advertiser is a buying entity in an ad exchange 182 that may provide sponsored content 180, advertisements, and other similar content to parties that are capable of placing the content, including the virtual advertising platform 120, which is enabled to place video content within a digital video feed 118. In a general scenario, thousands of advertisers may be connected to numerous publishers through ad exchange 182 servers. The advertisers might not be directly linked to an ad exchange 182 server; rather, an intermediate system such as an ad network or an ad agency may be provided where various advertisers may be linked or represented in the ad exchange 182. Advertisers may place bids for purchasing a defined space within a virtualized digital video feed 142 for the placement of sponsored content 180, such as an advertisement, specifying the space required for the advertisements along with other relevant details. The advertisements may be classified by the virtual advertising platform 120 according to criteria defined in an intermediate system, such as costs, contexts, relevancy of the content relative to digital video feed 118 content, and the like. Classified sponsored content 180 may then be sorted and prioritized by the virtual advertising platform 120, and the advertiser with the highest bid may be provided the required space to place sponsored content 180 within a virtualized digital video feed 142. Advertisers may also opt for purchasing the space from several publishers of digital video feeds 118.
[85] In embodiments, a publisher may be a seller who owns or operates display locations, such as websites, that are able to display digital video feeds 118 and virtualized digital video feeds 142, and to sell a defined space to advertisers on, for example, a web page.
Advertisers may interact with publishers through an intermediate system, such as an ad network, an ad exchange 182, through the virtual advertising platform 120, and the like, as described herein, for buying or selling purposes. Publishers may allocate the space within a digital video feed 118, and the virtual advertising platform 120 may add sponsored content, video inventory, or advertisement content, in the form of video, within the allocated space. Publishers and/or the virtual advertising platform 120 may forecast the number of impressions that may occur during a particular period of time, such as a day or a month, on a specific web page. With this forecasted information and the information related to the already allocated space, publishers and/or the virtual advertising platform 120 may predict the amount of space that may further be sold. This space that may further be sold to an advertiser, an agent, or any other buying entity may be termed an asset. The publisher and/or the virtual advertising platform 120 may also classify the inventories and video media based on several criteria as defined herein. The categorization may be performed either manually or using an automated system, such as by a programmed algorithm. The manual classification may involve persons who review and analyze the video content of digital video feeds 118, and based on the review and analysis the video content may be classified into various categories. The classification may also be performed through an automated system, such as through a virtual advertising platform 120 as described herein. A programmed algorithm for automated classification may be stored in the virtual advertising platform 120, which is enabled to review and analyze digital video feeds 118 and classify the digital video feeds 118 into defined categories. The classification technique may provide an additional advantage to the publishers and advertisers in several ways.
For example, by estimating the level of relevancy of a video inventory, sponsored video content, or advertising content to a given digital video feed 118, the publisher may demand higher charges, since the probability of gaining interest in the advertising content is higher while the user views a virtualized digital video feed 142 in which relevant sponsored content 180 is inserted. Similarly, a digital video feed 118 may be prioritized as more relevant if it is more probable that a greater number of video viewings will be made per unit time. Data relating to the viewing history of digital video feeds 118 may be collected, stored, and analyzed by the virtual advertising platform 120.
[86] In embodiments, an ad exchange 182 may implement algorithms which may allow the publishers to price ad impressions during bidding in real-time. Apart from selecting the bidder on predefined criteria, an ad exchange 182 may ensure that the bids submitted by the advertisers are neither undervalued nor overvalued. An ad exchange 182 may automatically generate the maximum return on every impression. In addition, the reporting of sales data may be presented to the publishers in a simplified format for easy understanding. The publishers may be authorized to identify the brands and/or products preferred by them for placement of ad impressions. Likewise, an ad exchange 182 may allow the publishers to restrict certain brands, contents, formats, and the like based on their preferences.
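The real-time bidding logic above can be sketched with a simple auction. The patent does not specify the auction mechanics; a second-price rule with a publisher floor price, which is common in ad exchanges, is assumed here purely for illustration.

```python
def run_auction(bids, floor_price):
    """Second-price auction sketch: the highest bidder at or above the
    publisher's floor wins, paying the greater of the floor price and
    the second-highest qualifying bid. Returns None if no bid qualifies."""
    valid = sorted(((amt, bidder) for bidder, amt in bids.items()
                    if amt >= floor_price), reverse=True)
    if not valid:
        return None
    winner = valid[0][1]
    price = max(floor_price, valid[1][0]) if len(valid) > 1 else floor_price
    return winner, price
```

The floor price guards against undervalued bids, while the second-price rule keeps the winner from overpaying relative to the competition.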
[87] In embodiments, ad networks may be groups of publishers and/or advertisers that are connected together. An ad network may be an organization or an entity that may connect websites that want to host advertisements with advertisers who want to run advertisements. An ad network may be categorized as a representative network, a blind network, or a targeted network. Representative networks allow full transparency of content to the advertisers. On the other hand, blind networks may provide a low price to advertisers at the cost of the freedom to decide the placement of advertisements on the publisher's web page. Targeted networks may be directed to specific targeting technologies, including analyzing behavioral or contextual information of the users, as described herein, and as may be collected, stored, and analyzed by the virtual advertising platform 120.
[88] In embodiments, the virtual video content that is available through the virtual advertising platform 120, created within the virtual advertising platform 120, or associated with the virtual advertising platform 120 may be syndicated to users' client devices 158 using a syndication facility based at least in part on using contextual information associated with the virtual video content, and/or content similar to the virtual video content and including sponsored content 180, in order to determine the relevancy, using a relevancy determination facility, of virtual video content to a criterion, such as a keyword associated with the virtual video content, usage history of the content, and/or metadata that is associated with the virtual video content. Relevancy may be based at least in part on a relevancy between contextual data associated with the virtual video content and user data 170. In an example, a virtual video content selection criterion may be derived by the virtual advertising platform 120 based at least in part on a user's prior usage of, and behaviors within, a client device 158, including but not limited to prior video viewings made on the client device 158, such as an internet-enabled device 160. For example, a user viewing a virtualized digital video feed 142 on a client device 158 that derives from the virtual advertising platform 120 may have previously searched, retrieved, used, or interacted with virtual video content that is associated with metadata, such as a URL indicating an amateur video posting website like YouTube, or a keyword such as "The World Cup." An automated program running within or in association with the virtual advertising platform 120, or affiliated with the virtual advertising platform 120, may recognize keywords, metadata, or other material within this prior-viewed video content that indicates a relevance to the virtual video content genre "Sports."
Based at least in part on this, the user may be associated with a user profile datum/data indicating that the user is interested in sports. The selection of the virtual video content to syndicate to a user may be based at least in part on the user data and user profile data indicating an interest in sports, and may be automated, initiated individually by the virtual advertising platform 120, and/or initiated by the user or creator of virtual video content that is available within or associated with the virtual advertising platform 120, for example video content that is submitted to a website, such as YouTube, for manipulation.
[89] Contextual information that may be associated with video content may also include keywords, terms, or phrases located within or associated with the video content and/or virtual video content, the links to video content, the links from video content, click patterns and clickthroughs associated with prior use of the video content (including click patterns and clickthroughs associated with sponsored content appearing in association with the virtual video content), metadata, video content usage patterns including time, duration, depth and frequency of video content usage, the video content's origination host, genre(s) relating to the video content, and other indicia of video content context.
[90] The relevancy of the contextual information associated with video content may be indicated through the use of a relevancy score. The relevancy score may be a numerical summary of the statistical association between, for example, contextual video and virtual video content parameters (e.g., genre of video content) and user parameters (e.g., genres of video content previously downloaded by the user). The relevancy score may be a proprietary score assigned to a video content or virtual video content by the virtual advertising platform 120, or a third-party service provider that is associated with the virtual advertising platform 120. The relevancy scores of syndicated virtualized video content may be stored in a virtual video content relevance dictionary.
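One possible numerical summary of the association between content parameters and user parameters is a tag-overlap (Jaccard) score; this particular formula is an illustrative assumption, since the patent describes the relevancy score as proprietary.

```python
def relevancy_score(content_tags, user_tags):
    """Numerical summary of the association between content parameters
    (e.g., genres) and user parameters: Jaccard overlap scaled to 0-100."""
    content, user = set(content_tags), set(user_tags)
    if not content or not user:
        return 0
    return round(100 * len(content & user) / len(content | user))
```

Scores computed this way could be stored keyed by content identifier to form the relevance dictionary described above.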
[91] In embodiments, usage patterns may be obtained from a database of user data 170 and/or metadata relating to users of the virtual advertising platform 120. A wide range of usage patterns may be used to assist with the formation of queries (implicit and explicit) and with the retrieval and organization of video content search results, such as those presented to a recipient of a virtualized digital video feed 142 from the virtual advertising platform 120. An algorithm facility may include one or more modules or engines suitable for analyzing usage patterns to assist with such functions in forming a query. For example, an algorithm facility may analyze usage patterns based on time of day, day of week, day of month, day of year, work day patterns, holiday patterns, time of hour, patterns surrounding transactions, patterns surrounding incoming and outgoing video content, patterns of clicks and clickthroughs, patterns of communications, and any other patterns that can be discerned from data that is used within, or in association with, the user data 170 and/or a client device 158 on which video content is viewed. Usage patterns may be analyzed using various predictive algorithms, such as regression techniques (least squares and the like), neural net algorithms, learning engines, random walks, Monte Carlo simulations, and others as described herein.
[92] In embodiments, an API, or plurality of APIs, may be provided to enable and facilitate the management and use of user data, such as user profiles, and the operation of syndication within or in association with the virtual advertising platform 120.
[93] In embodiments, the determination of relevance, relevancy, associations, correspondence, and other measures of correlation and relationships between virtual video content, users, and metadata associated with both, as described herein, may be made based at least in part on statistical analysis. Statistical analysis may include, but is not limited to, techniques such as linear regression, logistic regression, decision tree analysis, Bayes techniques (including naive Bayes), K-nearest-neighbors analysis, collaborative filtering, data mining, and other statistical techniques.
[94] In an example, linear regression analysis may be used to determine the relationship between one or more independent variables, such as user profile data, and another dependent variable, such as a datum associated with a virtual video content (e.g., author name, genre, and the like), modeled by a least squares function called a linear regression equation. This function is a linear combination of one or more model parameters, called regression coefficients.
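For the single-predictor case, the least squares fit above has a closed form, sketched here with plain Python for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one independent variable:
    returns (intercept, slope) of the regression equation y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

With several independent variables (as in the user-profile case above), the same idea generalizes to solving the normal equations for a coefficient vector.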
[95] In another example, Bayes' theorem may be used to analyze user profile and/or video content data and virtual video content, such as contextual data that is associated with video content, data relating to virtualized digital video feeds 142 that are created by the virtual advertising platform 120, or some other type of data used within the virtual advertising platform 120. Using Bayes' theorem, conditional probabilities may be assigned to, for example, user profile variables, where the probabilities estimate the likelihood of a video content or virtual video content viewing and may be based at least in part on prior observations of the users' prior interactions with video content. Naive Bayes classifiers may also be used to analyze video content and virtual video content data. A naive Bayes classifier is a probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. A naive Bayes classifier assumes that the presence (or lack of presence) of a particular feature of a class is unrelated to the presence (or lack of presence) of any other feature. For example, a user of video content may be classified in a user profile as interested in video content related to Singapore if he has previously searched for, retrieved, downloaded, used, and/or interacted with video content, or other types of content, related to Singapore, responded to virtual video content related to Singapore, and the like. A Bayes classifier may consider properties, such as prior use of Singapore video content, prior downloads of Singapore-related video content, searches for Singapore video content, and the like to independently contribute to the probability that this user is interested in Singapore-related video content and may respond well to virtual video content, such as advertisements, that are related to Singapore and organizations within Singapore.
Once a classification is assigned within the user profile (e.g., User X = Singapore Fan), the user's information may be stored and shared by the virtual advertising platform 120 (e.g., sending the data to an ad server where the classification "Singapore Fan" may be used to select Singapore-related sponsored content, such as virtual video content to be used by the virtual advertising platform 120, to deliver to the user in association with the delivery and presentation of Singapore-related or other subject-related content). A single user profile may include a plurality of classifiers. For example, the Singapore fan's user profile may also include classifiers indicating that the user is a "native English speaker," or an "online poker player," and so forth, using the data that is associated with the user's actions and behaviors within the user dataset 170, and any other data sources as described herein. An advantage of the naive Bayes classifier is that it may require only a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because the variables are assumed to be independent, only the variances of the variables for each class need to be determined, and not the entire covariance matrix. This characteristic of naive Bayes may enable the classification even with limited training data.
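The naive Bayes classification of user profiles described above can be sketched with binary behavioral features. The feature names and Laplace smoothing constant here are illustrative assumptions.

```python
from math import log

def train_nb(samples):
    """samples: list of (feature_set, label). Returns, per label, the
    log-prior and Laplace-smoothed log-likelihood of each feature,
    treating features as independent (the naive assumption)."""
    labels, feats = {}, set()
    for features, label in samples:
        labels.setdefault(label, []).append(set(features))
        feats |= set(features)
    total = len(samples)
    model = {}
    for label, rows in labels.items():
        prior = log(len(rows) / total)
        like = {ft: log((sum(ft in r for r in rows) + 1) / (len(rows) + 2))
                for ft in feats}
        model[label] = (prior, like)
    return model

def classify(model, features):
    """Pick the label maximizing log-prior plus the sum of the
    log-likelihoods of the observed features."""
    def score(label):
        prior, like = model[label]
        return prior + sum(lp for ft, lp in like.items() if ft in features)
    return max(model, key=score)
```

Each observed behavior (a search, a download) contributes independently to the posterior, mirroring the "Singapore Fan" example above.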
[96] In embodiments, a behavioral data analysis algorithm may be used for developing behavioral profiles for use by the virtual advertising platform 120 for the selection of video content and virtual video content to be included in a virtualized digital video feed 142 that will be presented to a client device 158. Behavioral profiles may then be used for targeting advertisements and other virtual video content. A behavioral profile may include a summary of a user's video viewing activity, including the types of content and applications accessed, and other behavioral properties. The user's activity summary may include searches, browses, purchases, clicks, impressions with no response, or some other activity as described herein. The behavioral properties may be summarized as continuous interest scores of a video content category, past responsiveness to virtual video content (e.g., transactions made following viewing of advertising virtual video content), or some other property. Continuous frequency scores and continuous recency scores (e.g., how recently the activity occurred) may be considered as behavioral properties for use in constructing a behavioral profile. A user's activity summary and the behavioral properties may be categorized using the analytic techniques as described herein (e.g., naive Bayes classifiers). Video content data, and its characteristics, and sponsored content, such as virtual video content, that may be associated with the presentation of video content to a user (e.g., an advertisement), may also be used for the generation of a behavioral profile. For example, data such as advertisement identity, ad tag, user identity, advertisement spot identity, date, and user response may be used. In addition, content categories may be used for targeting virtualized digital video feeds 142 to users' client devices and/or advertisements based on a behavioral profile, or portion of a behavioral profile.
Further, content categories may be associated with each search, browse, download, purchase, or other online behavioral activities and/or transactions.
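The continuous frequency and recency scores described in paragraph [96] could, as one illustrative possibility (not the platform's actual scoring method), be computed with an exponential decay over activity timestamps. The half-life parameter and category names are assumptions.

```python
import math

# Illustrative sketch: continuous frequency and recency scores for one
# content category, from timestamps (in days before "now") of a user's
# activity events. The half-life value is a hypothetical tuning parameter.

HALF_LIFE_DAYS = 7.0

def recency_score(days_ago_list):
    """Score of the most recent event, decaying toward 0 as it ages."""
    if not days_ago_list:
        return 0.0
    newest = min(days_ago_list)
    return math.exp(-math.log(2) * newest / HALF_LIFE_DAYS)

def frequency_score(days_ago_list):
    """Sum of decayed event weights: frequent, recent activity scores highest."""
    return sum(math.exp(-math.log(2) * d / HALF_LIFE_DAYS)
               for d in days_ago_list)

# Hypothetical activity: the user viewed "football" content 1, 3, and 20 days ago.
football_views = [1.0, 3.0, 20.0]
profile = {
    "football": {
        "recency": round(recency_score(football_views), 3),
        "frequency": round(frequency_score(football_views), 3),
    }
}
print(profile)
```

Scores of this form are continuous, so they can feed directly into the classifiers described above rather than requiring discrete activity buckets.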
[97] A program of automatically syndicating virtual video content to a user may be based at least in part upon the relevance of contextual data associated with the video content and information known about a user, or group of users (e.g., user profile data, as described herein). The automation of syndicating video content and virtualized digital video feeds 142 may be based at least in part on associating metadata with the virtual video content. Contained within the metadata may be information regarding the relevance of the virtual video content to various users and/or user groups. As only one of many examples, metadata may contain relevance information such as: metadata indicating the relevance of a virtual video content to a user based at least in part on the user dataset 170, data related to the user's client device (e.g., a video playback capability), or some other type of data, and/or metadata indicating the average relevancy score that is associated with a video content viewing by a user from a given user category, and the like.
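One hypothetical shape such relevance metadata could take is shown below. The field names and scores are illustrative assumptions, not the platform's schema.

```python
# Hypothetical illustration of relevance metadata attached to a piece of
# virtual video content; all field names and values are assumptions.
virtual_content_metadata = {
    "content_id": "vvc-001",
    "relevance": {
        # per-classifier relevance scores drawn from user profile data
        "Singapore Fan": 0.92,
        "online poker player": 0.15,
    },
    "device_requirements": {"min_resolution": "720p", "codec": "h264"},
    "average_relevancy_by_category": {"sports viewer": 0.61},
}

def relevance_for(metadata, user_classifiers):
    """Best relevance score of this content for a user's profile classifiers."""
    scores = metadata["relevance"]
    return max((scores.get(c, 0.0) for c in user_classifiers), default=0.0)

print(relevance_for(virtual_content_metadata,
                    ["Singapore Fan", "native English speaker"]))
```

A syndication program could rank candidate virtual video content by such a score before assembling a virtualized digital video feed 142.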
[98] In embodiments, a server application, or plurality of server applications, designed for retrieving video content using the virtual advertising platform 120 may read or search websites, syndication feeds, or other content and/or data, looking for video content to use as part of creating a virtualized digital video feed 142 using the virtual advertising platform 120. In another embodiment, the virtual advertising platform 120 may be associated with a database or plurality of databases in which are stored the URLs or other data corresponding to and identifying video content and entities having stored video content. Once the server(s) confirms the website or other video content storage location is to receive and/or provide syndicated video content, including receiving virtualized digital video feeds 142 from the virtual advertising platform 120, the server may automatically receive, tag, and/or provide video content to the website or other entity, and/or send virtualized digital video feed 142 content to the website. In embodiments, a tag may be provided by any number of different entities or sources. For example, the tag may be provided by the virtual advertising platform 120, a third party tagging service, or some other tagging provider.
[99] In embodiments, an automated video syndication program and/or virtualized digital video feed 142 syndication program, may derive revenue, for example through a flat fee, revenue sharing, or no-fee service program offered to an advertiser, website, ad exchange, ad network, publisher, television broadcaster, or some other entity. In embodiments, parties such as a user of a client device 158 that is capable of playing video may be required to pay a usage fee to access, create, aggregate, and/or interact with virtual content and the creation, use, distribution, and viewing of virtualized digital video feeds 142. In another embodiment, sponsored content such as an advertisement may be presented to a user in conjunction with the presentation of virtual video content. The owner of the sponsored content, or other interested party, may be required to pay a fee for the right to present the sponsored content to a user's client device in the form of a virtualized digital video feed 142 that is created by the virtual advertising platform 120. This revenue may be shared among the virtual advertising platform 120 and a third party (e.g., website owner). Revenue may be derived from sponsors of virtual video content participating in the automated syndication program. Fees may be derived from the sponsors of virtual video content, a competitive bidding process, auction, flat fee service, or the like. The fee structure and bidding may be based at least in part on a relevancy score associated with a virtual video content.
[100] The disclosure concerns a system and method for content-triggered visual tagging of people and objects in live sports, in videos, and in reality TV entertainment shows. The method applies to predefined patterns occurring within cluttered scenes. Automated tagging is based on an automated image-processing method in which specific operators are trained to detect a set of predefined visual patterns. The method is invariant to minor pattern variations due to motion, perspective distortion, and non-rigid deformation. Live tagging metadata streams are generated and may be amalgamated with third party metadata streams/services to generate derived, higher-level metadata for numerous applications. For example, a video signal is acquired from a live broadcast via an internet-derived service, such as a streaming video. When a pre-defined set of visual patterns is detected, interactive content may be placed as desired on the images, utilizing geographic- and user-specific knowledge to customize the nature of this augmented content.
[101] In embodiments of the present disclosure, an interactive system of video customization and delivery (the "interactive video system") is provided that is enabled to facilitate the streaming of interactive customized video where components of the video stream may be personalized to the needs and interests of viewers and where viewers may interact with content and with one another in relation to the viewing experience. In embodiments, the interactive video system may enable video providers to include advertisements targeted to the interests of a given viewer in the video stream being watched by that viewer and such advertisements may be clickable, selectable, or otherwise responsive to the viewer and enabled for interaction by the viewer, such that the viewer may prompt the inclusion of additional advertising-related content by selecting the advertisement or otherwise expressing interest in it. Such advertising-related content may include the ability to complete a transaction while watching the video, possibly in a pop-up window.
In embodiments, the interactive video system may offer viewers one or more of the following features: (1) the real-time insertion of interactive pop-ups, possibly based on video content-based triggers or a combination of viewer-specific data and content-based triggers; (2) the ability to click on or otherwise select an object in the video stream to generate custom or personalized content, such as pop-up boxes with more information, advertising links, comments from other viewers, related videos, and other related images and information; (3) virtual camera controls, allowing viewers to switch camera angles, to zoom in or out, and to replay portions of the video stream; (4) social media features that allow viewers to interact with one another, such as by liking, sharing, commenting, gaming (e.g., video games, contests, or the like), betting, and tagging; and/or (5) access to social-network-generated content, such as comments, video responses, and Twitter "tweets" related to the video, or some other type of social media interaction or content.
[102] Referring to Figure 6, an embodiment of the disclosure may include one or more of the following components:
(1) core technology 601 that may include image processing, pattern recognition, and real-time computation tasks;
(2) algorithms for the processing and analysis of project elements, including metadata, web-infrastructure, and video-streaming, such algorithms possibly enabling the technology to interoperate with other systems, to provide user interfaces, and to manage logging, reporting and video processing, transferring, and storage;
(3) acquisition of video 602 from live broadcasts and internet-derived services, and using live frame extraction 603 to extract frames (or digital images) from video 602;
(4) predefinition of sets of visual patterns 604 (for example, logos) to be detected; capturing of logos using logo capture 605 and chameleonator 606 interfaces and storing in logo database 607; automated generation of detection models that may be based on the nature of the predefined patterns established in a training phase;
(5) automatic detection of the presence of the predefined visual patterns, possibly in cluttered scenes with multiple moving images, robust to appearance variations due to lighting changes, cameras, variations in pose, and partial occlusion using on-player logo detector 608, static logo detector 609 and shirt- pattern player identity detector 610;
(6) tagging metadata to the logo automatically (using Coded Target 611) or manually (using Curated Target 612) and storing in Coded Target database 613; Coded-target player identity detector 614 feeds the metadata into metadata engine 615;
(7) triggering of metadata messages by metadata engine 615 from metadata database 616 based on the detection of predefined patterns, such metadata containing numerous data points, including time, location, scale, and identity information;
(8) the transmission of metadata via metadata service 617 (for example, broadcast infrastructure, internet, or other data transmission method); the metadata can also be used for metadata statistics 620;
(9) the aggregation of video content-based metadata (with service rules/recipes 619) with real-time metadata streams from third party services using metadata aggregator 618, resulting in derived metadata streams; third party services including social media 621, betting services 622, live statistics 623, advertising services apparel/product 624, contextualised advertising 625, historical data 626, crowdsourced metadata 627 having 3rd party integration plugins 628; aggregated metadata is stored in aggregated metadata database 629
(10) annotation, augmentation, and live updating of broadcast and internet video via aggregated metadata API 630 on both primary video displays 631 and second screen mobile devices 632 such as computers, mobile phones, and tablets, for purposes of implementing (a) betting and/or gaming services 633, (b) advertising services, (c) interactive apparel and product placement for shopping 634, (d) social media services 635; (e) live player statistics 636; (f) annotations based on the retrieval of historical or data-based, tag-specific information; and (g) crowd-sourced metadata;
(11) live estimation of the duration over which predefined patterns appear in a video sequence;
(12) feedback processes for providing information on the operation of these components, such as testing and optimization of both the performance and the reliability of core technology elements and the algorithms, that may allow further refinement of these components and improvements to their efficacy and efficiency; and
(13) other improvements designed to facilitate the commercialization of the disclosure.
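The metadata message triggered by a detection (items (5) through (9) above) could take the following shape. This is a hypothetical sketch for illustration; the field names and the aggregation step are assumptions, not the system's actual message format.

```python
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch of the metadata message a detection might trigger,
# carrying the time, location, scale, and identity information named above.

@dataclass
class DetectionMetadata:
    pattern_id: str    # which predefined visual pattern was detected
    timestamp: float   # when it was detected in the stream
    location: tuple    # (x, y) pixel coordinates within the frame
    scale: float       # detected scale relative to the training model
    identity: str      # e.g. the player or sponsor the pattern identifies

def aggregate(detection, third_party_streams):
    """Merge a detection message with third party metadata into a derived stream."""
    derived = asdict(detection)
    for source, data in third_party_streams.items():
        derived[source] = data
    return derived

msg = DetectionMetadata("logo-acme", time.time(), (640, 360), 1.25, "player-10")
derived = aggregate(msg, {"live_statistics": {"goals": 2},
                          "betting_odds": {"player-10": 3.5}})
print(sorted(derived))
```

In the architecture of Figure 6, messages like this would flow from the detectors through metadata engine 615 to metadata aggregator 618, which joins them with the third party streams.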
[103] Referring to Figure 7, a preferred embodiment of the invention may include a number of modules and other functional elements, including, but not limited to: a detection and recognition module 701. Detection and recognition module 701 may identify logos and other elements in a source video feed or video stream (which is a sequence of digital images) or static imagery (digital image), based at least in part on the methods and systems as described herein. Detection and recognition module 701 can be a web crawler or web spider or the like, implemented with Node.js, PHP, Python, Ruby or any other suitable programming languages and platforms.
[104] The invention may also include an automated metadata tagging module 702 that assigns metadata tags to such identified elements, such metadata tagging module enabled to cross-reference the identified video elements with external databases, and assign codes to image regions. This form of Automated Content Recognition (ACR) using metadata tags works as an enhancement over currently available ACR methods, as these coded tags (meta-tags) act as digital markers to automatically recognise specific content in the video stream. [105] The invention may also include an administered metadata tagging module 703 enabling an administrator to designate elements of a video feed to be tagged with metadata codes, where such administered metadata tagging module may include automated features that may calculate the area of the image in each frame to be tagged and the frames that match the administrator's tagging instructions.
[106] The invention may also include a metadata aggregation module 704 to aggregate the automatically tagged metadata and the administrator-tagged metadata with third party metadata. Third party metadata means metadata obtained from other sources or third party services. Third party services non-exhaustively include social media, betting services, live statistics, advertising services apparel/product, contextualised advertising, historical data, and crowdsourced metadata. The aggregation of the tagged metadata with real-time metadata streams from third party services using metadata aggregation module 704 results in derived metadata streams. The aggregation of metadata is important, as the integration of third party metadata services or sources combined with the tagged metadata enables contextualised, interactive, and social metadata. This aggregated metadata allows for the augmentation and live updating of broadcast and internet video for purposes of implementing (a) betting and/or gaming services 633, (b) advertising services, (c) interactive apparel and product placement for shopping 634, (d) social media services 635, and (e) live player statistics 636, as depicted in Figure 6. The metadata aggregation module 704 can also aggregate the tagged metadata with any inherent metadata from the digital image.
[107] The invention may also include a static image selection and generation module 705 enabled to replace or superimpose a static image within video content, such as a logo, with another static image, such a module enabled to make use of viewer-specific data in this replacement process.
[108] The invention may also include a dynamic image selection and replacement module 706 enabled to replace or superimpose dynamic images, such as a waving flag or a logo on a jersey, with another dynamic image, such a module enabled to make use of viewer-specific data in the replacement process.
[109] The invention may also include a video stream integration module 707 enabled to integrate a logo, button, or other image into a source video, such that the integrated content is located in a logical and consistent place and does not obstruct other important elements of the video more than necessary.
[110] The static image selection and generation module 705, dynamic image selection and replacement module 706 and video stream integration module 707 can be part of an image selection and generation module.
[111] The invention may also include a feedback management module 708 enabled to interpret and respond to viewer input, for example, by changing database values in response to viewer inputs and then looping back to the image selection modules to reassess the need for customized content.
[112] In operation and referring to figure 8, in step 801, the administrator defines and designates the set of visual patterns for recognition prior to the video stream broadcast. This set of visual patterns can be a logo. This video stream broadcast can be a live broadcast or a rebroadcast of a previously recorded video.
[113] In step 802, the automated metadata tagging module 702 assigns metadata tags to the set of visual patterns.
[114] In step 803, the administrator uses the administered metadata tagging module 703 to manually assign metadata tags to the set of visual patterns.
[115] In step 804, the metadata aggregation module 704 aggregates the automatically tagged metadata and the administrator-tagged metadata with third party metadata. [116] In step 805, the detection and recognition module 701 detects and identifies the set of visual patterns in the video stream. This detection and identification may be accomplished (a) by recognition of distinct markings, including jersey numbers for athletes, (b) based on coded patterns placed on clothing, (c) based on object or face recognition algorithms, or (d) based on administrator or director designation.
[117] In step 806, when the pre-defined set of visual patterns is detected, the image selection and generation module displays and places content on the digital images in the video stream (a video stream is a sequence of digital images), utilizing geographic- and user-specific knowledge (i.e. metadata) to customize the nature of this augmented content. In other words, the content being displayed is customized or based upon the aggregated metadata. Preferably, the image selection and generation module superimposes the content over the set of visual patterns. This can be done by overlaying the content over the geo-spatial coordinates (which is part of the tagged metadata) of the set of visual patterns. Preferably, the content is interactive. The image selection and generation module can comprise static image selection and generation module 705, dynamic image selection and replacement module 706 and video stream integration module 707. The curated triggering of integrated content occurs when an administrator determines that tagging an object or event is appropriate or when such an administrator deems it appropriate to trigger the release of content based on existing tags.
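Step 806 can be sketched as follows, under assumed data shapes: a detected pattern is looked up in the aggregated metadata, content is customized using user-specific knowledge, and the result is superimposed at the pattern's geo-spatial coordinates. The content strings, region codes, and field names are illustrative assumptions.

```python
# Minimal sketch of step 806: customized interactive content is placed at the
# geo-spatial coordinates of a detected visual pattern. Data shapes are assumed.

def select_content(aggregated_metadata, pattern_id, user_profile):
    """Pick content for this pattern, customized by geographic/user metadata."""
    entry = aggregated_metadata[pattern_id]
    region = user_profile.get("region", "default")
    return entry["content_by_region"].get(region,
                                          entry["content_by_region"]["default"])

def overlay(frame_overlays, detection, content):
    """Record an interactive overlay at the detected pattern's coordinates."""
    x, y = detection["coords"]
    frame_overlays.append({"x": x, "y": y, "content": content,
                           "interactive": True})
    return frame_overlays

aggregated = {
    "logo-team-a": {
        "content_by_region": {
            "SG": "Buy the Team A jersey (ships to Singapore)",
            "default": "Buy the Team A jersey",
        }
    }
}
detections = [{"pattern_id": "logo-team-a", "coords": (412, 188)}]
user = {"region": "SG"}

overlays = []
for d in detections:
    overlays = overlay(overlays, d,
                       select_content(aggregated, d["pattern_id"], user))
print(overlays)
```

The coordinates here stand in for the geo-spatial coordinates carried in the tagged metadata, so the superimposed content tracks the detected set of visual patterns.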
[118] In an example of these embodiments involving the broadcast of a sporting event, athletes may wear jerseys with special markings that facilitate identification regardless of whether the player's face, jersey number, or name is visible. In another example of these embodiments involving sports broadcasts, a product logo may be pre-designated to appear on the jersey of a given viewer's favourite player. In a third example of these embodiments involving sports broadcasts, a message may be designated to appear when a goal or point is scored, such message possibly including the option for the viewer to get more information on the player who scored the goal or point, to bet on that player, or to purchase a product associated with that player, such as a jersey. In a fourth example of these embodiments involving sports broadcasts, a director or administrator may determine that a given moment in a match has a high level of excitement or emotion, and may then tag characteristics of the video stream as they exist at that moment or may trigger the release of messages that have been pre-designated or calculated to be appropriate for such moments.
[119] In embodiments, the disclosure may include a video tagging process, which may include one or more of the following sub-processes: (1) designation by administrators of logos or other images for recognition prior to video stream broadcast, where video stream broadcast refers either to a live broadcast or to a rebroadcast of previously recorded video; (2) generation of models of logos or other images prior to video stream broadcast; (3) detection and identification of people and objects in the video stream, which detection may be accomplished (a) by recognition of distinct markings, including jersey numbers for athletes, (b) based on coded patterns placed on clothing, (c) based on object or face recognition algorithms, or (d) based on administrator or director designation; (4) curated triggering of integrated content when an administrator or director determines that tagging an object or event is appropriate or when such an administrator or director deems it appropriate to trigger the release of content based on existing tags. In an example of these embodiments involving the broadcast of a sporting event, athletes may wear jerseys with special markings that facilitate identification regardless of whether the player's face, jersey number, or name is visible. In another example of these embodiments involving sports broadcasts, a product logo may be pre-designated to appear on the jersey of a given viewer's favorite player. In a third example of these embodiments involving sports broadcasts, a message may be designated to appear when a goal or point is scored, such message possibly including the option for the viewer to get more information on the player who scored the goal or point, to bet on that player, or to purchase a product associated with that player, such as a jersey. 
In a fourth example of these embodiments involving sports broadcasts, a director or administrator may determine that a given moment in a match has a high level of excitement or emotion, and may then tag characteristics of the video stream as they exist at that moment or may trigger the release of messages that have been pre-designated or calculated to be appropriate for such moments.
[120] In embodiments, a number of technologies may be employed relating to managing video streaming; managing and manipulating metadata; recognizing and manipulating images; providing and responding to user-interface tools; synchronizing the use of such technologies; and carrying out other processes helpful to the generation, processing, and transmission of large amounts of data.
[121] In embodiments, image recognition may be accomplished through the use of automatically generated detection models for identifying pre-defined spatial patterns in colour images extracted from video sequences, as established in an independent training phase. Different algorithmic approaches can be taken to identify the spatial patterns, utilizing approaches as discussed in Zitova and Flusser, 2003 [Zitova and Flusser, October 2003, Image registration methods: a survey, Image and Vision Computing, 21(11), 977-1000]. One approach is to use a prototypical example of a targeted pattern as the training model, comprising a 2-dimensional array of pixel values distributed across three colour channels. This may be achieved by cropping the prototype pattern from a video sequence. In this approach, either all three colour channels may be utilized, or an intensity image only may be utilized, in which the three colour channels are averaged. Normalised cross correlation can then be performed between the prototype and incoming images, in which the prototype is compared at each pixel location in the incoming image, resulting in a match score. Multiple calculations on the same incoming image are required to detect spatial patterns that undergo scale and rotation in a scene. The normalized cross correlation method is extended in this case by utilizing an iterative approach whereby the training model is scaled and rotated a number of times. The method thus computes match scores for a range of scales and rotations, and considers the highest score. A score exceeding a predetermined threshold is considered to be a detection. [122] In embodiments, interface with a wide range of video input sources may be enabled, including but not limited to interface with live satellite video, live cable video, live wireless video, live internet video, pre-recorded video, and other forms of video transmission and storage.
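The normalized cross correlation approach of paragraph [121] can be sketched as below. This is a simplified illustration, not the patented implementation: it averages colour channels to an intensity image, uses nearest-neighbour rescaling, iterates over a small set of scales, and omits the rotation iteration for brevity (rotation would be handled the same way, by rotating the training model).

```python
import numpy as np

# Sketch of the detection approach described in [121]: normalized cross
# correlation of a cropped prototype against an intensity image, iterated
# over a range of scales; the highest score above a threshold is a detection.

def to_intensity(rgb):
    """Average the three colour channels into an intensity image."""
    return rgb.mean(axis=2)

def rescale(template, s):
    """Nearest-neighbour rescale of the training model by factor s."""
    h, w = template.shape
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return template[np.ix_(rows, cols)]

def ncc(image, template):
    """Best normalised cross-correlation score and its (y, x) location."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -1.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = tn * np.sqrt((w ** 2).sum())
            if denom > 0:
                score = float((t * w).sum() / denom)
                if score > best:
                    best, best_pos = score, (y, x)
    return best, best_pos

def detect(image, prototype, scales=(0.5, 1.0, 2.0), threshold=0.9):
    """Try each scale, keep the highest score; a detection if above threshold."""
    results = [(s,) + ncc(image, rescale(prototype, s)) for s in scales]
    s, score, pos = max(results, key=lambda r: r[1])
    return (s, score, pos) if score >= threshold else None

# Synthetic check: embed the prototype in a larger random scene.
rng = np.random.default_rng(0)
proto = rng.random((8, 8))
scene = rng.random((32, 32))
scene[10:18, 5:13] = proto
image = to_intensity(np.stack([scene] * 3, axis=2))  # intensity of an "RGB" frame
result = detect(image, proto)
print(result)
```

A production system would replace the brute-force window loop with an FFT-based or library implementation, but the score computation is the same.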
[123] In embodiments, the manipulation of metadata, where metadata is a term referring to data about data, specifically data about the descriptive elements that define an asset, may be enabled. In the context of video production and delivery, metadata may be referred to as "TV metadata" and may include pieces of information and images that can be used to describe the content of video, such as subtitles, actors, characters, plot elements, duration, broadcast quality, reviews (for non-live video), and whether or not the video is being broadcast live. Use of TV metadata to customize content delivery may have the effect of making the video viewing experience more engaging, more social, and more rewarding for viewers and may create new business opportunities for content creators, broadcasters, and other players in the video production and distribution chain. Such manipulation of TV metadata may take a number of forms, including one or more of the following: (1) creation of metadata through automated means, administrator tagging, or viewer input; (2) the aggregation of various types of metadata from various sources, including combining live video content-based metadata with metadata from third party services; (3) transmission and communication of metadata; (4) processing of metadata; (5) editing of metadata; and (6) other use of metadata that may help to facilitate the functionality of the disclosure. In examples of these embodiments, viewer profile information may be combined with viewer inputs to create new metadata. In other embodiments, tagging of video may refer to linking of TV metadata with external sources of data through the use of interactive tags. In examples of these embodiments involving sporting events, when a viewer who is identified in profile metadata as a fan of Team A indicates through the user interface an interest in ordering a jersey, new metadata may be created indicating that the viewer is interested in a Team A player jersey.
[124] In embodiments, the web-streaming infrastructure may include a number of components and interfaces, including but not limited to (1) servers; (2) buffering technologies and resources; (3) communications interfaces; (4) processing arrays; and (5) gateways.
[125] In embodiments, video tagging applications may tag video, enable administrator tagging, interpret user inputs, calculate appropriate content for insertion, and insert content into video streams for delivery to viewers. Such applications may take a number of forms, which may include one or more of the following: (1) technology- demonstrator form designed to demonstrate functionality; (2) commercialized form to accomplish specific tasks, and (3) generalized form to accept third-party plug-ins, modules, and inputs.
[126] In embodiments, the insertion and management of clickable zones into video streams may be enabled. Such clickable zones may have one or more of the following characteristics: (1) they may be generated based on metadata, including a combination of viewer profile metadata and video-specific metadata; (2) they may incorporate an augmented HTML layer onto the screen superimposed on the video feed, such that elements of the video become clickable or otherwise subject to triggering by user input; (3) they may be selectable by a viewer using a mouse, trackpad, touch screen, gesture, voice command, stylus, or other viewer input mechanism; (4) they may be adjusted in size, shape, and position in real time as objects in the video move and camera angles change; (5) they may involve simultaneous viewing of a video feed on a television and on an internet-enabled computer, tablet, or portable device in cases where the clickable area is on the computer, tablet or portable device rather than the television; (6) when clicked or otherwise selected, they may result in changes to metadata and initiate changes to the video stream, such as the insertion of pop-up messages, integration of new customized content, or other changes appropriate based on the new metadata. In examples of these embodiments involving the broadcast of sporting events, team logos on the playing field may be made clickable, such that a viewer watching the sporting event on an iPad may touch one of the team logos to get additional information and user interface options. 
Such additional information and options may include, but are not limited to, one or more of the following: (1) team statistics; (2) player profiles; (3) ecommerce interfaces enabling the viewer to purchase team-branded products; (4) online betting interfaces enabling the viewer to place wagers on team members or the entire team; (5) schedules and ticket sales for future matches by that team; and (6) any other information or options determined to be appropriate by algorithms processing relevant metadata.
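The clickable-zone behaviour described above can be sketched as follows. This is a hypothetical illustration; the zone fields, actions, and coordinates are assumptions, and a real system would drive the zone updates from the detection metadata frame by frame.

```python
# Sketch of clickable-zone management: zones generated from metadata track
# moving objects, and a viewer's selection is hit-tested against them.

class ClickableZone:
    def __init__(self, zone_id, x, y, w, h, action):
        self.zone_id, self.action = zone_id, action
        self.x, self.y, self.w, self.h = x, y, w, h

    def update(self, x, y, w, h):
        """Re-position/resize in real time as the tracked object moves."""
        self.x, self.y, self.w, self.h = x, y, w, h

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

def handle_click(zones, px, py):
    """Return the action of the topmost zone containing the click, if any."""
    for zone in reversed(zones):  # last-drawn zone is on top
        if zone.contains(px, py):
            return zone.action
    return None

logo_zone = ClickableZone("team-a-logo", 100, 200, 60, 30, "show_team_stats")
zones = [logo_zone]
print(handle_click(zones, 120, 210))  # click inside the logo zone
logo_zone.update(300, 200, 60, 30)    # camera pans; the zone follows the logo
print(handle_click(zones, 120, 210))  # old position is no longer clickable
```

In the augmented-HTML-layer arrangement described above, each such zone would correspond to a positioned element superimposed on the video feed.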
[127] In embodiments, the insertion and management of customized viewer selectable augmented advertising into video streams may be enabled. Such customized advertising may have one or more of the following characteristics: (1) it may involve the automatic triggering of the annotation and augmentation of interactive, Internet Protocol-based advertisements through a predefined set of business rules; (2) it may be generated based on metadata; (3) it may involve the use of clickable zones, as defined herein; (4) it may be targeted to the needs and interests of individual viewers; (5) it may be viewable on a computer, tablet, or portable device, such as a mobile phone, iPad, iPod, laptop, desktop, or other internet-connected computing device; (6) it may incorporate an augmented HTML layer onto the screen superimposed on the video feed, such that elements of the video become clickable or otherwise subject to triggering by user input; and (7) it may involve simultaneous viewing of a video feed on a television and on an internet-enabled computer, tablet, or portable device. In examples of these embodiments involving the broadcast of sporting events, clickable advertising may be integrated into the video stream on player jerseys, such that viewers may click on the jerseys to call up ecommerce interfaces enabling the viewer to purchase player jerseys and other team-branded products. In other examples of these embodiments involving the broadcast of sporting events, contextualized (localized and personalized) advertisements may be automatically triggered when a predefined logo on a player jersey is automatically detected above a designated scale range. [128] In embodiments, real-time augmentation of video streams using data from sources that have been pre-placed in the video production environment may be enabled. Such sources may include sensors, thermometers, motion detectors, accelerometers, pressure gauges, and other devices for collecting information.
In an example involving a sporting event, data may be transmitted wirelessly from a sensor built into an athlete's shoe to determine the speed at which the athlete is running and that information may be inserted into the video feed of viewers whose metadata settings indicate an interest in knowing the performance statistics of that athlete. In another example involving a sporting event, data on an athlete's pulse rate may be transmitted wirelessly from a heart rate monitor worn by the player and that information may be inserted into the video feed of viewers whose metadata settings indicate an interest in knowing the vital statistics of that athlete. Such process of providing real-time data augmentation may also include integrated advertising that may feature brand placement, product ordering links, or other commerce-related features. In an example of these embodiments involving sporting events, data on the speed that an athlete wearing a shoe sensor is running might include the logo of the shoe manufacturer or a link to purchase a pair of similar shoes.
[129] In embodiments, social media integration may be enabled having at least one or more of the following features: (1) the aggregation of video content-based metadata with social media network APIs tagged to targeted elements of a video broadcast and made interactive through augmentation with clickable pop-ups; (2) integration into the video stream of links to other social media sites inserted in viewer-selectable images, such as Facebook "like" buttons or Twitter "tweet" buttons; (3) ability of viewers to enter comments on people, objects, events, video zones, video segments, and other identified content into the video stream; (4) ability of viewers to choose to view comments of other viewers generally, other viewers in their social networks only, or no comments at all; (5) ability of viewers to share thoughts, preferences, and reactions using a proprietary social network, such as by clicking on augmented "startags," where a startag is a clickable or otherwise selectable image integrated into the video stream whose selection by a viewer may make changes to that viewer's metadata tags; (6) integration of viewer input into live video broadcasts; and/or (7) tracking and reporting of social network usage to gauge brand engagement and to measure uptake. In examples of these embodiments involving sporting events, viewers may click on their favorite players, tagging those players using startags on the proprietary network or using Facebook "like" buttons, may tweet about players or plays using a tweet button, and may make comments about plays with such comments being made available to certain other viewers in real time and possibly also to other people through various social networks. For example, a viewer could comment about a call made by a referee.
In an example of these embodiments involving a game show broadcast, a contestant may be able to ask home viewers for their advice, with such advice being provided by viewers through user-interface tools, aggregated by servers, and transmitted back to the studio.
[130] In embodiments, elements of video streams may be annotated with links for more information, which may be available for viewers to access by clicking or otherwise selecting the links. Such annotations may include additional information on people, objects, or other elements of the video stream. Such links may be associated with augmented logos or may be available by clicking on non-augmented elements of the video stream, such as people and objects. In an example of these embodiments involving a sporting event, a clickable button may be integrated into the video stream that allows a viewer to click for statistics on a certain player. In another example of these embodiments involving a sporting event, a viewer may click on a player for more information about that player, including statistics, without having been prompted by a button or other inserted content. In yet another example of these embodiments involving a sporting event, a viewer may click on a goal (net) to get information on all the goals scored during the game including links to view video segments of those goals being scored.
[131] In embodiments, viewers may select from a menu of earlier segments of a video to replay. In an example of these embodiments involving the broadcast of a sporting event, a viewer may be able to click on a button labeled "top plays" to view a list of earlier segments of the match tagged as having the highest viewer interest with each segment on the list launching the replay of that segment of the video stream when clicked, possibly offering the viewer the option to view the segment from multiple camera angles or to zoom in while viewing the segment. Such replayed video segments may also list advertising sponsors and may offer links to information about the advertisers or to ecommerce product order pages.
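The "top plays" menu described above reduces to ranking tagged segments by recorded viewer interest. A minimal sketch, with hypothetical segment fields not drawn from the disclosure:

```python
def top_plays(segments, limit=3):
    """Rank earlier video segments by tagged viewer interest for the replay menu."""
    return sorted(segments, key=lambda s: s["interest"], reverse=True)[:limit]

# Hypothetical segments of a match, each tagged with an interest score.
segments = [
    {"minute": 12, "interest": 0.91},
    {"minute": 33, "interest": 0.74},
    {"minute": 58, "interest": 0.42},
    {"minute": 61, "interest": 0.15},
]
print([s["minute"] for s in top_plays(segments)])  # [12, 33, 58]
```

Selecting an entry from the resulting list would then launch the replay of that segment, possibly with camera-angle or zoom options as described.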
[132] In embodiments, a range of advertising options may be enabled, including one or more of the following: (1) integration of localized advertising into video streams, where "localized advertising" refers to advertising for products and services that match preferences identified by a given viewer's metadata tags, such advertising either appearing on existing elements of the video stream— including people, objects, and logos— or appearing in the form of pop-up advertisements added to the video stream based on viewer metadata tags; (2) sponsorship of enhanced features, such as live statistics, supplemental content, or segment replays; (3) lead generation based on viewer metadata; (4) revenue sharing based on paid services solicited by advertisers; (5) cost-per-click advertising; (6) insertion of social media campaigns into broadcasts; and (7) direct marketing to viewers by email, text message, or other means based on viewer contact preferences and metadata. Such localized advertising may include ecommerce capabilities consisting of contextualized and clickable advertisements offering viewers the choice of making instant purchases, saving a potential purchase in a shopping cart to buy later, or adding possible purchases to a wish list by clicking a "want" button. Such ecommerce capabilities may be automatically triggered using the annotation and augmentation of interactive, Internet Protocol-based products through a predefined set of business rules. In examples of these embodiments involving sporting events, an advertiser may have its logo inserted into a live video feed onto the uniforms of players, onto the field, or elsewhere in the video, such that its logo may be clicked by viewers to get more information about the advertiser or its products or to order its products through an ecommerce interface. 
In other examples of these embodiments involving sporting events, advertisements may appear in balloons or other pop-ups in areas of the video feed where they are not obscuring the event, such as in areas of the display not occupied by players, the ball, or other essential elements of the game.

[133] In embodiments, a betting module may be enabled that allows viewers to place bets on aspects of a video feed, such as a live video broadcast of a sporting event. Such betting module may involve the aggregation of video content-based metadata with real-time metadata streams from sports betting platforms and may be tagged to targeted viewers and objects and made interactive through the augmentation of clickable pop-ups. Such bets may be placed through the use of clickable buttons that identify a predetermined betting option, or may include customized bets that may be entered through various user-interface options, such as typing or voice commands. Initiating the process of placing a bet may cause a pop-up betting screen to be integrated into the video stream. In an example of betting by clicking a button with a predetermined betting option while viewing a live feed of a sporting event, a viewer may click a button that says, "Click here to bet $10 on the home team to win." In an example of customized betting on a live feed of a sporting event, a viewer may place a bet by voice command, such as, "I bet $10 that Smith will score a goal in the next ten minutes." In such examples, the betting odds and potential return on the bet may be displayed in a pop-up box that is integrated into the video stream and which may include a confirm bet button, as well as options for setting betting preferences, editing credit card information, and disabling the betting interface. In such examples, the betting pop-up box may be used by viewers to track their bets live during the sporting event.
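The potential return shown in such a confirmation pop-up can be computed directly from the stake and the quoted odds. A hedged illustration (decimal odds assumed for simplicity; not part of the disclosure):

```python
def potential_return(stake, decimal_odds):
    """Total payout (stake plus winnings) to display in the bet confirmation pop-up."""
    return round(stake * decimal_odds, 2)

# "Bet $10 on the home team to win" quoted at decimal odds of 2.5.
print(potential_return(10, 2.5))  # 25.0
```

A production betting module would of course also handle currency, odds formats, and bet settlement, none of which are specified here.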
[134] In embodiments, real-time statistics may be integrated into live video broadcasts and may have one or more of the following characteristics: (1) statistics may be displayed to viewers based on their viewing preferences, as indicated by viewer-specific metadata tags, or may be made available through buttons or other viewer-selectable links; (2) statistics may be displayed in text, graphically, or both; (3) statistics may be aggregated and calculated within the interactive video system or may be acquired from third-party sources or through the use of third-party technology; (4) statistics may include data relating only to the video being broadcast or may include historical information on past events; and (5) statistics may include both retrospective data and prospective probability calculations. In examples of these embodiments involving sporting events, the video stream may include augmented "more information" buttons on each player that, when clicked or otherwise selected, display statistical information on that player or the odds that the player will achieve a particular objective within a set time frame, such as scoring a goal within the next ten minutes or scoring the first goal of the match.
[135] Referring to Figure 9, in embodiments, a live popup insertion 901 may be integrated into the video stream based on pattern recognition of the number and text appearing on a player shirt.
[136] Referring to Figure 10, in embodiments, a static logo 1001 may be inserted onto a fixed surface in either a live or a recorded video.
[137] Referring to Figure 11, in embodiments, a static logo 1101 may be inserted in place of a logo in the source video feed in either a live or a recorded video.
[138] Referring to Figure 12, in embodiments, a logo may be integrated into a video feed on a moving target in the source video feed in either a live or a recorded video.
[139] In embodiments, an animation tool may be used to facilitate the integration of an inserted graphical element onto the surface of a moving object. In examples of this embodiment involving sporting events, such an animation tool may be used to make a logo appearing on a player jersey appear to flex and shift with the surface of the jersey, as the jersey moves in response to player motion, with changing camera angles, and in response to wind and other external factors.
[140] Referring to Figure 13, a live popup insertion 1301 based on identification of on-player coded targets is shown. A logo may be integrated into a live video feed, such that the logo displays on a moving target using coded targets. In examples of these embodiments involving sporting events, targets may be indicated on the jersey of a player and these targets may be used to insert the image of a logo of a product of interest to a given viewer onto the jersey of that player in the video feed being transmitted to that viewer.
[141] In embodiments, the disclosure may enable the automatic triggering of annotation and augmentation tagged to targeted elements of a video feed allowing the creation and transmission of crowdsourced metadata. In examples involving sporting events, viewers may create their own annotations tagged to a detected player, where the viewer-created annotations may be shared with other viewers according to viewer preferences or other settings contained in metadata.
[142] In embodiments, live estimation of advertising statistics may be enabled. Such estimation may involve tracking predefined special patterns during a live sports or reality-TV event and translating the tracking data into duration statistics normalized by the event duration, which may then be used to calculate the percentage of the broadcast during which the advertisement was visible to viewers. In examples of these embodiments involving sporting events, the amount of time a logo on an athlete's shirt is visible and clickable during the course of a game may be calculated and transmitted to the advertiser.
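The normalization step described above can be sketched as follows, assuming the tracker emits a list of (start, end) visibility intervals in seconds (a hypothetical interface, not specified by the disclosure):

```python
def visibility_percentage(visible_intervals_s, event_duration_s):
    """Share of the broadcast during which a tracked pattern (e.g. a shirt
    logo) was visible, normalized by the event duration."""
    visible = sum(end - start for start, end in visible_intervals_s)
    return 100.0 * visible / event_duration_s

# Logo tracked as visible during three intervals of a 90-minute (5400 s) match.
intervals = [(0, 600), (1200, 2400), (3000, 3300)]
print(round(visibility_percentage(intervals, 5400), 1))  # 38.9
```

The resulting percentage is the figure that could be reported to the advertiser after the event, or continuously during it.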
[143] In embodiments, data integration techniques and methods may be used as part of the virtual advertising platform 120, as described herein, to collect, join, merge, validate, analyze, and perform other data processing operations for digital video data, virtual video content data, user data, client device data (e.g., applications used to interact with virtual video content), and other data types as described herein. Data integration techniques and methods may be used to take the information collected from a plurality of digital video data sources in order to draw an inference from the collected information, to identify a potential change to a database based on newly received information, and to validate the change to the database based on the inference.
[144] In embodiments, data integration techniques and methods may be used to extract information from a plurality of digital video data sources, and the like, the data sources having a plurality of distinct data types, transforming the data from the data sources into a data type that can be represented in, for example, a database to be used by a virtual advertising platform 120, the database thereby integrating information from the distinct data types.
[145] In embodiments, the distinct data types may be selected from a group consisting of content data, user data, contextual information relating to video content and virtual video content, user behavioral information (including user profiles), demographic information, usage history, and other data sources and types as described herein. In embodiments, data integration techniques and methods may be used to apply rules, such as by a rules engine, in connection with creation, updating and maintenance of a data set, such as one stored or used in association with a virtual advertising platform 120. A rules engine may be applied to secondary change data, that is, data that comes from one or more data sources and that indicates that a change may be required in a data set, or to inference data, that is, data derived by inferences from one or more data sets. For example, a rule may indicate that a change in a data set will be made if a secondary data source confirms an inference, or if an inference is consistent with data indicated by a data source. Similarly, a rule might require multiple confirmations, such as requiring more than one data source or more than one inference before confirming a change to a data set (or creation of a new feature or attribute in the data set). Rules may require any fixed number of confirmations, whether by other data sets or by inferences derived from those data sets. Rules may also embody various processes or work flows, such as requiring a particular person or entity to approve a change of a given type or a change to a particular type of data.
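A confirmation-counting rule of the kind described may be sketched as follows (a simplified, hypothetical rules engine; real deployments would add work flows and approvals as noted above):

```python
def confirmations(inference, sources):
    """Count the data sources that agree with an inferred key/value pair."""
    return sum(1 for src in sources
               if src.get(inference["key"]) == inference["value"])

def apply_change(data_set, inference, sources, required=2):
    """Commit an inferred change only when enough independent sources confirm it."""
    if confirmations(inference, sources) >= required:
        data_set[inference["key"]] = inference["value"]
        return True
    return False

data_set = {}
inference = {"key": "favorite_team", "value": "home"}
sources = [{"favorite_team": "home"},
           {"favorite_team": "home"},
           {"favorite_team": "away"}]
print(apply_change(data_set, inference, sources), data_set)
# True {'favorite_team': 'home'}
```

Raising the `required` threshold implements the stricter multi-confirmation rules mentioned in the text.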
[146] In embodiments, data integration techniques and methods may be used to extract information from a plurality of digital video data sources, the data sources having a plurality of distinct data types, storing the data in a common data set, considering a change request associated with a database, such as a database that is associated with a virtual advertising platform 120, and using the common data set to validate the change request.

[147] In embodiments, data integration techniques and methods may be used to extract information from a plurality of digital video data sources, the data sources having a plurality of distinct data types, storing the data in a common data set, considering the common data set to identify potential changes to a database, such as a database that is associated with a virtual advertising platform 120, and initiating a change request based on the common data set.
[148] In embodiments, a data integration facility may be used to integrate data from a plurality of digital video data sources, the data sources including attributes relevant to a virtual advertising platform 120, wherein the data integration facility is selected from the group consisting of an extraction facility, a data transformation facility, a loading facility, a message broker, a connector, a service oriented architecture, a queue, a bridge, a spider, a filtering facility, a clustering facility, a syndication facility, and a search facility.
[149] In embodiments, a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
[150] In embodiments, a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule hierarchy to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
[151] In embodiments, a data integration facility may provide a rule hierarchy to determine a data type to use in a data set related to a system, such as a virtual advertising platform 120, the rule hierarchy applying a rule based on at least one of a data item, the richness of a data item, the reliability of a data item, the freshness of a data item, and the source of a data item and representing the rule hierarchy in a data integration rule matrix, wherein the matrix facilitates the application of a different rule hierarchy to a different type of data.
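Such a rule matrix can be represented as a mapping from data type to a ranking rule. A minimal sketch, with hypothetical data types and attribute scores not drawn from the disclosure:

```python
# Hypothetical rule matrix: each data type ranks candidate items by a
# different attribute (freshness for usage history, reliability for profiles).
RULE_MATRIX = {
    "usage_history": lambda item: item["freshness"],
    "user_profile": lambda item: item["reliability"],
}

def select_item(data_type, candidates):
    """Apply the hierarchy configured for this data type and pick one item."""
    return max(candidates, key=RULE_MATRIX[data_type])["value"]

candidates = [
    {"value": "soccer", "freshness": 0.9, "reliability": 0.4},
    {"value": "tennis", "freshness": 0.2, "reliability": 0.8},
]
print(select_item("usage_history", candidates))  # soccer
print(select_item("user_profile", candidates))   # tennis
```

The matrix structure is what allows a different hierarchy to govern each type of data, as the paragraph above describes.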
[152] In embodiments, a data integration facility may be used to integrate data from a plurality of digital video data sources, taking an inference drawn from analysis of data collected by a plurality of data sources, applying a data integration rule matrix to determine the extent to which to apply the inference, and updating a data set based on the application of the rule.
[153] A data integration facility may be used in association with a system, such as a virtual advertising platform 120, to iteratively collect and make inferences about data that is collected for use in the virtual advertising platform 120. Iteration may be performed a plurality of times, or continuously, as an on-going process to collect and make inferences about data attributes. Iteration may be a function of the entire data set (e.g., an entire virtual video content usage history of a user), or a function of specific data segments (e.g., virtual video content usage history < 24 hours). Data attributes may be stored for subsequent comparison to previously collected data inference attributes. In embodiments, this process may be continuous, and represent an ongoing comparison of inferred attributes for the purpose of detecting differences over time.
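The segment-based iteration described above (full history versus, say, the last 24 hours) can be sketched as a windowed inference. All names and the event structure are hypothetical:

```python
from datetime import datetime, timedelta

def dominant_tag(events, now, window=None):
    """Infer the most frequent content tag over the full usage history or,
    when a window is given, over a recent data segment only."""
    counts = {}
    for event in events:
        if window is not None and now - event["time"] > window:
            continue  # event falls outside the requested segment
        counts[event["tag"]] = counts.get(event["tag"], 0) + 1
    return max(counts, key=counts.get) if counts else None

now = datetime(2014, 3, 14, 12, 0)
events = [
    {"tag": "soccer", "time": now - timedelta(days=10)},
    {"tag": "soccer", "time": now - timedelta(days=9)},
    {"tag": "tennis", "time": now - timedelta(hours=2)},
]
print(dominant_tag(events, now))                              # soccer
print(dominant_tag(events, now, window=timedelta(hours=24)))  # tennis
```

Storing both inferred attributes and comparing them on each iteration is what lets the facility detect differences over time, as noted above.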
[154] The data integration facility may include at least one of a bridge, a message broker, a queue and a connector. Therefore, a useful data source may be associated with a data integration facility via computer code, hardware, or both, that establishes a connection between the source and the data integration facility. For example, the bridge may include code that takes data in a native data type (such as data in a mark-up language format), extracts the relevant portion of the data, and transforms the data into a different format, such as a format suitable for storing the data for use in a virtual advertising platform 120, or by users of the virtual advertising platform 120. The message broker may extract data from a data source (e.g., website), place the data in a queue or storage location for delivery to a target location (e.g., virtual advertising platform 120 server), and deliver the data at an appropriate time and in an appropriate format for the target location (e.g., to a user of the virtual advertising platform 120). In embodiments, the target location may be a virtual advertising platform 120 database, a data mart, a metadata facility, or a facility for storing or associating an attribute within the virtual advertising platform 120. The connector may comprise an application programming interface or other code suitable to connect source and target data facilities, with or without an intermediate facility such as a data mart or a data bag. The connector may, for example, include AJAX code, a SOAP connector, a Java connector, a WSDL connector, or the like.
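The bridge behavior described above, taking data in a native mark-up format and emitting a flat record, can be illustrated with a small sketch (the XML schema and field names are hypothetical):

```python
import xml.etree.ElementTree as ET

def bridge_extract(native_xml):
    """Bridge sketch: take data in a native mark-up format, extract the
    relevant portion, and emit a flat record suitable for a platform store."""
    root = ET.fromstring(native_xml)
    return {
        "user": root.findtext("user"),
        "clicked_logo": root.findtext("event/logo"),
    }

native = "<view><user>v1</user><event><logo>acme</logo></event></view>"
print(bridge_extract(native))  # {'user': 'v1', 'clicked_logo': 'acme'}
```

A message broker would then queue such records for delivery to the target location at an appropriate time.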
[155] In embodiments, the data integration facility may be used to integrate data from a plurality of digital video data sources, the data sources including attributes relevant to, for example the virtual advertising platform 120. The data integration facility may include a syndication facility. The syndication facility may publish information in a suitable format for further use by computers, services, or the like, such as in aid of creating, updating or maintaining a virtual advertising platform 120 database, such as one related to user behavioral profiles, publishers, or some other type of data used by the virtual advertising platform 120, as described herein. For example, the syndication facility may publish relevant data in RSS, XML, OPML or similar format, such as user data, wireless operator data, ad conversion data, publisher data, and many other types of information that may be used by the virtual advertising platform 120. The syndication facility may be configured by the data integration facility to feed data directly to a virtual advertising platform 120 database, such as a user profile database, in order to populate relevant fields of the database with data, to populate attributes of the database, to populate metadata in the database, or the like. In embodiments the syndicated data may be used in conjunction with a rules engine, such as to assist in various inferencing processes, to assist in confirming other data, or the like.
[156] In embodiments, the data integration facility may include a services oriented architecture facility. In the services oriented architecture facility, one or more data integration steps may be deployed as a service that is accessible to various computers and services, including services that assist in the development, updating and maintenance of a virtual advertising platform 120 database, such as a user profile database, or the like. Services may include services to assist with inferences, such as by implementing rules, hierarchies of rules, or the like, such as to assist in confirmation of data from various sources. Services may be published in a registry with information about how to access the services, so that various data integration facilities may use the services. Access may be through APIs, connectors, or the like, such as using Web Services Definition Language, enterprise Java beans, or various other code suitable for managing data integration in a services oriented architecture.
[157] In embodiments, the data integration facility may include at least one of a spidering facility, a web crawler, a clustering facility, a scraping facility and a filtering facility. The spidering facility, or other similar facility, may thus search for data, such as available from various domains, services, operators, publishers, and sources, available on the Internet or other networks, extract the data (such as by scraping or clustering data that appears to be of a suitable type), filter the data based on various filters, and deliver the data, such as to a virtual advertising platform 120 database. Thus, by spidering relevant data sources, the data integration facility may find relevant data, such as user behavioral data, contextual data relating to content, publisher data, and many other types of information (of the types variously described herein). The relevant data may be used to draw inferences, to support inferences, to contradict inferences, or the like, with the inference engine, such as to assist in creation, maintenance or updating of a virtual advertising platform 120 database. The data may also be used to populate data fields directly, to populate attributes associated with data items, or provide metadata.
[158] The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, and instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements.
[159] The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cellular network may be a GSM, GPRS, 3G, EVDO, mesh, or other network type.
[160] The methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer to peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station.
[161] The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like.
[162] The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another.
[163] The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium.
[164] The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions.
[165] Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
[166] While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood to be in the broadest sense allowable by law.

Claims

1. A method of displaying content, comprising the steps of:
defining a set of visual patterns;
tagging metadata to the set of visual patterns;
detecting the set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion;
aggregating the tagged metadata with third party metadata; and
displaying content along with the digital image, wherein the content is based on the aggregated metadata.
2. The method of claim 1, wherein the step of displaying the content along with the digital image comprises superimposing the content over the set of visual patterns on the digital image to augment the digital image.
3. The method of claim 1 or claim 2, wherein the digital image is part of a sequence of digital images.
4. The method of any of the preceding claims, wherein the content is a HyperText Markup Language layer which is clickable and responsive to user input.
5. The method of claim 3 or claim 4, wherein the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
6. The method of any one of claims 3 to 5, further comprising the step of generating a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
7. The method of claim 5 or claim 6, wherein the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
8. The method of any one of claims 5 to 7, wherein the message of comments comprises comments from an audience at a sporting event.
9. The method of any one of the preceding claims, further comprising the step of estimating a time for which the content is displayed along with the digital image.
10. The method of any one of the preceding claims, wherein the step of detecting the set of visual patterns within the digital image is performed by a web crawler.
11. The method of any one of the preceding claims, wherein the step of tagging metadata to the set of visual patterns is performed by manual or automatic means.
12. The method of any one of the preceding claims, wherein the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
13. A system for displaying content comprising at least one processor programmed to implement:
a detection and recognition module to detect a set of visual patterns within a digital image, wherein the detection of the set of visual patterns is robust to appearance variations caused by lighting changes, camera angle and partial occlusion;
an automated metadata tagging module to automatically tag metadata to the set of visual patterns;
an administrated metadata tagging module to allow manual tagging of metadata to the set of visual patterns;
a metadata aggregation module that aggregates the tagged metadata with third party metadata; and
an image selection and generation module to display content along with the digital image, wherein the content is based on the tagged metadata.
14. The system of claim 13, wherein the image selection and generation module displays the content along with the digital image by superimposing the content over the set of visual patterns on the digital image to augment the digital image.
15. The system of claims 13 or 14, wherein the digital image is part of a sequence of digital images.
16. The system of claims 13 to 15, wherein the content is a HyperText Markup Language layer which is clickable and responsive to user input, and the at least one processor is further programmed to implement a feedback management module to interpret the user input.
17. The system of claims 15 or 16, wherein the content is selected from the group consisting of: a message for a betting service, an advertising message, a social-media message, a message concerning the detected set of visual patterns, a message of historical data concerning the detected set of visual patterns, and a message of comments about an event depicted by the sequence of digital images.
18. The system of claims 15 to 17, wherein the at least one processor is further programmed to generate a detection model to detect the set of visual patterns, wherein the detection model is trained to recognize and identify the set of visual patterns in the sequence of digital images.
19. The system of claims 17 or 18, wherein the sequence of digital images represents a sporting event and the message of historical data concerning the detected set of visual patterns comprises information concerning past performance of a player.
20. The system of claims 17 to 19, wherein the message of comments comprises comments from an audience at a sporting event.
21. The system of claims 13 to 20, wherein the at least one processor is further programmed to estimate a time for which the content is displayed along with the digital image.
22. The system of claims 13 to 21, wherein the detection and recognition module is a web crawler.
23. The system of claims 13 to 22, wherein the tagged metadata comprises metadata concerning an identity of the set of visual patterns.
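The module architecture recited in claims 13-14 (pattern detection, automatic metadata tagging, aggregation with third-party metadata, and content superimposition) can be sketched as follows. This is a minimal illustration only: all class names, data shapes, and example values are hypothetical and do not come from the specification.

```python
# Hypothetical sketch of the claimed pipeline (claims 13-14).
# Names and data structures are illustrative, not from the patent.

class DetectionModule:
    """Detects a set of visual patterns within a digital image (claim 13)."""
    def detect(self, image_id):
        # A real implementation would run a trained detection model
        # robust to lighting, camera angle and partial occlusion (claim 18).
        return [{"pattern": "player_jersey", "bbox": (10, 20, 50, 80)}]

class AutoTagger:
    """Automatically tags metadata to the detected patterns (claim 13)."""
    def tag(self, patterns):
        for p in patterns:
            # Identity metadata per claim 23; manual tagging (claim 11)
            # could amend this dict via an administrated tagging module.
            p["metadata"] = {"identity": p["pattern"]}
        return patterns

class Aggregator:
    """Aggregates tagged metadata with third-party metadata (claim 13)."""
    def aggregate(self, patterns, third_party):
        for p in patterns:
            p["metadata"].update(third_party.get(p["pattern"], {}))
        return patterns

class ContentGenerator:
    """Selects content from the metadata and superimposes it over the
    detected patterns to augment the image (claim 14)."""
    def render(self, image_id, patterns):
        overlays = []
        for p in patterns:
            # A clickable HTML layer over the pattern, as in claim 16.
            overlays.append({"bbox": p["bbox"],
                             "html": f"<div>{p['metadata']}</div>"})
        return {"image": image_id, "overlays": overlays}

# Wiring the modules together for one frame of a sequence (claim 15):
patterns = DetectionModule().detect("frame_001")
patterns = AutoTagger().tag(patterns)
patterns = Aggregator().aggregate(
    patterns, {"player_jersey": {"past_performance": "10 goals"}})  # claim 19
result = ContentGenerator().render("frame_001", patterns)
```

In this sketch the overlay carries both the identity metadata and the aggregated third-party data, so the displayed content is "based on the tagged metadata" in the sense of claim 13; the choice of a dict-per-pattern is purely for brevity.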
PCT/SG2014/000126 2013-03-14 2014-03-14 An interactive system for video customization and delivery WO2014142758A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361781322P 2013-03-14 2013-03-14
US61/781,322 2013-03-14

Publications (1)

Publication Number Publication Date
WO2014142758A1 true WO2014142758A1 (en) 2014-09-18

Family

ID=51537216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2014/000126 WO2014142758A1 (en) 2013-03-14 2014-03-14 An interactive system for video customization and delivery

Country Status (1)

Country Link
WO (1) WO2014142758A1 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9154942B2 (en) 2008-11-26 2015-10-06 Free Stream Media Corp. Zero configuration communication between a browser and a networked media device
US9258383B2 (en) 2008-11-26 2016-02-09 Free Stream Media Corp. Monetization of television audience data across muliple screens of a user watching television
US9386356B2 (en) 2008-11-26 2016-07-05 Free Stream Media Corp. Targeting with television audience data across multiple screens
US9519772B2 (en) 2008-11-26 2016-12-13 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
WO2017011770A1 (en) * 2015-07-16 2017-01-19 Vizio Inscape Technologies, Llc System and method for improving work load management in acr television monitoring system
US9560425B2 (en) 2008-11-26 2017-01-31 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US9583144B2 (en) 2015-02-24 2017-02-28 Plaay, Llc System and method for creating a sports video
US9838753B2 (en) 2013-12-23 2017-12-05 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
WO2018034954A1 (en) * 2016-08-14 2018-02-22 The Ticket Fairy, Inc. Metadata based generation and management of event presentations
US9906834B2 (en) 2009-05-29 2018-02-27 Inscape Data, Inc. Methods for identifying video segments and displaying contextually targeted content on a connected television
US9955192B2 (en) 2013-12-23 2018-04-24 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
US9961388B2 (en) 2008-11-26 2018-05-01 David Harrison Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements
US9986279B2 (en) 2008-11-26 2018-05-29 Free Stream Media Corp. Discovery, access control, and communication with networked services
US10080062B2 (en) 2015-07-16 2018-09-18 Inscape Data, Inc. Optimizing media fingerprint retention to improve system resource utilization
US10116972B2 (en) 2009-05-29 2018-10-30 Inscape Data, Inc. Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device
WO2018217176A1 (en) * 2017-05-22 2018-11-29 Rahel Saranga System for online advertisement tracking and earning distribution
US10169455B2 (en) 2009-05-29 2019-01-01 Inscape Data, Inc. Systems and methods for addressing a media database using distance associative hashing
US10192138B2 (en) 2010-05-27 2019-01-29 Inscape Data, Inc. Systems and methods for reducing data density in large datasets
US10334324B2 (en) 2008-11-26 2019-06-25 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10375451B2 (en) 2009-05-29 2019-08-06 Inscape Data, Inc. Detection of common media segments
US10395693B2 (en) 2017-04-10 2019-08-27 International Business Machines Corporation Look-ahead for video segments
US10405014B2 (en) 2015-01-30 2019-09-03 Inscape Data, Inc. Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device
US10419541B2 (en) 2008-11-26 2019-09-17 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US10482349B2 (en) 2015-04-17 2019-11-19 Inscape Data, Inc. Systems and methods for reducing data density in large datasets
US10567823B2 (en) 2008-11-26 2020-02-18 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10631068B2 (en) 2008-11-26 2020-04-21 Free Stream Media Corp. Content exposure attribution based on renderings of related content across multiple devices
US10873788B2 (en) 2015-07-16 2020-12-22 Inscape Data, Inc. Detection of common media segments
US10880340B2 (en) 2008-11-26 2020-12-29 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US10902048B2 (en) 2015-07-16 2021-01-26 Inscape Data, Inc. Prediction of future views of video segments to optimize system resource utilization
US10949458B2 (en) 2009-05-29 2021-03-16 Inscape Data, Inc. System and method for improving work load management in ACR television monitoring system
US10977693B2 (en) 2008-11-26 2021-04-13 Free Stream Media Corp. Association of content identifier of audio-visual data with additional data through capture infrastructure
US10983984B2 (en) 2017-04-06 2021-04-20 Inscape Data, Inc. Systems and methods for improving accuracy of device maps using media viewing data
CN112689192A (en) * 2019-10-18 Tencent America LLC Method, system and storage medium for media processing and streaming
WO2021094828A1 (en) * 2019-11-11 2021-05-20 Cruz Moya Jose Antonio Video processing and modification
WO2021161132A1 (en) * 2020-02-13 2021-08-19 Edisn Media And Tech Solutions Pvt. Ltd. System and method for analyzing videos in real-time
US11115696B2 (en) * 2019-07-10 2021-09-07 Beachfront Media Llc Programmatic ingestion and STB delivery in ad auction environments
US20220027624A1 (en) * 2019-04-08 2022-01-27 Google Llc Media annotation with product source linking
US11272248B2 (en) 2009-05-29 2022-03-08 Inscape Data, Inc. Methods for identifying video segments and displaying contextually targeted content on a connected television
CN114205677A (en) * 2021-11-30 Zhejiang University Short video automatic editing method based on prototype video
US11308144B2 (en) 2015-07-16 2022-04-19 Inscape Data, Inc. Systems and methods for partitioning search indexes for improved efficiency in identifying media segments
WO2024078751A1 (en) * 2022-10-10 2024-04-18 Amine Arezki Method and related systems for dynamically overlaying an image on an object in a streamed video sequence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006121986A2 (en) * 2005-05-06 2006-11-16 Facet Technology Corp. Network-based navigation system having virtual drive-thru advertisements integrated with actual imagery from along a physical route
WO2009017983A2 (en) * 2007-07-30 2009-02-05 Yahoo! Inc. Textual and visual interactive advertisements in videos
US20090259941A1 (en) * 2008-04-15 2009-10-15 Pvi Virtual Media Services, Llc Preprocessing Video to Insert Visual Elements and Applications Thereof



Similar Documents

Publication Publication Date Title
WO2014142758A1 (en) An interactive system for video customization and delivery
US9013553B2 (en) Virtual advertising platform
JP6803427B2 (en) Dynamic binding of content transaction items
US20210409800A1 (en) Apparatus and method for gathering analytics
US11184676B2 (en) Automated process for ranking segmented video files
JP5649303B2 (en) Method and apparatus for annotating media streams
KR101525417B1 (en) Identifying a same user of multiple communication devices based on web page visits, application usage, location, or route
US20170228781A1 (en) Systems and methods for identifying, interacting with, and purchasing items of interest in a video
JP6713414B2 (en) Apparatus and method for supporting relationships associated with content provisioning
AU2017330571A1 (en) Machine learning models for identifying objects depicted in image or video data
EP2751782A1 (en) Virtual advertising platform
US10922744B1 (en) Object identification in social media post
US11432046B1 (en) Interactive, personalized objects in content creator's media with e-commerce link associated therewith
Zhou Digital Transformation of Advertising: Trends, Strategies, and Evolving User Preferences in Online Advertising

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 14763604; Country of ref document: EP; Kind code of ref document: A1
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 14763604; Country of ref document: EP; Kind code of ref document: A1