GB2557627A - Multimedia system & method - Google Patents

Multimedia system & method

Info

Publication number
GB2557627A
GB2557627A GB1621168.2A GB201621168A GB2557627A GB 2557627 A GB2557627 A GB 2557627A GB 201621168 A GB201621168 A GB 201621168A GB 2557627 A GB2557627 A GB 2557627A
Authority
GB
United Kingdom
Prior art keywords
objects
user
multimedia
algorithm
ordering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1621168.2A
Other versions
GB201621168D0 (en)
Inventor
Cox Jasmine
Northwood Chris
Robinson Ben
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Broadcasting Corp filed Critical British Broadcasting Corp
Priority to GB1621168.2A priority Critical patent/GB2557627A/en
Publication of GB201621168D0 publication Critical patent/GB201621168D0/en
Priority to PCT/GB2017/053731 priority patent/WO2018109468A1/en
Publication of GB2557627A publication Critical patent/GB2557627A/en
Withdrawn legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/02Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H60/06Arrangements for scheduling broadcast services or broadcast-related services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/68Systems specially adapted for using specific information, e.g. geographical or meteorological information
    • H04H60/73Systems specially adapted for using specific information, e.g. geographical or meteorological information using meta-information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26258Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/85406Content authoring involving a specific file format, e.g. MP4 format
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8541Content authoring involving branching, e.g. to different story endings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8543Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Library & Information Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Multimedia content which is delivered to a user device has objects each having metadata defining a step, an expected duration and dependencies to other objects, at least some of the objects having associated multimedia content. An ordering algorithm operates on the objects and associated multimedia using the expected duration and dependencies of the objects, and the multimedia is rendered on the user device. These objects are arranged into multiple threads, each comprising multiple objects, one of the threads being defined as a user thread that contains objects defined as user objects. The user thread requires interaction from a user. This provides a way for ordering and retrieval of the objects to operate in response to user interaction. The metadata may also define external resources needed for completion of a step.

Description

(54) Title of the Invention: Multimedia system & method
Abstract Title: Multimedia content with metadata and ordering algorithm (57) Multimedia content which is delivered to a user device has objects each having metadata defining a step, an expected duration and dependencies to other objects, at least some of the objects having associated multimedia content. An ordering algorithm operates on the objects and associated multimedia using the expected duration and dependencies of the objects, and the multimedia is rendered on the user device. These objects are arranged into multiple threads, each comprising multiple objects, one of the threads being defined as a user thread that contains objects defined as user objects. The user thread requires interaction from a user. This provides a way for ordering and retrieval of the objects to operate in response to user interaction. The metadata may also define external resources needed for completion of a step.
Figure GB2557627A_D0001
At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy.
Figures GB2557627A_D0002 to GB2557627A_D0029 (drawing pages; the only clearly legible drawing text is the flowchart of Figure 11: Start, Calculate priority list from graph, Find next step to schedule, Calculate step's earliest possible start time from prerequisite steps, Increment the start time by 1 second, Schedule the step and remove from priority list, End)
MULTIMEDIA SYSTEM & METHOD
BACKGROUND OF THE INVENTION
This invention relates to delivery of multimedia to a user device.
Multimedia content such as text, graphics, audio and video may be delivered to user devices such as smartphones, tablets, computers and televisions in a variety of ways. Multimedia content, in particular instructional content, has typically been stored as simple, sequential lists of steps for delivery to a user device in the defined order. While this is a useful way of presenting instructional content, a considerable amount of useful information is lost in the process of reducing the multimedia to such a simple sequential list.
Various attempts have been made to improve upon the delivery and sequencing of multimedia content for delivery to user devices. One such attempt is referred to as Object-Based Broadcasting. The BBC R&D UX team have conducted several Object-Based Broadcasting experiments in the past. For example, a “Responsive Radio” project explored a system which allows a user to tailor the length of a radio documentary to their desired duration. This works with the concept of an abstract data model that is then rendered down into a timeline fitting the user's preferences.
Other techniques for scheduling are well known. For example, scheduling algorithms using the concept of a critical path have been known since the critical path method was published in 1959. The basic principle of the critical path method is that it inspects different branches of a directed graph to find the longest path(s) from the start to the end, which then reveals the order in which the tasks (nodes in the graph) should be carried out.
Another aspect of scheduling problems is “resource levelling”. This is the process of assigning resources to execute tasks. In the wider field of project management, the “Critical chain” method aims to provide a version of the critical path method that factors in these concerns.
SUMMARY OF THE INVENTION
We have appreciated the need to improve upon the way in which multimedia content is delivered on user devices, to allow flexible presentation of multimedia to a user, whilst respecting constraints related to the multimedia content.
We have appreciated various improvements that may be made by structuring multimedia content as objects and using a data model and an algorithm to define how and when the objects of multimedia should be presented on a user device.
In broad terms, the invention provides a method of delivering multimedia content on a user device, comprising receiving objects of multimedia, each object having multimedia content and metadata defining a step and dependencies to other objects of multimedia, and operating an algorithm to retrieve and assert the multimedia objects in accordance with the metadata.
Various advantages of a method and system embodying the invention may be seen. The data model may contain a concept of a step, since this can be considered as a fundamental building block of a whole piece of multimedia. The data model may also contain a list of dependencies between steps - multimedia is often a sequential form of media and it matters which order the steps are carried out in.
The data layer may also then be able to perform “scheduling” on steps. By looking at the steps required for a given piece of multimedia content, the scheduler should automatically decide the best order to perform the steps in, taking into account the fact that some steps can happen in the background and some can't. The data model may annotate each “step” with some metadata saying whether it can be run as a background task.
The schedule may be recalculated in real time as the multimedia is delivered and the user interacts with the system, in response to changing expected times of objects as the experience progresses, as well as the changing of the objects in the schedule in response to interactions from the user and external stimulus.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail by way of example with reference to the drawings, in which:
Figure 1: is a schematic diagram of a device embodying the invention;
Figure 2: shows tasks that are stored as objects in a media store;
Figure 3: shows a sequence of steps that may form instructional content;
Figure 4: shows the steps of Figure 3 with the addition of an expected duration of each step;
Figure 5: shows a simple linear timeline of tasks in a priority list;
Figure 6: shows steps that form example instructional content of the type that may be stored as objects in an embodiment of the invention;
Figure 7: shows a schedule rendered from the steps of Figure 6;
Figure 8: shows a problem of step ordering;
Figure 9: shows the steps of Figure 6, which form example instructional content, with the addition of consecutive labelling of the type that may be stored as objects in an embodiment of the invention;
Figure 10: shows an optional enforcement of consecutive edges for tasks categorised as “user” tasks;
Figure 11: shows a rendering algorithm that may be operated on objects embodying the invention;
Figure 12: shows the output of the rendering algorithm of Figure 11; and
Figure 13: shows the overall scheduling algorithm of an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention may be embodied in a method of processing multimedia for asserting on a device, devices for performing such processing, transmitters, receivers and systems involving such a method. In this context, multimedia includes one or more of audio, video, text, graphics, images and any other content typically understood as comprising multimedia.
We will first describe an embodiment of the invention in terms of a generalised data model and algorithms to demonstrate how the concepts may be universally applicable to a variety of different types of content. We will then describe a particular use case for delivering instructional content to a user. Lastly, we will describe one example implementation related to delivery of multimedia content related to a cooking recipe to a user.
Data Model & Algorithms
The generalised data model has the following features that enable a scheduling algorithm to be operated appropriately for instructional multimedia content.
An object oriented approach is used in which instructional steps are stored as objects, each having a description, expected duration, dependencies to other objects and categorisation as requiring user interaction or not. One or more segments of multimedia may be attached to objects so that, when the object is asserted, the corresponding multimedia is delivered to the user. Using this approach an algorithm may schedule instructional steps in an appropriate order based upon the duration and dependency constraints.
A feature particular to instructional content is the need to ensure multimedia is asserted only when a user is ready and in a sequence that provides an apparently linear experience to the user. To achieve this a scheduling algorithm defines multiple threads, at least one thread being defined as the user thread allowing a user to respond when a step has been completed. Other threads interact with the user thread based on dependencies. Accordingly, the whole content can be delivered in an appropriate sequence and based upon elapsed time and user feedback.
Figure 1 shows a schematic view of a device embodying the invention. The device 1 comprises an input 10, an order processor 12, a media store 14 and a display 16. The device may be any suitable media presentation device including, but not limited to, a smart TV, tablet device, smartphone or other such device able to receive input from a user and assert multimedia content on a screen.
The input 10 may be any known user input including a mouse, keyboard or a touch sensitive screen. The order processor 12 is configured with an algorithm that schedules the assertion of media from the store 14 as described later. The display 16 may be a touch screen display and include the functionality of input 10. Of particular note is that the multimedia stored in the media store 14 and the order processing algorithm may be received at an input 18 by broadcast or otherwise from a communication channel, whether wired or wireless. In this way, a set of multimedia and the instructions for assertion of that content may be delivered to a user device. The metadata and algorithms may be downloaded to the user device from a remote source in advance, with the multimedia obtained during operation, or all components may be stored locally or remotely.
Figure 2 shows a set of tasks as task 1, task 2... task N that are stored in the media store 14 and which may have associated multimedia. The relationship may be one to one as shown, one to many, or some tasks may not have any associated multimedia. Each task is defined as an object in the media store and has attributes that include a duration, and may include other attributes such as resources needed for the task, and whether the task is a “user” task or not. Tasks that are “user” tasks are treated within the ordering algorithm in a particular way, described in the subsequent diagrams as “human” tasks. Such tasks are defined by the order processor 12 as having particular status in the ordering arrangement.
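By way of illustration only, a task object of the kind shown in Figure 2 might be sketched in Python as follows. The class and field names (Step, duration, requires_user and so on) are assumptions made for this sketch and are not prescribed by the embodiment.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Step:
    """One object in the media store: an instructional step plus its metadata."""
    step_id: str
    description: str
    duration: int                                             # expected duration, in seconds
    dependencies: List[str] = field(default_factory=list)     # ids of prerequisite steps
    requires_user: bool = False                                # True for "user"/"human" tasks
    appliances: Dict[str, int] = field(default_factory=dict)   # resource name -> workers needed
    consecutive_after: Optional[str] = None                    # id of a step this one must directly follow
    media: List[str] = field(default_factory=list)             # attached multimedia asset references
```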
Instructional Multimedia
The embodiment of the invention may particularly be used for delivery of instructional multimedia. Such instructional multimedia includes any content aimed at demonstrating to a user how to achieve a goal.
Instructional content may include a variety of types of content, one example of which is recipes for cookery. A structured recipe data model enables a sequence of instructions to be presented to the user in a way that feels familiar, but is actually personalised to their requirements and tastes. There are many aspects of recipes that could be improved by modelling in a more generic way, with two examples being ingredient substitutions and skill referencing/lookup. In the example, however, we focus on the problem of concurrency in the kitchen, which is something novice cooks tend to struggle with more than anything else. Specifically, we look at the problem of scheduling recipe steps in an optimal and realistic way, enabling experiences that guide users through the process of simultaneously cooking multiple different recipes.
Example Implementation
We will use the example of a recipe for a poached egg on toast.
The initial piece of work with the data model was finding a simple representation of a recipe's steps. Traditionally in a recipe book, one would expect to see a recipe's steps represented in a way similar to this:
1. Heat a pan of water until simmering. Gently crack an egg in. Simmer for 3 minutes.
2. While the egg simmers, cut a slice of bread and put it into the toaster.
3. Once the egg white has set, remove from the pan and put onto some kitchen roll. Butter the toast and serve with salt and pepper.
This recipe is presented as a simple linear sequence of steps. While this is acceptable for the poached egg example, it does not work so well for more complicated recipes where a user's setup may affect the cooking time (and hence the order) of each step. It's also not very flexible, does not lend itself to being combined with other recipes, and the steps are not particularly granular. For this reason, we chose a different way to store our recipe data. Using a simple directed graph (illustrated in figure 3), we were able to represent a simple recipe in a more granular way, removing the interleaving and recovering some of the lost information about inter-step dependencies.
Figure 3: A graph-based view of the poached egg recipe described above. Note how the different strands of the recipe are more granular and are no longer interleaved. Although not shown, each step could have associated with it a piece of prose similar to the steps above.
However, while in many ways this is an improvement, we have also lost some usability. It's no longer obvious to the user which order the steps should be carried out in (should we start by heating the water or slicing the bread?), and that obvious ordering is the major positive of the sequential structure described above.
To solve the shortcomings noted, we modify the data model slightly, so that now each step has a notion of a “duration”. This gives a rough representation of how long we expect each step to take, and the new graph is displayed in figure 4.
Figure 4 shows the same graph as figure 3, but now each step also has an expected duration. It now becomes clearer which strand of the recipe should be started first, and this can be analytically determined (e.g. using critical path analysis).
Given that we now have a weighting for each step in this directed graph, we can perform more mathematical analysis. Using the Critical Path Method (as discussed above), we obtain a “priority list” stating in which order the steps should be completed to finish the whole recipe as quickly as possible. The result for the poached egg example is shown below.
1. Fill pan with water
2. Heat water to simmer
3. Crack egg into pan
4. Poach egg
5. Slice bread
6. Put bread into toaster
7. Toast bread
8. Butter toast
9. Season and serve
The “priority list” gained by performing critical path analysis on the graph in figure 4. Note how this has a very similar form to the sequential recipe.
It is worth noting that this priority list takes the same form as the sequential recipe. By performing critical path analysis, we have gone from our abstract graph-based representation of a recipe to a list of instructions displayable to a user. This is the process of “rendering”, which is an important concept in the design of the data model. “Rendering” describes the idea of storing data in a generic, abstract, object-based way, and then collapsing it back down into more familiar forms to suit particular scenarios. The benefits of doing things in an object-based way now begin to become clear. With the recipe stored as a graph, we can add metadata to alter its properties, which will then be reflected in our rendered priority list. For example, for novice users we could add a condition that doubles the expected duration of a particularly difficult step, and the priority list would then re-flow to give a new suggested order in which to carry out the steps to suit that particular user.
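A minimal sketch of how critical path analysis can render such a priority list is shown below. The step durations and all function and variable names are assumptions made for this sketch (the actual durations appear only in figure 4), and the embodiment does not prescribe any particular implementation.

```python
# Poached egg recipe as a directed graph: step -> (assumed duration in seconds, prerequisite steps).
RECIPE = {
    "Fill pan with water":    (30,  []),
    "Heat water to simmer":   (180, ["Fill pan with water"]),
    "Crack egg into pan":     (10,  ["Heat water to simmer"]),
    "Poach egg":              (180, ["Crack egg into pan"]),
    "Slice bread":            (20,  []),
    "Put bread into toaster": (10,  ["Slice bread"]),
    "Toast bread":            (120, ["Put bread into toaster"]),
    "Butter toast":           (30,  ["Toast bread"]),
    "Season and serve":       (30,  ["Poach egg", "Butter toast"]),
}

def priority_list(recipe):
    """Order steps by the length of the longest path from each step to the end of
    the recipe (critical path analysis on a directed acyclic graph)."""
    successors = {step: [] for step in recipe}
    for step, (_, prerequisites) in recipe.items():
        for p in prerequisites:
            successors[p].append(step)

    remaining = {}  # step -> longest remaining duration, including the step itself

    def longest(step):
        if step not in remaining:
            duration, _ = recipe[step]
            remaining[step] = duration + max((longest(s) for s in successors[step]), default=0)
        return remaining[step]

    return sorted(recipe, key=longest, reverse=True)

if __name__ == "__main__":
    for position, step in enumerate(priority_list(RECIPE), start=1):
        print(position, step)
```

With the assumed durations above, this reproduces the nine-step priority list shown earlier, with the longer egg-poaching strand taking precedence over the toast-making strand.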
Although at this stage the object-based data model is now able to render a priority list that looks like a traditional cookery book recipe, our requirements are slightly more involved. For this implementation, we would like to be able to render a full “schedule”, with each step placed on a timeline. The naive way to render a timeline is simply to take the priority list obtained above and create a timeline showing each step in succession, as shown in figure 5. The expected duration of the cook would then simply be the sum of the durations of each step. Figure 5 shows a timeline created by sequentially carrying out the tasks in the priority list. This is not a very intelligent way of rendering a timeline as it does not have any notion of multiple steps being able to be carried out at once. However, rendering a schedule this way is not helpful, since it gives no scope for multiple steps being carried out simultaneously (e.g. making the toast while the egg poaches). For this reason we need to extend the data model to contain some information about how steps can be run in parallel. We do this by introducing the idea of “appliances”. Each appliance is analogous to a “worker pool” in computing, with each appliance being defined by a name and a worker capacity.
The “worker capacity” denotes how many steps can be carried out simultaneously by the appliance - a hob generally has 4 rings, so its capacity is 4. A list of the available appliances for our poached egg recipe is shown below:
Name       Worker capacity
Human      1
Hob ring   4
Toaster    2
We then label each step in the graph with the appliances it needs to carry that step out. This then leaves us with the graph shown in figure 6. Figure 6 shows the poached egg recipe graph, annotated with the appliances (and the required quantity of workers in brackets) needed to run each step. Note how the “Crack egg into pan” and “Put bread into toaster” steps require multiple appliances simultaneously - this is perfectly valid from the data model's point of view.
The addition of the appliance information now means we can render a schedule that has several tasks happening in parallel. The rendering algorithm will be discussed in detail later, but the result is that it produces a schedule like that in figure 7. This schedule has multiple timelines per appliance (according to each appliance's worker capacity), to which the steps from figure 6 are added.
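Purely for illustration, the appliance pools from the table above and the per-step appliance requirements of figure 6 could be captured as plain data, as in the sketch below; where figure 6 is not legible here, the quantities shown are assumptions.

```python
# Appliance pools: name -> worker capacity (from the table above).
APPLIANCES = {"Human": 1, "Hob ring": 4, "Toaster": 2}

# Per-step appliance requirements: step -> {appliance name: workers needed}.
STEP_APPLIANCES = {
    "Fill pan with water":    {"Human": 1},
    "Heat water to simmer":   {"Hob ring": 1},
    "Crack egg into pan":     {"Human": 1, "Hob ring": 1},  # needs multiple appliances at once
    "Poach egg":              {"Hob ring": 1},
    "Slice bread":            {"Human": 1},
    "Put bread into toaster": {"Human": 1, "Toaster": 1},   # needs multiple appliances at once
    "Toast bread":            {"Toaster": 1},
    "Butter toast":           {"Human": 1},
    "Season and serve":       {"Human": 1},
}
```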
The final adjustment to the data model concerns usability. Currently, the rendering algorithm does not get provided with any information dictating at what time individual steps get scheduled. That's a problem for cookery, since sometimes certain steps must be immediately followed by other steps (either because the product of one step will deteriorate if not used immediately in the next step, or because it would be irritating for the user to have to switch recipe strands in certain scenarios). For example, in our poached egg recipe we cannot tell in advance whether the “Slice bread” step will be scheduled immediately before “Put bread into toaster” (as shown in figure 8a), or whether some step from the egg-poaching strand will get placed in between them (as shown in figure 8b).
Figure 8 shows how the editor creating the content would like to ensure the first two steps in 8a always directly follow on from one another, but has no way of guaranteeing that 8b or something similar will not be rendered instead.
With this in mind, a final feature was added to the data model - the ability to mark certain edges in the recipe graph as being “consecutive”. This is a tag that the scheduling algorithm can then read to ensure that desired steps are always scheduled next to one another. The graph with the “consecutive” edges labelled is shown in figure 9. Figure 9 shows the graph for the poached egg recipe, with the consecutive edges labelled in red.
In general, these consecutive edges can make writing a scheduling algorithm difficult, as they introduce a new set of constraints to enforce which could lead to impossible-to-schedule scenarios. To simplify things, we restricted consecutive edges so that they were only permitted between steps that (a) both required a human, and (b) where the consecutive edge was the only edge leading into the second step. This is shown in figure 10. Figure 10 shows how we only allow consecutive edges to exist where both steps A and B require humans, and where B only has one input. These changes meant that we avoided any impossible-to-schedule scenarios, and the restrictions on consecutive edges did not cause any problems for the system.
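That restriction can be expressed as a simple validity check, sketched below with the same illustrative data layout as the earlier sketches; the function name is an assumption for this sketch.

```python
def consecutive_edge_allowed(recipe, requires_human, edge):
    """A consecutive edge (a, b) is only permitted when both steps require a human
    and the edge from a is the only edge leading into b."""
    a, b = edge
    _, prerequisites_of_b = recipe[b]
    return requires_human[a] and requires_human[b] and prerequisites_of_b == [a]

# For example, "Slice bread" -> "Put bread into toaster" would be a valid consecutive
# edge: both steps need the cook, and slicing the bread is the toaster step's only input.
```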
With the data model in place, the final task was to write an algorithm that is able to inspect the data model (the recipe graph and list of appliances) and place the steps onto a timeline to create a schedule. The algorithm was given the backronym “OGRE”, standing for “Object Grabber and Rendering Engine”.
As discussed above, there are a few requirements on the scheduling algorithm. These are as follows:
1. The rendered schedule must satisfy any consecutive edges (as defined above).
2. The rendered schedule must respect appliance capacities (for example, 5 steps requiring a hob ring should not be scheduled to happen simultaneously if the hob only has a capacity of 4 rings).
3. Recipe steps should be scheduled in an optimal way, with steps from different strands being run in parallel where possible.
There are many ways the algorithm could have been written (perhaps by encoding some of these features in a cost function and then optimising it globally) but in the end we decided that since the first two requirements are hard constraints rather than features to optimise, it would be more appropriate to write a less involved algorithm. Figure 11 shows a high-level view of the final rendering algorithm.
Figure 11 shows a flow diagram of how OGRE renders steps from a graph into a schedule. It starts by creating a priority list of all the steps from the graph. It then sequentially places steps onto the timeline in a way that satisfies the constraints imposed by the data model.
This process is represented by the box in figure 11 labelled “Calculate priority list from graph”. The first task for the algorithm is to work out which strands of the recipe graph will take the longest, and hence which steps need to be started first.
Intuitively, by inspecting the graph for our poached egg example we can see that the egg-poaching strand will take much longer to complete than our toast-making strand, so we can say that a cook ought to begin the egg-poaching before they start thinking about the toast.
To automate this process, we used critical path analysis. As discussed above, critical path analysis is a standard method in scheduling problems, designed to prioritise tasks in a directed graph. The overall idea is that it inspects the graph and calculates which strands of the recipe will take the longest, and then outputs a list of what order the steps should be carried out in, in descending order of priority. This is known as a “priority list”, and is shown earlier.
The priority list is a pure function of the graph, and no extra information is required to create it. For that reason we generate it once for the algorithm, rather than recreating it for each step of the main loop in figure 11.
With the priority list generated, we are now ready to start looping through the steps and scheduling them (placing them on the timeline). The first iteration of the loop is trivial, as the step at the top of the priority list can be scheduled at time 0 without any further consideration. In the following sections we will look into how steps are scheduled outside this special case.
This process is represented by the box in figure 11 labelled “Find next step to schedule”. In general, the earlier in the algorithm a particular step gets put through the scheduling process, the closer to time 0 it will be in the timeline. This intuitively makes sense, because as we progress through the priority list, the more congested the timeline will become and so the more likely it will be that the algorithm has to push a step further into the future.
With that in mind, we need to make sure we push steps off the priority list and onto the timeline in a sensible order. The obvious way is to deal with steps in decreasing order of priority, which would be the best way to satisfy requirement 3 (“Steps should be scheduled in an optimal way”). However, this would mean we cannot guarantee that requirement 1 (“Steps denoted as being consecutive must always appear directly after one another”) will be kept. So instead, before we pull the highest-priority step from the priority list, we inspect the rendered timeline so far and see if a step has been scheduled that must be followed by a not-yet-scheduled step. If so, we pull that not-yet-scheduled step off the priority list rather than the highest-priority step; we sacrifice optimising the total cook time in order to satisfy the consecutive-steps constraint. Now that we have identified the next step to add to the schedule, we must decide at what point on the schedule's timeline it should be placed. This depends on two factors, which are discussed in turn in the following two sections.
Satisfying the order of the recipe
This process is represented by the box in figure 11 labelled “Calculate step's earliest possible start time from prerequisite steps”. We must firstly make sure that we do not try to schedule a step to begin at a point in time before its prerequisite steps have finished. For example, in the poached egg recipe we cannot schedule the “Season and serve” step to start before the steps “Poach egg” and “Butter toast” are scheduled to have finished.
For this reason, we look at the prerequisite steps for the step we're scheduling, and inspect the timeline to find out when those steps are due to finish. We then take the latest of those times to be the earliest point at which we can schedule the step. If we are scheduling a step with no prerequisite steps, we simply set this earliest point to be time 0.
Not exceeding the capacity of appliances
This process is represented by the yellow diamond and mini-loop at the bottom of figure 11. With the step's proposed start time in hand, we now look to enforce requirement 2 by making sure that the timelines of the required appliances have sufficient capacity to schedule the step. Taking the step's proposed start time to be t0 and the step's duration to be d, we check that each required appliance has enough free worker timelines between t0 and t0 + d. If they all do then we can schedule the step to begin at t0, otherwise we increase t0 (in increments of one second) until each appliance satisfies the condition.
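Bringing the three requirements together, a rough sketch of the step-placement loop, using the same illustrative data layout as the earlier sketches, might look like the following; it is not a reproduction of the actual OGRE implementation.

```python
def render_schedule(recipe, priority, step_appliances, appliances, consecutive_edges):
    """Place steps on a timeline so that prerequisites finish first, appliance
    capacities are never exceeded and consecutive edges are honoured.
    recipe: step -> (duration in seconds, prerequisite steps);
    priority: the priority list produced by critical path analysis;
    consecutive_edges: (a, b) pairs meaning b must directly follow a.
    Returns step -> (start, end), all times in seconds from the start of the cook."""
    order = list(priority)
    follow_up = dict(consecutive_edges)
    schedule = {}

    def free_workers(appliance, t0, t1):
        """Workers of an appliance still free throughout the interval [t0, t1)."""
        busy = sum(step_appliances.get(s, {}).get(appliance, 0)
                   for s, (s0, s1) in schedule.items() if s0 < t1 and t0 < s1)
        return appliances[appliance] - busy

    while order:
        # Prefer a not-yet-scheduled step that must directly follow a scheduled step.
        pending = [follow_up[a] for a in schedule if follow_up.get(a) in order]
        step = pending[0] if pending else order[0]
        order.remove(step)

        duration, prerequisites = recipe[step]
        # Earliest possible start: when the last prerequisite is due to finish (or time 0).
        start = max((schedule[p][1] for p in prerequisites), default=0)
        # Push the start back in one-second increments until every appliance has capacity.
        needed = step_appliances.get(step, {})
        while any(free_workers(a, start, start + duration) < n for a, n in needed.items()):
            start += 1
        schedule[step] = (start, start + duration)

    return schedule
```

Called with the priority list and appliance data from the earlier sketches, this interleaves the toast-making strand alongside the egg-poaching strand in the manner of figure 7.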
Re-calculation
Like a GPS sat-nav, the system is designed to be constantly re-rendering the schedule as the user progresses, so that the timeline stretches, shrinks and re-flows to fit how quickly the user is completing their steps. For example, if a user takes longer than expected to complete a step, the later steps in the timeline should get pushed back (and possibly have their order changed, depending on the nature of the recipe) until the user indicates they have completed the step. To do this, the scheduling algorithm needs to have a way to inject information about the user's cook so far.
Each time the schedule is rendered, the algorithm is passed a set of data saying how far into the timeline the user is, which steps they have already completed, and how long each took to complete. This means that those steps can be immediately placed directly onto the timeline as the data dictates, and then the scheduling algorithm is only performed on the remaining steps that the user has not yet begun.
For example, suppose the user was halfway through cooking the poached egg recipe. They had started off following the rendered schedule in figure 9, and then as they completed the steps the schedule kept being recalculated. When they finish the “Heat water to simmer” step, the schedule will be recalculated again, the result of which is shown in figure 12. The shaded steps have been recreated directly from the injected user progress information, and the white steps have been scheduled according to the scheduling algorithm. Figure 12 shows the result of performing the scheduling algorithm halfway through a cook. The shaded steps are ones the user has already completed (or is currently in progress with), and the white ones are those that have not yet been started.
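The injection of progress data could be sketched as a thin wrapper around a renderer of the kind above; the parameter names and the shape of the progress data are assumptions made for this sketch.

```python
def reschedule(recipe, progress, now, schedule_remaining):
    """Re-render the schedule part-way through a cook.
    progress: step -> (actual start, actual end) for steps already completed or in progress;
    schedule_remaining: any scheduler for the steps not yet begun, told not to start before 'now'.
    Completed steps are pinned onto the timeline exactly as the progress data dictates."""
    timeline = dict(progress)
    remaining = {step: data for step, data in recipe.items() if step not in progress}
    timeline.update(schedule_remaining(remaining, not_before=now))
    return timeline
```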
Figure 13: shows the overall scheduling of a system embodying the invention that may use functionality previously described.
An important feature of this algorithm is that it models resources as they exist in the real world, and that it allows dependencies of varying strength to be specified between steps: strong connections between some steps enforce that they are scheduled directly after one another, while weak connections between others do not, producing a user experience that does not “jar” with the audience's experience. This results in a series of “threads” in the experience which can only be jumped between in certain states.
The scheduling and data model may attach multimedia assets to some of these steps that can be shown to the audience. In this, we deem objects marked as being in our user thread as a type of object which can have multimedia assets attached. As the scheduling algorithm guarantees that the user will only have one task scheduled at once, then when the task starts in the scheduling algorithm, the appropriate multimedia assets can be fetched and displayed on the user's screen. The user interface can then respond when the user completes that task and update the state of the schedule to indicate that the task is completed.
As the schedule is constantly recomputed in response to changing state (such as time and the completion state of the steps in the experience), the media is not fetched until the moment it is needed, giving a dynamic experience. The schedule will not, however, ever do a reschedule such that the current task suddenly changes, so once media starts playing, the user will not be shown another piece of media until they have indicated that this task is completed. In the event that no user tasks are currently available to be scheduled, then some placeholder content is shown.
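On the client side, that behaviour might be sketched as follows; the helpers fetch, show and show_placeholder stand in for whatever the user interface provides and are assumptions made for this sketch.

```python
def current_user_task(schedule, user_steps, now):
    """Return the user-thread step active at time 'now', if there is one."""
    for step, (start, end) in schedule.items():
        if step in user_steps and start <= now < end:
            return step
    return None

def render_frame(schedule, user_steps, media, now, fetch, show, show_placeholder):
    """Fetch and display the media attached to the active user task, or show
    placeholder content when no user task is currently scheduled."""
    step = current_user_task(schedule, user_steps, now)
    if step is None:
        show_placeholder()
    else:
        for asset in media.get(step, []):
            show(fetch(asset))  # media is only fetched at the moment it is needed
```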
This can be extended to have multiple screens and multiple users partaking in the experience at once, to allow for mass participation experiences.

Claims (33)

1. A method of delivering multimedia content on a user device, comprising:
- obtaining objects each having metadata defining a step, expected duration and dependencies to other objects, at least some objects having associated multimedia content;
- operating an ordering algorithm to order the objects and associated multimedia in accordance with the metadata that includes expected duration and dependencies to other objects; and
- rendering the multimedia content in accordance with the order; wherein the ordering algorithm arranges objects into multiple threads, each thread having multiple objects and at least one thread being defined as a user thread that contains objects defined as user objects and requires interaction from a user.
2. A method according to claim 1, wherein metadata for an object further defines whether the object should be asserted consecutively with another object.
3. A method according to claim 1 or 2, wherein metadata for an object further defines external resources needed for completion of a step.
4. A method according to any of claims 1 to 3, where the obtaining objects comprises retrieving objects and associated multimedia from a local store.
5. A method according to any of claims 1 to 3, wherein the obtaining objects comprises receiving objects and associated multimedia from a remote source.
6. A method according to any of claims 1 to 5, wherein further rendering of the multimedia is interrupted pending the input from a user on completion of a step.
7. A method according to any of claims 1 to 6, wherein at least some objects are defined as requiring consecutive assertion and the ordering algorithm is constrained to order such objects consecutively.
8. A method according to claim 7, wherein only objects that are defined as user objects are constrained to require consecutive assertion.
9. A method according to any of claims 1 to 8, wherein the objects are ordered according to a priority list and the ordering algorithm orders objects by retrieving objects from the priority list subject to constraints including one or more of consecutive assertion constraints, prerequisite assertion constraints and capacity of resources.
10. A method according to any of claims 1 to 9, wherein the ordering algorithm is arranged to operate during assertion of content and to re-order by operating the ordering process taking into account the fact that steps are already completed or partially completed.
11. A method according to any of claims 1 to 10, wherein the ordering algorithm is arranged to operate during assertion of content and to re-order by operating the ordering process taking into account the difference between the expected and actual duration of steps that are already completed.
12. A method according to any of claims 1 to 11, wherein rendering the multimedia content comprises asserting multimedia and metadata for objects designated in the user thread according to the ordering algorithm.
13. A method according to any of claims 1 to 12, wherein a user may cause the obtaining of additional objects during the rendering of the multimedia content, and the algorithm is operable to order the objects including the additional objects and associated multimedia in accordance with the metadata.
14. A method according to claim 13, wherein the obtaining of additional objects is in response to the user requesting additional multimedia content by a user input.
15. Apparatus for delivering multimedia content on a user device, comprising:
- means for obtaining objects each having metadata defining a step, expected duration and dependencies to other objects, at least some objects having associated multimedia content;
- means for operating an ordering algorithm to order the objects and associated multimedia in accordance with the metadata that includes expected duration and dependencies to other objects; and
- means for rendering the multimedia content in accordance with the order; wherein the ordering algorithm arranges objects into multiple threads, each thread having multiple objects and at least one thread being defined as a user thread that contains objects defined as user objects and requires interaction from a user.
16. Apparatus according to claim 15, wherein metadata for an object further defines whether the object should be asserted consecutively with another object.
17. Apparatus according to claim 15 or 16, wherein metadata for an object further defines resources needed for completion of a step.
18. Apparatus according to any of claims 15 to 17, where the obtaining objects comprises retrieving objects and associated multimedia from a local store.
19. Apparatus according to any of claims 15 to 17, wherein the obtaining objects comprises receiving objects and associated multimedia from a remote source.
20. Apparatus according to any of claims 15 to 19, wherein further rendering of the multimedia is interrupted pending the input from a user on completion of a step.
21. Apparatus according to any of claims 15 to 19, wherein at least some objects are defined as requiring consecutive assertion and the ordering algorithm is constrained to order such objects consecutively.
22. Apparatus according to claim 21, wherein only objects that are defined as user objects are constrained to require consecutive assertion.
23. Apparatus according to any of claims 15 to 22, wherein the objects are ordered according to a priority list and the ordering algorithm orders objects by retrieving objects from the priority list subject to constraints including one or more of consecutive assertion constraints, prerequisite assertion constraints and capacity of resources.
24. Apparatus according to any of claims 15 to 23, wherein the ordering algorithm is arranged to operate during assertion of content and to re-order by operating the ordering process taking into account the fact that steps are already completed.
25. Apparatus according to any of claims 15 to 24, wherein the ordering algorithm is arranged to operate during assertion of content and to re-order by operating the ordering process taking into account the duration of steps that are already completed.
26. Apparatus according to any of claims 15 to 25, wherein rendering the multimedia content comprises asserting objects designated in the user thread according to the ordering algorithm.
27. Apparatus according to any of claims 15 to 26, wherein the means for obtaining objects comprises a store of the device.
28. Apparatus according to any of claims 15 to 26, wherein the means for obtaining objects comprises means for retrieving objects from a remote source.
29. Apparatus according to any of claims 15 to 28, wherein the apparatus comprises any one of a smart TV, tablet device, smartphone or other user device.
30. Apparatus according to any of claims 15 to 29, wherein a user may cause the obtaining of additional objects during the rendering of the multimedia content, and the algorithm is operable to order the objects including the additional objects and associated multimedia in accordance with the metadata.
31. Apparatus according to claim 30, wherein the obtaining of additional objects is in response to the user requesting additional multimedia content by a user input.
32. A computer program comprising code which when executed on a device undertakes the method of any of claims 1 to 14.
33. A method according to any of claims 1 to 14, wherein the method is operable remote from the user device and the rendering includes transmitting the multimedia to the user.
Intellectual Property Office
Application No: GB1621168.2
Claims searched: 1-33
GB1621168.2A 2016-12-13 2016-12-13 Multimedia system & method Withdrawn GB2557627A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1621168.2A GB2557627A (en) 2016-12-13 2016-12-13 Multimedia system & method
PCT/GB2017/053731 WO2018109468A1 (en) 2016-12-13 2017-12-13 Object-based multimedia delivery system & method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1621168.2A GB2557627A (en) 2016-12-13 2016-12-13 Multimedia system & method

Publications (2)

Publication Number Publication Date
GB201621168D0 GB201621168D0 (en) 2017-01-25
GB2557627A true GB2557627A (en) 2018-06-27

Family

ID=58221977

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1621168.2A Withdrawn GB2557627A (en) 2016-12-13 2016-12-13 Multimedia system & method

Country Status (2)

Country Link
GB (1) GB2557627A (en)
WO (1) WO2018109468A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060149781A1 (en) * 2004-12-30 2006-07-06 Massachusetts Institute Of Technology Techniques for relating arbitrary metadata to media files
WO2009022781A1 (en) * 2007-08-13 2009-02-19 Samsung Electronics Co,. Ltd. Method and apparatus for generating and reproducing media object-based metadata
US20140109142A1 (en) * 2010-10-21 2014-04-17 Bart P.E. van Coppenolle Method and apparatus for content presentation in a tandem user interface

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5515490A (en) * 1993-11-05 1996-05-07 Xerox Corporation Method and system for temporally formatting data presentation in time-dependent documents

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060149781A1 (en) * 2004-12-30 2006-07-06 Massachusetts Institute Of Technology Techniques for relating arbitrary metadata to media files
WO2009022781A1 (en) * 2007-08-13 2009-02-19 Samsung Electronics Co,. Ltd. Method and apparatus for generating and reproducing media object-based metadata
US20140109142A1 (en) * 2010-10-21 2014-04-17 Bart P.E. van Coppenolle Method and apparatus for content presentation in a tandem user interface

Also Published As

Publication number Publication date
GB201621168D0 (en) 2017-01-25
WO2018109468A1 (en) 2018-06-21

Similar Documents

Publication Publication Date Title
US9524084B2 (en) Presenting images of multiple media entities
US11755174B2 (en) Template based calendar events with graphic enrichment
US10614428B2 (en) Recipe selection system with bidirectional calendar interface
AU2016201606B2 (en) Display object pre-generation
US10803769B2 (en) System and method for generating a recipe player
Hamada et al. Cooking navi: assistant for daily cooking in kitchen
KR101903922B1 (en) Progressively indicating new content in an application-selectable user interface
JP6637487B2 (en) Queuing of intelligent content from secondary devices
JP2009503630A (en) Event queuing in an interactive media environment
GB2557627A (en) Multimedia system & method
US20150006664A1 (en) Information processor, information processing method, and computer program product
US9342224B2 (en) Social networking web site picture album navigation path
US20180286277A1 (en) System and methodology for regulating instructional multimedia applications
US9537922B2 (en) Partitioning digital media for consumption
CN103513972B (en) Display methods and electronic equipment
US11462195B2 (en) Dynamically transitioning between visual media items at a digital-content-display device
US20140344351A1 (en) Displaying user's desired content based on priority during loading process
US20200387536A1 (en) Information processing device, information processing method, program, and storage medium
Gallidabino et al. Multi-Device complementary view adaptation with liquid media queries
JP6351021B1 (en) Information processing system, information processing method, terminal device, and program
US20170195750A1 (en) System and method for generating and updating video news edition
JP2019083505A (en) Information processing system, information processing method, terminal device and program
KR101601724B1 (en) Method for providing and creating social network shared objects
US9307181B1 (en) Time-based triggering of related content links
JP5087651B2 (en) Program and command processing method

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)