AU778745B2 - Functional planning system - Google Patents

Info

Publication number
AU778745B2
Authority
AU
Australia
Prior art keywords
decision
attributes
sub
groups
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU35609/02A
Other versions
AU3560902A (en)
Inventor
Thomas Phillip Howard
Farhad Fuad Islam
Ryszard Kowalczyk
Michael Alexander Oldfield
Mikhail Prokopenko
Dong Mei Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU provisional application AUPR4600
Application filed by Canon Inc
Priority to AU35609/02A
Publication of AU3560902A
Application granted
Publication of AU778745B2
Anticipated expiration
Legal status: Ceased

Description

S&FRef: 592418
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146, Japan

Actual Inventor(s): Mikhail Prokopenko, Dong Mei Zhang, Ryszard Kowalczyk, Thomas Phillip Howard, Farhad Fuad Islam, Michael Alexander Oldfield

Address for Service: Spruson & Ferguson, St Martins Tower, Level 31 Market Street, Sydney NSW 2000 (CCN 3710000177)

Invention Title: Functional Planning System

ASSOCIATED PROVISIONAL APPLICATION DETAILS: [33] Country: AU; [31] Applic. No(s): PR4600; [32] Application Date: 24 Apr 2001

The following statement is a full description of this invention, including the best method of performing it known to me/us:

FUNCTIONAL PLANNING SYSTEM

Field of the Invention

The present invention relates to a system having artificial intelligence, and more particularly to a system for making context-based decisions.
Background

Watching television is an everyday activity in a large number of households, and provides a source of entertainment from a range of program content such as sport and movies, as well as news and actuality programs (e.g. documentaries and lifestyle). With the advent of Digital Television (DTV), the home television has become the main interface with the outside world, in that the Internet may be browsed using the television, on-line shopping may be done, and electronic messages (e-mail) may be received through the television.
Traditionally, a user/viewer sets a large number of "User Preferences" dictating to the DTV how content should be displayed. For example, this may include settings as to whether the user wishes to be notified when e-mail arrives. Furthermore, the user may wish for messages to be automatically displayed in a secondary window. However, such user preferences are not static and may be dependent upon various factors. Such factors may include, for example, the nature of the program that the user is watching when the e-mail comes in, the priority of the message, etc. This is undesirable, as the user is therefore required, when setting the "User Preferences", to predict the most likely scenario that may occur, or alternatively, make the most conservative selection.
Summary of the Invention

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
According to a first aspect of the invention there is provided a method of determining a decision within a specified context, said specified context being defined by a first set of attributes, said method comprising the steps of:
receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision and a frequency of occurrence;
comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes;
forming groups of said user behaviour patterns based upon said number of matched attributes; and
performing a decision selecting process for the group having the highest number of said matched attributes to thereby select the decision which is most appropriate to said specified context from said user behaviour patterns.
According to another aspect of the invention, there is provided a method of determining a decision within a specified context, said specified context being defined by a first set of attributes, said method comprising the steps of:
receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision having an assigned decision value and a frequency of occurrence;
comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes;
forming groups of said user behaviour patterns, each of said groups having a same number of said matched attributes;
for each of said groups, forming sub-groups of said behaviour patterns, with each of said sub-groups comprising unique matched attributes;
for the group having a highest number of said matched attributes, calculating an average of said decision values corresponding to those user behaviour patterns having a highest frequency of occurrence in said corresponding sub-groups;
determining a decision interval from a predefined group of decision intervals that is closest to said average; and
selecting a decision from said sub-group that is within said determined decision interval, said decision being selected having a highest frequency of occurrence.
According to another aspect of the invention there is provided an apparatus for performing any one of the above methods.
According to yet another aspect of the invention there is provided a program stored on a computer medium for performing any one of the above methods.
The invention resides in the manner in which a decision is determined given the context of the decision and previous user behaviour, that is, previous decisions made during previous contexts.
Brief Description of the Drawings

One or more embodiments of the present invention are described hereinafter with reference to the drawings, in which:
Fig. 1A is a schematic representation of a system for learning functional user behaviour patterns and for planning optimal sequences of events suitable for a particular user in a particular context;
Fig. 1B is a detailed representation of an avatar agent of the system in Fig. 1A;
Fig. 2 is an extract from a typical Electronic Program Guide;
Fig. 3 is an illustration of an animated character displayed on a display screen of the system in Fig. 1A;
Fig. 4 is a flow diagram of a method performed by a planning module of the avatar agent illustrated in Fig. 1B of determining which particular decision is most appropriate given previous user behaviour patterns and the context of the decision;
Fig. 5 is a schematic diagram of a task within a task network;
Fig. 6 is a schematic diagram of an example contextual task network;
Figs. 7A to 7F are flow diagrams of a method performed by a learning module of the avatar agent illustrated in Fig. 1A, to learn contextual behaviour patterns from decisions made by the user;
Fig. 8 is a schematic block diagram of a general purpose computer upon which an embodiment of the present invention can be practiced;
Figs. 9A and 9B show a table containing example instance entries;
Figs. 10A to 10C show a table containing behaviour patterns created from the instances shown in Figs. 9A and 9B; and
Figs. 11A and 11B each show an example list of contextualised records.
Detailed Description including Best Mode

Some portions of the description which follows are explicitly or implicitly presented in terms of algorithms and symbolic representations of operations on data within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result.
The present specification also discloses an apparatus for performing the operations of the methods. Such apparatus may be specially constructed for the required purposes, or may comprise a general-purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with programs in accordance with the teachings herein.
Alternatively, the construction of more specialised apparatus to perform the required method steps may be appropriate. The structure of a conventional general-purpose computer will appear from the description below.
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
Fig. 1A shows a schematic representation of a system 50 for learning functional usage patterns of a user of the system and for planning best sequences of events suitable for a particular user in a particular context. The system 50 comprises a digital television (DTV) 10 connected through interconnection 25 to a "set top" box 20. A DTV-agent system 21 is preferably formed within the "set top" box 20. In use, the user interacts with the DTV-agent system 21 using a remote control device 30. The DTV-agent system 21 may alternatively be integrated into the DTV 10 or incorporated into a personal computer 100 such as that seen in Fig. 8 and appropriately interfaced with the DTV 10.

The computer system 100 shown in Fig. 8 comprises a computer module 102, input devices such as a keyboard 110 and mouse 112, and output devices including a printer 108 and a display device 104. The computer module 102 typically includes at least one processor unit 114, a memory unit 118, for example formed from semiconductor random access memory (RAM) and read only memory (ROM), input/output interfaces including a video interface 122, and an I/O interface 116 for the keyboard 110 and mouse 112. A storage device 124 is provided and typically includes a hard disk drive 126 and a floppy disk drive 128. A magnetic tape drive (not illustrated) may also be used. A CD-ROM drive 120 is typically provided as a non-volatile source of data. The components 114 to 128 of the computer module 102 typically communicate via an interconnected bus 130 and in a manner which results in a conventional mode of operation of the computer system 100 known to those in the relevant art.
Typically, an application program is resident on the hard disk drive 126, and is read and controlled in its execution by the processor 114. Intermediate storage of the program may be accomplished using the semiconductor memory 118, possibly in concert with the hard disk drive 126. In some instances, the application program may be supplied to the viewer encoded on a CD-ROM or floppy disk and read via the corresponding drive 120 or 128, or alternatively may be read by the viewer from a network via a modem device (not illustrated). Still further, the software can also be loaded into the computer system 100 from other computer readable media including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer module 102 and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including e-mail transmissions and information recorded on websites and the like. The foregoing is merely exemplary of relevant computer readable media. Other computer readable media may be used without departing from the scope and spirit of the invention.
Referring again to Fig. 1A, in the preferred implementation, the DTV-agent system 21 includes a DTV-agent 36 for controlling the DTV 10, an electronic storage device 26, a unified messaging module (UMM) 27, and a number of avatar-agents 37. An "agent" in this regard is typically implemented by a computer program module, configured to perform a desired function or produce a desired effect. An Inter-agent-server (IAS) 35 manages communications between the DTV-agent 36, the storage device 26, the UMM 27, and the avatar-agents 37. The IAS 35 also connects the "set top" box 20 through a gateway 45 to an external network 42. A number of content servers 43 and Electronic Program Guide (EPG) databases 22 are connected to the external network 42.
The functions of the DTV-agent 36 include: the provision of a graphical user interface via a display screen 11 of the DTV 10 to control the functionality of the DTV 10, including interacting with the content servers 43 to select multimedia content for viewing, viewing of messages from the UMM 27, and viewing of content recorded on the electronic storage device 26; and the gathering of user selections of programs and interactions with the DTV 10, which are delivered to the avatar agents 37.
Each of the avatar-agents 37 is associated with a single user, and uses at least a viewer instance database 24 to store information relating to the associated user for later use by the avatar-agent 37. The term database as used herein refers to data records generally and is not meant to imply any specific data structure. As seen in Fig. 1B, each avatar-agent 37 includes an avatar manager 38 that maintains control of the particular avatar-agent 37. The avatar manager 38 is also responsible for sending messages to, and receiving messages from, the DTV-agent 36, the storage device 26 and the UMM 27 through the IAS 35. Within the avatar-agent 37, the avatar manager 38 is also responsible for interfacing with each of a learning module 39 and a planning module 41. The avatar-agent 37 may additionally include a recommendation module (not illustrated) for making recommendations to the user of available programs to watch.
Electronic messages can be sent by several different methods, which may include electronic mail (e-mail), fax, and voice-mail. The UMM 27 is a module which enables access to these electronic messages from a single receptacle. Users may access the electronic messages through the DTV 10, allowing them to retrieve those messages at any suitable time.
The UMM 27 preferably has the following features:
processing of e-mail attachments;
voice-mail to e-mail: messages arrive as digital audio files that can be played over DTV speakers (not illustrated), and are converted to a text format which can be viewed on the DTV display 11, printed on an attached printer (not illustrated), e-mailed to other users, or forwarded to a fax machine (not illustrated) connected to the external network 42;
fax to e-mail: faxes arrive in original format and can be viewed on the DTV display 11, printed on an attached printer, e-mailed to other users, or forwarded to a fax machine connected to the external network 42; and
e-mail to voice-mail: electronic messages are read aloud by a computer-generated voice.
The UMM 27 receives and, in conjunction with the avatar agents 37, processes incoming messages regardless of whether the user is logged on or whether the DTV 10 is turned off. For example, a detected message can be stored in appropriate digital format (audio, image, text) and then accessed upon user request.
The electronic storage device 26 is capable of recording and playing back content. This content is typically received from the content servers 43. The electronic storage device 26 may record content upon a request from the user in a usual manner. The storage device 26 may further automatically record a fragment of a currently watched program while the user is busy, and play it back upon request in a small window on the DTV display 11.
The storage device 26 may incorporate the following features:
automatic operation of primary functions: Play, Stop, Fast Forward and Rewind;
automatic operation of secondary functions: Pause, Frame-by-frame Advance (forward and reverse), Slow Motion (forward and reverse);
detection of conflicts between requests from other modules, such as the planning module 41, and its current status; and
automatic search and access to recorded fragments by end-pointers, in order to conveniently locate a required fragment.
The content servers 43 are typically provided by content providers and contain multimedia content including movies and television programs. The available programs are listed in the EPG databases 22. Each of the content providers may maintain an associated EPG database 22. The available programs listed in the EPG databases 22 may be linked to the corresponding multimedia content in the content servers 43 by means of a program identifier. An extract from a typical EPG database 22 is shown in Fig. 2. The EPG database 22 has a number of program entries 60, each of which may include a number of attributes 61 with values 62. The attributes 61 may include:
EPG identifier (ID), which is a unique number per entry;
Program ID, which is unique for each program, resulting in re-runs having the same Program ID;
Category ID, for example '01' for art, '16' for drama, '50' for Movies and Series, '75' for sport, etc.;
Subcategory ID, for example, in the movies category the subcategory '001' for action/adventure, '064' for comedy, '074' for crime, etc.;
Title;
Remarks, which is a general text field about the program entry;
Keywords;
Rating;
EPG Channel;
Start Day;
Start Date;
Start Time;
Duration; and
Year of Make.
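As a concrete illustration, a single program entry 60 with its attributes 61 and values 62 might be represented as follows. This is a hypothetical sketch: the dict representation and every value shown are illustrative assumptions, not data from an actual EPG database 22; only the example category codes come from the list above.

```python
# Hypothetical sketch of one EPG program entry; all values are invented
# for illustration, using the example category codes given above.
epg_entry = {
    "EPG_ID": 10234,            # unique number per entry
    "Program_ID": 551,          # re-runs share the same Program ID
    "Category_ID": 50,          # '50' = Movies and Series
    "Subcategory_ID": 64,       # '064' = comedy (within movies)
    "Title": "An Example Comedy",
    "Remarks": "General text field about the program entry",
    "Keywords": ["comedy"],
    "Rating": "PG",
    "EPG_Channel": 7,
    "Start_Day": "Saturday",
    "Start_Date": "2001-04-28",
    "Start_Time": "20:30",
    "Duration": 110,            # minutes
    "Year_of_Make": 1999,
}

def is_rerun_of(entry_a, entry_b):
    """Two entries describe the same program when their Program IDs match."""
    return entry_a["Program_ID"] == entry_b["Program_ID"]

# A re-run gets a new EPG_ID and date but keeps the Program_ID.
rerun = dict(epg_entry, EPG_ID=10235, Start_Date="2001-05-05")
print(is_rerun_of(epg_entry, rerun))  # True
```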
In use, the user switches the DTV 10 'ON' using the remote control 30. Next, the user identifies himself/herself to the DTV-agent system 21, thereby establishing a user ID.
Referring to Fig. 3, this may be done by selecting an animation character 12 from a set of characters generated by the DTV-agent 36 and displayed on the display screen 11. The selection may be performed using the remote control 30. A message is sent to the avatar agents 37, activating only the avatar-agent 37 associated with the identified user. The user may typically use the remote control 30 to view on the DTV screen 11 an electronic program guide to select a program to watch, retrieve a message from the UMM 27, or select to view content stored previously on the electronic storage device 26.
It often occurs that, while the user is viewing content from the content servers 43 on the DTV screen 11, a message for the user is received by the UMM 27. The avatar-agent 37 is notified, and displays an icon on the DTV screen 11. The user then decides whether to view the message and if so, at what size on the DTV screen 11.
Information relating to user decisions, such as the context during which the decision was made and the actual user decision, is gathered by the DTV-agent 36 and delivered to the active avatar agent 37, which stores the information in the viewer instance database 24. User decisions are stored as instances in the viewer instance database 24, with each instance representing the particular user's decision within a specified context.
In a specific implementation, two types of instances are identified, hereinafter distinguished by a BP type (Behaviour Pattern type). All instances of a specific BP type and user are preferably stored in an instance file associated with the specific BP type and the user's User ID. Given a particular user, the fields of the instance files are now described in more detail.
The first instance file corresponds with BP type 0, which is associated with UMM content display. This type of instance occurs when the user is viewing content from the content servers 43 and a message from electronic mail (e-mail), fax, or voice-mail comes in.
The user typically makes a decision based on the priority of the message, as well as the nature of the content being watched. The fields of the first instance file, all of which have numbers as values, are as follows:
DTV_Content_Category: Category ID, available from the EPG database 22, of the content that the user was watching on the DTV 10 when the interaction occurred. For example, a value '16' corresponds to the category 'drama', whereas a value '75' corresponds to a category 'sport';
UMM_Content_Category: category of the UMM content when the interaction occurred. For example, a value '32' corresponds to a UMM category 'fax', whereas a value '56' corresponds to a UMM category 'e-mail';
UMM_Content_Priority: specifies the priority associated with the UMM content. A value of '34' may represent 'low priority', a value of '60' a 'medium priority', and a value of '99' a 'high priority';
Day: day of the week when the interaction occurred, with a value '0' representing 'Sunday', etc.;
Time_of_Day: the time of the day when the interaction occurred, with a value '1' representing 'morning', '2' representing 'afternoon', etc.;
Decision: the decision taken by the user; and
Frequency: = 1 for instance files.
The user's decision may be:
0 indifferent
11 open small secondary window
12 open medium secondary window
13 open large secondary window
-10 do not interrupt

The second instance file corresponds with BP type 1, which is associated with hard disk drive (HDD) Partial Content Play Back. This type of instance occurs when the user is viewing content from the content servers 43 and the user's viewing is interrupted while the user is reading a UMM message. When the user closes the message window, the user is provided with an option to view the recorded portion. The user typically makes a decision based on the nature of the recorded content from the electronic storage device 26, the duration of the recorded content, as well as the nature of the content presently being watched.
The fields of the second instance file, all of which have numbers as values, are as follows:
DTV_Content_Category;
HDD_Content_Category: category of the recorded electronic storage device content. For example, a value '50' corresponds to an electronic storage device content category 'movie and series', whereas a different value corresponds to an electronic storage device content category 'public affairs';
Duration: specifies the duration of the content recorded. For example, a value of '34' may represent a 'short' recording which is less than 3 minutes, a value of '51' represents a 'medium' recording which is between 3 and 10 minutes, and '92' a 'long' recording which is more than 10 minutes;
Day;
Time_of_Day;
Decision; and
Frequency: = 1 for instance files.
Decision is the user's decision whether to resize a secondary window to a particular size and play back the recorded fragment of the HDD content. In particular, the Decision may be:
0 indifferent
11 open small secondary window
12 open medium secondary window
13 open large secondary window
-10 do not interrupt

These instances are used by the avatar agent 37, and in particular the learning module 39, to learn functional usage patterns and dependencies. From the functional usage patterns and dependencies, the planning module 41 of the avatar agent 37 plans and coordinates activities of the DTV agent 36, the electronic storage device 26 and the UMM 27.
Each instance stored in the viewer instance database 24 is associated with a set of features, each feature representing a unique attribute and attribute value pair. Each attribute has a number of possible values. An example of an attribute-value pair is DTV_Content_Category = 16, where DTV_Content_Category is the attribute and '16' is the attribute value.
For example, a UMM Content Display (BP type 0) instance may be: DTV_Content_Category = 75; UMM_Content_Category = 32; UMM_Content_Priority = 54; Day = 3; Time_of_Day = 2; Decision = 12; Frequency = 1, which may be interpreted as meaning: the user was viewing sport, a fax came in with a medium priority, it was Wednesday afternoon, and the user took the decision to open a medium secondary window. The frequency for an instance is always 1.
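The example instance can be sketched as a small data structure. Only the attribute names and values come from the example above; the dict representation and the to_feature_set helper are illustrative assumptions.

```python
# Sketch (assumed representation): an instance is a dict of attribute/value
# pairs; it can be flattened into a set of features, each feature being a
# unique (attribute, value) pair, for the pattern mining described below.
instance = {
    "DTV_Content_Category": 75,   # sport
    "UMM_Content_Category": 32,   # fax
    "UMM_Content_Priority": 54,   # medium priority
    "Day": 3,
    "Time_of_Day": 2,             # afternoon
    "Decision": 12,               # open medium secondary window
    "Frequency": 1,               # always 1 for an instance
}

def to_feature_set(inst):
    """Frequency is bookkeeping, not a feature of the viewing context."""
    return {(attr, value) for attr, value in inst.items() if attr != "Frequency"}

print(len(to_feature_set(instance)))  # 6
```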
The learning module 39 operates to identify all generalisation patterns from a number of instances. The generalisation patterns represent shared patterns within the instances. The learning module 39 takes the instance file as input and generates a Generalisation Pattern List (GPList) which contains all the generalisation patterns. Each generalisation pattern in the GPList may be represented as follows: ([intersection], occurrence), wherein intersection indicates a pattern that is shared by different instances, and occurrence indicates the number of instances that share such an intersection. For example, from the following instances:
C1 = (f1, f2, f3, f4, f7);
C2 = (f1, f2, f5, f6, f7); and
C3 = (f3, f5, f6, f8)
the GPList generated has three items:
([f1, f2, f7], 2);
([f3], 2); and
([f5, f6], 2)
The first GPList entry above arises from the attribute and attribute value pairs f1, f2 and f7 occurring in both instances C1 and C2. The other entries are derived in a similar manner.
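The GPList construction just described can be sketched in Python. This is a minimal sketch: the pairwise-intersection strategy and the name build_gplist are assumptions about one way to enumerate the shared patterns; the example instances and expected entries come from the text.

```python
from itertools import combinations

# Sketch: intersect every pair of instances (each a set of features) and
# count, for each distinct non-empty intersection, how many instances
# contain it (its "occurrence").
def build_gplist(instances):
    candidates = set()
    for a, b in combinations(instances, 2):
        shared = frozenset(a & b)
        if shared:
            candidates.add(shared)
    return [(sorted(pattern), sum(1 for inst in instances if pattern <= inst))
            for pattern in candidates]

c1 = {"f1", "f2", "f3", "f4", "f7"}
c2 = {"f1", "f2", "f5", "f6", "f7"}
c3 = {"f3", "f5", "f6", "f8"}

for pattern, occurrence in sorted(build_gplist([c1, c2, c3])):
    print(pattern, occurrence)  # three entries, each with occurrence 2
```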
Assume for example that the instance file contains the instances shown in Figs. 9A and 9B. The GPList is then generated by the learning module 39 from the instances.
The learning module 39 additionally selects appropriate ones of the generalisation patterns in the GPList, to identify behaviour patterns. Each GPList entry is examined to determine whether it provides a value in the "Decision" field, together with at least one other specific value. If so, it is regarded as a behaviour pattern and entered as an entry in the BPList. Hence, the BPList encapsulates patterns of user behaviour with respect to one function (encapsulated by the BP type) of the DTV 10: UMM content display or HDD partial content play back.
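The selection rule can be sketched as follows. The tuple-based GPList representation and the name build_bplist are assumptions; the rule itself (an entry qualifies when it fixes a "Decision" value together with at least one other specific attribute) is as described above.

```python
# Sketch: keep only those generalisation patterns that fix a value for the
# "Decision" attribute together with at least one other specific attribute.
def build_bplist(gplist):
    bplist = []
    for pattern, occurrence in gplist:
        attrs = {attr for attr, _ in pattern}
        if "Decision" in attrs and len(attrs) >= 2:
            bplist.append((pattern, occurrence))
    return bplist

gplist = [
    ([("DTV_Content_Category", 75), ("Decision", 11)], 16),  # qualifies
    ([("Decision", 11)], 20),               # Decision alone: not a pattern
    ([("Day", 3), ("Time_of_Day", 2)], 5),  # no Decision: not a pattern
]
print(len(build_bplist(gplist)))  # 1
```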
For the above example, the BPList generated by the learning module 39 would contain the entries shown in Figs. 10A to 10C. Entry 70 indicates that there were 16 instances where the user took the decision "open small secondary window" (Decision=11) while watching sport (DTV_Content_Category=75). The other attribute and attribute value pairs are varying and are therefore represented with a value which may be interpreted as "don't care". The 16 instances are marked 71a-71p in Figs. 9A and 9B.
A function of the planning module 41 is to, based on the context of a decision, find which particular decision is most appropriate given previous user behaviour patterns.
Typically, previous instances match a current state only partially. This causes some uncertainty in determining which particular decision is most appropriate. According to some patterns, it may seem that the user would prefer the decision "open small secondary window", while according to others, a "medium secondary window" decision would be preferred.
Some patterns may indicate that it is better to "ask for confirmation".
Consider the decision space for decisions by the planning module 41 for UMM Content Display (BP type = 0) and HDD Partial Content Play Back (BP type = 1). The decisions correspond with the user decisions. The "indifferent" decision may be interpreted as "ask for confirmation", and is used by the planning module 41 when it is unable to adequately resolve the decision. This is typically the case when there is insufficient pattern in the user decisions in the particular context. Accordingly, the decisions and their decision values may be:
0 indifferent
11 open small secondary window
12 open medium secondary window
13 open large secondary window
-10 do not interrupt

A frequency of previous decisions, stored as part of a pattern in the BPList, is only a relative indication, because of the granularity of the decision space. For example, consider the following frequencies of decisions aggregated across similar situations, where similarity is judged by a number of matched attribute and attribute-value pairs between BPList entries and that of the current context:
indifferent 7
open small secondary window 4
open medium secondary window 6
open large secondary window 2
open small secondary window 2

In this example, the 'indifferent' decision has the highest frequency. However, overall, the user has chosen some window (small, medium, or large) 14 times. This indicates that a window is a better option under the circumstances. The planning module 41 aims to determine that this, indeed, is a better decision, and which window, precisely, is the better decision.
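The aggregation above can be checked with a short sketch; the list representation is an assumption, while the decision values and frequencies are those of the example.

```python
# Sketch: total the example frequencies by decision type, grouping the
# window-related decisions (values 11, 12 and 13) together.
observed = [
    (0, 7),    # indifferent
    (11, 4),   # open small secondary window
    (12, 6),   # open medium secondary window
    (13, 2),   # open large secondary window
    (11, 2),   # open small secondary window, in a second similar situation
]

window_total = sum(freq for value, freq in observed if 10 <= value < 20)
indifferent_total = sum(freq for value, freq in observed if 0 <= value < 10)
print(window_total, indifferent_total)  # 14 7
```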
A rule-based decision-making process requires the decision-making to be broken into rules and a decision tree is hard-coded in the application program. For example, one rule might specify that if a sum of frequencies for all "open window" decisions is greater than the frequency of the "indifferent" decision, then consider only "open window" decisions.
Another rule might specify that, if only "open window" decisions are being considered, then choose a decision with highest frequency amongst the "open window" decisions.
The most obvious drawback of this approach is the need to re-program and re-compile the algorithm for the particular decision tree, if new decision types are introduced.
In the preferred implementation, a numerical discrete domain (NDD) for the decision space is used. This requires that:
decision types (such as window-related decisions) are represented by NDD intervals, with each interval open on the right, and a granularity representing how many digits after a decimal point are significant; and
similar decisions (like window-related decisions) are assigned closer numerical values from the same numerical interval.
With a finite number of decision factors (such as window size, sound level, etc), a finite number of NDD intervals are provided.
Consider for example the case with decision values as follows:
0 indifferent
11 open small secondary window
12 open medium secondary window
13 open large secondary window
-10 do not interrupt
Here three decision types are provided, represented by NDD intervals: an interval containing the value -10 (do not interrupt); [0,10[ (indifferent); and [10,20[ (window-related decisions).
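These intervals can be sketched as half-open ranges. The decision_type helper is an assumption, and since the lower bound of the 'do not interrupt' interval is not given above, -20 is an arbitrary assumed bound chosen so that the decision value -10 falls inside it.

```python
# Sketch of the NDD intervals; each interval is open on the right.
# The -20 lower bound for "do not interrupt" is an assumption.
NDD_INTERVALS = {
    "do not interrupt": (-20, 0),
    "indifferent": (0, 10),
    "window-related": (10, 20),
}

def decision_type(value):
    for name, (lo, hi) in NDD_INTERVALS.items():
        if lo <= value < hi:  # [lo, hi[ includes lo but not hi
            return name
    return None

print(decision_type(11))   # window-related
print(decision_type(0))    # indifferent
print(decision_type(-10))  # do not interrupt
```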
The notation [α,β[ is used to indicate a range of values where α ≤ value < β, i.e. the values include α but not β. All window-related decisions have very close values (11, 12 and 13), and the "indifferent" decision has a distant value of 0.
Fig. 4 shows a schematic flow diagram of a method 500 of determining which particular decision is most appropriate given previous user behaviour patterns and the context of the decision. The method of Fig. 4 may be practiced using the conventional general-purpose computer system 100 (Fig. 8) wherein the process of Fig. 4 is implemented as software, such as an application program executing within the computer system 100. In particular, the steps may be instructions in the software that are carried out by the computer.
The software may be stored in a computer readable medium. The software is loaded into the computer from the computer readable medium, and then executed by the computer. A computer readable medium having such software or computer program recorded on it is a computer program product. Inputs to the method 500 are the user behaviour patterns from the BPList created by the learning module 39, the context of the decision represented by the current attribute and attribute-value pairs, and a predefined NDD.
The method 500 starts in step 502 and receives the inputs in step 504. Step 506 compares the current attribute and attribute-value pairs with those of each of the BPList records, and determines the number of matched attribute and attribute-value pairs for each pattern record.
Step 508 creates, from the given BPList, a list of contextualised records (c-list), where non-matched attribute-values are replaced with a value and the number of matched attribute and attribute-value pairs is added as a weight parameter to each entry. The records of the c-list are arranged in descending order in step 510 according to their respective weight parameters.
Step 512 constructs n clusters of c-list records, where all entries within each cluster have the same weight parameter, hence the same number of matched attribute and attribute-value pairs. Within each cluster, step 514 constructs m sub-clusters of records, where each sub-cluster has the same positive relevant attribute and attribute-value pairs.
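The clustering of steps 508 to 514 can be sketched as follows; this is a simplified illustration in which the record layout (an attribute dictionary plus decision value and frequency) and the name `build_clusters` are assumptions, not the patented implementation:

```python
from collections import defaultdict

def build_clusters(bplist, context):
    """Group BPList records by the number of attribute-value pairs matching
    the current context (the weight of steps 508-512), then within each
    cluster by the matched pairs themselves (the sub-clusters of step 514).
    bplist entries are (attributes dict, decision value, frequency)."""
    clusters = defaultdict(lambda: defaultdict(list))
    for attrs, decision, freq in bplist:
        matched = tuple(sorted((a, v) for a, v in attrs.items()
                               if context.get(a) == v))
        clusters[len(matched)][matched].append((decision, freq))
    # descending order of weight, as in step 510
    return [clusters[w] for w in sorted(clusters, reverse=True)]
```

Records with identical matched pairs land in the same sub-cluster, so each sub-cluster can later contribute one most common decision.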
Fig. 11A shows an example c-list created from a BPList which is not illustrated. The example c-list includes 5 attributes, those being DTV Contents Category, UMM Content Category, UMM Content Priority, Day, and Time of Day. Each entry has its attribute value indicated below the attributes. The decision value corresponding to each entry and the frequency of its occurrence are also indicated. The meaning of the decision values in the example is as follows:
0	indifferent
11	open small secondary window
12	open medium secondary window
13	open large secondary window
The NDD intervals for the example are given as: [0,10[ and [10,20[.
In the example, two clusters were constructed in step 512, with the entries in cluster 1 having a weighting parameter of 4 because the entries in cluster 1 each have 4 matched attribute and attribute-value pairs. Similarly, entries in cluster 2 have a weighting parameter of 3. Cluster 1 has 5 sub-clusters, with each sub-cluster having the same positive relevant attribute and attribute-value pairs. Cluster 2 has only 2 such sub-clusters.
Referring again to Fig. 4, step 516 creates an empty decision list X. List X is simply a temporary data structure for holding most common decisions djk described below. Step 518 sets a variable j equal to 0. This concludes the initialisation.
Step 519 increments variable j. Within cluster j (1 ≤ j ≤ n), step 520 retrieves for each sub-cluster k (1 ≤ k ≤ m) a most common decision djk based on the frequency. Referring again to Fig. 11A, the most common decisions djk based on the frequency within each sub-cluster are also indicated.
Next, all the most common decisions djk within cluster j are averaged in step 522 (Fig. 4). The average of the most common decisions d11, d12, d13, d14 and d15 within cluster 1 is 9.4.
Step 524 then finds which NDD interval out of the contributing intervals is closest in terms of left bounds to the average calculated in step 522. In the example the closest contributing NDD interval in terms of left bounds to the average of 9.4 is the NDD interval [10,20[ because 9.4 is the closest to the left bound of 10. Out of all the most common decisions djk with values in the closest NDD interval, step 526 selects a decision that is most common, based on frequency, among sub-clusters. In the example, most common decisions d12, d13, d14 and d15 are within interval [10,20[, and decision d13 with a frequency of 6 is the most common of those. Most common decision d13 relates to decision 12 (open medium secondary window).
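Steps 520 to 526 can be sketched as follows, assuming one (value, frequency) pair per sub-cluster; the function name and data layout are illustrative assumptions:

```python
def select_in_cluster(decisions, ndd):
    """decisions: one (value, frequency) pair per sub-cluster of a cluster.
    ndd: list of half-open (left, right) interval tuples.
    Sketch of steps 520-526: average the decision values, find the
    contributing interval with the closest left bound, then pick the most
    frequent decision in it.  Returns None on a tie (step 528 then falls
    back to the decision list X)."""
    avg = sum(v for v, _ in decisions) / len(decisions)
    # contributing intervals contain at least one of the decisions
    contributing = [iv for iv in ndd
                    if any(iv[0] <= v < iv[1] for v, _ in decisions)]
    closest = min(contributing, key=lambda iv: abs(iv[0] - avg))
    in_closest = [(v, f) for v, f in decisions
                  if closest[0] <= v < closest[1]]
    best = max(f for _, f in in_closest)
    winners = [v for v, f in in_closest if f == best]
    return winners[0] if len(winners) == 1 else None
```

With the Fig. 11A-style values the average 9.4 selects interval [10,20[ and the most frequent decision there wins.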
However, when the closest NDD interval includes a decision type having a different number of decision factors (for example, when the closest NDD interval includes a decision type having both "window-related" and "volume-related" decisions and a decision type having only "window-related" decisions, as described later), a decision having a larger number of decision factors is selected. In other words, when the closest NDD interval includes decisions having different granularity of decision value, a more granular (i.e. more specific) decision is selected.
Referring again to Fig. 4, as a selection was possible, step 528 directs the method 500 to step 560 where this decision is returned.
Step 528 determines whether such a selection was possible. It may be that more than one decision has the highest frequency, which prevents step 526 from making such a selection.
Fig. 11B shows another example c-list which is similar to that shown in Fig. 11A, but the decision values and frequencies are different.
The average of the most common decisions d11, d12, d13, d14 and d15 within cluster 1 calculated in step 522 is 6.8. The NDD interval out of the contributing intervals that is closest in terms of left bounds to the average of 6.8 is the NDD interval [10,20[ because 6.8 is the closest to the left bound of 10. The most common decisions d11, d12 and d14 are within the interval [10,20[. However, both decisions d11 and d14, with frequencies of 7, are the most common of those, and a selection cannot be made between those two decisions d11 and d14 because their frequencies are tied.
Referring again to Fig. 4, if step 528 determines that a selection was possible, then the method 500 returns the selection in step 560 and ends in step 561. Alternatively, the method 500 continues to step 530 where it is determined whether any of the tied decisions match the decision list X entries more, in terms of the number of occurrences in the list X. If one of the tied decisions does match the decision list X entries more, then that decision is selected and the method 500 returns the selection in step 560. Alternatively, all most common decisions djk from cluster j with their corresponding frequencies are added into the decision list X in step 532. The method 500 returns to step 519 if step 533 determines that all the clusters have not yet been considered.
Returning to the example with reference to Fig. 11B, because decisions d11 and d14 were tied, step 528 determined that it was not possible to make a selection and method 500 continues to step 530 where it is determined whether any of the tied decisions d11 and d14 match the decision list X entries more, in terms of the number of occurrences in the list X.
Because the list X is still empty, neither of the tied decisions d11 and d14 matches the decision list X entries more, and step 532 adds all the most common decisions djk from cluster 1 to the X list, which now contains {11, 11, 0, 12, 0}.
Because more clusters remain, method 500 returns to step 519 where j is incremented to 2. The most common decisions d21 and d22 are determined in step 520 to be 11 and 13, and step 522 determines the average to be 12. The NDD interval out of the contributing intervals that is closest in terms of left bounds to the average of 12 is again the NDD interval [10,20[.
Both the most common decisions d21 and d22 are within the interval [10,20[, but again they are tied. Because decisions d21 and d22 are tied, step 528 determines that it is not possible to make a selection and step 530 determines whether any of the tied decisions d21 and d22 match the decision list X entries, which contain {11, 11, 0, 12, 0}, more. Decision d21, which relates to decision value 11, appears twice in the list X, whereas decision d22, which relates to decision value 13, does not appear in the list X. Decision 11 is therefore selected.
If, after processing all the clusters, a selection has still not been made, then step 534 averages all the decisions on the list X. Step 536 then finds which NDD interval out of the contributing intervals is closest in terms of left bounds to the average calculated in step 534.
Out of the decisions in the list X with values in the closest NDD interval, step 538 selects a decision that is most common, based on frequency. Step 540 determines whether such a selection was possible, and if so, then the method 500 returns the selection in step 560.
Alternatively, a decision 0 (indifferent) is returned in step 550.
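The tie-breaking of steps 530 to 532 against the decision list X can be sketched as follows; `break_tie` is an illustrative name, not from the specification:

```python
def break_tie(tied, x_list):
    """Sketch of steps 530-532: among tied decisions, pick the one
    occurring more often in the decision list X; return None when X
    cannot separate them (the tied decisions are then added to X)."""
    counts = {d: x_list.count(d) for d in tied}
    best = max(counts.values())
    winners = [d for d in tied if counts[d] == best]
    return winners[0] if len(winners) == 1 else None
```

In the worked example above, X = {11, 11, 0, 12, 0} separates the tied decisions 11 and 13 in favour of 11.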
Consider another example having both "window-related" and "volume-related" decisions. The decision values are assigned as follows:
0	indifferent
11	open small secondary window
11.1	open small secondary window and reduce volume
11.2	open small secondary window and mute volume
12	open medium secondary window
12.1	open medium secondary window and reduce volume
12.2	open medium secondary window and mute volume
13	open large secondary window
13.1	open large secondary window and reduce volume
13.2	open large secondary window and mute volume
21	open full-screen window
21.1	open full-screen window and reduce volume
21.2	open full-screen window and mute volume
-10	do not interrupt
From the above it can be seen that secondary window decision types are considered similar, and decisions for the same sized secondary window with varying volume are assigned even closer numerical values as they are considered very similar.
The NDD intervals are given as: [0,10[; [11,12[; [12,13[; [13,14[; and [21,22[.
Also consider the following most common decisions djk from their respective sub-clusters and within the same cluster, their interpretation and frequency:
0	indifferent	7 (sub-cluster 1)
11.1	open small secondary window and reduce volume	4 (sub-cluster 2)
12	open medium secondary window	6 (sub-cluster 3)
13.2	open large secondary window and mute volume	2 (sub-cluster 4)
21	open full-screen window	2 (sub-cluster 5)
Starting again in step 522, all the most common decisions djk within this cluster are averaged to provide an average value of 11.46. Step 524 then finds which NDD interval out of the contributing intervals is closest in terms of left bounds to the average value of 11.46. The closest contributing interval is interval [11,12[. Step 526 then selects the decision that is most common out of those within interval [11,12[. There is only one decision in this interval, namely decision 11.1 (open small secondary window and reduce volume). As a selection was possible, step 528 directs the method 500 to step 560 where this decision is returned.
Further consider the following most common decisions from their respective sub-clusters and within the same cluster, their interpretation and frequency:
0	ask for confirmation	7 (sub-cluster 1)
11	open small secondary window	6 (sub-cluster 2)
11.1	open small secondary window and reduce volume	4 (sub-cluster 3)
13.2	open large secondary window and mute volume	2 (sub-cluster 4)
21	open full-screen window	2 (sub-cluster 5)
Starting again in step 522, all the most common decisions djk within this cluster are averaged to provide an average value of 11.26. Step 524 then finds which NDD interval out of the contributing intervals is closest in terms of left bounds to the average value of 11.26. The closest contributing interval is interval [11,12[. Step 526 then selects the decision that is most common out of those within interval [11,12[. There are two decisions in this interval, namely decision 11 (open small secondary window) and decision 11.1 (open small secondary window and reduce volume), and this results in the "open small secondary window and reduce volume" decision. Note that this decision was preferred to the "open small secondary window" decision despite being less frequent. It was, however, more specific.
As a selection was possible, step 528 directs the method 500 to step 560 where this decision is returned.
Operations within the planning module 41 are further described by way of an example. A user watches a soccer game on the DTV. It is Tuesday afternoon. A high-priority e-mail message addressed to the user arrives, setting the following variables: UMM.Status=NewMessage; UMM_Content_Category=56 (e-mail); and UMM_Content_Priority=99 (high priority).
The UMM 27 becomes aware of the arrival of this new message by periodically (every k minutes) checking the variable UMM.Status. Because the variable UMM.Status now has the value 'NewMessage', the UMM 27 notifies the avatar-agent 37 of the corresponding user of the values of the variables UMM_Content_Category and UMM_Content_Priority.
The planning module 41 now determines, by using method 500 described above, which particular decision is most appropriate given previous user behaviour patterns and the context of the decision. The module 41 retrieves the BPList generated by the learning module 39 from the instance file corresponding with BP type 0, the NDD which is predetermined, and requests context information from the DTV agent 36. It then waits for a response. The DTV agent 36 responds with the following variables: DTV.Status=ON; (sport); Day=2 (Tuesday); and Time of Day=2 (Afternoon).
If the decision is not to interrupt the user, no further action is taken. If the decision is 'indifferent', then the avatar agent 37 requests that DTV agent 36 obtains user clarification.
It then waits for a user response. When the DTV agent 36 receives user confirmation, it notifies the avatar agent 37 about the user decision.
If the decision is one of the secondary window type decisions, the planning module 41 requests that the DTV agent 36 arranges a secondary window to display the UMM Content. This is a direct action. It also requests the electronic storage device 26 to start recording the current DTV content, i.e. the soccer game. This is a ramification.
The DTV agent 36 resizes the soccer game window and opens an e-mail window.
When the avatar agent 37 receives user confirmation from the DTV agent 36, a new instance is created. Alternatively, if the action is taken based on the decision by the planning module 41, i.e. opening a secondary window automatically, and this decision is not overridden by the user by closing the window immediately, a new instance is created with this decision. The learning module 39 is notified to learn a new GPList and to create a new BPList from the instances, which now include this newly created instance.
After a while, the user finishes reading the e-mail message and selects a close e-mail option. The DTV agent 36 clears the e-mail window, and notifies the avatar agent 37. The avatar agent 37 notifies the electronic storage device 26 to stop recording.
The planning module 41 continues by determining, again by using method 500 described above, which particular decision for displaying the recorded fragment is most appropriate given previous user behaviour patterns and the context of the decision. It retrieves the BPList generated by the learning module 39 from the instance file corresponding with BP type 1, the NDD which is predetermined, and requests context information from the DTV agent 36. For the particular example, the context is as follows: Duration=34 (short recording which is 3 minutes); Day=2; and Time of Day=2.
If the decision is not to interrupt the user, no further action is taken. If the decision is 'indifferent', then the avatar agent 37 requests that DTV agent 36 obtains user clarification.
It then waits for a user response. When the DTV agent 36 receives user confirmation, it notifies the avatar agent 37 about the user decision.
If the decision is one of the secondary window type decisions, the planning module 41 requests that DTV agent 36 arranges a secondary window to display the UMM Content.
The DTV agent 36 resizes the current DTV content, which is still the soccer game in the example, and opens a secondary window in which the recorded fragment is displayed.
When the avatar agent 37 receives user confirmation from the DTV agent 36, a new instance is created. Alternatively, if the action is taken based on the decision by the planning module 41 and this decision is not overridden by the user by closing the window immediately, a new instance is created with this decision. The learning module 39 is notified to learn a new GPList and to create a new BPList.
The different tasks performed in the system 50 as described in the above example may be categorised as follows:
• periodic: performed repeatedly at specific intervals, in other words, rescheduled upon execution for subsequent re-execution according to its period, such as the task performed by the UMM 27 for checking the variable UMM.Status every k minutes;
• triggered: performed (sometimes repeatedly) in response to external events, such as the task performed by the avatar agent 37 when receiving a notification from the UMM 27 that a message has arrived. This notification also includes values for the variables UMM_Content_Category and UMM_Content_Priority; and
• knowledge-producing: performed for sensing or information gathering, such as the task performed by the avatar agent 37 when receiving a message from the DTV agent 36 which includes information on the current context. This task produces the most appropriate decision given previous user behaviour patterns and the context of the decision.
These categories are not mutually exclusive. For example, a knowledge-producing task may be triggered in response to external events.
In a preferred implementation, these tasks are implemented by means of a task network. Every task in the task network can flexibly control the flow of information by either triggering other (sub-) tasks, messaging to them, or both. The control flow in a contextual task network is derived by a (sub-) plan generated by each task in response to a particular state, contextual user behaviour patterns and functional dependencies. The control flow may change dynamically if user behaviour patterns are updated by the learning module 39.
Referring to Fig. 5, a task Tk is represented by its id ik, type of invocation (trigger or message) tk, a conditional (sub-) plan pk, and a map of (sub-) tasks mk. When the task Tk is invoked by a trigger, it reads and interprets incoming data, activates the (sub-) plan pk, and executes it. When the task Tk is invoked with a message, it just reads and interprets the incoming data. The (sub-) plan pk is represented as a vector of plan nodes pkj. Activation of the (sub-) plan pk results in construction of a sequence of plan nodes pkj, where each node pkj is mapped to other (sub-) tasks ij, represented by the map mkj. In other words, each plan node determines which (sub-) task should be invoked at the moment and how (by a trigger or with a message).
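As a sketch, the task representation of Fig. 5 might be modelled as follows; the class layout and all names are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Task:
    """Illustrative model of a task Tk: an id, an invocation type, a
    conditional (sub-) plan, and a map from plan nodes to sub-tasks."""
    task_id: str
    invocation: str                       # "TRIGGER" or "MESSAGE"
    plan: Callable[[dict], List[str]]     # state -> sequence of plan nodes
    task_map: Dict[str, Tuple[str, str]]  # plan node -> (sub-task id, how)

    def invoke(self, state, by="TRIGGER"):
        """A trigger activates and executes the plan; a message only
        delivers data, per the description of Fig. 5."""
        if by == "MESSAGE":
            return []  # data read and interpreted, no plan activation
        return [self.task_map[node] for node in self.plan(state)]
```

The conditional plan lets the control flow change at run-time as the user behaviour patterns are updated.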
Fig. 6 is a diagram exemplifying this process across a task network 600. In this example, the task T5 has a conditional (sub-) plan with control flow to either (sub-) task T6 or (sub-) task T8, and information flow to (sub-) task T7. Sometimes, in order to trigger or pass information to a (sub-) task Ti, a plan node needs to determine contextual user preferences, such as described with reference to Fig. 4. For example, task T9 could be a (sub-) task responsible for such contextualisation. Before task T6 can trigger the (sub-) task T8, the latest user behaviour pattern should be analysed, and therefore, the (sub-) task T9 is triggered first.
Let it be assumed that task T7 uses the learning algorithm performed in the learning module 39 and updates the user behaviour pattern BPList when a message i7 is received from task T5. This user behaviour pattern dynamically changes and is different the next time tasks T9 and T8 are triggered. Consequently, a message from T9 to T8 may dynamically change, given new context.
Typically, a particular scenario may develop when a subset of contextual task network links is activated. Given a current state of the DTV System 50 and current user behaviour patterns, a trajectory is planned by the planning module 41, and a scenario develops during run-time. In other words, all potential combinations of task flows in a network cover a variety of possible scenarios.
However, any given task network, whether contextual or not, specifies all possible trajectories in advance. Therefore, there is a limit on the number of potential scenarios supported by a given task network. Such a number may be sufficiently significant given the combinatorial nature of task networks, but nevertheless, it limits the flexibility of such contextual task networks.
The flexibility of such contextual task networks may be enhanced by the introduction of a scenario-independent template for task networks (SIT-TN). This feature allows a system designer to easily set up new scenarios, without affecting other modules and/or tasks.
As was noted above, a task network specifies all possible trajectories in advance, limiting the number of potential scenarios. Therefore, if a new scenario needs to be introduced, the task network must be updated, re-programmed and re-compiled.
In a preferred implementation, the SIT-TN is structured as follows:

Task Name	Task Link	Condition	Link Type	Arguments

Here Task Name and Task Link correspond to task ids, the former being the current task, the latter being an invokable sub-task; Condition is a Boolean expression capturing a pre-requisite for the invocation; Link Type is the type of the invocation (either TRIGGER or MESSAGE) between the current task and the sub-task; and Arguments is a list of parameters passed to the sub-task.
Consider for example a task with task name OnHDDResponse performed by the planning module 41. This task is triggered when the storage device 26 replies with the value of HDD_Content_Category in response to an earlier request from the planning module 41 for this information. Code for such a task may be as follows:

If response > 0 then
    subject = HDDPartialContentPlayback
    clear(subject_list)
    add(response, subject_list)
    NOTIFY(DTV, OnContentQuery, source=avatar)
Else if response = 0 then
    Monitoring = true
    Add(epg_id, record_list)

Hence, the task OnHDDResponse is triggered by the reply that includes the variable response, which is the HDD_Content_Category. If a valid HDD_Content_Category was received, the value is positive. The variable subject is set according to the BP type. As the BP type is 1 in the case of fragments of content stored on the storage device 26, the variable subject is given the value HDDPartialContentPlayback. In the three code lines that follow, a subject_list is prepared which contains all the variables of the current context, which are to be used by another task to determine an appropriate decision. Finally, a notification is sent to a task OnContentQuery in the DTV agent 36, acting as a trigger for that task and passing the value of the variable source to that task.
By setting the variables subject and subject_list, information is made available for the task, named OnDTVContentResponse, that determines the most appropriate decision. In the task network environment, this is a message.
In the alternative, when the response received is equal to 0, then the HDD_Content_Category is not valid and the task proceeds to monitor the user behaviour patterns.
The same task may be written in a SIT-TN as follows:

Task Name	Task Link	Condition	Link Type	Arguments
OnHDDResponse	DTV::OnContentQuery	response>0	TRIGGER	source=avatar
OnHDDResponse	OnDTVContentResponse	response>0	MESSAGE	subject=HDDPartialContentPlayback; subject_list=(response, duration)
OnHDDResponse	MatchEPGUserProfile	response=0	MESSAGE	monitoring=true
OnHDDResponse	OnRecordedRecommendationRequest	response=0	MESSAGE	recorded_epg_id=epg_id

Which may be interpreted as follows: for the task OnHDDResponse, when a condition response>0 is received, a trigger is sent to the task OnContentQuery in the DTV agent 36, passing the argument source=avatar. Also, a message is sent to the task OnDTVContentResponse, passing values for the variables subject and subject_list.
However, if a condition response=0 is received, a message is sent to the task MatchEPGUserProfile containing the value of the variable monitoring, and a message is sent to the task OnRecordedRecommendationRequest which contains a value for recorded_epg_id.
There are, in general, a few records with the same Task Name in a SIT-TN representation, where each individual record specifies a particular link, corresponding to a plan node. The example task OnHDDResponse has 4 links.
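A minimal sketch of how such SIT-TN records could be stored and dispatched follows; the tabular data mirrors the example above, while the callable conditions and the `dispatch` helper are illustrative assumptions:

```python
# One SIT-TN record per link (plan node): (task name, task link,
# condition, link type, arguments).  Conditions are modelled as callables
# over the response value for simplicity.
SIT_TN = [
    ("OnHDDResponse", "DTV::OnContentQuery", lambda r: r > 0, "TRIGGER",
     {"source": "avatar"}),
    ("OnHDDResponse", "OnDTVContentResponse", lambda r: r > 0, "MESSAGE",
     {"subject": "HDDPartialContentPlayback"}),
    ("OnHDDResponse", "MatchEPGUserProfile", lambda r: r == 0, "MESSAGE",
     {"monitoring": True}),
    ("OnHDDResponse", "OnRecordedRecommendationRequest", lambda r: r == 0,
     "MESSAGE", {"recorded_epg_id": "epg_id"}),
]

def dispatch(task_name, response, table):
    """Return the (link, type, arguments) triples whose condition holds.
    New scenarios are added by editing the table, not the code."""
    return [(link, kind, args)
            for name, link, cond, kind, args in table
            if name == task_name and cond(response)]
```

Because the links live in data rather than compiled code, a new scenario only requires new table rows.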
The operation of the learning module 39 will now be described in more detail. The processes performed by the learning module 39 described in relation to Figs. 7A to 7F may be practiced using the conventional general-purpose computer system 100 (Fig. 8) wherein the processes are implemented as software, such as an application program executing within the computer system 100.
A procedure LEARN for performing the core function in the learning module 39 is shown in Fig. 7A. Four data structures are used in the procedure, namely: the Instance-file, which contains a number of instances; the GPList, which keeps all generalisation patterns; the BPList, which keeps all behaviour patterns; and an Examined-instance-list, which contains instances that are already processed. Initially, upon initiation of the procedure LEARN, the GPList, BPList and the Examined-instance-list are empty.
The procedure LEARN starts in step 200 by first obtaining a list of new instances in step 201 based on a specific BP type, and the Instance-file for that BP type from the viewer instance 24 database in step 202. Step 203 updates the Instance-file by adding the new instances.
Intersections for each instance of the Instance-file with the current GPList are obtained in the remaining steps 204 to 211. A first entry from the Instance-file is taken in step 204 and used as an input instance when calling sub-routine GEN-INSTANCE-GPLIST in step 205. Sub-routine GEN-INSTANCE-GPLIST finds the generalization patterns between the input-instance and items in the GPList and starts at step 220 in Fig. 7B. Sub-routine GEN-INSTANCE-EXAMINED-INSTANCES is called in step 206 for finding generalization patterns between the input-instance and the instances in the Examined-instance-list. Sub-routine GEN-INSTANCE-EXAMINED-INSTANCES starts at step 310 in Fig. 7E. In step 207, the input-instance is added to the Examined-instance-list. Instances (Cj) are therefore progressively moved from the Instance-file to the Examined-instance-list until the Instance-file is empty. Step 208 determines whether the Instance-file has any more entries. If there are any remaining entries in the Instance-file, the procedure LEARN proceeds to step 209 where the next entry from the Instance-file is used as input-instance and the procedure LEARN continues to step 205.
After step 208 determines that the Instance-file is empty, step 210 filters the generalization patterns in the GPList to create a subset. Each GPList entry is examined to determine whether it provides a value in the "Decision" field, together with at least one other specific value. If so, it is regarded as a behaviour pattern and entered as an entry in the BPList. The BPList contains the behaviour patterns and is produced as an output in step 211.
Procedure LEARN ends in step 212.
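The filtering performed in step 210 can be sketched as follows, assuming each GPList entry is an (intersection, occurrence) pair; this representation is an assumption for illustration:

```python
def filter_bplist(gplist):
    """Sketch of step 210: keep only generalization patterns whose
    intersection specifies a "Decision" value together with at least one
    other specific value; those patterns form the BPList."""
    bplist = []
    for intersection, occurrence in gplist:
        others = [k for k in intersection if k != "Decision"]
        if "Decision" in intersection and others:
            bplist.append((intersection, occurrence))
    return bplist
```

A pattern with only a Decision, or only context attributes, says nothing about which decision fits which context, so it is filtered out.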
Referring to Fig. 7B, the sub-routine GEN-INSTANCE-GPLIST finds all generalization patterns between the input-instance and all items in the GPList. It further updates the GPList with all new generalization patterns between the input-instance and the current GPList.
GEN-INSTANCE-GPLIST starts in step 220 and receives the input instance and the current GPList as inputs in step 221. It is determined in step 222 whether the current GPList is still empty. If the current GPList is empty, then no generalization patterns between the input-instance and the GPList can exist and the sub-routine returns in step 236.
With items in the GPList, the sub-routine continues to step 223 wherein all generalization patterns between the input-instance and all items in the GPList are found by calling sub-routine G_List_GEN. The generalization patterns are put in a G_List, which contains potential generalization patterns between the input-instance and the GPList. The sub-routine G_List_GEN starts at step 240 in Fig. 7C.
It is determined in step 224 whether the G_List is empty. If the G_List is empty, then no generalization patterns between the input-instance and the GPList were found by sub-routine G_List_GEN and the sub-routine GEN-INSTANCE-GPLIST returns in step 236.
With items in the G_List, the sub-routine GEN-INSTANCE-GPLIST continues to step 225 where sub-routine UG_List_GEN is called. Sub-routine UG_List_GEN starts at step 260 in Fig. 7D, and forms a unique generalisation pattern list, UG_List, from the G_List.
A first item from the UG_List, UG_Item, is retrieved and the intersection, First_Intersection, from the UG_Item is extracted in step 226. A first item from the GPList, GPList_Item, is retrieved and the intersection, Second_Intersection, from the GPList_Item is extracted in step 227.
Step 228 determines whether First_Intersection and Second_Intersection match. If step 228 finds that First_Intersection and Second_Intersection match, the occurrence of GPList_Item is made equal to that of UG_Item in step 229. The sub-routine continues to step 233.
If step 228 finds that First_Intersection and Second_Intersection do not match, then step 230 determines whether all items of the GPList have been considered. If items remain, the next item in the GPList is retrieved with its intersection as Second_Intersection in step 231, before step 228 again determines whether First_Intersection and Second_Intersection match. If step 230 determined that all items in the GPList have been considered, then the UG_Item is added to the GPList in step 232 and the sub-routine continues to step 233.
If step 233 determines that all items in the UG_List have not been considered, then the next item in the UG_List is retrieved with its intersection as First_Intersection in step 234, followed by step 227. Alternatively, if step 233 determines that all items in the UG_List have been considered, the sub-routine GEN-INSTANCE-GPLIST outputs the GPList in step 235 and returns in step 236.
Referring to Fig. 7C, sub-routine G_List_GEN is described, which determines all generalisation patterns between the input-instance and all items in the GPList. Starting at step 240, it receives as inputs the input-instance and the GPList. The inputs are obtained in step 241. A first generalisation pattern, GPList_Item, is obtained from the GPList in step 242, a first feature from the input-instance is retrieved in step 243 and a first feature from the intersection part of the GPList_Item is retrieved in step 244. Step 245 determines whether the retrieved feature from the input-instance is the same as the retrieved feature from the GPList_Item. If no match is found in step 245, step 246 determines whether all features from the GPList_Item have been dealt with. With more features in the GPList_Item remaining, step 255 retrieves a next feature from the GPList_Item and continues to step 245. If all the features in the GPList_Item were considered, the sub-routine continues to step 247.
If step 245 responds in the affirmative, step 252 determines whether a new generalization pattern has been created and creates one in step 253 if required, or if it was already done, proceeds to step 254, where the shared feature is added to the new generalization pattern. The sub-routine continues to step 247 where it is determined whether all features from the input-instance have been dealt with. If there are remaining features in the input-instance, step 256 retrieves the next feature from the input-instance.
If step 247 determined that all the features from the input-instance were considered, step 248 determines whether the new generalisation pattern created in step 253 already exists in the GPList. If the generalisation pattern already existed in the intersection parts of the GPList, the sub-routine continues to step 249 where the occurrence of the new generalisation pattern is given the value of the occurrence of the GPList_Item that has the same intersection plus 1. In step 250 the new generalisation pattern is added to the G_List.
If step 248 determined that the new generalisation pattern does not already exist in the GPList, then the sub-routine continues to step 251.
Step 251 determines whether all items from the GPList have been considered. If an item remains in the GPList, the sub-routine continues to step 257, where the next item in the GPList is retrieved, and step 243 is executed. Alternatively, with all items in the GPList considered, the sub-routine G_List_GEN returns in step 259 after producing the new G_List as output in step 258.
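Expressed procedurally, one reading of the G_List_GEN flowchart is sketched below in Python. The representation of a generalisation pattern as a dictionary with an `intersection` feature list and an `occurrence` count, and the function and variable names, are assumptions made for illustration only; the specification defines the routine solely through the steps of Fig. 7C.

```python
def g_list_gen(input_instance, gp_list):
    """Determine the generalisation patterns between the input-instance
    and every item in the GPList, producing the G_List.

    For each GPList item, the features shared by the input-instance and
    the item's intersection part form a candidate pattern (steps 243-247).
    If that intersection already appears in the GPList, the candidate is
    added to the G_List with an occurrence equal to the matching item's
    occurrence plus 1 (steps 248-250)."""
    g_list = []
    for item in gp_list:
        # Steps 243-247: collect features shared with this item's
        # intersection part.
        shared = [f for f in input_instance if f in item["intersection"]]
        if not shared:
            continue  # no new generalisation pattern was created
        # Step 249: look for a GPList item with the same intersection.
        match = next((i for i in gp_list if i["intersection"] == shared),
                     None)
        if match is not None:
            g_list.append({"intersection": shared,
                           "occurrence": match["occurrence"] + 1})
    return g_list
```

For example, with an input-instance `["a", "c"]` and a GPList holding a pattern over `["a"]` with occurrence 2, the routine emits the same intersection with occurrence 3.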
Referring to Fig. 7D, a sub-routine UG_List_GEN is shown which forms a unique generalisation pattern list, UG_List. Starting in step 260, the sub-routine receives in step 261 the G_List as input. In step 262 the first generalisation pattern is copied from the G_List into the UG_List. A first item from the G_List, G_List_Item, is retrieved and the intersection, First_Intersection, from the G_List_Item is retrieved in step 263. A first item from the UG_List, UG_List_Item, is retrieved and the intersection, Second_Intersection, from the UG_List_Item is retrieved in step 264.
Step 265 determines whether First_Intersection and Second_Intersection match. If First_Intersection and Second_Intersection match, the higher occurrence of the two items, G_List_Item and UG_List_Item, is determined and saved as the occurrence of UG_List_Item in steps 269 and 270. The sub-routine continues to step 268.
If First_Intersection and Second_Intersection do not match, step 266 determines whether all items of the UG_List have been considered. If items remain, the next item in the UG_List is retrieved with its intersection as Second_Intersection in step 271, before step 265 again determines whether First_Intersection and Second_Intersection match. If step 266 determined that all items in the UG_List have been considered, then the G_List_Item is added to the UG_List in step 267 and the sub-routine continues to step 268.
If step 268 determines that all items in the G_List have not been considered, then the next item in the G_List is retrieved with its intersection as First_Intersection in step 263, followed by step 264. Alternatively, if step 268 determines that all items in the G_List have been considered, the sub-routine UG_List_GEN outputs the UG_List in step 273 and returns in step 274.
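The duplicate-merging behaviour of UG_List_GEN can be sketched as follows. This is an illustrative Python rendering only; the dictionary pattern representation (`intersection` and `occurrence` fields) and the use of a for/else loop in place of the explicit initial copy of step 262 are assumptions, not part of the specification.

```python
def ug_list_gen(g_list):
    """Form the UG_List: one entry per distinct intersection in the
    G_List, keeping the higher occurrence when two items share the
    same intersection (Fig. 7D)."""
    ug_list = []
    for g_item in g_list:
        for ug_item in ug_list:
            if ug_item["intersection"] == g_item["intersection"]:
                # Steps 269-270: save the higher of the two occurrences
                # as the occurrence of the UG_List item.
                ug_item["occurrence"] = max(ug_item["occurrence"],
                                            g_item["occurrence"])
                break
        else:
            # Step 267: no item with a matching intersection was found,
            # so the G_List item itself is added to the UG_List.
            ug_list.append(dict(g_item))
    return ug_list
```

Copying each appended item (`dict(g_item)`) keeps the input G_List unmodified, a detail the flowchart leaves open.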
Sub-routine GEN-INSTANCE-EXAMINED-INSTANCES starts in step 310 in Fig. 7E. This sub-routine finds generalisation patterns between the input-instance from the Instance-file and the instances in the Examined-instance-list, which it receives as inputs in step 311. Step 312 determines whether the Examined-instance-list is empty and returns in step 313 if this is affirmative. If the Examined-instance-list has items, the sub-routine continues to step 314, where the first instance from the Examined-instance-list is retrieved. Step 315 calls sub-routine GET-GEN-PATTERN to calculate a generalisation pattern, Gen-pattern, between the input-instance and the instance from the Examined-instance-list. Sub-routine GET-GEN-PATTERN starts at step 330 in Fig. 7F.
Step 316 determines whether a Gen-pattern has been found. If a Gen-pattern has been found, the sub-routine determines in step 317 whether the Gen-pattern matches any item in the GPList. If the Gen-pattern does not match any item in the GPList, then step 318 adds the Gen-pattern to the GPList as a new item and continues to step 319. If step 317 finds that the Gen-pattern matches an item in the GPList, then the sub-routine continues to step 319, where it is determined whether all instances in the Examined-instance-list have been considered.
If step 319 determines that instances in the Examined-instance-list remain to be considered, then step 320 retrieves the next instance from the Examined-instance-list and continues to step 315. Alternatively, step 321 outputs the GPList and the sub-routine GEN-INSTANCE-EXAMINED-INSTANCES returns in step 322.
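The overall effect of GEN-INSTANCE-EXAMINED-INSTANCES can be sketched in Python as follows. The inline feature comparison stands in for the GET-GEN-PATTERN call of step 315, and the dictionary pattern representation is an assumption for illustration.

```python
def gen_instance_examined_instances(input_instance, examined_instances):
    """Build the GPList: the distinct generalisation patterns between
    the input-instance and each instance in the Examined-instance-list
    (Fig. 7E, steps 312-321)."""
    gp_list = []
    for instance in examined_instances:
        # In place of GET-GEN-PATTERN (step 315): the features shared
        # by the two instances form the intersection of the pattern.
        shared = [f for f in input_instance if f in instance]
        if not shared:
            continue  # step 316: an empty Gen-pattern is not recorded
        pattern = {"intersection": shared, "occurrence": 2}
        # Step 317: only add the Gen-pattern if it does not match an
        # item already in the GPList.
        if pattern not in gp_list:
            gp_list.append(pattern)
    return gp_list
```

For example, two examined instances that each share only feature `"a"` with the input-instance contribute a single GPList entry, since the second pattern matches the first.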
Referring to Fig. 7F, sub-routine GET-GEN-PATTERN, for identifying a generalisation pattern, Gen-pattern, between an input-instance from the Instance-file and one instance from the Examined-instance-list, is shown. The sub-routine starts in step 330 by obtaining the input-instance and an instance from the Examined-instance-list as inputs in step 331.
It takes these two instances as input and compares their features. If some features are shared by the two instances, these shared features are included in the intersection part of the Gen-pattern, and the occurrence of the Gen-pattern is set to 2. Otherwise, if no features are shared between the instances, an empty Gen-pattern is produced as output.
Step 332 retrieves the first feature from the input-instance, named First-feature, followed by step 333, where the first feature from the instance from the Examined-instance-list, named Second-feature, is retrieved. Step 334 determines whether First-feature is the same as Second-feature. If they are the same, then step 335 keeps this feature in the intersection part of the Gen-pattern and proceeds to step 338. If step 334 determined that the features were not the same, then step 336 determines whether all the features of the instance from the Examined-instance-list have been considered. If this is affirmative, then the sub-routine continues to step 338. If not, then step 337 retrieves the next feature of the instance from the Examined-instance-list, names it Second-feature and continues to step 334. Step 338 determines whether all features of the input-instance have been considered. If this is affirmative, then the sub-routine continues to step 340. However, if step 338 determined that all the features of the input-instance have not been considered, then step 339 retrieves the next feature of the input-instance, names it First-feature and continues to step 333.
Step 340 determines whether the Gen-pattern is empty. If the Gen-pattern is not empty, then the occurrence of the Gen-pattern is set to 2 and the sub-routine GET-GEN-PATTERN returns in step 343 after producing the Gen-pattern as output in step 342. If step 340 determines that the Gen-pattern is empty, the sub-routine GET-GEN-PATTERN also returns in step 343.
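The pairwise comparison of GET-GEN-PATTERN reduces to an intersection over the features of the two instances. A minimal Python sketch, with the dictionary representation of a Gen-pattern assumed for illustration:

```python
def get_gen_pattern(input_instance, examined_instance):
    """Return the generalisation pattern between two instances: the
    features they share, with an occurrence count of 2 (Fig. 7F), or
    None where no features are shared (an empty Gen-pattern)."""
    # Steps 332-339: features present in both instances form the
    # intersection part of the Gen-pattern.
    shared = [f for f in input_instance if f in examined_instance]
    if not shared:
        return None  # step 340: the Gen-pattern is empty
    # Steps 341-342: a non-empty Gen-pattern is output with occurrence 2.
    return {"intersection": shared, "occurrence": 2}
```

For instance, comparing `["a", "b", "c"]` with `["b", "c", "d"]` yields the intersection `["b", "c"]` with occurrence 2, while wholly disjoint instances yield an empty result.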
The methods of Fig. 4 and Figs 7A to 7F may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of Fig. 4 and Figs 7A to 7F. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiment being illustrative and not restrictive.

Claims (29)

1. A method of determining a decision within a specified context, said specified context being defined by a first set of attributes, said method comprising the steps of: receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision and frequency of occurrence; comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; forming groups of said user behaviour patterns based upon said number of matched attributes; and performing a decision selecting process for the group having the highest number of said matched attributes to thereby select the decision which is most appropriate to said specified context from said user behaviour patterns.
2. A method according to claim 1, further comprising the steps of: forming sub-groups of said user behaviour patterns for each of said groups, each of said sub-groups comprising unique attributes; and wherein said decision selecting process selects the decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups for said group having the highest number of said matched attributes.
3. A method according to claim 2, wherein each of said user decisions is assigned a decision value and said decision selecting process includes a calculating process for calculating an average of said decision values of said behaviour patterns having the highest frequency of occurrence in each said sub-groups and selects the decision from said behaviour patterns that is close to said average.
4. A method according to claim 3, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a decision from said sub-group that is within said determined decision interval, said selected decision having a highest frequency of occurrence.
5. A method according to claim 3, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a most specific decision from said sub-group that is within said determined decision interval.

6. A method according to claim 5, wherein said decision selecting process selects the decision having a highest frequency of occurrence in the case that said decision selecting process is unable to select a most specific decision.
7. A method according to claim 3, wherein said decision selecting process further comprising step of: if said decision selecting process is unable to select a unique decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups, then selecting the decision having a higher number of corresponding entries in a decision list.
8. A method according to claim 7, wherein said decision selecting process further comprising steps of: if said decision selecting process is unable to select a unique decision having a higher number of corresponding entries in said decision list, then adding all entries of said sub-groups associated with the highest number of matched attributes to said decision list; and repeating said decision selecting process with the sub-group associated with the group with the next highest number of matched attributes.
9. A method according to claim 8, wherein said decision selecting process further comprising steps of: if said decision selecting process is unable to select a unique decision after performing said repeating step, then calculating an average of said decision values of behaviour patterns in said decision list; determining which decision interval from said predefined group of decision intervals is the closest to said average calculated for said decision list; and selecting the decision from said decision list that is within said decision interval and having a highest rate of occurrence.

10. A method of determining a decision within a specified context, said specified context being defined by a first set of attributes, said method comprising the steps of: receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision having an assigned decision value and frequency of occurrence; comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; forming groups of said user behaviour patterns, each of said groups having a same number of said matched attributes; for each of said groups, forming sub-groups of said behaviour patterns, with each of said sub-groups comprising unique matched attributes; for the group having a highest number of said matched attributes, calculating an average of said decision values corresponding to those user behaviour patterns having a highest frequency of occurrence in said corresponding sub-groups; determining a decision interval from a predefined group of decision intervals that is closest to said average; and selecting a decision from said sub-group that is within said determined decision interval, said decision being selected having a highest frequency of occurrence.

11.
A method as claimed in claim 10, further comprising the step of: if step is unable to select a unique decision, then selecting one of the decisions selected in step that has a higher number of corresponding entries in a decision list.
12. A method as claimed in claim 11, further comprising the steps of: if step is unable to select a unique decision, then adding all entries of said sub-group associated with the group with the highest number of matched attributes to said decision list; and j) repeating steps to with the sub-group associated with the group with the next highest number of matched attributes.
13. A method as claimed in claim 12 further comprising the steps of: wherein, if step is unable to select a unique decision, then calculating an average of said decision values of behaviour patterns in said decision list; determining which decision interval from said predefined group of decision intervals is the closest to said average calculated in step and selecting a decision from said decision list that is within said decision interval and having a highest rate of occurrence.

14. A method as claimed in any one of claims 10 to 13, wherein each of said behaviour patterns includes context data, said context data being common to context when said user decision was made.
15. An apparatus for determining a decision within a specified context, said specified context being defined by a first set of attributes, said apparatus comprising: means for receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision and frequency of occurrence; means for comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; means for forming groups of said user behaviour patterns based upon said number of matched attributes; and means for performing a decision selecting process for the group having the highest number of said matched attributes, said decision selecting process is for selecting the decision which is most appropriate to said specified context from said user behaviour patterns.

16. An apparatus according to claim 15, further comprising: means for forming sub-groups of said user behaviour patterns for each of said groups, each of said sub-groups comprising unique attributes; and wherein said decision selecting process selects the decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups for said group having the highest number of said matched attributes.
17. An apparatus according to claim 16, wherein each of said user decisions is assigned a decision value and said decision selecting process includes a calculating process for calculating an average of said decision values of said behaviour patterns having the highest frequency of occurrence in each said sub-groups and selects the decision from said behaviour patterns that is close to said average.
18. An apparatus according to claim 17, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a decision from said sub-group that is within said determined decision interval, said selected decision having a highest frequency of occurrence.
19. An apparatus according to claim 17, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a most specific decision from said sub-group that is within said determined decision interval.

20. An apparatus according to claim 19, wherein said decision selecting process selects the decision having a highest frequency of occurrence in the case that said decision selecting process is unable to select a most specific decision.
21. An apparatus according to claim 17, wherein said decision selecting process further comprising step of: if said decision selecting process is unable to select a unique decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups, then selecting the decision having a higher number of corresponding entries in a decision list.
22. An apparatus according to claim 21, wherein said decision selecting process further comprising steps of:
23. An apparatus according to claim 22, wherein said decision selecting process further comprising steps of: if said decision selecting process is unable to select a unique decision after performing said repeating step, then calculating an average of said decision values of behaviour patterns in said decision list; determining which decision interval from said predefined group of decision intervals is the closest to said average calculated for said decision list; and selecting the decision from said decision list that is within said decision interval and having a highest rate of occurrence. S24. An apparatus for determining a decision within a specified context, said specified S° context being defines by a first set of attributes, said apparatus comprising: S°means for receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision having an assigned decision value and frequency of occurrence; means for comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; 592418 -51 means for forming groups of said behaviour patterns, each of said groups having a same number of said matched attributes; means for forming, for each of said groups, sub-groups of said behaviour patterns, with each of said sub-groups comprising unique matched attributes; means for calculating, for the group having a highest number of said matched attributes, an average of said decision values corresponding to those user behaviour patterns having a highest frequency of occurrence in said corresponding sub-groups; means for determining a decision interval from a predefined group of decision intervals that is closest to said average; and means for selecting a decision from said sub-group that is within said determined decision interval, said decision being selected having a highest frequency of occurrence. oooo
25. An apparatus as claimed in claim 24, wherein each of said behaviour patterns includes context data, said context data being common to context when said user decision was made.
26. A program stored in a memory medium for determining a decision within a specified context, said specified context being defined by a first set of attributes, said program comprising: code for receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision and frequency of occurrence; code for comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; code for forming groups of said user behaviour patterns based upon said number of matched attributes; and code for performing a decision selecting process for the group having the highest number of said matched attributes, said decision selecting process is for selecting the decision which is most appropriate to said specified context from said user behaviour patterns.
27. A program according to claim 26, further comprising: code for forming sub-groups of said user behaviour patterns for each of said groups, each of said sub-groups comprising unique attributes; and wherein said decision selecting process selects the decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups for said group having the highest number of said matched attributes.
28. A program according to claim 27, wherein each of said user decisions is assigned a decision value and said decision selecting process includes a calculating process for calculating an average of said decision values of said behaviour patterns having the highest frequency of occurrence in each said sub-groups and selects the decision from said behaviour patterns that is close to said average.
29. A program according to claim 28, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a decision from said sub-group that is within said determined decision interval, said selected decision having a highest frequency of occurrence.

30. A program according to claim 28, wherein said decision selecting process includes a determining process for determining a decision interval from a predefined group of decision intervals that is closest to said average, said decision interval is defined by said decision values, and selects a most specific decision from said sub-group that is within said determined decision interval.
31. A program according to claim 30, wherein said decision selecting process selects the decision having a highest frequency of occurrence in the case that said decision selecting process is unable to select a most specific decision.
32. A program according to claim 28, wherein said decision selecting process further comprising step of: if said decision selecting process is unable to select a unique decision from said behaviour patterns having the highest frequency of occurrence in each said sub-groups, then selecting the decision having a higher number of corresponding entries in a decision list.
33. A program according to claim 32, wherein said decision selecting process further comprising steps of: if said decision selecting process is unable to select a unique decision having a higher number of corresponding entries in said decision list, then adding all entries of said sub-groups associated with the highest number of matched attributes to said decision list; and repeating said decision selecting process with the sub-group associated with the group with the next highest number of matched attributes.
34. A program according to claim 33, wherein said decision selecting process further comprising steps of: if said decision selecting process is unable to select a unique decision after performing said repeating step, then calculating an average of said decision values of behaviour patterns in said decision list; determining which decision interval from said predefined group of decision intervals is the closest to said average calculated for said decision list; and selecting the decision from said decision list that is within said decision interval and having a highest rate of occurrence.

35. A program stored in a memory medium for determining a decision within a specified context, said specified context being defined by a first set of attributes, said program comprising: code for receiving user behaviour patterns, each said user behaviour pattern including a second set of attributes, a user decision having an assigned decision value and frequency of occurrence; code for comparing attributes of said first set of attributes with attributes of each second set of attributes to determine for each user behaviour pattern a number of matched attributes; code for forming groups of said user behaviour patterns, each of said groups having a same number of said matched attributes; code for, for each of said groups, forming sub-groups of said behaviour patterns, with each of said sub-groups comprising unique matched attributes; code for, for the group having a highest number of said matched attributes, calculating an average of said decision values corresponding to those user behaviour patterns having a highest frequency of occurrence in each said sub-groups; code for determining a decision interval from a predefined group of decision intervals that is closest to said average; and code for selecting a decision from said sub-group that is within said determined decision interval, said decision being selected having a highest frequency of occurrence.
36. A program as claimed in claim 35, wherein each of said behaviour patterns includes context data, said context data being common to context when said user decision was made.
37. A method of determining a decision within a specified context, said method being substantially as herein described with reference to Fig. 4.
38. An apparatus for determining a decision within a specified context, said apparatus being substantially as herein described with reference to Fig. 4.

DATED this 20th Day of November, 2003

Canon Kabushiki Kaisha
Patent Attorneys for the Applicant
SPRUSON FERGUSON
AU35609/02A 2001-04-24 2002-04-23 Functional planning system Ceased AU778745B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU35609/02A AU778745B2 (en) 2001-04-24 2002-04-23 Functional planning system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPR4600 2001-04-24
AUPR4600A AUPR460001A0 (en) 2001-04-24 2001-04-24 Functional planning system
AU35609/02A AU778745B2 (en) 2001-04-24 2002-04-23 Functional planning system

Publications (2)

Publication Number Publication Date
AU3560902A AU3560902A (en) 2002-10-31
AU778745B2 true AU778745B2 (en) 2004-12-16

Family

ID=25623372

Family Applications (1)

Application Number Title Priority Date Filing Date
AU35609/02A Ceased AU778745B2 (en) 2001-04-24 2002-04-23 Functional planning system

Country Status (1)

Country Link
AU (1) AU778745B2 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998035297A1 (en) * 1997-02-06 1998-08-13 America Online, Inc. Consumer profiling system with analytic decision processor


Also Published As

Publication number Publication date
AU3560902A (en) 2002-10-31


Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: SUBSTITUTE PATENT REQUEST REGARDING ASSOCIATED DETAILS