WO2007015184A2 - Apparatus and method for automatically determining privacy settings for content - Google Patents

Apparatus and method for automatically determining privacy settings for content Download PDF

Info

Publication number
WO2007015184A2
Authority
WO
WIPO (PCT)
Prior art keywords
content
privacy
privacy setting
rules
recommended
Prior art date
Application number
PCT/IB2006/052482
Other languages
French (fr)
Other versions
WO2007015184A3 (en)
Inventor
Mauro Barbieri
Johannes Weda
Hong Li
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2007015184A2 publication Critical patent/WO2007015184A2/en
Publication of WO2007015184A3 publication Critical patent/WO2007015184A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6209: Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21: Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2101: Auditing as a secondary aspect

Abstract

An apparatus and method are provided for automatically determining privacy settings for content, such as audio-video content, photographs and other documents. The apparatus (1) comprises a content source (2) and an intrinsic content analyser (3) communicatively connected to the content source (2), which imports content from, for example, a picture or film source (9). The content is analysed for intrinsic information (10). Further, the apparatus comprises a privacy engine (4) communicatively connected to the intrinsic content analyser (3). A recommended privacy setting (8) is determined based upon privacy setting rules from a privacy rules base (5) and the intrinsic information (10). The privacy setting rules are updated, by a privacy rules update unit (6), taking into account the recommended privacy setting (8) and the intrinsic information (10). The audio-video content is then stored, in a storage unit (7), according to the recommended privacy setting (8).

Description

Apparatus and method for automatically determining privacy settings for content
BACKGROUND OF THE INVENTION
The invention relates to the automatic recommendation of privacy settings for content, such as audio-video content, photographs, and other documents. Particularly, aspects of the invention pertain to making recommendations based upon attributes of the audio-video content and previous recommendations.
The growth in digital media created by users is tremendous; however, users rarely have the time necessary to annotate and sort their individual audio-video content. This is inconvenient since it hampers their ability to retrieve and share their own content, even though it may possess enormous emotional value. For example, photographs or video from important family occasions are now created digitally by digital still cameras or digital camcorders and are often stored in a haphazard manner on personal computers. When a user wishes to share such audio-video content, the user is left with little option but to sort the content manually. In such a case, each piece of audio-video content is judged upon privacy aspects of the content. Some assistance for the user is provided in the prior art; for example, US patent application US 2004/0243671 A9 presents a method for sharing based on a face in an image, comprising the steps of defining a sharing rule based on face identifying information and applying face identifying information associated with the image to the sharing rule to determine the one or more recipients with which the image should be shared. Such a method does provide users with an automated process for sharing imported pictures, but has proved difficult to manage and maintain. This is because the rules are static in nature and require continuous maintenance. This is especially so in relation to the sharing and privacy of personal audio-video content, since there are often many variables, leading to complicated rules. Privacy settings may be defined which describe what actions should be performed on content taking into account privacy aspects; for example, recommendations to be performed upon the content could comprise backup, archiving, printing, deleting, duplicating or organizing. This has an analogy with the filtering of unwanted email, often termed spam. Early email programs introduced rule-based filtering, which all too soon became unmanageable for the average user. Returning once more to the context of photographs, an example will illustrate the difficulties in managing a set of pre-determined rules. Suppose that an apparatus has rule 1 that says: 1. If (photo includes Bob) then share with Alice, where Alice is Bob's wife. Then a new photo is taken that includes Bob and Brenda, where Brenda has a secret relationship with Bob. Since Alice should not know about the secret relationship between Bob and Brenda, this photo should not be shared with Alice. In the rule-based system, rule 1 should therefore be changed into: 1a. If (photo includes Bob) AND (photo includes Brenda) then do NOT share with Alice. But suppose now that Brenda is also a colleague of Bob and a group photo is taken including them and all the colleagues from the office. It would be strange not to share this photo with Alice, so a new rule should be added: 2. If (photo includes Bob) AND (photo includes Brenda) AND (photo includes colleague) then share with Alice. But this is still not enough. The apparatus now has two contradicting rules: 1a and 2. To remove the contradiction, rule 1a should be changed into: 1b. If (photo includes Bob) AND (photo includes Brenda) AND (photo does NOT include colleague) then do NOT share with Alice. Understanding the changes that have to be applied to the set of rules is clearly very difficult for users. It is therefore highly desirable to automate the management of privacy settings for content.
BRIEF SUMMARY OF THE INVENTION It is an object of the present invention to provide an apparatus and method for automatically determining the privacy settings for content that is easy to manage.
Accordingly there is provided, in a first aspect, an apparatus for automatically determining privacy settings for content, the apparatus comprising: a content source for content; an intrinsic content analyser unit, communicatively coupled to the content source, the intrinsic content analyser unit being arranged to analyse and extract intrinsic information from the content; a privacy rules base comprising privacy setting rules; and a privacy engine, communicatively coupled to the intrinsic content analyser unit and the privacy rules base, the privacy engine being arranged to determine a recommended privacy setting for the content based upon the privacy setting rules and the intrinsic information extracted from the content; wherein the apparatus further comprises a privacy rules update unit, communicatively coupled to the intrinsic content analyser unit and to the privacy engine and to the privacy rules base, the privacy rules update unit being arranged to update the privacy setting rules based upon the recommended privacy setting for the content and the intrinsic information extracted from the content. In this way the apparatus learns from past decisions and improves the recommendations.
The measure as defined in claim 2 has the advantage that the privacy setting rules are updated taking into account the complete history of recommended privacy settings determined by the privacy engine and the complete history of intrinsic information improving the quality of recommendations.
It is advantageous if the means defined in claim 3 are applied to allow a user to provide feedback to the apparatus on recommended privacy settings since the apparatus can further improve the quality of recommendations provided within a limited number of iterations taking into account subjective aspects of value to the user.
Beneficially, the use of a decision tree learning process to create a decision tree as basis for recommending and updating the privacy setting rules provides a simple, yet efficient, learning algorithm for improving the quality of recommendations based upon observations and previous decisions. By storing a history of intrinsic information in combination with a history of recommended privacy settings a decision tree can be induced upon command from any set of historical information allowing a decision tree to be recovered, a new decision tree, or set thereof, to be created.
Advantageously the measures of claim 6 provide an apparatus that stores content taking into account the recommended privacy settings thereby allowing the future retrieval of the content.
Favourably, a secure storage unit may be applied ensuring that the recommended privacy settings are enforced.
The measures defined in claim 8 allow users to present themselves to an apparatus and thereby store and retrieve content in a secure manner taking into account recommended privacy settings.
Advantageously recommended privacy settings are determined based upon a wide range of intrinsic information as defined in claim 9 enabling the privacy engine and the privacy rules update unit to determine the most suitable privacy settings rules. According to a second aspect of the present invention the object is realized by a method as defined in claim 10. Further advantageous measures are defined in claims 11 through 15.
A third aspect of the present invention provides a computer-readable recording medium containing a program to realize the object of the invention as defined in claim 16. According to a fourth aspect of the present invention the object is realized by providing a program for controlling an information processing apparatus as claimed in claim 17.
These and other aspects, features and/or advantages of the present invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention will now be described in detail with reference to the drawings in which:
FIG. 1 is a schematic diagram of a first embodiment of the present invention; FIG. 2 is a table containing learning examples and a resulting recommended privacy setting;
FIG. 3 is a diagram showing the decision tree induced from the learning examples of FIG. 2;
FIG. 4 is a flowchart diagram illustrating method steps performed in a first embodiment of the invention;
FIG. 5 is a schematic diagram of a second embodiment of the present invention; FIG. 6 is a flowchart diagram illustrating method steps performed in a second embodiment of the invention;
FIG. 7 is a schematic diagram of a third embodiment of the present invention; FIG. 8 is a flowchart diagram illustrating method steps performed in a third embodiment of the invention; and FIG. 9 is a schematic diagram of a fourth embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a block diagram showing an apparatus for automatically determining, in a manner that is easy to manage, the privacy settings for content such as audio-video content, photographs and other documents. Audio-video content is sourced by, for example, a camera 9, such as a digital still camera or video camera, and imported by an import unit 2. The actual method of sourcing the content is not essential, and audio-video content can be retrieved by any known means, such as from a fixed or removable storage medium, or from a network such as the Internet. An intrinsic content analyser 3 analyses the content and extracts metadata or attributes, denoted as intrinsic information 10, from the content by known means. Extracting embedded metadata may be performed according to a known metadata standard such as, for example, the EXIF standard, the MPEG-7 standard, the MusicPhotoVideo (MPV) standard or any other proprietary solution. Useful metadata may comprise the location of creation (by the use of Global Positioning System (GPS) information), creation time and date, resolution, focal length, etc. Content analysis techniques may also be used to extract inherent information contained within the audio-video content. For example, face detection and face recognition can be performed to identify people contained within the audio-video content. Objects and locations, for example indoor/outdoor classification, can be detected by similar content analysis procedures. Low-level features, such as colour histograms, detected edges and motion, can further indicate useful attributes of the audio-video content upon which the privacy settings of the audio-video content may be determined. A privacy engine 4 accepts as input the intrinsic information 10 and privacy setting rules. The privacy engine 4 determines a recommended privacy setting 8 based upon the intrinsic information 10 and the privacy setting rules. The privacy engine 4 can make use, for example, of a decision tree, though other machine learning techniques could be used. Such a decision tree is known from the prior art of machine learning. The privacy setting rules are stored in a privacy rules base 5, which is in fact the current valid set of privacy setting rules upon which recommendations are made. In a preferred embodiment the privacy rules base 5 would be protected by encryption in a secure manner. The privacy rules base 5 may also store multiple sets of privacy setting rules, based upon multiple hypotheses, upon which recommendations are made, and may further continuously evaluate each set during operation, choosing the most suitable at the time a decision must be made. The privacy engine 4 may be termed a performance element since it makes decisions based upon the incoming attributes of the audio-video content and the current valid set of privacy setting rules, and so determines the performance of the recommendation.
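By way of illustration only, the following Python sketch shows the kind of embedded-metadata extraction described above, using the Pillow imaging library to read EXIF attributes. The function name and the choice of library are assumptions for this sketch; content-analysis attributes such as detected faces would come from separate models not shown here.

```python
from PIL import Image, ExifTags  # Pillow; an assumed choice of library

def extract_intrinsic_info(path):
    """Sketch of an intrinsic content analyser: return a dict of EXIF
    attributes (creation time, camera model, etc.) embedded in an image."""
    info = {}
    with Image.open(path) as img:
        exif = img.getexif()  # raw mapping of numeric EXIF tag ids to values
        for tag_id, value in exif.items():
            # Translate numeric tag ids (e.g. 306) into names (e.g. "DateTime").
            info[ExifTags.TAGS.get(tag_id, tag_id)] = value
    return info

# Usage: attributes that privacy setting rules could later test.
# info = extract_intrinsic_info("holiday.jpg")
# print(info.get("DateTime"), info.get("Model"))
```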
The recommended privacy setting 8 may be used to immediately share, for example via a network, or store, on a storage unit 7, the audio-video content. The recommended privacy setting 8 may also be used to recommend other actions to be performed upon the audio-video content, such as backup, archiving, printing, deleting, duplicating or organizing. A privacy rules update unit 6 accepts as input the recommended privacy setting 8 and the intrinsic information 10 and adapts the privacy setting rules contained within the privacy rules base 5 in such a manner that the privacy setting rules are also consistent with the latest recommendation. The privacy rules update unit 6 is therefore a learning element that modifies the behaviour of the privacy engine 4 to improve the recommendations and therefore the performance. The privacy rules update unit 6 generates a simple hypothesis consistent with the information, in doing so subscribing to the principle of Ockham's razor. The hypothesis may be based upon a set of examples relating the recommended privacy setting 8 to the intrinsic information 10. The set of examples may be a pre-determined set of training data or be generated from the initial usage of the apparatus. The privacy rules update unit 6 therefore alleviates the need for a user to manually modify the privacy setting rules contained within the privacy rules base 5, making the rules easier to manage. Furthermore, inconsistent or contradictory privacy setting rules are prevented, further increasing the manageability of the privacy setting rules. FIG. 2 provides an example set of learning examples, or training data, relating to the example introduced earlier. The intrinsic information 10, or attributes, are defined as Bob, Brenda and Colleagues. The recommended privacy setting 8 is defined as Share, with the resulting recommendation being Yes or No. Based upon the learning examples of FIG. 2, the privacy rules update unit 6 determines the privacy setting rules that are shown graphically in FIG. 3. The decision tree is constructed by the privacy rules update unit 6 and may be based upon the amount of information, in the sense of Shannon and Weaver, provided at each node in the decision tree. Other methods known in the prior art could, of course, be used to create the decision tree. Such methods may be based on gain ratios or Gini indexes. The information content is directly related to the probability of a certain outcome and is defined as
I(P(v_1), \ldots, P(v_n)) = \sum_{i=1}^{n} -P(v_i) \log_2 P(v_i)

where P(v_i) defines the probability of outcome v_i occurring. The training set approximates the ground-truth probabilities P(v_i); its information content is given by

I\left(\frac{p}{p+n}, \frac{n}{p+n}\right) = -\frac{p}{p+n} \log_2 \frac{p}{p+n} - \frac{n}{p+n} \log_2 \frac{n}{p+n}

where p indicates the total number of positive outcomes, i.e. sharing the content in this context, and n indicates the total number of negative outcomes, i.e. not sharing the content. The testing of each attribute, A, of intrinsic information 10 provides some information, but not all. The remaining information, or remainder, is known from the prior art to be

\mathrm{remainder}(A) = \sum_{i=1}^{v} \frac{p_i + n_i}{p + n} \, I\left(\frac{p_i}{p_i + n_i}, \frac{n_i}{p_i + n_i}\right)

where p again indicates the total number of positive outcomes, n indicates the total number of negative outcomes, and p_i and n_i indicate the number of positive and negative outcomes respectively for the i-th value of attribute A.

The information gained from a specific test upon an attribute, A, is defined as the information gain and is

\mathrm{gain}(A) = I\left(\frac{p}{p+n}, \frac{n}{p+n}\right) - \mathrm{remainder}(A)
The privacy engine 4 makes use of the privacy setting rules to make a recommendation to the user. Returning again to the example, for the attribute Brenda the remainder can be calculated by using the dataset shown in FIG. 2. (In that dataset, Brenda is present in three of the seven photos, of which two are shared and one is not; all four photos without Brenda are shared.)

\mathrm{remainder}(Brenda) = \frac{3}{3+4} \, I\left(\frac{2}{2+1}, \frac{1}{2+1}\right) + \frac{4}{3+4} \, I\left(\frac{4}{4+0}, \frac{0}{4+0}\right)

\mathrm{remainder}(Brenda) = \frac{3}{7}\left(-\frac{2}{3}\log_2\frac{2}{3} - \frac{1}{3}\log_2\frac{1}{3}\right) + \frac{4}{7}\left(-1\,\log_2 1 - 0\,\log_2 0\right) = 0.394 + 0

The gain of information from the Brenda attribute is then

\mathrm{gain}(Brenda) = I\left(\frac{\mathit{share\ yes}}{\mathit{share\ yes} + \mathit{share\ no}}, \frac{\mathit{share\ no}}{\mathit{share\ yes} + \mathit{share\ no}}\right) - \mathrm{remainder}(Brenda)

\mathrm{gain}(Brenda) = I\left(\frac{6}{7}, \frac{1}{7}\right) - 0.394 = 0.592 - 0.394 = 0.198
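The calculation can also be checked numerically. The short Python sketch below (function names are illustrative, not from the patent) implements the information-content and remainder formulas above and reproduces the quoted values 0.394, 0.592 and 0.198.

```python
from math import log2

def information(*probs):
    """I(P(v1), ..., P(vn)) = sum over i of -P(vi) * log2 P(vi), with 0 log2 0 = 0."""
    return sum(-p * log2(p) for p in probs if p > 0)

def remainder(splits, total):
    """Weighted information left after testing an attribute;
    splits lists (positives, negatives) for each attribute value."""
    return sum((p + n) / total * information(p / (p + n), n / (p + n))
               for p, n in splits)

rem = remainder([(2, 1),   # Brenda present: 2 shared, 1 not shared
                 (4, 0)],  # Brenda absent:  4 shared, 0 not shared
                total=7)
gain = information(6 / 7, 1 / 7) - rem
print(round(rem, 3), round(information(6 / 7, 1 / 7), 3), round(gain, 3))
# prints: 0.394 0.592 0.198
```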
In the present example, the information gain for the Brenda attribute is 0.198, whereas the information gains for Bob and Colleagues both equate to only 0.128. Therefore, the Brenda node 11 shown in the privacy setting rules of FIG. 3 has the highest information content. The privacy rules update unit 6 therefore chooses the Brenda node 11 as the first decision point of the decision tree 16. The left branch 17 from the Brenda node 11 always results in a Yes decision; therefore there is no information provided by the attributes Bob or Colleagues. For the right branch 12, the privacy rules update unit 6 determines that the information gain for Colleagues is 0.918, whilst for Bob it is only 0.251; therefore the Colleagues node 13 is created. Thereafter, the attribute Bob does not provide any further information gain, so no further branches are necessary. Returning now to the example, the privacy engine 4 determines the recommended privacy setting 8 based upon the privacy setting rules. Assuming that the intrinsic information 10 implies that the photo contains Bob and Brenda, but no Colleagues, the privacy engine 4 would traverse the decision tree 16 of FIG. 3. Firstly, at the Brenda node 11, since Brenda is present, the right branch 12 would be taken. At the Colleagues node 13, again the right branch 14 would be taken. The recommended privacy setting 8 is therefore provided by the leaf 15, which in this example is a recommendation not to share the photo; given the circumstances, this is probably the correct recommendation.
For a more complete understanding of the invention, FIG. 4 shows a flowchart comprising the method steps of the invention. The flowchart of FIG. 4 is helpful when, as is common in the consumer electronics industry at this time, the invention is implemented in software making use of a processor and memory, as is well known to the skilled person. In the method, at step 40, audio-video content is received by any means well known to the skilled person; thereafter, at step 41, the audio-video content is analysed to extract the intrinsic information 10. At step 42 the recommended privacy setting is determined, as already described, based upon the privacy setting rules. At step 43 the privacy setting rules are updated in preparation for the next recommendation. The next recommendation is made by returning to step 40.
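Read as software, the flowchart of FIG. 4 is a simple loop. The skeleton below is a hypothetical sketch of that loop only; the callables stand in for the analyser, engine and update units of FIG. 1 and are assumptions, not code disclosed in the patent.

```python
def process_content(content_stream, rules, analyse, recommend, update_rules):
    """Steps 40-43 of FIG. 4 as a loop over incoming content items."""
    for content in content_stream:              # step 40: receive content
        attributes = analyse(content)           # step 41: extract intrinsic information
        setting = recommend(rules, attributes)  # step 42: determine recommended setting
        yield content, setting                  # e.g. share or store accordingly
        rules = update_rules(rules, attributes, setting)  # step 43: update the rules
```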
A second embodiment of the invention is described with reference to FIG. 5. Identical elements from the first embodiment of FIG. 1 have the same numerical references. In the embodiment of FIG. 5 a privacy setting history 50 provides a location to store a historical collection of recommended privacy settings 8. Furthermore, an intrinsic information history 51 provides a location to store a historical collection of intrinsic information 10, or attributes, extracted from the audio-video content. The privacy rules update unit 6 is then in a position to re-evaluate the complete set of privacy setting rules comprised in the privacy rules base 5 after each new recommendation, taking into account all information previously encountered. Of course, a subset of the previous historical information may also be used, to reduce the storage requirements.
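The re-evaluation from history described above can be read as re-inducing the decision tree from the stored examples. Below is a minimal ID3-style sketch under that reading. The seven-row training set is a reconstruction chosen to be consistent with the information gains quoted in the text (0.198, 0.128 and 0.128 at the root; 0.918 and 0.251 on the Brenda branch), since FIG. 2 itself is not reproduced in this document; all names are illustrative.

```python
from math import log2

def entropy(rows):
    """Information content of a set of examples with boolean label 'Share'."""
    pos = sum(1 for r in rows if r["Share"])
    if pos in (0, len(rows)):
        return 0.0
    p = pos / len(rows)
    return -p * log2(p) - (1 - p) * log2(1 - p)

def gain(rows, attr):
    """Information gain of testing one boolean attribute."""
    yes = [r for r in rows if r[attr]]
    no = [r for r in rows if not r[attr]]
    return entropy(rows) - (len(yes) * entropy(yes) + len(no) * entropy(no)) / len(rows)

def induce(rows, attrs):
    """ID3-style induction: a leaf is a bool, an inner node an (attr, yes, no) triple."""
    pos = sum(1 for r in rows if r["Share"])
    if pos in (0, len(rows)) or not attrs:
        return 2 * pos >= len(rows)            # pure leaf, or majority fallback
    best = max(attrs, key=lambda a: gain(rows, a))
    if gain(rows, best) == 0:                  # no informative attribute left
        return 2 * pos >= len(rows)
    rest = [a for a in attrs if a != best]
    return (best,
            induce([r for r in rows if r[best]], rest),
            induce([r for r in rows if not r[best]], rest))

def classify(tree, example):
    while isinstance(tree, tuple):
        attr, on_yes, on_no = tree
        tree = on_yes if example[attr] else on_no
    return tree

# Hypothetical history (privacy setting history + intrinsic information history).
history = [
    {"Bob": True,  "Brenda": False, "Colleagues": True,  "Share": True},
    {"Bob": True,  "Brenda": False, "Colleagues": False, "Share": True},
    {"Bob": False, "Brenda": False, "Colleagues": False, "Share": True},
    {"Bob": False, "Brenda": False, "Colleagues": False, "Share": True},
    {"Bob": True,  "Brenda": True,  "Colleagues": True,  "Share": True},
    {"Bob": False, "Brenda": True,  "Colleagues": True,  "Share": True},
    {"Bob": True,  "Brenda": True,  "Colleagues": False, "Share": False},
]
tree = induce(history, ["Bob", "Brenda", "Colleagues"])
print(tree)  # ('Brenda', ('Colleagues', True, False), True): the shape of FIG. 3
print(classify(tree, {"Bob": True, "Brenda": True, "Colleagues": False}))  # False
```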
FIG. 6 details the method steps required in processing audio-video content as performed by an embodiment such as that of FIG. 5. In the method, at step 40, audio-video content is received in an identical manner to that described in FIG. 4. During step 41 the audio-video content is analysed to extract the intrinsic information 10. At step 42 the recommended privacy setting 8 is, as usual, determined based upon the privacy setting rules. In step 60 the recommended privacy setting 8 is stored in a privacy setting history 50. Furthermore, at step 61 the intrinsic information 10 is stored in an intrinsic information history 51. At step 43 the privacy setting rules are updated in preparation for the next recommendation. The updating in step 43 may comprise an analysis of the complete historical information available or be limited to a subset thereof. Finally, the next recommendation may be made by returning to step 40.
A third embodiment, disclosed in FIG. 7, will now be elucidated. In FIG. 7 the recommended privacy setting 8 is presented to a user 72 by means of a display means 70. The display means 70 can, for example, be a monitor, a television screen or a simple display device. The user 72 is invited to give user feedback 73, via an input means 71, on the recommended privacy setting 8 as determined by the privacy engine 4. Such feedback on the ground truth is useful in the learning process and can dramatically improve the quality of the recommended privacy setting 8. It is, of course, not necessary that the user 72 be consulted on all recommendations. For example, the privacy setting history 50 and the intrinsic information history 51 may be statistically analysed to produce a confidence level in the recommended privacy setting 8, and only in cases of low confidence would the user be troubled to supply user feedback 73.
The embodiment of FIG. 7 also has a well-defined process flow, which is described with reference to the flowchart of FIG. 8. Initially, at step 40, audio-video content is received, and in step 41 the audio-video content is analysed to extract the intrinsic information 10. Both of these steps are performed in the usual manner as described earlier. At step 42 the recommended privacy setting 8 is determined based upon the privacy setting rules, also in the usual manner. In step 80 the recommended privacy setting 8 is presented to a user 72 by means of a display means 70. At step 81 user feedback 73 is received. At step 43 the privacy setting rules are again updated in preparation for the next recommendation. However, the updating in step 43 may further comprise an analysis of the user feedback 73, allowing the recommended privacy setting 8 to be improved, for example by modification. The next recommendation may be made by returning to step 40 in the usual manner.
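One plausible reading of the statistical-confidence idea in the third embodiment is sketched below: estimate how consistently past examples with the same attributes were decided, and run steps 80 and 81 only when that estimate falls under a threshold. This gating scheme and the privacy-safe default are assumptions, not details disclosed in the text.

```python
def confidence(history, attributes):
    """Agreement of past decisions among historical examples with identical attributes."""
    past = [h["setting"] for h in history if h["attributes"] == attributes]
    if not past:
        return 0.0                                  # unseen combination: no confidence
    majority = max(set(past), key=past.count)
    return past.count(majority) / len(past)

def maybe_ask_user(history, attributes, recommended, threshold=0.8):
    """Display the recommendation and collect feedback (steps 80/81) only when unsure."""
    if confidence(history, attributes) >= threshold:
        return recommended                          # confident: don't trouble the user
    answer = input(f"Recommend '{recommended}' for {attributes}. Accept? [y/n] ")
    # Assumed privacy-safe default on rejection: withhold sharing.
    return recommended if answer.strip().lower().startswith("y") else "do not share"
```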
FIG. 9 shows a fourth embodiment in which the storage unit 7 is a secure storage unit 92. The secure storage unit 92 may accept as input for storage the audio-video content, the recommended privacy setting 8 and the intrinsic information 10, and may further comprise a secure channel 91 communicatively coupled to a secure user identification unit 90. Typically, the secure user identification unit 90 will be a smartcard, an input means for a personal identification number or a username and password combination, or even a biometric device registering fingerprints. Any secure storage means known to the skilled person may be used. For example, the secure storage unit 92 may further comprise an encryptor and a controller for managing access rights and communicating securely via the secure channel 91 with the secure user identification unit 90. The embodiment of FIG. 9 enforces the recommended privacy settings 8, giving peace of mind to the user. It will be apparent to a person skilled in the art that the invention may also be embodied as a computer program product, storable on a storage medium and enabling a computer to be programmed to execute the method according to the invention. The computer can be embodied as a general-purpose computer like a personal computer or network computer, but also as a dedicated consumer electronics device with a programmable processing core.
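As an illustrative assumption (the patent names no algorithm or library), a secure storage unit along these lines could encrypt each item at rest and enforce the recommended privacy setting on retrieval. The sketch uses symmetric Fernet encryption from the Python cryptography package; in practice the key would be guarded by, or derived via, the secure user identification unit 90.

```python
from cryptography.fernet import Fernet  # pip install cryptography

class SecureStorageUnit:
    """Toy stand-in for secure storage unit 92: encryption at rest plus access checks."""

    def __init__(self):
        self._fernet = Fernet(Fernet.generate_key())  # key handling simplified here
        self._store = {}  # item id -> (ciphertext, recommended privacy setting)

    def put(self, item_id, content_bytes, privacy_setting):
        self._store[item_id] = (self._fernet.encrypt(content_bytes), privacy_setting)

    def get(self, item_id, user_is_authorised):
        """user_is_authorised: callback deciding access given the stored setting,
        e.g. backed by the secure user identification unit 90."""
        ciphertext, setting = self._store[item_id]
        if not user_is_authorised(setting):  # enforce the recommended privacy setting
            raise PermissionError("access denied by privacy setting")
        return self._fernet.decrypt(ciphertext)
```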
In the foregoing, it will be appreciated that reference to the singular is also intended to encompass the plural and vice versa. Moreover, expressions such as "include", "comprise", "has", "have", "incorporate", "contain" and "encompass" are to be construed to be non-exclusive, namely such expressions are to be construed not to exclude other items being present.
Although the present invention has been described in connection with preferred embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims.

Claims

CLAIMS:
1. An apparatus (1) for automatically determining privacy settings for content, the apparatus comprising: a content source (2) for content; an intrinsic content analyser unit (3), communicatively coupled to the content source (2), the intrinsic content analyser unit (3) being arranged to analyse and extract intrinsic information (10) from the content; a privacy rules base (5) comprising privacy setting rules; and a privacy engine (4), communicatively coupled to the intrinsic content analyser unit (3) and the privacy rules base (5), the privacy engine (4) being arranged to determine a recommended privacy setting (8) for the content based upon the privacy setting rules and the intrinsic information (10) extracted from the content; wherein the apparatus further comprises a privacy rules update unit (6), communicatively coupled to the intrinsic content analyser unit (3) and to the privacy engine (4) and to the privacy rules base (5), the privacy rules update unit (6) being arranged to update the privacy setting rules based upon the recommended privacy setting (8) for the content and the intrinsic information (10) extracted from the content.
2. The apparatus of claim 1, further comprising: a privacy setting history (50) for storing the recommended privacy setting (8) for the content; an intrinsic information history (51) for storing the intrinsic information (10) extracted from the content, wherein the privacy rules update unit (6) is arranged to further update the privacy setting rules based upon the privacy setting history (50) and the intrinsic information history (51).
3. The apparatus of claim 1 or 2, further comprising: a display means (70) for displaying the recommended privacy setting (8) to a user (72); an input means (71) for receiving user feedback (73) from the user (72) regarding the recommended privacy setting (8), wherein the privacy rules update unit (6) is further arranged to update the privacy setting rules based upon the user feedback (73).
4. The apparatus of any of the preceding claims wherein the privacy rules base (5) comprises a decision tree (16) derived from a decision tree learning process.
5. The apparatus of claim 4, wherein the privacy rules update unit (6) is arranged to induce the decision tree (16) from the privacy setting history (50) and the intrinsic information history (51).
6. The apparatus of any of the preceding claims wherein the content is stored in a storage unit (7) according to the recommended privacy setting (8).
7. The apparatus of claim 6 wherein the storage unit (7) is a secure storage unit (92).
8. The apparatus of claim 7 further comprising: a secure user identification unit (90); and a secure channel (91), wherein the secure user identification unit (90) is communicatively coupled to the secure storage unit (92) via the secure channel (91).
9. The apparatus of any of the preceding claims wherein the intrinsic information (10) comprises at least one of: a time of creation of the content; a location depicted by the content; a camera orientation during creation of the content; a creator of the content; a time of modification of the content; an annotation of people depicted within the content; an annotation of the identity of people depicted within the content; an annotation of objects depicted within the content; a user annotation of the content.
10. A method for automatically determining privacy settings for content, the method comprising the steps of: receiving (40) content from a content source (2); analysing (41) the content to extract intrinsic information (10); determining (42) a recommended privacy setting (8) for the content based upon the intrinsic information (10) and privacy setting rules comprised within a privacy rules base (5); updating (43) the privacy setting rules based upon the recommended privacy setting (8) for the content and the intrinsic information (10).
11. The method of claim 10, further comprising the steps of: storing (60) the recommended privacy setting (8) for the content in a privacy setting history (50); and storing (61) the intrinsic information (10) in an intrinsic information history (51); wherein updating (43) the privacy setting rules is further based upon the privacy setting history (50) and the intrinsic information history (51).
12. The method of claim 10, further comprising the steps of: displaying (80) the recommended privacy setting (8) for the content to a user (72); and receiving (81) user feedback (73) regarding the recommended privacy setting (8) for the content; wherein updating (43) the privacy setting rules is further based upon the user feedback (73).
13. The method of claim 10, wherein determining (42) a recommended privacy setting (8) for the content makes use of a decision tree (16) derived from a decision tree learning process.
14. The method of claim 10, further comprising the step of: storing the content in a storage unit (7) according to the recommended privacy setting (8).
15. The method of claim 14, wherein storing the content in a storage unit (7) according to the recommended privacy setting (8) is performed in a secure manner.
16. A computer-readable recording medium containing a program for controlling an information processing apparatus for automatically determining privacy settings for content, said program enabling said information processing apparatus to perform the method steps of: receiving (40) content from a content source (2); analysing (41) the content to extract intrinsic information (10); determining (42) a recommended privacy setting (8) for the content based upon privacy setting rules comprised within a privacy rules base (5) and the intrinsic information; updating (43) the privacy setting rules based upon the recommended privacy setting (8) for the content and the intrinsic information (10).
17. A program for controlling an information processing apparatus for automatically determining privacy settings for content, said program enabling said information processing apparatus to perform the method steps of: receiving (40) content from a content source (2); analysing (41) the content to extract intrinsic information (10); determining (42) a recommended privacy setting (8) for the content based upon privacy setting rules comprised within a privacy rules base (5) and the intrinsic information; updating (43) the privacy setting rules based upon the recommended privacy setting (8) for the content and the intrinsic information (10).
PCT/IB2006/052482 2005-08-04 2006-07-20 Apparatus and method for automatically determining privacy settings for content WO2007015184A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05107198.3 2005-08-04
EP05107198 2005-08-04

Publications (2)

Publication Number Publication Date
WO2007015184A2 true WO2007015184A2 (en) 2007-02-08
WO2007015184A3 WO2007015184A3 (en) 2007-05-31

Family

ID=37570456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/052482 WO2007015184A2 (en) 2005-08-04 2006-07-20 Apparatus and method for automatically determining privacy settings for content

Country Status (1)

Country Link
WO (1) WO2007015184A2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010058061A1 (en) * 2008-11-18 2010-05-27 Nokia Corporation Method, apparatus, and computer program product for determining media item privacy settings
US8234688B2 (en) * 2009-04-03 2012-07-31 International Business Machines Corporation Managing privacy settings for a social network
US20130174213A1 (en) * 2011-08-23 2013-07-04 James Liu Implicit sharing and privacy control through physical behaviors using sensor-rich devices
US9153195B2 (en) 2011-08-17 2015-10-06 Microsoft Technology Licensing, Llc Providing contextual personal information by a mixed reality device
US9443098B2 (en) 2012-12-19 2016-09-13 Pandexio, Inc. Multi-layered metadata management system
US9491258B2 (en) 2014-11-12 2016-11-08 Sorenson Communications, Inc. Systems, communication endpoints, and related methods for distributing images corresponding to communication endpoints
US9536350B2 (en) 2011-08-24 2017-01-03 Microsoft Technology Licensing, Llc Touch and social cues as inputs into a computer
US9767524B2 (en) 2011-08-09 2017-09-19 Microsoft Technology Licensing, Llc Interaction with virtual objects causing change of legal status
US9773000B2 (en) 2013-10-29 2017-09-26 Pandexio, Inc. Knowledge object and collaboration management system
US10019962B2 (en) 2011-08-17 2018-07-10 Microsoft Technology Licensing, Llc Context adaptive user interface for augmented reality display
US10747898B2 (en) 2016-10-20 2020-08-18 International Business Machines Corporation Determining privacy for a user and a product in a particular context
US10789656B2 (en) 2009-07-31 2020-09-29 International Business Machines Corporation Providing and managing privacy scores

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030161499A1 (en) * 2002-02-28 2003-08-28 Hugh Svendsen Automated discovery, assignment, and submission of image metadata to a network-based photosharing service
US20030226038A1 (en) * 2001-12-31 2003-12-04 Gil Raanan Method and system for dynamic refinement of security policies
US20040268251A1 (en) * 2003-06-30 2004-12-30 Vladimir Sadovsky System and method for rules-based image acquisition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030226038A1 (en) * 2001-12-31 2003-12-04 Gil Raanan Method and system for dynamic refinement of security policies
US20030161499A1 (en) * 2002-02-28 2003-08-28 Hugh Svendsen Automated discovery, assignment, and submission of image metadata to a network-based photosharing service
US20040268251A1 (en) * 2003-06-30 2004-12-30 Vladimir Sadovsky System and method for rules-based image acquisition

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8301659B2 (en) 2008-11-18 2012-10-30 Core Wireless Licensing S.A.R.L. Method, apparatus, and computer program product for determining media item privacy settings
US9058501B2 (en) 2008-11-18 2015-06-16 Core Wireless Licensing S.A.R.L. Method, apparatus, and computer program product for determining media item privacy settings
WO2010058061A1 (en) * 2008-11-18 2010-05-27 Nokia Corporation Method, apparatus, and computer program product for determining media item privacy settings
US8234688B2 (en) * 2009-04-03 2012-07-31 International Business Machines Corporation Managing privacy settings for a social network
US10789656B2 (en) 2009-07-31 2020-09-29 International Business Machines Corporation Providing and managing privacy scores
US9767524B2 (en) 2011-08-09 2017-09-19 Microsoft Technology Licensing, Llc Interaction with virtual objects causing change of legal status
US10019962B2 (en) 2011-08-17 2018-07-10 Microsoft Technology Licensing, Llc Context adaptive user interface for augmented reality display
US9153195B2 (en) 2011-08-17 2015-10-06 Microsoft Technology Licensing, Llc Providing contextual personal information by a mixed reality device
US10223832B2 (en) 2011-08-17 2019-03-05 Microsoft Technology Licensing, Llc Providing location occupancy analysis via a mixed reality device
US20130174213A1 (en) * 2011-08-23 2013-07-04 James Liu Implicit sharing and privacy control through physical behaviors using sensor-rich devices
US9536350B2 (en) 2011-08-24 2017-01-03 Microsoft Technology Licensing, Llc Touch and social cues as inputs into a computer
US11127210B2 (en) 2011-08-24 2021-09-21 Microsoft Technology Licensing, Llc Touch and social cues as inputs into a computer
US9881174B2 (en) 2012-12-19 2018-01-30 Pandexio, Inc. Multi-layered metadata management system
US9443098B2 (en) 2012-12-19 2016-09-13 Pandexio, Inc. Multi-layered metadata management system
US9773000B2 (en) 2013-10-29 2017-09-26 Pandexio, Inc. Knowledge object and collaboration management system
US10592560B2 (en) 2013-10-29 2020-03-17 Pandexio, Inc. Knowledge object and collaboration management system
US9959014B2 (en) 2014-11-12 2018-05-01 Sorenson Ip Holdings, Llc Systems, communication endpoints, and related methods for distributing images corresponding to communication endpoints
US9491258B2 (en) 2014-11-12 2016-11-08 Sorenson Communications, Inc. Systems, communication endpoints, and related methods for distributing images corresponding to communication endpoints
US10747898B2 (en) 2016-10-20 2020-08-18 International Business Machines Corporation Determining privacy for a user and a product in a particular context

Also Published As

Publication number Publication date
WO2007015184A3 (en) 2007-05-31

Similar Documents

Publication Publication Date Title
WO2007015184A2 (en) Apparatus and method for automatically determining privacy settings for content
US8897508B2 (en) Method and apparatus to incorporate automatic face recognition in digital image collections
JP5801395B2 (en) Automatic media sharing via shutter click
US8473525B2 (en) Metadata generation for image files
US10043059B2 (en) Assisted photo-tagging with facial recognition models
EP2973013B1 (en) Associating metadata with images in a personal image collection
US9495583B2 (en) Organizing images by correlating faces
US8032539B2 (en) Method and apparatus for semantic assisted rating of multimedia content
US20140198986A1 (en) System and method for image selection using multivariate time series analysis
CN112860943A (en) Teaching video auditing method, device, equipment and medium
CN104813674A (en) System and method for optimizing videos
US20130066872A1 (en) Method and Apparatus for Organizing Images
CN104933077B (en) Rule-based multifile information analysis method
US11783072B1 (en) Filter for sensitive data
CN117859141A (en) Shared visual content filtering during virtual meetings
CN117591485B (en) Solid state disk operation control system and method based on data identification
US20230113131A1 (en) Self-Supervised Learning of Photo Quality Using Implicitly Preferred Photos in Temporal Clusters
US20230205812A1 (en) Ai-powered raw file management
CN117590981A (en) Display method, display device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06780142

Country of ref document: EP

Kind code of ref document: A2