WO2004077710A2 - Minimizing unsolicited e-mail based on prior communications - Google Patents


Info

Publication number: WO2004077710A2 (application PCT/US2004/005867)
Authority: WIPO (PCT)
Prior art keywords: party, trust, prior, communication, communications
Other languages: French (fr)
Other versions: WO2004077710A3 (en)
Inventor: Matthias Grossglauser
Original Assignee: Businger, Peter, A.
Application filed by: Businger, Peter, A.
Priority to: EP04715665A (published as EP1606718A4)
Publication of WO2004077710A2 (en)
Publication of WO2004077710A3 (en)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21: Monitoring or handling of messages
    • H04L 51/212: Monitoring or handling of messages using filtering or selective blocking
    • H04L 51/52: User-to-user messaging in packet-switching networks, for supporting social networking services
    • H04L 51/48: Message addressing, e.g. address format or anonymous messages, aliases

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

In a technique for minimizing unsolicited communications such as spam email, a level of trust between parties is determined on the basis of past communications between parties. A new message from one party to another is passed automatically provided trust has been established. Otherwise, the message may be dropped or temporarily blocked, with ultimate delivery made contingent on further action by the sender.

Description

Communications Filtering and Prioritizing Using Prior Communications
Technical Field
The invention relates to network communications and, more particularly, to filtering and prioritizing messages being communicated to an addressee.
Background of the Invention
With respect to network communications such as by telephone or email, for example, certain messages considered unwelcome, bothersome, intrusive or annoying are generally categorized as "spam". Included are unwelcome sales calls by telephone, advertisement text messages on cell phones, e.g. through the Short Message Service (SMS), and commercial email and pop-up messages transmitted over the Internet. Spam also places an unwelcome burden on the communications infrastructure.
Spammers have become increasingly sophisticated in identifying message targets, legitimately or otherwise, e.g. by "mining" the web, including user groups and the like, or even by mere guessing, yielding the email addresses of millions of users. To alleviate the burden on users and networks, measures are sought for inhibiting the spread of spam.
Known techniques for filtering against spam rely on the content of an email message, its header and its body, to decide whether the message is legitimate or not. Learning techniques have been used to improve the quality of filters over time, based on user feedback. In these methods the decision as to whether or not to accept a message depends on the message itself, and on local information pertaining to the intended recipient. Increasingly, such filters are being bypassed by sophisticated spammers, and there is a further concern with the potential for false positives, i.e. messages that are wrongly classified as spam and effectively lost.
Other approaches include blacklists, which are central databases collecting reports of email addresses from which spam originates. In this regard there has been concern with legitimate addresses being falsely included, disrupting normal email operation. Individual users or organizations may also establish whitelists, i.e. collections of email addresses that they deem legitimate. In general, whitelists do not contain the large number of legitimate email addresses that may communicate with a recipient at some point but are considered too "remote" for explicit inclusion.
Summary of the Invention
We have recognized that communication between a message originator and a recipient can be made conditional on a trust relationship established between the parties. A concomitant automated technique prevents an originator from establishing a communication with an intended recipient unless a trust relationship exists between the parties. Establishment of the trust relationship can depend on past communications between originator and recipient, between such parties and third parties, and between third and fourth parties, for example. Other attributes may be taken into account, such as explicit user feedback and message content.
Brief Description of the Drawing
Fig. 1 is a trust graph illustrating an instance where a message should be communicated regularly, rather than treated as spam.
Fig. 2 is a trust graph illustrating an instance where a message should be rejected as spam.
Fig. 3 is a trust graph more generally illustrating establishment of different levels of trust.
In the illustrative graphs, nodes identified by upper-case letters represent users/email addresses, and solid lines with an arrow represent past communications in the direction of the arrow. In Figs. 1 and 2 a broken line with an arrow represents a potential/attempted communication on which a decision is to be made. In Fig. 3, multiple broken-line arrows follow paths through the graph of prior communications, illustrating how multiple independent paths can build additional trust. The figures represent illustrative examples, without limiting the ways in which trust can be derived from past communications.
Detailed Description
The invention can be appreciated as based on the premise that past communication patterns between parties permit inference of a notion of trust between two or more parties of an attempted communication. The inference can be automated to yield one or more indicators which, on a proposed communication, can be automatically interrogated in deciding whether or not the communication will be allowed to proceed, and under what conditions. The communication, e.g. delivery of an email message or the placing of a phone call, will be permitted to proceed automatically only if sufficient trust exists between the initiator(s) and the recipient(s) of the communication. In case of insufficient trust, effecting the communication may still be provided for, but only upon one or several additional action(s) by the initiating party or parties, e.g. payment of a fee. Such a requirement can serve to discourage abusive or annoying communications, e.g. spam email, sales calls and the like. Also, insufficient trust may trigger alteration of the communication in some way, e.g. by compressing it to use less space, or by removing attachments. Using past communication patterns for estimating the trust between initiator(s) and recipient(s) of a new communication can reduce the likelihood that such actions will be invoked too frequently.
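By way of illustration only, the following minimal sketch (in Python, used here purely for exposition) maps an inferred trust score to one of the treatments just described: regular delivery, altered delivery, a demand for sender action, or blocking. The score range, the cut-off values and the four-way outcome are assumptions of the sketch, not part of the disclosure.

```python
from enum import Enum

class Disposition(Enum):
    DELIVER = "deliver"          # sufficient trust: forward unchanged
    ALTER = "alter"              # partial trust: e.g. strip attachments, compress
    REQUIRE_ACTION = "require"   # low trust: demand sender action, e.g. a fee
    BLOCK = "block"              # no usable trust

def decide(trust_score: float,
           deliver_at: float = 0.8, alter_at: float = 0.5,
           act_at: float = 0.2) -> Disposition:
    """Map an inferred trust score in [0, 1] to a handling decision.
    The three cut-offs are placeholders; a deployment could expose them
    as the recipient-controlled 'trust threshold' discussed later."""
    if trust_score >= deliver_at:
        return Disposition.DELIVER
    if trust_score >= alter_at:
        return Disposition.ALTER
    if trust_score >= act_at:
        return Disposition.REQUIRE_ACTION
    return Disposition.BLOCK
```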
In the example of Fig. 1 there are prior communications 1 and 2 between users A and B establishing trust in both directions, and a communication 3 establishing trust from B to C. With B trusted by A and C trusted by B, A may trust a communication from C. The trust relationship can be appreciated in the graph by following contiguous solid lines in the direction opposite to their arrows. It may be noted that, in the instant situation, C need not trust A, who may still be a spammer.
In the example of Fig. 2 there are prior communications 1 and 2 establishing trust between users A and B, and communications 3 and 4 establishing trust between C and D. But without an inference of trust for a communication from C to A, the message in question may be blocked. Users A and C may be said to belong to different islands in the trust graph.
Fig. 3 illustrates how different notions of trust can be defined. In Fig. 3, multiple broken-line arrows follow trusted paths through the graph of prior communications. Specifically, there is a path of five links from E via J, F, C, B to A, and two paths from I to A, namely I - G - F - C - B - A with five links, and I - G - H - B - A with four links. With trust relationships inferred from user E to user A and from user I to user A, it is meaningful to rate the trust from I to A as stronger than that from E to A, because (a) for user E there is only a single trusted path, whereas for user I there is a plurality, indicating that more users vouch for the trustworthiness of I, and (b) one of the paths for user I is shorter than the path for E.
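The path-based rating just described can be made concrete with a short sketch. It stores prior communications as a directed graph and scores the trust a recipient may place in a sender by enumerating the chains of prior communications leading from the recipient out to the sender, i.e. the traversal opposite to the solid arrows of Figs. 1-3. The decay constant and the scoring formula are assumptions of the sketch.

```python
from typing import Dict, Iterable, List, Set

class TrustGraph:
    """Directed graph of prior communications: an edge u -> v records that
    u previously sent a message to v. A hypothetical structure for this
    sketch, not one prescribed by the disclosure."""

    def __init__(self) -> None:
        self._out: Dict[str, Set[str]] = {}

    def record(self, sender: str, recipient: str) -> None:
        self._out.setdefault(sender, set()).add(recipient)

    def has_history(self, party: str) -> bool:
        """True if the party has ever sent a message to anybody."""
        return bool(self._out.get(party))

    def _chains(self, src: str, dst: str, max_hops: int) -> Iterable[List[str]]:
        """Yield simple (cycle-free) chains of prior communications from
        src to dst, bounded at max_hops links."""
        stack = [(src, [src])]
        while stack:
            node, path = stack.pop()
            if node == dst and len(path) > 1:
                yield path
                continue
            if len(path) > max_hops:
                continue
            for nxt in self._out.get(node, ()):
                if nxt not in path:  # simple paths only, no cycles
                    stack.append((nxt, path + [nxt]))

    def trust(self, recipient: str, sender: str,
              decay: float = 0.5, max_hops: int = 5) -> float:
        """Trust the recipient may place in the sender: each chain of prior
        communications leading from the recipient out to the sender
        contributes decay ** (links - 1), so shorter and more numerous
        paths score higher, as in the Fig. 3 discussion."""
        return sum(decay ** (len(chain) - 2)
                   for chain in self._chains(recipient, sender, max_hops))

# Replaying Fig. 1: communications A -> B, B -> A and B -> C.
g = TrustGraph()
for s, r in [("A", "B"), ("B", "A"), ("B", "C")]:
    g.record(s, r)
print(g.trust("A", "C"))  # 0.5: one two-link chain A -> B -> C, so A may accept from C
print(g.trust("C", "A"))  # 0.0: no chain from C to A, so C need not trust A
```

On the Fig. 3 data, the same scoring rates I's trust toward A (two chains, one of only four links) above E's (a single five-link chain), reproducing the comparison above.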
Generally speaking, trust between two or several parties depends not only on communications having occurred directly between these parties, but also on communications between these and other parties. Thus, in a preferred embodiment of the technique, a measure of trust can be established between two parties that have never communicated with each other, provided that trust can be established through one or several intermediate parties. For example, if a user X wishes to send a message to a user Y, where X and Y have never communicated before, the communication may be allowed to proceed if there is information about a third party Z that has communicated with both X and Y in the past. Thus, for example, trust between X and Y may depend (i) on the frequency and timing of the past communications, (ii) more generally on the structure of the "trust graph" linking X and Y, and/or (iii) on some attributes and/or the content of the message, e.g. its size, media types, number of concurrent recipients, and the like.
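Attribute (i), the frequency and timing of past communications, can be folded in by weighting each edge of the trust graph rather than treating all prior communications alike. The following sketch assumes an exponential decay with a 90-day half-life and a squashing into [0, 1); both constants are hypothetical, as the disclosure does not fix a weighting rule.

```python
import math
import time
from typing import Iterable, Optional

def edge_weight(timestamps: Iterable[float], now: Optional[float] = None,
                half_life_days: float = 90.0) -> float:
    """Weight for a single prior-communication edge, reflecting frequency
    and timing: each past message (a Unix timestamp) contributes, but its
    contribution halves every half_life_days. The result, squashed into
    [0, 1), could stand in for the uniform edges of the TrustGraph sketch."""
    now = time.time() if now is None else now
    rate = math.log(2.0) / (half_life_days * 86400.0)
    raw = sum(math.exp(-rate * max(0.0, now - t)) for t in timestamps)
    return raw / (1.0 + raw)
```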
The inferred trust relationships can be used in other ways as well, e.g. to prioritize, order and categorize communications by the sender and/or recipient. For example, email messages can automatically be grouped according to the degree/amount of trust between sender and recipient, e.g. by placement into respective "folders". Trust relationships can be used further to assist communicating parties in searching for addresses and other information related to the parties, e.g. by auto-completing an unrecognized identification such as an address or phone number with a likely, i.e. highly trusted, similar identification.
Among further benefits, received emails can be displayed according to the strength of the trust relationship, or the relationship can be used to prioritize among several concurrent phone calls. And a measure of trust can be used to assist in other decision-making processes, e.g. fraud avoidance in e-commerce, targeting legitimate advertisement or other relevant information, searching for people based on criteria involving trust and personal relationships, allocating resources for future communications, and the like.
Indicators of trust relationships can be maintained centrally or in a distributed way, mindful of tradeoffs between control, scalability, the need for cryptographic methods for authentication, and the like. A further consideration is with the amount of control a user is afforded over his trust relationships. Where such control is desired, users may be allowed to explicitly manage and rate trust with other users. For example, users can provide feedback by rating the relevance or quality of messages they receive. Such feedback can be incorporated into the trust graph to further improve the filtering decisions.
Example: Email. Without limiting the technique to electronic mail, this example represents a specific instantiation for helping to defend against unsolicited email or spam. Features described here are readily adaptable to other types of communications.
Email messages can be filtered according to a trust relationship between the sender and the recipient(s) of the message. In a first embodiment, email messages can be exchanged only between parties in a trust inference system. The system can refuse to deliver a message if the sender does not have a sufficient trust relationship with the recipient. Refusal may be on account of one or more reasons, e.g. (i) the sender has never sent a message to anybody, (ii) the sender has attempted to send undesired email messages to the present or other recipients before, or (iii) the parties with whom the sender has trust relationships in turn do not possess sufficiently strong trust relationships with the intended recipient.
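A gate implementing these three refusal reasons might look as follows; it reuses the TrustGraph sketch above, and the reason strings and default threshold are illustrative. The per-recipient threshold anticipates the user-controlled "trust threshold" parameter discussed below.

```python
from typing import Optional, Set

def filter_message(graph: "TrustGraph", sender: str, recipient: str,
                   reported_spammers: Set[str],
                   trust_threshold: float = 0.25) -> Optional[str]:
    """Return None if the message may be delivered, else a refusal
    reason mirroring reasons (i)-(iii) above."""
    if not graph.has_history(sender):
        return "(i) sender has never sent a message to anybody"
    if sender in reported_spammers:
        return "(ii) sender previously attempted undesired messages"
    if graph.trust(recipient, sender) < trust_threshold:
        return "(iii) no sufficiently strong chain of trust to the recipient"
    return None
```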
If a message is rejected, the sender can have the option of a prescribed action to establish trust through other means. For example, the sender can be allowed to call the intended recipient on the phone, contact him/her through a traditional email message, or make a required payment to the recipient or to a third party such as an internet service provider.
Explicit Trust Manipulation. In the trust system a user can be allowed to explicitly manipulate and modify his/her trust relationships with others, in combination with the inference of trust relationships from past communications. Such a feature can simplify the establishment of communication between the one user and the others. For example, a user can establish a list of other users permitted to send him/her email messages.
Explicit Trust Feedback. A trust relationship inferred from past communication patterns can be complemented and enhanced by explicit feedback from parties on actual or attempted communications. For example, an email client can offer two ways of deleting a message: (a) a "normal" delete of a message that is not needed any more, and (b) a "reject" of a message that was deemed fraudulent, disruptive, or otherwise undesired. In this respect, a determination can be made by the intended recipient of the message, e.g. when the recipient is allowed to inspect the attempted communication and decide whether to reject it, to wait for sender action, or to permit delivery. Thus, feedback can be generated at different stages, e.g. while a message remains pending or after its delivery. A "reject" can be used to lower the trust between the originator and the recipient of the message, and can even be used to affect the level of trust between other parties.
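One way to fold such feedback into the graph-inferred score is sketched below, assuming a multiplicative penalty per "reject" and a small additive bonus for positive ratings; both rules are assumptions, as the disclosure leaves the adjustment unspecified.

```python
from typing import Dict, Tuple

class FeedbackLedger:
    """Combines explicit per-pair feedback with the graph-inferred trust
    of the earlier TrustGraph sketch."""

    def __init__(self) -> None:
        self._adjust: Dict[Tuple[str, str], float] = {}

    def reject(self, recipient: str, sender: str, penalty: float = 0.5) -> None:
        # A "reject" (unlike a normal delete) lowers pairwise trust.
        key = (recipient, sender)
        self._adjust[key] = self._adjust.get(key, 1.0) * (1.0 - penalty)

    def approve(self, recipient: str, sender: str, bonus: float = 0.1) -> None:
        # Positive feedback on a delivered message raises pairwise trust.
        key = (recipient, sender)
        self._adjust[key] = self._adjust.get(key, 1.0) + bonus

    def adjusted_trust(self, graph: "TrustGraph",
                       recipient: str, sender: str) -> float:
        """Graph-inferred trust scaled by accumulated explicit feedback."""
        return graph.trust(recipient, sender) * \
            self._adjust.get((recipient, sender), 1.0)
```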
User Control of Trust Policy. The parties of an attempted communication, and in particular its destination(s), can be afforded a certain amount of control over the trust relationship with other parties, and over how the trust relationships lead to decisions on whether an attempted communication is accepted or not. For example, a user can be provided with control over a "trust threshold" parameter whose level determines whether or not the system lets attempted communications pass.
Interoperation With Users Not Equipped With the Technique. Special considerations arise concerning interoperation with other email parties that are not equipped with the technique. Specifically, the address of the originator of an email message, i.e. the "from" address, is easy to falsify, giving a rogue sender or attacker the ability to circumvent the system by pretending to be a trusted originator. But, while determining or guessing the addresses of trusted parties for a given recipient may be feasible, it is certainly more difficult than simply generating random source addresses, so the technique still offers a benefit. For more complete protection, a user can have the option not to receive communications from users not enabled with trust-based discrimination.
The technique as described so far can be augmented in various ways. For example, email messages from non-participating originators can be required to include a prescribed code or "cookie", e.g. in the subject field. The code is generated when the originator first obtains permission to send to one of the participating recipients. Unless the originator includes the code when sending a message to any participating recipient, the message will be rejected. To circumvent the technique, an attacker now needs not only the address of a trusted legitimate originator, but also its code. Further variations include using sequences of codes / one-time passwords, encryption-based authentication methods, and the like.
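One possible instantiation of the code/cookie check is sketched below, assuming an HMAC keyed by a deployment-wide secret; the disclosure calls only for "a prescribed code or cookie" and does not prescribe how the code is generated.

```python
import hashlib
import hmac

SYSTEM_SECRET = b"deployment-wide secret"  # hypothetical, held by the trust system

def issue_code(originator: str) -> str:
    """Code given to a non-participating originator when first granted
    permission to send to a participating recipient; it must appear in
    the subject field of later messages."""
    return hmac.new(SYSTEM_SECRET, originator.lower().encode(),
                    hashlib.sha256).hexdigest()[:12]

def subject_carries_code(originator: str, subject: str) -> bool:
    """Accept only if the subject carries the originator's code, so a
    forged 'from' address alone no longer defeats the filter."""
    return issue_code(originator) in subject
```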
Decentralized Implementation. While a centralized implementation of the technique is conceptually simpler, a decentralized implementation can offer improved scaling and greater robustness to failures and attacks. A decentralized implementation can extend an existing email environment, consisting of email clients, a message transfer agent such as the sendmail program, and the like, with software for interacting with its counterparts in other locations or domains to implement the same or similar functionality as described above for the centralized case.
Interaction with Explicit Trust Relationships. In known systems, parties can establish a-priori trust relationships by providing so-called "whitelists" of potential senders a recipient wishes to allow, and "blacklists" of senders to be blocked. Such lists may be shared, e.g. by organizations collecting and distributing blacklists, and lists may be updated/modified based on user feedback on attempted or completed communications. Such trust relationships can be combined with trust inferred by observing past communications, e.g. as follows: if two users A and B who trust each other share their whitelists, and if user A allows messages from a sender C, then B would automatically receive messages from C without himself having to whitelist C (a sketch of this sharing rule follows below).
Altering Communication Based on Trust. Beyond simply letting messages through where there is trust and blocking them otherwise, more complex interactions can be provided for. For example, where trust in a sender is partial, a message can be restricted, e.g. so that larger messages are truncated, large attachments are cut off, execution of an attached program is denied, several messages from the sender are combined into one, and/or the like. Further, where trust is partial, a message can be blocked and a request sent to the sender for action on his part, e.g. reduction of the size of the message, providing authentication, and/or the like.
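The whitelist-sharing rule of the A/B/C example can be sketched as follows, with hypothetical whitelist and trusted-peer mappings as inputs.

```python
from typing import Dict, Set

def allowed_by_shared_whitelists(recipient: str, sender: str,
                                 whitelists: Dict[str, Set[str]],
                                 trusted_peers: Dict[str, Set[str]]) -> bool:
    """B accepts C if B whitelists C directly, or if some trusted peer A
    of B (with whom B shares whitelists) has whitelisted C."""
    if sender in whitelists.get(recipient, set()):
        return True
    return any(sender in whitelists.get(peer, set())
               for peer in trusted_peers.get(recipient, set()))
```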
Control of Trust Policy. While trust policy may be set by an individual communicating party, there can be circumstances where the policy at least in part is set by another. In a company, for example, management can exercise a measure of control over how trust is established between its employees, and between outsiders and its employees.
Trust Between Classes of Participants. In addition to taking into account past communications between individual parties, a measure of trust can be determined between classes of parties, based on one or several attributes of the parties. For example, if the attribute is the user's domain, then trust can be inferred between a user A at one domain, D1, and a user B at another domain, D2, provided other users in domains D1 and D2 have communicated before. Thus, a measure of trust can be shared by members of a class, e.g. the employees of a company. Among further attributes for determining a class for trust sharing are geographic location, e.g. of cell/mobile phones and location-driven services, membership in a group or community, and the like.
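Domain-class trust can be sketched as follows; the edge-list input and the saturation constant are assumptions of the sketch.

```python
from typing import Iterable, Tuple

def domain_trust(prior_edges: Iterable[Tuple[str, str]],
                 sender: str, recipient: str, saturation: int = 10) -> float:
    """Class-level trust on the 'domain' attribute: users A@D1 and B@D2
    inherit trust from any prior traffic between domains D1 and D2, in
    either direction. prior_edges holds (from_address, to_address) pairs
    of prior communications."""
    d1 = sender.split("@")[-1].lower()
    d2 = recipient.split("@")[-1].lower()
    hits = sum(1 for u, v in prior_edges
               if {u.split("@")[-1].lower(), v.split("@")[-1].lower()} == {d1, d2})
    return min(1.0, hits / saturation)  # saturate after a handful of exchanges
```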

Claims

1. A communications service for communicating between parties operationally coupled via a communications network, wherein forwarding of a new communication from a first party source to at least one second party destination is contingent on at least one measure of trust inferred based on at least one prior communication in the network.
2. The service of claim 1, wherein the at least one prior communication is between the source and the at least one destination of the new communication.
3. The service of claim 2, wherein the at least one prior communication is from at least one destination of the new communication to the source of the new communication.
4. The service of claim 1, wherein the at least one prior communication comprises one prior communication between the source of the new communication and a third party, and another prior communication between the third party and the destination of the new communication.
5. The service of claim 4, wherein one of the prior communications is from the destination of the new communication to the third party, and another of the prior communications is from the third party to the source of the new communication.
6. The service of claim 1, wherein the measure of trust is inferred from a set of paths of trusted communications connecting the source to the at least one destination in a graph formed by parties and their prior communications.
7. The service of claim 6, wherein the paths are from each destination to the source, in direction opposite to prior communications.
8. The service of claim 6, wherein the measure of trust depends on attributes of the prior and/or new communications and the parties on the paths.
9. The service of claim 1, wherein the at least one prior communication comprises all prior communications between all parties.
10. The service of claim 1, wherein the prior communication is via the communications network.
11. The service of claim 1, wherein the prior communication is via means other than the communications network.
12. The service of claim 1, wherein the communications network comprises at least one of an electronic mail network, an electronic messaging network and a telephone network.
13. The service of claim 1, wherein one party is enabled to modify a trust relationship it has with another party.
14. The service of claim 13, wherein the modification depends on feedback from the one party regarding a prior communication from the other party.
15. The service of claim 1, wherein an entity other than a party is enabled to modify a trust relationship between at least one party and at least one other party.
16. The service of claim 1, wherein forwarding is contingent further on an a-priori criterion.
17. The service of claim 1, wherein, depending on the measure of trust, the message is forwarded in altered form.
18. The service of claim 1, wherein treatment of the message at the destination depends on the measure of trust.
19. The service of claim 1, wherein forwarding is contingent on the source meeting a condition demanded.
20. The service of claim 19, wherein the condition comprises payment of a fee.
21. The service of claim 1, wherein the measure of trust is utilized for further action.
22. The service of claim 21, wherein the further action comprises at least one of prioritizing, ordering, categorizing, repair of a deficient identification, fraud avoidance, advertisement targeting, people search and resource allocation for future communications.
23. The service of claim 1, wherein the measure of trust is contingent on a policy that applies to a class of parties.
24. The service of claim 1, wherein the measure of trust is shared between members of a class of parties.
25. A communications system for communicating between parties operationally coupled via a communications network, wherein forwarding of a new communication from a first party source to at least one second party destination is contingent on at least one measure of trust inferred based on at least one prior communication in the network.
26. The system of claim 25, wherein the at least one prior communication is between the source and the at least one destination of the new communication.
27. The system of claim 26, wherein the at least one prior communication is from at least one destination of the new communication to the source of the new communication.
28. The system of claim 25, wherein the at least one prior communication comprises one prior communication between the source of the new communication and a third party, and another prior communication between the third party and the destination of the new communication.
29. The system of claim 28, wherein one of the prior communications is from the destination of the new communication to the third party, and another of the prior communications is from the third party to the source of the new communication.
30. The system of claim 25, wherein the measure of trust is inferred from a set of paths of trusted communications connecting the source to the at least one destination in a graph formed by parties and their prior communications.
31. The system of claim 30, wherein the paths are from each destination to the source, in direction opposite to prior communications.
32. The system of claim 30, wherein the measure of trust depends on attributes of the prior and/or the new communications and the parties on the paths.
33. The system of claim 25, wherein the at least one prior communication comprises all prior communications between all parties.
34. The system of claim 25, wherein the prior communication is via the communications network.
35. The system of claim 25, wherein the prior communication is via means other than the communications network.
36. The system of claim 25, wherein the communications network comprises at least one of an electronic mail network, an electronic messaging network and a telephone network.
37. The system of claim 25, wherein one party is enabled to modify a trust relationship it has with another party.
38. The system of claim 37, wherein the modification depends on feedback from the one party regarding a prior communication from the other party.
39. The system of claim 25, wherein an entity other than a party is enabled to modify a trust relationship between at least one party and at least one other party.
40. The system of claim 25, wherein forwarding is contingent further on an a-priori criterion.
41. The system of claim 25, wherein, depending on the measure of trust, the message is forwarded in altered form.
42. The system of claim 25, wherein treatment of the message at the destination depends on the measure of trust.
43. The system of claim 25, wherein forwarding is contingent on the source meeting a condition demanded.
44. The system of claim 43, wherein the condition comprises payment of a fee.
45. The system of claim 25, wherein the measure of trust is utilized for further action.
46. The system of claim 45, wherein the further action comprises at least one of prioritizing, ordering, categorizing, repair of a deficient identification, fraud avoidance, advertisement targeting, people search and resource allocation for future communications.
47. The system of claim 25, wherein the measure of trust is contingent on a policy that applies to a class of parties.
48. The system of claim 25, wherein the measure of trust is shared between members of a class of parties.
49. A communications method for communicating between parties operationally coupled via a communications network, comprising: inferring, based on at least one prior communication in the network, at least one measure of trust between a first party and at least one second party; and forwarding, contingent on the measure of trust, a new communication from a first party source to at least one second party destination.
50. The method of claim 49, wherein the at least one prior communication is between the source and the at least one destination of the new communication.
51. The method of claim 50, wherein the at least one prior communication is from at least one destination of the new communication to the source of the new communication.
52. The method of claim 49, wherein the at least one prior communication comprises one prior communication between the source of the new communication and a third party, and another prior communication between the third party and the destination of the new communication.
53. The method of claim 52, wherein one of the prior communications is from the destination of the new communication to the third party, and another of the prior communications is from the third party to the source of the new communication.
54. The method of claim 49, wherein the measure of trust is inferred from a set of paths of trusted communications connecting the source to the at least one destination in a graph formed by parties and their prior communications.
55. The method of claim 54, wherein the paths are from each destination to the source, in direction opposite to prior communications.
56. The method of claim 54, wherein the measure of trust depends on attributes of the prior and/or new communications and the parties on the paths.
57. The method of claim 49, wherein the at least one prior communication comprises all prior communications between all parties.
58. The method of claim 49, wherein the prior communication is via the communications network.
59. The method of claim 49, wherein the prior communication is via means other than the communications network.
60. The method of claim 49, wherein the communications network comprises at least one of an electronic mail network, an electronic messaging network and a telephone network.
61. The method of claim 49, wherein one party is enabled to modify a trust relationship it has with another party.
62. The method of claim 61, wherein the modification depends on feedback from the one party regarding a prior communication from the other party.
63. The method of claim 49, wherein an entity other than a party is enabled to modify a trust relationship between at least one party and at least one other party.
64. The method of claim 49, wherein forwarding is contingent further on an a-priori criterion.
65. The method of claim 49, wherein, depending on the measure of trust, the message is forwarded in altered form.
66. The method of claim 49, wherein treatment of the message at the destination depends on the measure of trust.
67. The method of claim 49, wherein forwarding is contingent on the source meeting a condition demanded.
68. The method of claim 67, wherein the condition comprises payment of a fee.
69. The method of claim 49, wherein the measure of trust is utilized for further action.
70. The method of claim 69, wherein the further action comprises at least one of prioritizing, ordering, categorizing, repair of a deficient identification, fraud avoidance, advertisement targeting, people search and resource allocation for future communications.
71. The method of claim 49, wherein the measure of trust is contingent on a policy that applies to a class of parties.
72. The method of claim 49, wherein the measure of trust is shared between members of a class of parties.
PCT/US2004/005867 2003-02-27 2004-02-27 Minimizing unsolicited e-mail based on prior communications WO2004077710A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP04715665A EP1606718A4 (en) 2003-02-27 2004-02-27 Communications filtering and prioritizing using prior communications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US45057903P 2003-02-27 2003-02-27
US60/450,579 2003-02-27

Publications (2)

Publication Number Publication Date
WO2004077710A2 true WO2004077710A2 (en) 2004-09-10
WO2004077710A3 WO2004077710A3 (en) 2005-03-24

Family

ID=32927671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/005867 WO2004077710A2 (en) 2003-02-27 2004-02-27 Minimizing unsolicited e-mail based on prior communications

Country Status (2)

Country Link
EP (1) EP1606718A4 (en)
WO (1) WO2004077710A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008142440A1 (en) * 2007-05-18 2008-11-27 Surfcontrol On-Demand Limited Method and apparatus for electronic mail filtering
EP2069948A2 (en) * 2006-09-01 2009-06-17 Nuxo Technologies, Inc. Method and apparatus for filtering electronic messages
US8255987B2 (en) 2009-01-15 2012-08-28 Microsoft Corporation Communication abuse prevention
US9241259B2 (en) 2012-11-30 2016-01-19 Websense, Inc. Method and apparatus for managing the transfer of sensitive information to mobile devices
US20160357961A1 (en) * 2015-06-04 2016-12-08 Accenture Global Services Limited Security risk-based resource allocation

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2458094A (en) 2007-01-09 2009-09-09 Surfcontrol On Demand Ltd URL interception and categorization in firewalls

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6192114B1 (en) 1998-09-02 2001-02-20 Cbt Flint Partners Method and apparatus for billing a fee to a party initiating an electronic mail communication when the party is not on an authorization list associated with the party to whom the communication is directed
US20020124053A1 (en) 2000-12-28 2002-09-05 Robert Adams Control of access control lists based on social networks
WO2002080512A1 (en) 2001-03-30 2002-10-10 Elisa Communications Oyj Controlling method for contact requests using recommendations by approved contact-requesting parties
US20020199095A1 (en) 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6675153B1 (en) * 1999-07-06 2004-01-06 Zix Corporation Transaction authorization system
US6993564B2 (en) * 2000-12-22 2006-01-31 At&T Corp. Method of authorizing receipt of instant messages by a recipient user
US20030009698A1 (en) * 2001-05-30 2003-01-09 Cascadezone, Inc. Spam avenger
WO2003054719A1 (en) * 2001-12-19 2003-07-03 Secluda Technologies, Inc. Message processor
US6842807B2 (en) * 2002-02-15 2005-01-11 Intel Corporation Method and apparatus for deprioritizing a high priority client
US20030216982A1 (en) * 2002-05-17 2003-11-20 Tyler Close Messaging gateway for incentivizing collaboration
US7219148B2 (en) * 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020199095A1 (en) 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication
US6192114B1 (en) 1998-09-02 2001-02-20 Cbt Flint Partners Method and apparatus for billing a fee to a party initiating an electronic mail communication when the party is not on an authorization list associated with the party to whom the communication is directed
US20020124053A1 (en) 2000-12-28 2002-09-05 Robert Adams Control of access control lists based on social networks
WO2002080512A1 (en) 2001-03-30 2002-10-10 Elisa Communications Oyj Controlling method for contact requests using recommendations by approved contact-requesting parties

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1606718A4

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2069948A2 (en) * 2006-09-01 2009-06-17 Nuxo Technologies, Inc. Method and apparatus for filtering electronic messages
EP2069948A4 (en) * 2006-09-01 2010-05-05 Nuxo Technologies Inc Method and apparatus for filtering electronic messages
WO2008142440A1 (en) * 2007-05-18 2008-11-27 Surfcontrol On-Demand Limited Method and apparatus for electronic mail filtering
AU2008252599B2 (en) * 2007-05-18 2012-08-16 Websense Hosted R&D Limited Method and apparatus for electronic mail filtering
US8255987B2 (en) 2009-01-15 2012-08-28 Microsoft Corporation Communication abuse prevention
US8863244B2 (en) 2009-01-15 2014-10-14 Microsoft Corporation Communication abuse prevention
US9241259B2 (en) 2012-11-30 2016-01-19 Websense, Inc. Method and apparatus for managing the transfer of sensitive information to mobile devices
US10135783B2 (en) 2012-11-30 2018-11-20 Forcepoint Llc Method and apparatus for maintaining network communication during email data transfer
US20160357961A1 (en) * 2015-06-04 2016-12-08 Accenture Global Services Limited Security risk-based resource allocation
US9798877B2 (en) * 2015-06-04 2017-10-24 Accenture Global Services Limited Security risk-based resource allocation

Also Published As

Publication number Publication date
WO2004077710A3 (en) 2005-03-24
EP1606718A2 (en) 2005-12-21
EP1606718A4 (en) 2009-03-25

Similar Documents

Publication Publication Date Title
US9083695B2 (en) Control and management of electronic messaging
EP1523837B1 (en) Method and system for controlling messages in a communication network
EP1675333B1 (en) Detection of unwanted messages (spam)
US20040024823A1 (en) Email authentication system
AU782333B2 (en) Electronic message filter having a whitelist database and a quarantining mechanism
US7644274B1 (en) Methods of protecting against spam electronic mail
JP2005518173A (en) Email management service
Leiba et al. A Multifaceted Approach to Spam Reduction.
WO2004077710A2 (en) Minimizing unsolicited e-mail based on prior communications
JP4659096B2 (en) System and method for preventing unsolicited electronic message delivery by key generation and comparison
KR100996709B1 (en) Apparatus for blocking ?? application spam and method thereof
JP2003018324A (en) User filtering system and method for communication service
KR20020030704A (en) An E-mail service system with anti-spam mail using virtual E-mail addresses and method therefor
US20080177846A1 (en) Method for Providing E-Mail Spam Rejection Employing User Controlled and Service Provider Controlled Access Lists
KR20100013989A (en) Device and method for blocking spam based on turing test in voip service
US11916873B1 (en) Computerized system for inserting management information into electronic communication systems
Park et al. Spam Detection: Increasing Accuracy with A Hybrid Solution.
CN102598009A (en) Method and apparatus for filtering information
Helman Spam-a-Lot: The States' Crusade against Unsolicited E-Mail in Light of the Can-Spam Act and the Overbreadth Doctrine
WO2008122409A1 (en) Trust manager and method for enhanced protection against spam
Chaisamran et al. Trust-based SPIT detection by using call duration and social reliability
Shah et al. FLeSMA: a firewall level spam mitigation approach through a genetic classifier model
JP2003169095A (en) Electronic mail system and electronic mail distributing method
Uppin Hybrid Soft Computing Approach For Spam Filtering
Hameed et al. LENS: LEveraging Social Networking and trust against Spam

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004715665

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2004715665

Country of ref document: EP