WO2002042874A2 - Method and architecture for serving and caching web objects on the modern internet - Google Patents
- Publication number: WO2002042874A2 (PCT/US2001/043594)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- proxy cache
- client
- server
- cache
- request
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/2876—Pairs of inter-processing entities at each side of the network, e.g. split proxies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/957—Browsing optimisation, e.g. caching or content distillation
- G06F16/9574—Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/04—Protocols for data compression, e.g. ROHC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/30—Definitions, standards or architectural aspects of layered protocol stacks
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
- H04L69/322—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
- H04L69/329—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
Abstract
A new and novel method and architecture for serving and caching web objects on the modern internet is provided. A client sends a request over a network. The request is received and fulfilled by a proxy cache located on the client side of the network. The proxy cache is enabled to fulfill the request by information passed to it by at least one of a server and a reverse proxy cache located at a server side of the network via a pi-bonded connection with the proxy cache.
Description
Method and Architecture for Serving and Caching Web Objects on the Modern Internet
Cross Reference to Related Application
[001] This application is related to and claims priority under 35 U.S.C.
§119(e) from U.S. Provisional Application No. 60/252,542, filed November 22, 2000, the entire contents of which are hereby incorporated by reference.
Background of the Invention
[002] Currently, when a user desires to request and retrieve an object from the internet, an application program (i.e., a "client" 100) in the user's computer (not shown) establishes a connection with a network, such as a WAN 110, to a server 120 on which the requested object resides or is to be created. This simple architecture is shown in Figure 1.
[003] As a function of the server/client architecture illustrated in Figure 1, all requests and connections are made directly from the client 100 through the WAN 110 to the server 120. That is, every object that is needed by the client 100 is requested by the client 100 from the server 120 each time. Similarly, the requested object, which is either resident on or created by the server 120, is passed by server 120 to client 100 back through WAN 110. This architecture puts a large burden on the server 120, since every request from client 100 must be individually serviced by the server 120. This architecture also suffers from a large consumption of bandwidth, because the server 120 must send every byte of every request into the WAN 110. The client 100 must also receive every byte of every request from the WAN 110. As a result, web object requests are fetched at the speed of the slowest link between the origin server 120 and client 100.
[004] In an effort to address and improve the above drawbacks of the traditional architecture illustrated in Figure 1, it is also known to provide a proxy cache 130 on the client side of the WAN. This architecture is illustrated in Figure 2.
[005] A proxy is an intermediary program that acts as both a server and a client for the purpose of making requests on behalf of other clients. Requests are serviced internally or may be passed to another server. The use of a cache attached to the proxy lets the proxy provide static data from local storage rather than acquiring the data through the WAN for each request. By introducing a proxy cache 130 on the client side of the connection, some improvements are introduced. First, because the client 100 and proxy cache 130 are typically connected via a LAN (not shown) instead of a WAN, the client 100 no longer has to have direct internet access via the WAN 110. The client 100 only needs to be able to connect locally (through the LAN) and talk to the proxy cache 130. The proxy cache 130 will then broker the transaction, as necessary, with the server 120 through the WAN. This allows the client 100 to stay within the security of the LAN and not be exposed to the internet. Only the proxy cache 130 needs to be able to directly access the server 120. This allows the networking infrastructure to be optimized for web access to the proxy cache 130, and not every single client 100.
[006] As stated above, the use of a cache attached to a proxy lets the proxy provide static data from the cache rather than acquiring the data through the WAN for each request. This is done through a reactive technique. When the client makes a request, if the requested object exists in the cache it is served directly from the proxy, with no access to the origin server. This serving is done at the speed of the LAN or WAN, and with very little delay, or latency. One drawback is that the origin server has no knowledge or log of this transaction taking place. If the object that the client requests does not exist in the cache, then the proxy contacts the origin server and retrieves the object. The proxy stores this object in the cache and in parallel serves the object to the client. Thereafter, every request made for this object should be fetched from the cache if possible.
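The reactive hit/miss flow described above can be sketched in a few lines. This is an illustrative sketch only; `fetch_from_origin` is a hypothetical stand-in for a real WAN request to the origin server, not part of the patent's disclosure.

```python
class ProxyCache:
    """Minimal sketch of a reactive client-side proxy cache."""

    def __init__(self, fetch_from_origin):
        self.store = {}                        # url -> object bytes
        self.fetch_from_origin = fetch_from_origin

    def get(self, url):
        if url in self.store:                  # hit: served at LAN speed; the
            return self.store[url]             # origin server never sees it
        obj = self.fetch_from_origin(url)      # miss: broker the request over the WAN
        self.store[url] = obj                  # store for subsequent requests
        return obj                             # and serve the object to the client


origin_hits = []

def fetch(url):
    origin_hits.append(url)
    return b"object:" + url.encode()

cache = ProxyCache(fetch)
cache.get("/logo.png")
cache.get("/logo.png")   # second request is a cache hit; origin contacted once
```

Note that the drawback mentioned in the text is visible here: nothing in `get` informs the origin server of a cache hit, so no server-side log entry is produced for hits.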
[007] Some of the benefits of this configuration are that it allows a decrease in the use of WAN bandwidth for fetching web objects. This configuration also speeds up the client's requests by serving the content at LAN speeds, which are usually significantly faster than WAN speeds.
[008] However, because of the dynamic nature of some web objects, or the need to log the transactions, certain objects cannot, or should not, be cached. These objects simply pass through the proxy and are not stored in the cache. As a result, there remains a need for an architecture that will realize the improvements offered by the use of proxy caches while permitting the caching of objects not previously cacheable.
Summary of the Invention
[009] To address the foregoing and other disadvantages of the various architectures previously known in the art and to fulfill the above-stated needs in the industry, applicants have invented a new and novel method and architecture for serving and caching web objects on the modern internet. As set forth and described in detail below, the invention comprises a method and architecture for serving and caching web objects comprising the steps of requesting, from a client, a request over a network; and receiving and fulfilling the request by a proxy cache located on a client side of the network; wherein the fulfillment of the request by the proxy cache is enabled by information passed to the proxy cache by at least one of a server and a reverse proxy cache located at a server side of the network via a pi-bonded connection with the proxy cache.
Brief Description of the Drawings
[010] The present invention will now be described in detail with reference to the accompanying drawings, in which:
[011] Figure 1 depicts a known server/client architecture;
[012] Figure 2 depicts a known server/client/proxy cache architecture;
[013] Figure 3 depicts a serving and caching architecture in accordance with the present invention;
[014] Figure 4 depicts the architecture illustrated in Figure 3, and further illustrates the use of a pi-bonded connection between the reverse proxy cache and the proxy cache; and
[015] Figure 5 depicts the architecture illustrated in Figure 4, however with a pi-bonded connection between the server and the proxy cache.
Detailed Description of the Invention
[016] Reference is now made to Figure 3. Unlike the architecture shown in prior art Figure 2, Figure 3 illustrates a presently preferred embodiment of the present invention wherein a reverse proxy cache is provided on the server side of the architecture. Specifically, client 300, WAN 310, server 320, and proxy cache 330 are provided, as shown. As used herein, the term "reverse proxy cache" means a server that proxies content on the server side of the network.
[017] The introduction of a reverse proxy cache 340 in front of the server 320 allows a significant load to be removed from the server 320. Specifically, the client requests, that is, requests from either or both of client 300 and proxy cache 330, are sent to the reverse proxy cache 340. The reverse proxy cache 340 thus acts as a virtual server from the point of view of the proxy cache 330 and the client 300. That is, the reverse proxy cache 340 brokers this transaction between the server 320 and the client-side request, following the same or similar rules as the proxy cache 330. This allows the actual server 320 to sit behind the private network, and thus be more secure than having it fully exposed to the internet. The reverse proxy cache 340 further allows the server 320 to serve the content at LAN speeds to the proxy cache 330, and frees up resources faster than having to wait for the request to finish at WAN speeds.
[018] Origin servers, such as server 320, need to be powerful, and are thus an expensive resource, to handle dynamic content. Dynamic content comes from a multitude of sources: databases, scripts, applications, etc. Putting a reverse proxy cache 340 in front of these servers allows the non-dynamic, and thus cacheable, objects to be offloaded and served from the reverse proxy cache 340 instead of tying up the valuable resources of the origin server. Allowing the origin server more cycles to spend on dynamic content means more dynamic content can be served.
[019] Further according to the present invention, all requests are still handled in a reactive manner. That is, a web object is only cached if the object is requested.
[020] Additional new and novel features of the present invention will now be discussed in connection with Figure 4. As shown therein, a pi-bonded connection 350 is used to connect proxy cache 330 with reverse proxy cache 340. As used herein, the term "pi-bonded connection" means the communication between the server-side proxy cache and the client-side proxy cache, such that the two caches can intelligently share content to streamline the overall caching process. In this fashion, as further discussed below, the advantages of compression can be better exploited over traditional caching schemes. According to the present invention, the pi-bond connection between devices can extend current protocols such as HTTP and RTSP, or other communications means such as XML. Other protocols such as UDP or multicast may also be supported by the pi-bond configuration of the present invention. XML is also a good carrier of information and could carry HTTP and RTSP within the XML. In addition, it should be noted that, as described herein, the pi-bond not only carries control information but also controls and/or carries information that aids the originating source in determining whether the cache or request is permissible.
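As a rough illustration of how a pi-bond might extend HTTP, the sketch below attaches control metadata to an ordinary response via extension headers. The header names (`X-PiBond-*`) and the JSON encoding are invented for illustration; the patent does not specify a wire format.

```python
import json

def make_pibond_response(body, cacheable, next_likely):
    """Build HTTP-style headers carrying hypothetical pi-bond control data."""
    headers = {
        "Content-Length": str(len(body)),
        "X-PiBond-Cache-Permitted": "yes" if cacheable else "no",  # is caching permissible?
        "X-PiBond-Next-Likely": json.dumps(next_likely),           # predicted follow-up objects
    }
    return headers, body

def parse_pibond_control(headers):
    """Recover the control information on the client-side proxy cache."""
    return {
        "cacheable": headers.get("X-PiBond-Cache-Permitted") == "yes",
        "next_likely": json.loads(headers.get("X-PiBond-Next-Likely", "[]")),
    }
```

Because the control data rides alongside a normal response, a pi-bond-unaware proxy can simply ignore the extra headers, which is one way such an extension could stay compatible with existing HTTP infrastructure.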
[021] Of course, as will be understood by those skilled in the art, the present invention is not limited to pi-bonded connections. For example, proxy cache 330 and reverse proxy cache 340 may alternatively be connected via other configurations. For discussion purposes, however, the present invention will be further discussed in the context of a pi-bonded connection.
[022] By virtue of the architecture of the present invention, a special level of intelligence may be created that can be passed between the pi-bonded proxies and servers (reverse, forward, and transparent), and better optimizations can be achieved between the proxies and/or serving solutions than with simple reactive caching. That is, with traditional find-and-fetch configurations, there is little or no time to inject user-defined requirements. By virtue of the construction of the present invention, however, the ultimate originating source can have input for control purposes at a third-party location. As a result, certain features not heretofore available can be enabled. For example, once the reverse proxy cache 340 has identified a possible client-side proxy cache 330 with which it may establish a pi-bonded connection, the reverse proxy cache 340 may choose to pin a subset of its content to the proxy cache 330. By "pinning" the content is meant a manual function of passing the content to proxy cache 330. Thus, as a result of the present invention, content can be put in the proxy cache 330 either by reverse proxy cache 340 or directly from server 320 (as shown in Figure 5) before even the first user requests the data. That is, even if an edge device (such as proxy cache 330) has not seen an object requested, the origin server has, and can give information so that the requesting device knows that if one object has been requested, certain others are likely to follow. Such other likely objects can therefore be requested ahead of the actual client-side request. This intelligence may also be advantageous at the cache side with regard to client requests. There are many advantages resulting from this configuration. For example, if the content provider behind server 320 knows that a spike of requests might be coming its way from clients, server resources may be saved by pinning some of the content into client-side proxy caches 330 to help smooth out the initial spike of requests, thus significantly reducing the time necessary for the client 300 to fetch the requests. Notably, the enhancement of the end-user experience resulting from the increased speed and efficiency of the present invention applies even for the first users to fetch the requested objects. This is especially attractive for bandwidth-intensive objects, such as video and other heavy objects, as they can thus be cached at the proxy cache 330 and accessed locally by client 300 over a LAN (not shown).
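The "pinning" described in this paragraph, combined with the finite cache lifetime discussed later in the description, might look like the following sketch. The class, its method names, and the TTL-based expiry are assumptions made for illustration, not a disclosed implementation.

```python
import time

class PinnableCache:
    """Client-side proxy cache that accepts server-initiated pins (a sketch)."""

    def __init__(self):
        self.store = {}   # url -> (object bytes, expiry timestamp)

    def pin(self, url, obj, ttl):
        # the server or reverse proxy pushes the object before any client asks
        self.store[url] = (obj, time.monotonic() + ttl)

    def get(self, url):
        entry = self.store.get(url)
        if entry is None:
            return None
        obj, expiry = entry
        if time.monotonic() > expiry:
            del self.store[url]       # pinned objects live only a finite time
            return None
        return obj

cache = PinnableCache()
cache.pin("/promo.mp4", b"<video data>", ttl=300)   # pushed ahead of any request
```

A first-time requester of `/promo.mp4` would now be served at LAN speed, which is the pre-spike smoothing effect described above.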
[023] Another advantage resulting from the present invention is the ability to push objects to client-side proxy caches 330 that would normally not be cacheable because they would need to be tracked by the logs. As will be understood by those of skill in the art, this may be achieved because traditionally people tag content as "not cacheable" because they would otherwise lose control of the content and are uncomfortable with that. As such, proxies won't cache content that is marked non-cacheable. The present invention, however, allows those entities to keep some measure of control even when the content is served from third-party networks. Additionally, the pi-bonded control channel 350 allows the server 320 (as the owner or hoster of the content) to keep control of the flow and caching of the cached data. Server 320 may thus retain control of the content while realizing the benefits, in terms of cost and performance, of allowing the client-side proxy cache 330 to serve the content.
[024] By virtue of the structure and function of the present invention, including the provision of pi-bonded connection 350 between the server-side and client-side proxy caches, a richer set of compression options is possible between caches. More specifically, the pi-bonded connection 350 may utilize special rules or protocols about what is and is not compressed. Of course, one of the benefits of compression is the ability to decrease the amount of redundant data. But with individual small web objects, the full advantage of compression has not previously been realized. By using pi-bonding, as explained above, the present invention permits groups of small web objects to be compressed together, thus taking full advantage of the inter-object redundancies of data and achieving far greater compression ratios.
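The inter-object redundancy argument is easy to demonstrate: compressing several similar small objects as one group beats compressing each alone, because the shared structure is only encoded once. The sample objects and the use of zlib below are illustrative choices, not the patent's method.

```python
import zlib

# Three small, highly redundant web objects (e.g. similar HTML fragments)
objects = [
    b"<div class='item'><span>alpha</span></div>",
    b"<div class='item'><span>beta</span></div>",
    b"<div class='item'><span>gamma</span></div>",
]

individual = sum(len(zlib.compress(o)) for o in objects)   # each compressed alone
bundled = len(zlib.compress(b"".join(objects)))            # compressed as a group
```

In practice, a bundle would also need framing (object lengths or delimiters) so the receiving proxy can split the group back into individual objects before serving them.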
[025] Additionally, as will be apparent to those skilled in the art, with some added intelligence and management on the reverse proxy cache 340, very intelligent and complex cache pushing can be achieved by the present invention, such as may be useful in special high-speed, highly optimized and fast web surfing. Established jobs can be done at specified time windows, with a maximum pipe size specified by the lesser of the server and receiver. These can be done with any type of web object.
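The "maximum pipe size specified by the lesser of the server and receiver" reduces to taking the minimum over the two endpoints' sustainable rates. The helper functions below (names and the optional cap are assumptions for illustration) also check whether a scheduled push job fits its time window.

```python
def push_job_rate(server_bw, receiver_bw, cap=None):
    """Pipe size for a scheduled cache-push job: the lesser of what the
    sending and receiving sides can sustain, optionally capped further."""
    rate = min(server_bw, receiver_bw)
    return rate if cap is None else min(rate, cap)

def fits_window(object_bytes, rate, window_seconds):
    """Can the push finish inside the specified time window at this rate?"""
    return object_bytes / rate <= window_seconds
```

For example, a server that can send at 100 units/s to a receiver that can only absorb 40 units/s yields a pipe of 40 units/s, and a 1000-unit object then needs 25 seconds of the scheduled window.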
[026] First, a table of statistics is maintained by the proxy, such as proxy cache 330. Next, the present invention makes use of the fact that, when browsing a web page, browsers usually tend to request objects on the page in a pattern similar to that of someone else browsing the same page. That is, by having pi-bonded connection 350, the present invention enables the caching of content and gates access via the pi-bond. This architecture can be exploited according to the present invention. Specifically, when a request is made between pi-bond-enabled proxies 330 and 340, according to the present invention, extra information may be sent down the pi-bonded channel to the client-side proxy cache 330 about the statistically most highly requested object or objects after the presently-requested object. Of course, other useful statistical information may be sent as well to the client-side proxy cache 330, such as, for example, client-side bandwidth, location (in terms of country, area code, etc.), personal data, and others. The reverse proxy cache 340 can start to send these additional objects, identified through statistics or other intelligence resident in server 320, before the objects are even requested by the user from the server 320, through a simple cache pin, as described above. Then, when the client goes to make the request, the requested object or objects are already in the client-side proxy cache 330 and thus readily and quickly available to client 300 via the LAN. Further, the proxy cache 330 can also start to prepare for the potential request, even before the request is made by the client 300, as discussed above, for example, in paragraph [022]. Significantly, according to the present invention, this also can be done for objects that are not traditionally treated as cacheable, since the cached objects will only reside in the client-side proxy cache 330 for a finite amount of time, and are pinned to the proxy cache 330 just shortly before the client 300 would normally request the objects. As a result, the present invention permits some forms of dynamic data to get all of the speed benefits of a cache, without "actually" caching the object. End-users, through clients 300, can thus surf the internet at speeds that exceed the theoretical limits possible without the pi-bonding configuration of the present invention.
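One plausible shape for the proxy's statistics table is a simple follow-on counter: for each object, count which objects clients tend to request next, and push the top candidates down the pi-bond ahead of the actual request. This sketch is an assumption; the patent leaves the table's structure open.

```python
from collections import Counter, defaultdict

class RequestStats:
    """Per-proxy statistics table tracking which object follows which (a sketch)."""

    def __init__(self):
        self.follows = defaultdict(Counter)   # url -> Counter of next-requested urls
        self.last = None

    def record(self, url):
        if self.last is not None:
            self.follows[self.last][url] += 1   # url was requested right after self.last
        self.last = url

    def most_likely_next(self, url, k=1):
        # candidates to send down the pi-bonded channel ahead of the client request
        return [u for u, _ in self.follows[url].most_common(k)]
```

A real table would be per-session rather than global (requests from different clients interleave), which is one reason other statistics such as client-side bandwidth and location would travel over the same channel.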
[027] The present invention has been described above in the context of the presently preferred embodiments known to the inventors. Of course, other embodiments and variations will be apparent to those of skill in the art without departing from the spirit or scope of the present invention.
Claims
1. A method of serving and caching web objects comprising the steps of: requesting, from a client, a request over a network; receiving and fulfilling said request by a proxy cache located on a client side of said network; wherein the fulfillment of said request by said proxy cache is enabled by information passed to the proxy cache by at least one of a server and a reverse proxy cache located at a server side of said network via a pi-bonded connection with said proxy cache.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2002241497A AU2002241497A1 (en) | 2000-11-22 | 2001-11-23 | Method and architecture for serving and caching web objects on the modern internet |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US25254200P | 2000-11-22 | 2000-11-22 | |
US60/252,542 | 2000-11-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002042874A2 (en) | 2002-05-30 |
Family
ID=22956450
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/043594 WO2002042874A2 (en) | 2000-11-22 | 2001-11-23 | Method and architecture for serving and caching web objects on the modern internet |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2002241497A1 (en) |
WO (1) | WO2002042874A2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7171469B2 (en) * | 2002-09-16 | 2007-01-30 | Network Appliance, Inc. | Apparatus and method for storing data in a proxy cache in a network |
US7284030B2 (en) | 2002-09-16 | 2007-10-16 | Network Appliance, Inc. | Apparatus and method for processing data in a network |
US7552223B1 (en) | 2002-09-16 | 2009-06-23 | Netapp, Inc. | Apparatus and method for data consistency in a proxy cache |
- 2001-11-23: WO PCT/US2001/043594 (WO2002042874A2) — not active, Application Discontinuation
- 2001-11-23: AU AU2002241497A (AU2002241497A1) — not active, Withdrawn
Also Published As
Publication number | Publication date |
---|---|
AU2002241497A1 (en) | 2002-06-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WA | Withdrawal of international application | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |