CN103873562A - Cache method and cache system - Google Patents


Info

Publication number
CN103873562A
CN103873562A (application CN201410067920.5A)
Authority
CN
China
Prior art keywords
memory
content
destination address
memory location
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410067920.5A
Other languages
Chinese (zh)
Inventor
张意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Che Zhi Interconnect (beijing) Technology Co Ltd
Original Assignee
Che Zhi Interconnect (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Che Zhi Interconnect (beijing) Technology Co Ltd filed Critical Che Zhi Interconnect (beijing) Technology Co Ltd
Priority to CN201410067920.5A priority Critical patent/CN103873562A/en
Publication of CN103873562A publication Critical patent/CN103873562A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a cache method and a cache system, addressing the problem of low storage-space utilization in traditional caching techniques. The cache method is performed in a request server to handle an access request from a user, wherein the access request comprises a target address. The cache method comprises the steps of: searching an addressed memory for a storage location associated with the target address; when the storage location associated with the target address is found, reading the content at that storage location in a storage device of the request server as the target content; when no storage location associated with the target address is found, forwarding the access request to the source server corresponding to the access request, obtaining the corresponding content from the source server, and setting the obtained content as the target content; and returning the target content to the user. The technology provided by the invention can be applied in the field of caching technology.

Description

Caching method and caching system
Technical field
The present invention relates to the field of caching technology, and in particular to a caching method and a caching system.
Background technology
With the development of information technology and network technology, caching technology has gradually become a popular and indispensable field. Caching technology solves, to a certain extent, the problem that content downloads (such as web page downloads) cannot be completed because of network instability and similar causes, and it also improves the response speed to user requests.
However, the storage capacity of a traditional cache device is usually limited, so the content it can store is also limited, resulting in a low utilization rate of its storage space. How to further optimize the storage space of a cache device and improve its utilization is therefore a particularly important problem.
In addition, in a traditional cache device, if the cached content exceeds the cache space, old cached content must be deleted from the cache space to make room for new content, even though the old content may not yet be stale; this approach increases the burden on the network. Moreover, a system administrator must take the current cache device offline, empty its cached content, replace it with a larger cache space, and finally bring the cache device back online. A new caching scheme is therefore needed that can expand the cache space conveniently without interrupting the use of the cache device.
Summary of the invention
A brief summary of the present invention is provided below in order to give a basic understanding of some aspects of the invention. It should be appreciated that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical parts of the invention, nor to limit its scope. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed discussion later.
In view of this, the present invention provides a caching method and a caching system, so as to at least solve or alleviate the above-mentioned problems of traditional caching techniques.
According to one aspect of the present invention, a caching method is provided. The caching method is performed in a request server to process an access request from a user, where the user's access request comprises a target address. The caching method comprises: searching an addressed memory for a storage location associated with the target address; when the storage location associated with the target address is found, reading the content at that storage location in the storage device of the request server as the target content; when no storage location corresponding to the target address is found, forwarding the access request to the source server corresponding to the access request, obtaining the corresponding content from the source server, and taking the obtained content as the target content; and returning the target content to the user.
Optionally, in the caching method according to the present invention, after the corresponding content is obtained from the source server as the target content when no storage location corresponding to the target address is found, the caching method further comprises the steps of: calculating a storage location according to the target address; storing the target content at the calculated storage location in the storage device; and establishing, in the addressed memory, an entry associating the target address with the calculated storage location. The storage device may comprise multiple storage units, and the storage location may comprise a storage unit identifier; in that case the step of calculating the storage location according to the target address comprises determining one of the multiple storage units according to the target address, and including in the storage location the storage unit identifier corresponding to the determined storage unit.
According to another aspect of the present invention, a caching system is also provided. The caching system resides in a request server that is communicatively connected with a client to receive requests from the client. The caching system comprises: an addressed memory for storing entries that associate addresses with storage locations in the storage device of the request server; a request processor for receiving an access request from the client and searching the addressed memory for the storage location associated with the target address included in the access request; a content extraction unit which, when the request processor finds the storage location associated with the target address, reads the content at that storage location in the storage device as the target content and sends it to an output unit; a forwarding processor which, when the request processor does not find a storage location corresponding to the target address, forwards the access request to the source server corresponding to the access request, obtains the corresponding content from the source server, and sends the obtained content to the output unit as the target content; and the output unit, which returns the received target content to the client.
Optionally, the caching system according to the present invention further comprises a controller which, when the request processor does not find a storage location corresponding to the target address and after the forwarding processor has obtained the corresponding content from the source server as the target content, calculates a storage location according to the target address, stores the target content at the calculated storage location in the storage device, and establishes in the addressed memory an entry associating the target address with the calculated storage location.
Optionally, the storage device comprises multiple storage units, the storage location comprises a storage unit identifier, and the controller is configured to determine one of the multiple storage units according to the target address and to include in the storage location the storage unit identifier corresponding to the determined storage unit.
In the caching method and caching system according to the embodiments of the present invention described above, the addressed memory is searched for the storage location in the storage device corresponding to the target address in the user's access request, and depending on the lookup result the content is either extracted from the storage device or obtained from the source server and returned to the user. This can achieve at least one of the following benefits: optimizing the storage space of the storage device; improving the cache utilization of the storage device; and improving the response speed of the server to user requests.
In addition, according to the caching scheme of the present invention, the storage device used to store cached content comprises multiple storage units. With current storage technology, a new storage unit can be added or an existing one replaced online without taking the storage device out of operation. Therefore, when the storage space of the storage device is insufficient or a storage unit fails, a storage unit can be added or replaced directly, and only the new content, or the content previously cached in the failed unit, needs to be cached again in the new storage unit. This greatly facilitates maintenance of the cache device.
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention in conjunction with the accompanying drawings.
Brief description of the drawings
The present invention can be better understood by reference to the description given below in conjunction with the accompanying drawings, in which the same or similar reference numerals are used throughout to denote identical or similar parts. The drawings, together with the detailed description below, are included in and form part of this specification, and serve to further illustrate the preferred embodiments of the present invention and to explain its principles and advantages. In the drawings:
Fig. 1 is a block diagram schematically illustrating one example structure of a caching system according to an embodiment of the present invention;
Fig. 2 is a block diagram schematically illustrating another example structure of a caching system according to an embodiment of the present invention;
Fig. 3 is a flowchart schematically illustrating one exemplary process of a caching method according to an embodiment of the present invention; and
Fig. 4 is a flowchart schematically illustrating another exemplary process of a caching method according to an embodiment of the present invention.
Those skilled in the art will appreciate that the elements in the drawings are shown merely for simplicity and clarity and are not necessarily drawn to scale. For example, the size of some elements may be exaggerated relative to other elements to help improve understanding of the embodiments of the present invention.
Embodiment
Example embodiments of the present invention are described below in conjunction with the accompanying drawings. For clarity and conciseness, not all features of an actual implementation are described in the specification. It should be understood, however, that in developing any such practical embodiment, many implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, and that these constraints may vary from one implementation to another. Moreover, although such development work may be very complex and time-consuming, it is merely a routine task for those skilled in the art having the benefit of this disclosure.
It should also be noted here that, to avoid obscuring the present invention with unnecessary detail, the drawings show only the device structures and/or processing steps closely related to the solution of the present invention, while other details of little relevance to the present invention are omitted.
Embodiments of the present invention provide a caching system. The caching system resides in a request server that is communicatively connected with a client to receive requests from the client. The caching system comprises: an addressed memory for storing entries that associate addresses with storage locations in the storage device of the request server; a request processor for receiving an access request from the client and searching the addressed memory for the storage location associated with the target address included in the access request; a content extraction unit which, when the request processor finds the storage location associated with the target address, reads the content at that storage location in the storage device as the target content and sends it to an output unit; a forwarding processor which, when the request processor does not find a storage location corresponding to the target address, forwards the access request to the source server corresponding to the access request, obtains the corresponding content from the source server, and sends the obtained content to the output unit as the target content; and the output unit, which returns the received target content to the client.
Describe an example of caching system according to an embodiment of the invention in detail below in conjunction with Fig. 1.
It should be noted that this caching system resides in a request server, which is communicatively connected with a client to receive requests from the client. A request from a client may comprise, for example, an access request, and the access request may include, for example, a target address (a URL).
As shown in Figure 1, a caching system 100 according to an embodiment of the present invention comprises an addressed memory 110, a request processor 120, a content extraction unit 130, a forwarding processor 140, and an output unit 150.
The addressed memory 110 stores entries associating addresses with storage locations in the storage device of the request server. In one implementation of the caching system according to an embodiment of the present invention, the addressed memory 110 may, for example, be a key-value store in which the target address is the key and the storage location of the associated target content in the storage device is the value. For example, the addressed memory 110 may be a key-value NoSQL database.
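The key-value addressed memory described above can be sketched as a simple mapping from target address to storage location. The class and method names below are illustrative only, not taken from the patent; a real deployment would use a persistent key-value NoSQL database as the text suggests.

```python
class AddressedMemory:
    """Minimal sketch of the addressed memory: target address (key)
    mapped to a storage location in the storage device (value)."""

    def __init__(self):
        self._entries = {}

    def lookup(self, target_address):
        # Return the associated storage location, or None on a cache miss.
        return self._entries.get(target_address)

    def associate(self, target_address, storage_location):
        # Establish the entry linking the address to its storage location.
        self._entries[target_address] = storage_location
```

A lookup that returns `None` corresponds to the "storage location not found" branch handled by the forwarding processor.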
When a user sends, through a client, an access request comprising a target address (i.e., the URL to be visited) to the request server, the request processor 120 receives the access request and, according to it, searches the addressed memory 110 for the storage location associated with the target address included in the access request.
If the addressed memory 110 stores a storage location associated with the above target address, that is, when the request processor 120 finds the storage location associated with the target address, the content extraction unit 130 reads the corresponding content from that storage location in the storage device (i.e., the storage location associated with the target address) and sends the read content to the output unit 150 as the target content.
On the other hand, if no storage location associated with the above target address exists in the addressed memory 110, that is, when the request processor fails to find a storage location corresponding to the target address in the addressed memory 110, the forwarding processor 140 forwards the access request from the client to the source server corresponding to the access request, obtains the corresponding content from the source server, and sends the obtained content to the output unit 150 as the target content.
In one implementation, after obtaining the corresponding content from the source server, the forwarding processor 140 may directly send the obtained content to the output unit 150 as the above target content.
In another implementation, after obtaining the corresponding content from the source server, the forwarding processor 140 may instead decide, according to the size of the obtained content, whether to compress it, perform the compression or non-compression processing corresponding to that decision, and then take the resulting content as the target content. For example, the forwarding processor 140 may determine whether the size of the obtained content exceeds a predetermined value (for example, 1K bytes); when the size of the content exceeds the predetermined value, the content is compressed and the compressed content is taken as the target content, and when the forwarding processor 140 determines that the size of the content is less than or equal to the predetermined value, the content is not compressed and the content itself is taken directly as the target content. It should be noted that the predetermined value may be determined empirically or by testing, and is not limited to the 1K bytes mentioned in the above example. In this way, part of the content obtained from the source server (including the content stored in the storage device in the caching system 200 of another example according to the present invention) is compressed, which optimizes the storage space of the storage device, allows relatively more files to be cached under the same capacity, and thereby improves the cache utilization of the storage device. In addition, because more files can be cached, the response speed of the request server to user requests can be further improved.
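The conditional compression described above can be sketched as follows. The function name and the use of gzip are illustrative assumptions (the application example later in the text does mention gzip as one possible compression); the 1K-byte threshold is the example value from the text.

```python
import gzip

THRESHOLD = 1024  # 1K bytes, the example predetermined value from the text


def prepare_target_content(content: bytes):
    """Compress the fetched content only when it exceeds the threshold.

    Returns (target_content, is_compressed), mirroring the forwarding
    processor's decision described above.
    """
    if len(content) > THRESHOLD:
        return gzip.compress(content), True
    return content, False
```

Content at or below the threshold is passed through unchanged, so small objects avoid the compression overhead while large objects occupy less cache space.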
In yet another implementation, after obtaining the corresponding content from the source server, the forwarding processor 140 may also compress the obtained content unconditionally, without such a size check, and send the compressed content to the output unit 150 as the above target content. In this way, all of the content obtained from the source server (including the content stored in the storage device in the caching system 200 of another example according to the present invention) is compressed, which optimizes the storage space of the storage device, allows relatively more files to be cached under the same capacity, and thereby improves the cache utilization of the storage device. In addition, because more files can be cached, the response speed of the request server to user requests can be further improved.
In this way, the output unit 150 receives the target content described above from the content extraction unit 130 or from the forwarding processor 140, and then returns the received target content to the client.
In one implementation, the output unit 150 may return the received target content to the client directly, that is, forward it without processing.
In another implementation, after receiving the above target content, the output unit 150 may first determine whether the client that sent the access request supports compressed files, process the target content accordingly, and then return the processed target content to the client. For example, if the client that sent the access request does not support compressed files, the output unit 150 decompresses the target content and then returns the decompressed content to the client; if the client does support compressed files, the output unit 150 forwards the received target content to the client directly. It should be noted that in some of the examples described above the target content may be uncompressed; in that case, even when the output unit 150 determines that the client does not support compressed files, the target content is not decompressed but is forwarded to the client directly.
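The output unit's decision above can be sketched as a single function: decompress only when the content is compressed and the client cannot handle compressed files. Names are illustrative; gzip is assumed as the compression format, consistent with the application example later in the text.

```python
import gzip


def deliver(target_content: bytes, is_compressed: bool,
            client_supports_compression: bool) -> bytes:
    """Sketch of the output unit: decompress only when the client
    cannot handle compressed files; uncompressed content is forwarded
    unchanged regardless of client capability."""
    if is_compressed and not client_supports_compression:
        return gzip.decompress(target_content)
    return target_content
```

In an HTTP setting, `client_supports_compression` would typically be derived from the request's user agent or `Accept-Encoding` header, matching the user-agent check described in the application example below.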
Another example of a caching system according to an embodiment of the present invention is described below in conjunction with Fig. 2.
In the example shown in Fig. 2, the caching system 200 comprises, in addition to an addressed memory 210, a request processor 220, a content extraction unit 230, a forwarding processor 240, and an output unit 250, a controller 260.
As shown in Fig. 2, the addressed memory 210 stores entries associating addresses with storage locations in the storage device of the request server, and may have the same structure and function as the addressed memory 110 in the caching system 100 shown in Fig. 1 and achieve similar technical effects; this is not repeated here.
Likewise, as shown in Fig. 2, the request processor 220, content extraction unit 230, forwarding processor 240, and output unit 250 may respectively perform processing similar to that of the request processor 120, content extraction unit 130, forwarding processor 140, and output unit 150 in the caching system 100 described above in conjunction with Fig. 1. In other words, they may respectively have the same structure and function as their counterparts in the caching system 100 and achieve similar technical effects, so they are not described again one by one here. The description below focuses on the differences between the caching system 200 and the caching system 100.
In the caching system 200 shown in Fig. 2, similarly to the caching system 100 shown in Fig. 1, the request processor 220 searches the addressed memory 210, according to the received access request, for the storage location associated with the target address included in the access request. When the storage location associated with the target address is found, the content extraction unit 230 reads the corresponding content from that storage location in the storage device and sends the read content to the output unit 250 as the target content; when no storage location associated with the target address is found, the forwarding processor 240 forwards the access request from the client to the source server corresponding to the access request, obtains the corresponding content from the source server, and sends the obtained content to the output unit 250 as the target content.
However, in the caching system 200 shown in Fig. 2, unlike in the caching system 100 shown in Fig. 1, when the request processor 220 does not find a storage location corresponding to the target address in the addressed memory 210 and after the forwarding processor 240 has obtained the corresponding content (compressed or uncompressed) from the source server as the target content, the controller 260 calculates a storage location according to the target address, stores the target content at that storage location in the storage device (i.e., the storage location that the controller 260 calculated from the target address), and establishes a corresponding entry in the addressed memory 210, that is, an entry associating the target address with the calculated storage location.
In one implementation of the caching system according to an embodiment of the present invention, the storage device of the request server may comprise multiple storage units, and the storage location may comprise, for example, a storage unit identifier. In this implementation, the controller 260 may, for example, determine one of the multiple storage units according to the target address and include in the storage location the storage unit identifier corresponding to the determined storage unit. In one example of this implementation, the controller 260 may apply a consistent hashing algorithm (see, for example: http://baike.***.com/link url=tpK61XvTV6wIEgMsvmmb3lQPbLH-Z PYCd8fxjmU1tjeuybbZejpgHKuiI9Ujw4yN9lZvpiq_UzwQ_ZVCu6gJZ a) to the target address and take the result modulo the number of storage units, so as to select, among the multiple storage units, the storage unit corresponding to the calculated modulus value as the determined one. It should be noted that the number of storage units included in the storage device may be determined, for example, empirically or by testing, which is not elaborated here.
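The hash-and-modulo selection described above can be sketched as follows. MD5 is used here purely as an illustrative stand-in for whatever hash the consistent hashing algorithm applies to the target address; the function name is also an assumption, not from the patent.

```python
import hashlib


def select_storage_unit(target_address: str, num_units: int) -> int:
    """Hash the target address and take the result modulo the number
    of storage units, yielding the index of the determined unit."""
    digest = hashlib.md5(target_address.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_units
```

Because the hash is deterministic, every later access to the same target address maps to the same storage unit, which is what makes the subsequent lookup in the addressed memory consistent with where the content was stored.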
Through the above processing, the location where the content fetched from the source server will be stored can be calculated from the target address, a correspondence between the target address and the storage location of the content on the storage device can be established, and the corresponding lookup in the addressed memory can be performed conveniently in subsequent processing when a user accesses the same target address again. This improves the lookup speed and thereby the response speed to user requests.
An application example of the caching system 200 according to an embodiment of the present invention is described in detail below.
In this application example, assume that the storage device of the request server comprises 10 hard disks (as examples of storage units) and stores content associated with URLs, and that this content is compressed (for example, gzip-compressed) when a predetermined condition is met. For example, if the uncompressed size of a piece of content exceeds 1K bytes, the content is stored in the storage device only after compression; otherwise the content is stored uncompressed. The addressed memory 210 is, for example, a key-value NoSQL database, with the URL as the key and the storage location on the storage device of the content associated with the URL as the value.
Assume that a user UA sends, through a client, an access request RQ1 to the request server in which the caching system 200 resides, and that the access request RQ1 comprises a certain target address URL1. Upon receiving the access request RQ1, the request processor 220 starts searching the addressed memory 210 for the storage location associated with the target address URL1.
In this application example, assume that the request processor 220 fails to find a storage location associated with the target address URL1 in the addressed memory 210. The forwarding processor 240 then forwards the access request RQ1 to the source server corresponding to the access request RQ1 and obtains the content corresponding to the access request RQ1 from that source server.
Next, the forwarding processor 240 determines whether the size of the obtained content exceeds 1K bytes (as an example of the predetermined value). In this application example, assume that the size of the obtained content exceeds 1K bytes. The forwarding processor 240 therefore compresses the content and sends the compressed content to the output unit 250 as the target content to be returned to user UA.
Meanwhile, the controller 260 calculates the corresponding storage location P1 from the target address URL1. For example, it applies the consistent hashing algorithm to the target address URL1 and takes the result modulo the number of hard disks, 10, obtaining a value between 0 and 9. Suppose the calculated modulus value is 2; the hard disk corresponding to 2 can then be determined as the disk to be selected, so that the storage location corresponding to the target address URL1 includes the hard disk identifier of the disk corresponding to modulus value 2. In one example, suppose the 10 hard disks are disks A, B, C, ..., J, corresponding to modulus values 0, 1, 2, ..., 9 respectively. The hard disk corresponding to modulus value 2 is then disk C, so the storage location P1 corresponding to the target address URL1 includes the hard disk identifier of disk C. The controller 260 can then store the compressed content at the storage location P1 of the storage device, that is, on disk C, and establish in the addressed memory 210 an entry associating the target address URL1 with the storage location P1.
It should be noted that when the number of storage units is N (N being an integer greater than 1), the modulus value obtained by the above method is a value between 0 and N-1.
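The 10-disk mapping in the application example above can be made concrete as follows; the function name and list-based mapping are illustrative only.

```python
# Disks A..J correspond to modulus values 0..9, as in the application example.
DISKS = [chr(ord("A") + i) for i in range(10)]  # ['A', 'B', ..., 'J']


def disk_for_mod_value(mod_value: int) -> str:
    """Map a modulus value in 0..9 to its hard disk identifier,
    so that modulus value 2 selects disk C, as in the example."""
    return DISKS[mod_value]
```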
In addition, after the forwarding processor 240 compresses the content obtained from the source server and sends it to the output unit 250, the output unit 250 can process the request according to the user agent of user UA: for example, if the user agent indicates that user UA's browser (as an example of a client) does not support compressed files, the output unit 250 can decompress the target content to be returned to user UA and then return it to user UA.
Further assume in this application example that, after user UA accesses URL1, another user UB requests access to the target address URL1. In other words, user UB sends, through a client, an access request RQ2 to the request server in which the caching system 200 resides, and the target address included in the access request RQ2 is identical to the target address URL1 in the access request RQ1. Thus, when the request processor 220 again searches the addressed memory 210 for the storage location associated with the target address URL1, it finds the storage location P1 associated with URL1. The content extraction unit 230 then reads the corresponding content from the storage location P1 of the storage device (i.e., disk C) and sends the read content to the output unit 250 as the target content to be returned to user UB. The output unit 250 can then process the request according to the user agent of user UB: for example, if the user agent indicates that user UB's browser supports compressed files, the output unit 250 can return the target content directly to user UB.
As the above description shows, the caching system according to an embodiment of the present invention searches the addressed memory for the storage location in the storage device corresponding to the target address in the user's access request, and, depending on the lookup result, either extracts content from the storage device or obtains it from the source server and returns it to the user. In some embodiments, the target content is compressed, or compressed conditionally (for example, when it exceeds 1K bytes, as mentioned above), to optimize the storage space of the storage device and improve its cache utilization. Through this optimization of storage space, more files can be cached by the storage device under the same storage capacity, and more user requests can therefore be answered more quickly. In addition, in some embodiments, by applying a consistent hashing algorithm to the target address and performing operations such as taking the result modulo the number of storage units, the calculation result can be associated with a storage location, so that the content corresponding to different target addresses is stored at storage locations determined by the corresponding calculated modulus values. This facilitates the lookup processing when a user accesses the same address again in subsequent processing, and thereby improves the response speed of the server to user requests.
An embodiment of the invention further provides a caching method. The caching method is performed in a request server to process an access request from a user, where the user's access request includes a destination address. The caching method comprises: searching an addressed memory for a memory location associated with the destination address; when the memory location associated with the destination address is found, reading the content at that memory location of the storage device of the request server as the target content; when no memory location corresponding to the destination address is found, forwarding the access request to the source server corresponding to the access request, obtaining the corresponding content from the source server, and taking the obtained content as the target content; and returning the target content to the user.
An exemplary process of the above caching method is described below with reference to Fig. 3.
It should be noted that the caching method is performed in a request server in order to process access requests from users, where a user's access request includes a destination address (URL).
As shown in Fig. 3, the processing flow 300 of the caching method according to an embodiment of the invention starts at step S310 and then proceeds to step S320.
In step S320, the addressed memory is searched for a memory location associated with the destination address. The request server comprises a storage device, and the addressed memory can store entries that associate addresses with memory locations in the storage device. The processing of "searching for a memory location associated with the destination address" performed in step S320 may, for example, be identical to the processing performed by request processor 120 in caching system 100 described above with reference to Fig. 1, and can achieve similar functions and technical effects; it is not repeated here.
In an implementation of the caching method according to an embodiment of the invention, the addressed memory may be a key-value store whose keys are destination addresses and whose values are the memory locations, in the storage device, of the target contents associated with those addresses. For example, the addressed memory may be a NoSQL database of the key-value type.
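The key-value organization of the addressed memory can be sketched as follows. This is a minimal illustration using a Python dict in place of a real key-value NoSQL store; the example URL and the structure of the memory-location value (a unit identifier plus a path) are assumptions of the sketch.

```python
# Addressed memory as a key-value mapping:
# destination address (URL) -> memory location in the storage device.
# A plain dict stands in for a key-value NoSQL database here.

addressed_memory = {}

def save_entry(url, location):
    """Create an entry associating a destination address with a memory location."""
    addressed_memory[url] = location

def find_location(url):
    """Return the memory location for url, or None on a cache miss."""
    return addressed_memory.get(url)

save_entry("http://example.com/page1", {"unit": "C", "path": "/cache/abc.gz"})
hit = find_location("http://example.com/page1")   # found: a memory location
miss = find_location("http://example.com/other")  # not found: None
```

When the lookup returns None, the request must be forwarded to the source server, which corresponds to the miss branch of the method described below.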
In step S320, if a memory location associated with the destination address is found in the addressed memory, step S330 is performed. For convenience, the "memory location associated with the destination address" is referred to below simply as "associated memory location P".
In step S330, content is read from the associated memory location P of the storage device of the request server as the target content corresponding to the destination address. Then step S350 is performed. The processing performed in step S330 may, for example, be identical to that performed by content extraction unit 130 in caching system 100 described above with reference to Fig. 1, and can achieve similar functions and technical effects; it is not repeated here.
In addition, in step S320, if the associated memory location P is not found in the addressed memory, step S340 is performed.
In step S340, the user's access request is forwarded to the source server corresponding to the access request, the content corresponding to the access request is obtained from the source server, and the obtained content is taken as the target content corresponding to the destination address. Then step S350 is performed. The processing performed in step S340 may, for example, be identical to that performed by forwarding processor 140 in caching system 100 described above with reference to Fig. 1, and can achieve similar functions and technical effects; it is not repeated here.
In step S350, the current target content is returned to the user, and the method ends at step S360. The processing performed in step S350 may, for example, be identical to that performed by output unit 150 in caching system 100 described above with reference to Fig. 1, and can achieve similar functions and technical effects; it is not repeated here.
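The hit/miss flow of steps S320-S350 can be sketched as follows. The `read_from_storage` and `fetch_from_origin` callables are hypothetical stand-ins for the storage device and the source server, and are assumptions of this sketch.

```python
# Minimal sketch of steps S320-S350: look up the destination address in the
# addressed memory; on a hit, read the cached content from the storage
# device; on a miss, fetch the content from the source server.

def handle_request(url, addressed_memory, read_from_storage, fetch_from_origin):
    location = addressed_memory.get(url)      # S320: search addressed memory
    if location is not None:                  # S330: hit -> read from storage
        target_content = read_from_storage(location)
    else:                                     # S340: miss -> forward to origin
        target_content = fetch_from_origin(url)
    return target_content                     # S350: return target content

storage = {"loc1": b"cached page"}
mem = {"http://example.com/a": "loc1"}
hit = handle_request("http://example.com/a", mem, storage.get, lambda u: b"origin page")
miss = handle_request("http://example.com/b", mem, storage.get, lambda u: b"origin page")
```

On the miss branch, a real implementation would additionally store the fetched content and create an addressed-memory entry, as described for steps S442-S446 below.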
In an implementation of the caching method according to an embodiment of the invention, the target content in step S350 is, for example, compressed content. In that case, before the target content is returned to the user, it can first be determined whether the client that sent the access request supports compressed files: if the client supports compressed files, the target content is sent to the user directly; if the client does not support compressed files, the target content is decompressed before being sent to the user. The compression format of the target content is, for example, the gzip format. In one example, the content stored in the storage device is compressed content, and the content obtained from the source server in step S340 is compressed before being taken as the target content. With the processing in this example, the "current target content" in step S350 is compressed content.
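The client check described above can be sketched as follows. The boolean `supports_gzip` flag stands in for inspecting the user agent of the client's request and is an assumption of this sketch.

```python
import gzip

# Minimal sketch: cached target content is stored gzip-compressed; it is
# returned as-is when the client supports compressed files and decompressed
# first when it does not.

def prepare_response(compressed: bytes, supports_gzip: bool) -> bytes:
    if supports_gzip:
        return compressed               # send compressed content directly
    return gzip.decompress(compressed)  # decompress before sending

cached = gzip.compress(b"<html>page</html>")
as_compressed = prepare_response(cached, True)
as_plain = prepare_response(cached, False)
```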
In addition, in another implementation of the caching method according to an embodiment of the invention, when the content corresponding to the access request is obtained from the source server in step S340, whether to compress it can be decided according to the size of the content. For example, if the size of the obtained content is greater than a predetermined value, the content is compressed, the compressed content is stored, and accordingly the compressed content is returned to the user in step S350; if the size of the obtained content is less than or equal to the predetermined value, it is not compressed, the uncompressed content is likewise stored in the storage device, and accordingly the content is returned to the user directly in step S350.
The predetermined value may be, for example, 1 KB. It should be noted that the predetermined value can be determined empirically or by testing, and is not limited to the 1 KB described above.
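The conditional compression described above can be sketched as follows, using the 1 KB example threshold; the threshold is tunable as the text notes.

```python
import gzip

# Minimal sketch: content fetched from the source server is gzip-compressed
# only when its uncompressed size exceeds a predetermined value (1 KB in the
# patent's example); smaller content is stored uncompressed.

THRESHOLD = 1024  # bytes; the example predetermined value

def maybe_compress(content: bytes):
    """Return (stored_bytes, is_compressed) for fetched content."""
    if len(content) > THRESHOLD:
        return gzip.compress(content), True
    return content, False

small, small_compressed = maybe_compress(b"x" * 100)   # stored as-is
large, large_compressed = maybe_compress(b"x" * 5000)  # stored compressed
```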
In addition, in one implementation, the content stored in the storage device may also be only partly compressed. For example, a piece of content is compressed if its uncompressed size is greater than the predetermined value (such as the 1 KB mentioned above), and otherwise is not compressed.
Through the above processing, all or part of the content stored in the storage device or obtained from the source server is compressed content, which optimizes the storage space of the storage device, allows relatively more files to be cached under the same capacity, and thereby improves the cache utilization of the storage device. In addition, because more files are cached, the response speed of the request server to user requests can be further improved.
Another exemplary process of the above caching method is described below with reference to Fig. 4.
As shown in Fig. 4, the processing flow 400 of the caching method according to an embodiment of the invention starts at step S410 and then proceeds to steps S420-S440.
In processing flow 400, steps S420-S440 can respectively perform the same processing as the corresponding steps S320-S340 in processing flow 300 described above with reference to Fig. 3, and can achieve similar technical effects; it is not repeated here.
As shown in Fig. 4, after step S440 is performed and before step S450 is performed, steps S442-S446 are performed.
In step S442, a memory location can be calculated from the destination address. Then step S444 is performed.
In an implementation of the caching method according to an embodiment of the invention, the storage device may comprise a plurality of storage units, for example a plurality of hard disks, each hard disk being one storage unit. In addition, in this implementation, the memory location may, for example, include a storage unit identifier, such as a hard disk identifier.
Thus, in this implementation, the processing in step S442 can be realized, for example, by the following process: determining one of the plurality of storage units according to the destination address, and making the memory location include the storage unit identifier corresponding to the determined storage unit.
In step S444, the target content is stored at the calculated memory location in the storage device. Then step S446 is performed.
In step S446, an entry associating the destination address with the calculated memory location is created in the addressed memory. Then step S450 is performed.
Through the above processing, the location at which the content fetched from the source server will be stored can be calculated from the destination address, and the correspondence between the destination address and the memory location of the content on the storage device can be established. This simplifies the corresponding lookup in the addressed memory when a user accesses the destination address again in subsequent processing, improves the lookup speed, and thereby improves the response speed to user requests.
In one example, the processing of determining one of the plurality of storage units according to the destination address in step S442 above can be realized as follows: the destination address is hashed with a consistent hashing algorithm and the result is taken modulo the number of storage units, and the storage unit corresponding to the calculated modulo value is selected, among the plurality of storage units, as the determined storage unit.
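The unit selection of step S442 can be sketched as follows. The patent names a consistent hashing algorithm; MD5 is used here only as a stand-in hash function and is an assumption of this sketch.

```python
import hashlib

# Minimal sketch of step S442's storage-unit selection: hash the destination
# address and take the result modulo the number of storage units, yielding a
# modulo value in 0..num_units-1 that identifies one unit.

def select_unit(url: str, num_units: int) -> int:
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_units

unit = select_unit("http://example.com/page1", 10)
same = select_unit("http://example.com/page1", 10)  # deterministic: same URL, same unit
```

Because the hash is deterministic, repeated accesses to the same destination address always map to the same storage unit, which is what makes the later lookup by address possible.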
As shown in Fig. 4, in step S450 the current target content is returned to the user; the same processing is performed as in step S350 of processing flow 300 described above, and similar technical effects can be achieved, so it is not repeated here. Then, in step S460, processing flow 400 ends.
It should also be noted that the processing of step S450 can be realized using any possible implementation or sub-process of step S350 in processing flow 300 described above, and can achieve similar technical effects; these are not repeated one by one here.
An application example of the caching method according to an embodiment of the invention is described in detail below.
In this application example, the access request of user UA includes a certain destination address, denoted for example URL1.
The storage device of the request server comprises, for example, 10 hard disks (as an example of storage units). It should be noted that the number of hard disks in the storage device can be determined empirically or by testing, and is not limited to the 10 described above; it may be any other positive integer.
In addition, contents associated with URLs are stored in the storage device, and these contents are compressed (for example, gzip-compressed) when a predetermined condition is met. For example, if the uncompressed size of a piece of content exceeds 1 KB, the content is compressed before being stored in the storage device; otherwise the content is stored uncompressed.
The addressed memory is, for example, a NoSQL database of the key-value type, with the URL as the key and the memory location, on the storage device, of the content associated with the URL as the value.
Suppose that no memory location associated with URL1 is found in the addressed memory. The access request of user UA is then forwarded to the source server corresponding to the access request, and the content corresponding to the access request is obtained from the source server. It is then determined whether the size of the obtained content is greater than 1 KB (as an example of the predetermined value).
Suppose the size of the obtained content is greater than 1 KB; the content is then compressed, and the compressed content is taken as the target content to be returned to user UA.
In addition, the corresponding memory location P1 is calculated from the destination address URL1 of user UA. For example, the destination address URL1 is hashed with a consistent hashing algorithm and the result is taken modulo the number of hard disks, 10, yielding a value between 0 and 9. Suppose the calculated modulo value is 2; the hard disk corresponding to 2 can then be determined as the selected hard disk, and the memory location corresponding to destination address URL1 includes the hard disk identifier of the disk corresponding to modulo value 2. In one example, suppose the 10 hard disks, disk A, disk B, disk C, ..., disk J, correspond to the modulo values 0, 1, 2, ..., 9, respectively. The hard disk corresponding to modulo value 2 is then disk C, so the memory location P1 corresponding to destination address URL1 can include the hard disk identifier of disk C. The compressed content can then be stored at memory location P1 of the storage device, i.e., on disk C. In addition, an entry associating destination address URL1 with the calculated memory location P1 is created in the addressed memory.
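The disk-letter mapping of this application example can be sketched as follows: modulo values 0-9 correspond to disks A-J, and the selected disk's identifier becomes part of the memory location. MD5 again stands in for the hash function and is an assumption of this sketch.

```python
import hashlib

# Worked sketch of the application example: 10 disks labelled A..J
# correspond to modulo values 0..9; the hash of the URL modulo 10
# picks the disk whose identifier goes into the memory location.

DISKS = "ABCDEFGHIJ"  # modulo value 0 -> disk A, ..., 9 -> disk J

def memory_location(url: str) -> dict:
    mod_value = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16) % len(DISKS)
    return {"disk": DISKS[mod_value], "mod": mod_value}

loc = memory_location("http://example.com/url1")
# In the text's example the modulo value is 2, so the location includes
# disk C's identifier; the actual value here depends on the hash.
```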
It should be noted that when the number of storage units is N (N being an integer greater than 1), the modulo value obtained by the above method is a value between 0 and N-1.
Meanwhile, after the content obtained from the source server has been compressed, processing can be performed according to the user agent of user UA's request: for example, if the user agent indicates that user UA's browser (as an example of the client) does not support compressed files, the target content to be returned to user UA can be decompressed before being returned to user UA.
In addition, suppose that in this application example, after user UA accesses URL1, another user UB requests access to the same destination address URL1. When the addressed memory is again searched for the memory location associated with URL1, the memory location P1 associated with URL1 is found. The corresponding content is then read from memory location P1 of the storage device (i.e., disk C), and the content read is taken as the target content to be returned to user UB. Processing can then be performed according to the user agent of user UB's request: for example, if the user agent indicates that user UB's browser supports compressed files, the target content to be returned to user UB can be returned to user UB directly.
As can be seen from the above description, the caching method according to an embodiment of the invention searches the addressed memory for the memory location, in the storage device, that corresponds to the destination address in the user's access request, and, depending on the lookup result, either extracts the content from the storage device or fetches it from the source server and returns it to the user. In some embodiments, the target content is stored as compressed content, or is compressed conditionally (for example, compressed only when it is larger than 1 KB as described above), so as to optimize the storage space of the storage device and improve its cache utilization. With this optimization of storage space, more files can be cached under the same storage capacity, so that more user requests can be answered quickly. In addition, in some embodiments, by applying a consistent hashing algorithm to the destination address and taking the result modulo the number of storage units, the computed result can be associated with a memory location; contents corresponding to different destination addresses are thus stored at memory locations determined by their respective modulo values, which simplifies the lookup when a user accesses the same address again in subsequent processing and thereby improves the server's response speed to user requests.
It should be noted that the steps and sub-steps of the caching method according to embodiments of the invention can respectively perform the same processing as the corresponding units or components in any of the caching systems described above with reference to Figs. 1 and 2, and can achieve similar functions and effects; these are not repeated one by one here.
In the above description of specific embodiments of the invention, features described and/or illustrated for one embodiment can be used in the same or a similar manner in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
A7. The caching method according to any one of A1-A6, wherein the addressed memory is a key-value store with the destination address as the key and the memory location, in the storage device, of the associated target content as the value.
A8. The caching method according to any one of A1-A7, further comprising the step of: determining whether the client that sent the access request supports compressed files; if so, sending the target content directly to the user, and otherwise decompressing the target content before sending it to the user.
B13. The caching system according to any one of B9-B12, wherein the forwarding processor is further configured to: after obtaining the corresponding content from the source server, determine the size of the obtained content, and when the size of the obtained content is greater than a predetermined value, compress the obtained content before taking it as the target content.
B14. The caching system according to B13, wherein the predetermined value is 1 KB.
B15. The caching system according to any one of B9-B14, wherein the addressed memory is a key-value store with the destination address as the key and the memory location, in the storage device, of the associated target content as the value.
B16. The caching system according to any one of B9-B15, wherein the output unit is further configured to: determine whether the client that sent the access request supports compressed files; if so, send the target content directly to the client, and otherwise decompress the target content before sending it to the client.
In addition, the methods of the embodiments of the invention are not limited to being performed in the temporal order described in the specification or shown in the drawings; they can also be performed in other temporal orders, in parallel, or independently. Therefore, the order of execution of the methods described in this specification does not limit the technical scope of the invention.
Finally, it should also be noted that in this document, relational terms such as left and right or first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.

Claims (10)

1. A caching method, performed in a request server to process an access request from a user, wherein the user's access request comprises a destination address, the caching method comprising:
searching an addressed memory for a memory location associated with the destination address;
when the memory location associated with the destination address is found, reading content from that memory location of a storage device of the request server as target content;
when no memory location corresponding to the destination address is found, forwarding the access request to a source server corresponding to the access request, obtaining corresponding content from the source server, and taking the obtained content as the target content; and
returning the target content to the user.
2. The caching method according to claim 1, wherein, after the corresponding content is obtained from the source server as the target content when no memory location corresponding to the destination address is found, the caching method further comprises the steps of:
calculating a memory location according to the destination address;
storing the target content at the calculated memory location in the storage device; and
creating, in the addressed memory, an entry associating the destination address with the calculated memory location.
3. The caching method according to claim 2, wherein the storage device comprises a plurality of storage units, the memory location comprises a storage unit identifier, and the step of calculating a memory location according to the destination address comprises:
determining one of the plurality of storage units according to the destination address, and including in the memory location the storage unit identifier corresponding to the determined storage unit.
4. The caching method according to claim 3, wherein the step of determining one of the plurality of storage units according to the destination address comprises:
hashing the destination address with a consistent hashing algorithm and taking the result modulo the number of storage units, so as to select, among the plurality of storage units, the storage unit corresponding to the calculated modulo value as the determined storage unit.
5. The caching method according to any one of claims 1-4, further comprising the step of:
determining the size of the content obtained from the source server, and compressing the obtained content when the size of the obtained content is greater than a predetermined value.
6. The caching method according to claim 5, wherein the predetermined value is 1 KB.
7. A caching system, residing in a request server, the request server being communicatively connected with a client to receive requests from the client, the caching system comprising:
an addressed memory for storing entries that associate addresses with memory locations in a storage device of the request server;
a request processor for receiving an access request from the client and searching the addressed memory for a memory location associated with a destination address included in the access request from the client;
a content extraction unit for, when the request processor finds the memory location associated with the destination address, reading content from that memory location of the storage device as target content and sending it to an output unit;
a forwarding processor for, when the request processor does not find a memory location corresponding to the destination address, forwarding the access request to a source server corresponding to the access request, obtaining corresponding content from the source server, and sending the obtained content to the output unit as the target content; and
the output unit, for returning the received target content to the client.
8. The caching system according to claim 7, further comprising:
a controller for, when the request processor does not find a memory location corresponding to the destination address and after the forwarding processor obtains the corresponding content from the source server as the target content, calculating a memory location according to the destination address, storing the target content at the calculated memory location in the storage device, and creating, in the addressed memory, an entry associating the destination address with the calculated memory location.
9. The caching system according to claim 8, wherein the storage device comprises a plurality of storage units, the memory location comprises a storage unit identifier, and the controller is configured to:
determine one of the plurality of storage units according to the destination address, and include in the memory location the storage unit identifier corresponding to the determined storage unit.
10. The caching system according to claim 9, wherein the controller is configured to:
hash the destination address with a consistent hashing algorithm and take the result modulo the number of storage units, so as to select, among the plurality of storage units, the storage unit corresponding to the calculated modulo value as the determined storage unit.
CN201410067920.5A 2014-02-27 2014-02-27 Cache method and cache system Pending CN103873562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410067920.5A CN103873562A (en) 2014-02-27 2014-02-27 Cache method and cache system


Publications (1)

Publication Number Publication Date
CN103873562A true CN103873562A (en) 2014-06-18

Family

ID=50911677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410067920.5A Pending CN103873562A (en) 2014-02-27 2014-02-27 Cache method and cache system

Country Status (1)

Country Link
CN (1) CN103873562A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08331173A (en) * 1995-05-29 1996-12-13 Toshiba Corp Electronic mail system with data compression function, transmitting and receiving method therefor and transmitter
US20050131995A1 (en) * 2003-12-11 2005-06-16 International Business Machines Corporation Autonomic evaluation of web workload characteristics for self-configuration memory allocation
CN101075241A (en) * 2006-12-26 2007-11-21 腾讯科技(深圳)有限公司 Method and system for processing buffer
CN101656985A (en) * 2009-08-18 2010-02-24 中兴通讯股份有限公司 Method for managing url resource cache and device thereof
CN101984405A (en) * 2010-10-11 2011-03-09 中兴通讯股份有限公司 Method of software version upgrade and terminal and system
CN102375882A (en) * 2011-09-19 2012-03-14 奇智软件(北京)有限公司 Method, device and browser for rapidly accessing webpage
CN103096126A (en) * 2012-12-28 2013-05-08 中国科学院计算技术研究所 Method and system of collaborative type cache for video-on-demand service in collaborative type cache cluster
WO2013076777A1 (en) * 2011-11-25 2013-05-30 日立コンシューマエレクトロニクス株式会社 Image transmission device, image transmission method, image reception device, and image reception method
CN103188247A (en) * 2011-12-31 2013-07-03 深圳市金蝶友商电子商务服务有限公司 Method and system of data transmission
CN103488709A (en) * 2013-09-09 2014-01-01 东软集团股份有限公司 Method and system for building indexes and method and system for retrieving indexes
CN103595616A (en) * 2012-08-17 2014-02-19 腾讯科技(深圳)有限公司 Mail transmit-receive method and mail transmit-receive system


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016070431A1 (en) * 2014-11-07 2016-05-12 华为技术有限公司 Memory access method and apparatus, and computer device
CN105900060A (en) * 2014-11-07 2016-08-24 华为技术有限公司 Memory access method and apparatus, and computer device
CN105900060B (en) * 2014-11-07 2019-05-03 华为技术有限公司 Memory pool access method, device and computer equipment
WO2017117734A1 (en) * 2016-01-06 2017-07-13 华为技术有限公司 Cache management method, cache controller and computer system
US10831677B2 (en) 2016-01-06 2020-11-10 Huawei Technologies Co., Ltd. Cache management method, cache controller, and computer system
CN112948279A (en) * 2019-11-26 2021-06-11 伊姆西Ip控股有限责任公司 Method, apparatus and program product for managing access requests in a storage system
CN112351109A (en) * 2020-11-27 2021-02-09 中国农业银行股份有限公司 Accessory processing method and device
CN115277566A (en) * 2022-05-20 2022-11-01 鸬鹚科技(深圳)有限公司 Load balancing method and device for data access, computer equipment and medium
CN115277566B (en) * 2022-05-20 2024-03-22 鸬鹚科技(深圳)有限公司 Load balancing method and device for data access, computer equipment and medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618