CN105657054A - Content center network caching method based on K-means algorithm - Google Patents
Content center network caching method based on K-means algorithm
- Publication number: CN105657054A (application CN201610125100.6A)
- Authority: CN (China)
- Prior art keywords: content, cache, centroid, controller, sample
- Legal status: Granted (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1014—Server selection for load balancing based on the content of a request
- H04L67/1097—Protocols for distributed storage of data in networks, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
Abstract
The invention relates to a content-centric network caching method based on the K-means algorithm, and belongs to the field of communication technology. In the caching method, the controller of a software-defined network counts the number of requests for a given content at each content switch; a threshold is set at the controller, and all switches whose request count for the content exceeds the threshold are selected as candidate cache points; the controller then runs the K-means caching algorithm to select several preferred cache nodes from the candidates, and sends a proactive caching instruction to the selected nodes over an OpenFlow channel. While the content is returned from the content-serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes selected by the controller. The method effectively alleviates the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms.
Description
Technical field
The invention belongs to the field of communication technology and relates to a content-centric network caching method based on the K-means algorithm.
Background art
In-network caching is one of the core technologies of Content-Centric Networking (CCN). By caching part of the content on network nodes, a content request can be served from the nearest cached copy instead of resolving the hosting server by addressing and fetching the content from it. This effectively reduces content-retrieval latency and the volume of duplicate content traffic in the network, thereby improving overall network performance.
CCN caching is application-transparent and ubiquitous. In the traditional scheme, when content is returned from the provider, every node on the path caches it; this "cache everywhere" policy creates redundant data across cache nodes, reduces the diversity of cached content, and lowers the utilization of cache resources. Research on CCN caching is devoted to proposing concrete new technical schemes and caching policies that improve the overall performance of the caching system. To address the waste of CCN resources caused by caching everywhere, scholars at home and abroad have carried out extensive research. Current caching policies divide broadly into two aspects: cache partitioning (cache design) and cache decision.
Cache partitioning: different types of traffic and applications have different characteristics, and providing differentiated caching service for different traffic is a problem demanding a prompt solution; cache partitioning is one of its most important building blocks. Current techniques fall into fixed partitioning and dynamic partitioning. Fixed partitioning divides the cache space so that each application class gets a fixed share that other traffic cannot occupy. This scheme has two problems: first, when some traffic type is absent while other traffic is heavy, it produces cache misses and wasted resources; second, it is difficult to guarantee distinct caching quality for the different traffic types. Dynamic partitioning instead allows a traffic type to use unallocated cache space, in two different variants: priority-based sharing and weight-based sharing. Priority-based sharing gives some applications higher priority than others and makes room for high-priority content by evicting low-priority content; its problem is that, when data arrives at high speed, the repeated priority comparisons severely hurt performance. Weight-based sharing presets a weight per class while still allowing unused space to be borrowed; its difficulty lies in how to optimize the weights.
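The weight-based sharing idea described above can be sketched as a toy model (illustrative only, not from the patent; the class names, the proportional-quota formula, and the "borrow while free space remains" admission rule are all assumptions):

```python
def dynamic_quota(weights, total_space, used):
    """Weight-based cache sharing sketch: each traffic class gets a quota
    proportional to its weight, but may exceed it while unused space remains."""
    total_w = sum(weights.values())
    quotas = {c: w / total_w * total_space for c, w in weights.items()}
    free = total_space - sum(used.values())
    # a class may admit new content if it is under quota, or if space is free
    can_admit = {c: used[c] < quotas[c] or free > 0 for c in weights}
    return quotas, can_admit
```

With weights 3:1 over 100 units, the video class gets a 75-unit quota but can still cache at 80 units used as long as the web class leaves space free; once the cache is full, only under-quota classes may admit.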
Cache decision: the cache-decision mechanism determines which content should be stored on which nodes, and divides into two broad classes, non-cooperative and cooperative cache decision. Non-cooperative cache decision does not require prior knowledge of the state of other cache nodes. Typical non-cooperative strategies include LCE (Leave Copy Everywhere), LCD (Leave Copy Down), MCD (Move Copy Down), Prob (Copy with Probability) and ProbCache (Probabilistic Cache). LCE is the default cache-decision strategy in CCN: it requires every routing node on a data packet's return path to cache the content object, which produces heavy cache redundancy in the network and reduces the diversity of cached content. LCD caches a content object only at the next-hop node below the node where it currently resides, so the object reaches the network edge only after repeated requests, still producing considerable redundancy along the path. MCD, on a cache hit, moves the cached content one node downstream (except at the origin server), reducing redundancy along the path from requester to content server; but when requesters come from different edge networks the cache location oscillates, and this churn produces extra network overhead. Prob requires every routing node on the return path to cache the object with a fixed probability P, whose value can be adjusted to the caching situation. In ProbCache each node caches the requested object with a different probability, inversely related to the node's distance from the requesting node: the closer the node, the higher the caching probability. This strategy quickly pushes copies toward the network edge while keeping the number of copies low.
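The ProbCache idea of a caching probability inversely related to distance from the requester can be illustrated with a toy formula (a hedged sketch only; the published ProbCache scheme uses a more elaborate weighting than this linear decay):

```python
def cache_probability(hops_from_requester, path_length):
    """Toy ProbCache-style rule: the node adjacent to the requester
    (hop 1) caches with probability 1, the farthest node on the path
    with probability 1/path_length (illustrative, not the exact formula)."""
    if not 1 <= hops_from_requester <= path_length:
        raise ValueError("hop index out of range")
    return (path_length - hops_from_requester + 1) / path_length
```

On a 5-hop path this yields probabilities 1.0, 0.8, 0.6, 0.4, 0.2 from the requester outward, so copies concentrate at the edge, as the text describes.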
In cooperative cache decision, the network topology and node states are known in advance, and the final cache locations are computed from this information as input. According to the scope of the nodes participating in the decision, cooperation can be global, path-based, or neighborhood-based. Global coordination considers all cache nodes in the network, and therefore requires knowing the topology of the whole network in advance. Path coordination involves only the cache nodes along the road from requester to server. Neighborhood coordination occurs only between a node and its adjacent nodes; hash-based in-network coordination, which uses a hash function to decide which neighbor caches a given file block, also belongs to this category.
In summary, current content-centric network caching policies still suffer from the following problems. Homogeneous caching: in non-cooperative cache decision, each node caches and replaces content independently, so many nodes end up caching the same content; the content is spatially either too concentrated or too dispersed, forcing requesters to fetch it from over-concentrated or scattered nodes and producing unreasonable traffic; and the distribution is unreasonable in time as well, since during a popularity peak every node caches the same content, and once the peak passes that content almost disappears from all nodes simultaneously. Low cache hit ratio: in non-cooperative cache decision, nodes do not know each other's cached content; in cooperative caching, even if nodes know each other's content there is no guarantee over time, because each node's replacement is independent and a cached item may be evicted at any moment. The effect of caching therefore has a degree of randomness, and the forwarding efficiency of Interest packets remains low.
Summary of the invention
In view of this, the purpose of the present invention is to provide a content-centric network caching method based on the K-means algorithm, which solves the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms. The method uses the K-means algorithm at the controller to compute several preferred cache locations, and issues caching instructions to the cache nodes over an OpenFlow channel.
To achieve the above purpose, the present invention provides the following technical scheme:
A content-centric network caching method based on the K-means algorithm. The controller of a software-defined network counts the number of requests for a given content at each content switch; a threshold is set at the controller, and all switches whose request count for the content exceeds the threshold are selected as candidate cache points; the controller then runs the K-means caching algorithm to select several preferred cache nodes from the candidates, and sends a proactive caching instruction to the selected nodes over an OpenFlow channel. While the content is returned from the content-serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes selected by the controller.
Further, the K-means algorithm used in this method is a partitional clustering algorithm: given the distribution of the input samples and the desired number of clusters K, it divides the samples into K clusters through successive iterations. The algorithm flow is:
1) Randomly initialize K centroids;
2) Measure each remaining sample's distance to every centroid and assign the sample to its nearest centroid;
3) For each centroid, take the mean of the coordinates of all its samples as the centroid's new position;
4) Repeat steps 2)-3) until the sum of distances from each sample to its centroid converges.
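The four steps above can be sketched in Python (a minimal illustration of plain K-means on 2-D points, not the patent's implementation; the seed and convergence test are assumptions):

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Plain K-means: random initial centroids (step 1), nearest-centroid
    assignment (step 2), mean update (step 3), repeated until the
    centroids stop moving (step 4)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = []
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            # step 2: index of the nearest centroid by squared distance
            i = min(range(k),
                    key=lambda j: (x - centroids[j][0]) ** 2 + (y - centroids[j][1]) ** 2)
            clusters[i].append((x, y))
        # step 3: move each centroid to the mean of its cluster
        new = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               if c else centroids[j]
               for j, c in enumerate(clusters)]
        if new == centroids:  # step 4: converged
            break
        centroids = new
    return centroids, clusters
```

Run on two well-separated groups of three points with k = 2, the loop settles on one centroid near each group.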
Further, the method specifically includes the following steps:
S1: Each switch in the network reports its request counter for a given content A to the controller;
S2: The controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: The controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: The required value of K is determined from business, traffic and cost factors;
S5: K distinct switches are randomly selected from the sample points as initial centroids;
S6: For each remaining sample, its distance to every centroid is measured and the sample is assigned to the class of its nearest centroid;
S7: For the class around each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: Steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: The centroid switch of each cluster is used as a cache location for content A, and the controller sends it a proactive caching instruction over the OpenFlow channel.
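Steps S1-S9 can be sketched end to end as follows (illustrative Python; the switch ids, coordinates, seed and iteration bound are assumptions; note that S7's snap-to-nearest-switch rule replaces the plain mean update of textbook K-means):

```python
import random

def select_cache_switches(request_counts, coords, threshold, k, seed=0):
    """Controller-side selection: filter switches whose counter exceeds
    T (S2), cluster them K-means-style (S5-S6), snapping each centroid
    to the nearest real switch (S7). Returns the chosen switch ids (S9)."""
    samples = [s for s, c in request_counts.items() if c > threshold]   # S2
    if len(samples) <= k:
        return samples
    rng = random.Random(seed)
    chosen = rng.sample(samples, k)                                     # S5
    for _ in range(100):
        clusters = [[] for _ in range(k)]
        for s in samples:                                               # S6
            x, y = coords[s]
            i = min(range(k),
                    key=lambda j: (x - coords[chosen[j]][0]) ** 2
                                + (y - coords[chosen[j]][1]) ** 2)
            clusters[i].append(s)
        new_chosen = []
        for j, cl in enumerate(clusters):
            if not cl:                       # keep old pick for an empty cluster
                new_chosen.append(chosen[j])
                continue
            mx = sum(coords[s][0] for s in cl) / len(cl)
            my = sum(coords[s][1] for s in cl) / len(cl)
            # S7: the switch nearest the cluster mean becomes the new centroid
            new_chosen.append(min(cl, key=lambda s: (coords[s][0] - mx) ** 2
                                                  + (coords[s][1] - my) ** 2))
        if new_chosen == chosen:                                        # S8
            break
        chosen = new_chosen
    return chosen
```

With six hot switches forming two geographic groups and K = 2, the routine returns one cache switch per group.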
The beneficial effect of the present invention is that the method effectively alleviates the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms.
Brief description of the drawings
To make the purpose, technical scheme and beneficial effects of the present invention clearer, the following drawings are provided:
Fig. 1 is a simulation diagram of the content request counters of the switches;
Fig. 2 is a schematic diagram of the selected cache locations;
Fig. 3 is a flow chart of an embodiment of the invention.
Detailed description of the invention
The preferred embodiments of the present invention are described in detail below with reference to the drawings.
The purpose of the present invention is to solve the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms, by using the K-means algorithm at the controller to compute several preferred cache locations and issuing caching instructions to the cache nodes over an OpenFlow channel.
In this method, the K-means algorithm is a partitional clustering algorithm: given the distribution of the input samples and the desired number of clusters K, it divides the samples into K clusters through successive iterations. The algorithm flow is: 1) randomly initialize K centroids; 2) measure each remaining sample's distance to every centroid and assign the sample to its nearest centroid; 3) for each centroid, take the mean of the coordinates of all its samples as the centroid's new position; 4) repeat steps 2)-3) until the sum of distances from each sample to its centroid converges.
In the K-means-based content-centric network caching policy of the present invention, the controller's global network information and its centralized control over the switches are the key factors that make the policy feasible. By recognizing the content request pattern and dividing it into clusters, near-optimal cache locations can be obtained effectively.
The caching policy based on the K-means algorithm is as follows:
S1: Each switch in the network reports its request counter for a given content A to the controller;
S2: The controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: The controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: The required value of K is determined from business, traffic and cost factors;
S5: K distinct switches are randomly selected from the sample points as initial centroids;
S6: For each remaining sample, its distance to every centroid is measured and the sample is assigned to the class of its nearest centroid;
S7: For the class around each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: Steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: The centroid switch of each cluster is used as a cache location for content A, and the controller sends it a proactive caching instruction over the OpenFlow channel.
The remaining cache space has a vital impact on the cache decision: when a switch's storage is insufficient, forcing content into its cache may cause data loss, so the cache margin must be incorporated into the algorithm. Let the cache idle ratio of switch n be V_n, where 0 ≤ V_n ≤ 1: V_n = 1 means the cache is completely unused, and V_n = 0 means it is fully occupied. The concrete optimization is as follows: when uploading the request counter for a content A, the switch multiplies V_n by its content counter value and uploads the product as the new counter value to the controller. Thus, if a switch's cache is heavily occupied, its counter value is sharply cut, possibly below the controller's threshold T, and the switch is no longer considered a candidate cache location; when the cache is idle, the counter value is barely reduced and the algorithm's behavior is unaffected.
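The counter-weighting optimization above amounts to a one-line scaling rule (a sketch; the function name and the inclusive [0, 1] bound on V_n are assumptions):

```python
def weighted_count(raw_count, v_n):
    """Scale the switch's request counter by its cache idle ratio V_n
    (1 = cache completely unused, 0 = fully occupied) before it is
    uploaded to the controller."""
    if not 0.0 <= v_n <= 1.0:
        raise ValueError("V_n must lie in [0, 1]")
    return raw_count * v_n
```

For example, with threshold T = 70, a switch with 90 requests but only 10% free cache uploads 9 and drops out of the candidate set, while an idle switch uploads its counter unchanged.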
Fig. 3 is a flow chart of an embodiment of the invention. In the following example, control information is transmitted over an out-of-band link, which may be an Ethernet link or an IP link channel.
As shown in the figure, the method of the invention comprises the following steps:
Step 301: every switch in the network uploads its request count for content A to the controller over the control channel.
Step 302: the controller collects the count values uploaded by all switches and, according to the preset threshold T, selects the switches whose count exceeds T as candidate cache points, which serve as sample points. From the topological connectivity of the network, the connections between sample points are abstracted into two-dimensional coordinates. The number of cache points K is determined from business, traffic, cost and other factors; here we set K to 2. K distinct switches are randomly selected from the sample points as initial centroids; each remaining sample is assigned to the class of its nearest centroid; for each centroid's class, the mean of all its sample coordinates is taken and the switch nearest to that mean becomes the centroid's new position; this repeats until the sum of distances from each sample to its centroid converges.
Step 303: the centroid switch of each cluster after convergence is taken as a cache location for content A; the cache nodes selected here are switch 4 and switch 13, and the controller sends them the proactive instruction to cache content A over the OpenFlow channel (3.1 and 3.2).
Step 304: the requester sends an Interest packet requesting content A (3.3). On receiving it, a switch first looks up its CS (Content Store); if the content is present, the data is returned. Otherwise the switch checks the PIT (Pending Interest Table) for a record of this Interest: if one exists, the Interest's input port is added to the corresponding PIT entry; if not, a new entry recording the current Interest's input port is created. After the PIT lookup, the switch consults the FIB (Forwarding Information Base): if there is a match, the packet is forwarded to the corresponding port; otherwise it is forwarded to all ports except the input port. In this example the Interest is matched at switches 1, 4, 8, 12 and 13 and finally forwarded to the provider (3.4, 3.5, 3.6, 3.7 and 3.8).
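The CS → PIT → FIB lookup order of step 304 can be sketched as follows (illustrative Python; the dict layout and the returned action tuples are assumptions, not the patent's data structures):

```python
def process_interest(switch, name, in_port):
    """Sketch of CCN Interest processing: check the Content Store,
    then the Pending Interest Table, then the Forwarding Information
    Base; flood when nothing matches."""
    cs, pit, fib = switch["cs"], switch["pit"], switch["fib"]
    if name in cs:                       # CS hit: answer from the cache
        return ("data", cs[name])
    if name in pit:                      # PIT hit: aggregate the request
        pit[name].add(in_port)
        return ("aggregated", None)
    pit[name] = {in_port}                # new PIT entry for this Interest
    if name in fib:                      # FIB match: forward on that port
        return ("forward", fib[name])
    return ("flood", None)               # no match: flood other ports
```

A cached name is answered directly; a second Interest for a pending name only records the extra input port; an unknown name is flooded.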
Step 305: on receiving the Interest, the provider sends content A (3.9). Content A travels back to the requester along the PIT trail through switches 13, 12, 8, 4 and 1 (3.10, 3.11, 3.12, 3.13 and 3.14); during the return, switch 4 and switch 13 execute the pending proactive-cache order and store content A in their CS.
Fig. 1 is a simulation diagram of the content request counters of the switches. In this embodiment, a network with 81 switches is coordinate-converted according to the switches' actual geographic positions, reducing the network topology to a 9 × 9 switch matrix; the input content-request counter of each switch is a random integer between 0 and 100, and the counter threshold T is set to 70. The resulting counter simulation is shown in Fig. 1, where the distribution of content requests can be clearly seen. Fig. 2 shows the final cache locations obtained when K is 2, 3 and 4 respectively, with the selected cache nodes shown in light grey.
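The embodiment's counter setup can be reproduced in a few lines (a sketch; the seed and the use of the grid position itself as the coordinate are assumptions, since the real embodiment derives coordinates from geography):

```python
import random

def simulate_counters(rows=9, cols=9, low=0, high=100, threshold=70, seed=1):
    """A 9 x 9 switch grid with random request counters in [0, 100];
    switches whose counter exceeds the threshold T = 70 become the
    sample points fed to the K-means clustering."""
    rng = random.Random(seed)
    counters = {(r, c): rng.randint(low, high)
                for r in range(rows) for c in range(cols)}
    samples = [pos for pos, v in counters.items() if v > threshold]
    return counters, samples
```

The `samples` list is exactly the candidate set of step 302; feeding it to the clustering routine with K = 2, 3 or 4 reproduces the selections shown in Fig. 2.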
Finally, it should be noted that the above preferred embodiments only illustrate the technical scheme of the invention and do not limit it. Although the invention has been described in detail through these preferred embodiments, those skilled in the art will understand that various changes in form and detail may be made to it without departing from the scope defined by the claims of the present invention.
Claims (3)
1. A content-centric network caching method based on the K-means algorithm, characterized in that: the controller of a software-defined network counts the number of requests for a given content at each content switch; a threshold is set at the controller and all switches whose request count for the content exceeds the threshold are selected as candidate cache points; the controller runs the K-means caching algorithm to select several preferred cache nodes from the candidates, and sends a proactive caching instruction to the selected nodes over an OpenFlow channel; while the content is returned from the content-serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes selected by the controller.
2. The content-centric network caching method based on the K-means algorithm according to claim 1, characterized in that: the K-means algorithm used in the method is a partitional clustering algorithm which, given the distribution of the input samples and the desired number of clusters K, divides the samples into K clusters through successive iterations, the algorithm flow being:
1) Randomly initialize K centroids;
2) Measure each remaining sample's distance to every centroid and assign the sample to its nearest centroid;
3) For each centroid, take the mean of the coordinates of all its samples as the centroid's new position;
4) Repeat steps 2)-3) until the sum of distances from each sample to its centroid converges.
3. The content-centric network caching method based on the K-means algorithm according to claim 2, characterized in that the method specifically includes the following steps:
S1: Each switch in the network reports its request counter for a given content A to the controller;
S2: The controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: The controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: The required value of K is determined from business, traffic and cost factors;
S5: K distinct switches are randomly selected from the sample points as initial centroids;
S6: For each remaining sample, its distance to every centroid is measured and the sample is assigned to the class of its nearest centroid;
S7: For the class around each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: Steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: The centroid switch of each cluster is used as a cache location for content A, and the controller sends it a proactive caching instruction over the OpenFlow channel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610125100.6A CN105657054B (en) | 2016-03-04 | 2016-03-04 | A kind of content center network caching method based on K mean algorithms |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105657054A true CN105657054A (en) | 2016-06-08 |
CN105657054B CN105657054B (en) | 2018-10-12 |
Family
ID=56493180
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106454430A (en) * | 2016-10-13 | 2017-02-22 | 重庆邮电大学 | Pre-release method for intra-autonomous domain video service in NDN/CCN (Named Data Networking/Content Centric Networking) |
CN106888265A (en) * | 2017-03-21 | 2017-06-23 | 浙江万里学院 | For the caching method of Internet of Things |
CN107835129A (en) * | 2017-10-24 | 2018-03-23 | 重庆大学 | Content center network fringe node potential energy strengthens method for routing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140052814A1 (en) * | 2012-08-14 | 2014-02-20 | Calix, Inc. | Distributed cache system for optical networks |
CN103607386A (en) * | 2013-11-15 | 2014-02-26 | 南京云川信息技术有限公司 | A cooperative caching method in a P2P Cache system |
CN103716254A (en) * | 2013-12-27 | 2014-04-09 | 中国科学院声学研究所 | Self-aggregation cooperative caching method in CCN |
CN104253855A (en) * | 2014-08-07 | 2014-12-31 | 哈尔滨工程大学 | Content classification based category popularity cache replacement method in oriented content-centric networking |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106454430A (en) * | 2016-10-13 | 2017-02-22 | 重庆邮电大学 | Pre-release method for intra-autonomous domain video service in NDN/CCN (Named Data Networking/Content Centric Networking) |
CN106454430B (en) * | 2016-10-13 | 2019-06-04 | 重庆邮电大学 | For the preparatory dissemination method of video traffic in Autonomous Domain in NDN/CCN |
CN106888265A (en) * | 2017-03-21 | 2017-06-23 | 浙江万里学院 | For the caching method of Internet of Things |
CN106888265B (en) * | 2017-03-21 | 2019-08-27 | 浙江万里学院 | Caching method for Internet of Things |
CN107835129A (en) * | 2017-10-24 | 2018-03-23 | 重庆大学 | Content center network fringe node potential energy strengthens method for routing |
CN107835129B (en) * | 2017-10-24 | 2020-06-02 | 重庆大学 | Content center network edge node potential energy enhanced routing method |
Also Published As
Publication number | Publication date |
---|---|
CN105657054B (en) | 2018-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105721600A (en) | Content centric network caching method based on complex network measurement | |
CN102710489B (en) | Dynamic shunt dispatching patcher and method | |
CN104811493B (en) | The virtual machine image storage system and read-write requests processing method of a kind of network aware | |
EP3206348B1 (en) | Method and system for co-operative on-path and off-path caching policy for information centric networks | |
Le et al. | Social caching and content retrieval in disruption tolerant networks (DTNs) | |
CN103905332B (en) | A kind of method and apparatus for determining cache policy | |
CN104022911A (en) | Content route managing method of fusion type content distribution network | |
CN101119313A (en) | Load sharing method and equipment | |
Naveen et al. | On the interaction between content caching and request assignment in cellular cache networks | |
CN106533733B (en) | The CCN collaboration caching method and device routed based on network cluster dividing and Hash | |
Janaszka et al. | On popularity-based load balancing in content networks | |
CN108366089B (en) | CCN caching method based on content popularity and node importance | |
Nour et al. | A distributed cache placement scheme for large-scale information-centric networking | |
CN103236989B (en) | Buffer control method in a kind of content distributing network, equipment and system | |
Wu et al. | MBP: A max-benefit probability-based caching strategy in information-centric networking | |
Yan et al. | A survey of low-latency transmission strategies in software defined networking | |
CN108965479B (en) | Domain collaborative caching method and device based on content-centric network | |
CN105657054A (en) | Content center network caching method based on K means algorithm | |
CN111901236A (en) | Method and system for optimizing openstack cloud network by using dynamic routing | |
Wang et al. | Effects of cooperation policy and network topology on performance of in-network caching | |
CN110365801A (en) | Based on the cooperation caching method of subregion in information centre's network | |
CN105141512A (en) | Unified network configuration and control method supporting packet/circuit mixed exchange network | |
Le et al. | The performance of caching strategies in content centric networking | |
Yufei et al. | A centralized control caching strategy based on popularity and betweenness centrality in ccn | |
CN102136986A (en) | Load sharing method and exchange equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20181012 |