CN105721600A - Content centric network caching method based on complex network measurement - Google Patents

Content centric network caching method based on complex network measurement

Info

Publication number
CN105721600A
CN105721600A (application CN201610125099.7A)
Authority
CN
China
Prior art keywords
node
content
network
centrality
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610125099.7A
Other languages
Chinese (zh)
Other versions
CN105721600B (en)
Inventor
蔡岳平
刘军
罗森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201610125099.7A
Publication of CN105721600A
Application granted
Publication of CN105721600B
Legal status: Expired - Fee Related

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data

Abstract

The invention relates to a content-centric network caching method based on complex network measurement, and belongs to the technical field of communications. The method comprises the following steps: the controller of a software-defined network counts the number of requests for a given content item at each content switch; a threshold is set at the controller, and all switches whose request count exceeds the threshold are selected as candidate caching points; at the controller, a caching strategy based on complex network metrics selects several preferred caching nodes from the candidates, and the controller sends an active-caching instruction to the selected nodes over an OpenFlow channel; as the content is returned from the serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes the controller selected. The method effectively mitigates the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms.

Description

Content-centric network caching method based on complex network measurement
Technical field
The invention belongs to the field of communication technology and relates to a content-centric network caching method based on complex network measurement.
Background technology
In-network caching is one of the core technologies of content-centric networking (CCN). By caching partial content on network nodes, a content request can be served from the nearest cached copy instead of locating and retrieving the content from the origin host, which effectively reduces retrieval latency and the volume of duplicate traffic in the network, thereby improving overall network performance.
CCN caching is application-transparent and ubiquitous. In the traditional caching scheme, when content is returned from the provider, every node on the path caches it. This "cache everywhere" policy creates redundant data across cache nodes, reduces the diversity of cached content, and lowers the utilization of cache resources. Research on CCN caching therefore aims at new concrete techniques and cache policies that improve the overall performance of the caching system, and scholars at home and abroad have extensively studied the resource waste caused by such indiscriminate caching. Current cache policies fall broadly into two areas: cache design and cache decision.
Cache design: different types of traffic and applications have different characteristics, and providing differentiated caching services for different flows is a pressing problem. Cache design is a key part of realizing such differentiated service. Current techniques divide into fixed-partition and dynamic-partition designs. Fixed partitioning splits the cache space into fixed shares so that each application class has cache capacity that other traffic cannot occupy. This scheme has two problems: first, when some traffic type is absent while other traffic is heavy, it produces cache misses and wasted resources; second, it is difficult to guarantee distinct cache-quality levels for different traffic types. Dynamic partitioning allows a traffic type to use unallocated cache space, and again comes in two flavors: priority-based sharing and weight-based sharing. Priority-based sharing gives some applications higher priority than others and evicts low-priority content to make room for high-priority content; its problem is that when data arrives at high speed, the repeated priority comparisons can severely hurt performance. Weight-based sharing presets weights while still allowing unused space to be borrowed; the difficulty lies in how to optimize the weights.
Cache decision: the cache decision mechanism determines which content should be stored on which nodes, and divides into non-cooperative and cooperative cache decisions. Non-cooperative cache decisions do not require prior knowledge of the state of other caching nodes in the network. Typical non-cooperative strategies include LCE (Leave Copy Everywhere), LCD (Leave Copy Down), MCD (Move Copy Down), Prob (Copy with Probability) and ProbCache (Probabilistic Cache). LCE is the default cache decision in CCN: every routing node on the data packet's return path must cache the content object, which produces heavy cache redundancy in the network and reduces content diversity. LCD caches the content object only at the next-hop node below the node where it currently resides, so the object reaches the network edge only after repeated requests, and considerable redundancy can still arise along the path. MCD, on a cache hit, moves the cached content one hop downstream (except at the origin server), which reduces redundancy on the path from requester to content server; but when requesters come from different edge networks the caching point oscillates, and this churn produces extra network overhead. Prob has every routing node on the return path cache the object with a fixed probability P, whose value can be tuned to the caching situation. In ProbCache, each node caches the requested object with a different probability that is inversely proportional to its distance from the requesting node: the nearer a node is to the requester, the higher its caching probability. This strategy quickly pushes copies toward the network edge while reducing the number of copies. In cooperative cache decisions, network topology and node state are known in advance, and the final cache locations are computed from this information. According to the scope of the nodes participating in the decision, cooperation can be global, path-based, or neighborhood-based. Global coordination considers all cache nodes in the network, and therefore requires knowledge of the entire topology. Path coordination involves only the cache nodes along the road from requester to server. Neighborhood coordination takes place only among a node's adjacent nodes; hash-based in-network coordination, which uses a hash function to decide which neighbor caches a given file block, also falls into this category.
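The two probabilistic strategies above can be sketched in a few lines. This is an illustrative stand-in only: the function name, the linear distance weighting and the default value of P are assumptions for the sketch, not ProbCache's published formula.

```python
import random

def should_cache(hops_from_requester, path_length, p_fixed=0.3, strategy="prob"):
    """Illustrative cache decision for the Prob and ProbCache strategies.

    "prob":      cache with a fixed probability P at every on-path node.
    "probcache": cache with a probability that falls as the distance from
                 the requester grows, so copies drift toward the network
                 edge (a simplified stand-in for ProbCache's real formula).
    """
    if strategy == "prob":
        return random.random() < p_fixed
    # Nearer the requester (fewer hops) -> larger caching probability.
    p = 1.0 - hops_from_requester / path_length
    return random.random() < p
```

With `hops_from_requester = 0` the node next to the requester always caches; at the far end of the path the probability falls to zero, matching the inverse-distance behavior described above.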
In summary, current content-centric network cache policies still suffer from the following problems. Homogeneous caching: in non-cooperative cache decisions, each node caches and replaces content independently, so many nodes end up holding the same content; content becomes either too concentrated or too dispersed in space, forcing requesters to fetch it from overly concentrated or scattered nodes and producing unreasonable traffic; the distribution over time is also poor: while an item is popular every node caches it, but once its popularity passes, the item disappears from all nodes almost simultaneously. Low cache hit ratio: in non-cooperative decisions the nodes do not know each other's cached content; even in cooperative caching, where nodes do know each other's contents, there is no guarantee over time, because each node replaces content independently and any item may be evicted at any moment. This makes the effect of caching somewhat random and incidental, and keeps the forwarding efficiency of Interest packets low.
Summary of the invention
In view of this, the object of the present invention is to provide a content-centric network caching method based on complex network measurement that solves the homogeneous-caching and low-cache-hit-ratio problems of existing CCN caching mechanisms. The method uses a caching algorithm based on complex network metrics to compute several preferred cache locations at the controller, and issues caching commands to the chosen cache nodes over an OpenFlow channel.
To achieve the above object, the present invention provides the following technical solution:
A content-centric network caching method based on complex network measurement: the controller of a software-defined network counts the number of requests for a given content item at each content switch; a threshold is set at the controller, and all switches whose request count exceeds the threshold are selected as candidate caching points; at the controller, a caching strategy based on complex network metrics selects several preferred caching nodes from the candidates, and the controller sends an active-caching instruction to the selected caching nodes over an OpenFlow channel. As the content is returned from the serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes the controller selected.
Further, selecting several preferred caching nodes from the candidates at the controller by means of the metric-based caching strategy specifically includes the following. In complex network analysis, three elementary metrics are adopted to measure the importance of a node: degree centrality, closeness centrality and betweenness centrality, each capturing a different aspect of importance:
Degree centrality: the simplest notion of centrality counts the number of edges directly incident to a node, i.e. its degree; a node with a very high degree has many connections to other nodes. In an undirected graph, degree centrality is defined as:
C_D(v) = deg(v)
Closeness centrality: closeness centrality is defined from the shortest distances between a node and the other nodes in the network; a higher closeness centrality means that many nodes in the network are close to this node, i.e. the node lies nearer the center of the network. Its formula is:
C_C(v) = Σ_{t ∈ V\{v}} 2^(−d_G(v,t))
Betweenness centrality: betweenness centrality is determined by how often a node lies on the shortest paths between other nodes; it measures the node's importance for information propagation in the whole network, and nodes with high betweenness are key nodes of the network. It is defined as:
C_B(v) = Σ_{s ≠ v ≠ t ∈ V} σ_st(v) / σ_st
Further, the method specifically includes the following steps:
S1: each switch in the network reports its request counter for a given content A to the controller;
S2: the controller selects the switches whose request count for A exceeds a preset threshold T as candidate caching points, which serve as sample points;
S3: taking these sample switches as nodes, an undirected graph is built according to the actual switch interconnections;
S4: the three metrics of this undirected graph are computed: degree centrality, closeness centrality and betweenness centrality, together with their normalized values ND (Normalized Degree), NC (Normalized Closeness) and NB (Normalized Betweenness);
S5: according to the requirements of the particular service, the weights α, β and γ of the three metrics are chosen and a total score is computed by the formula S = α·ND + β·NC + γ·NB;
S6: the total scores are sorted; according to the service requirements, switches are chosen in score order as caching nodes, and the controller sends them active-caching instructions over the OpenFlow channel.
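Steps S2, S5 and S6 amount to thresholding, min-max normalization and a weighted sum. A minimal sketch follows; the function names are mine, and the raw centrality values for switches 6, 4 and 12 are taken from the NSFNET worked example later in this document.

```python
def candidate_switches(request_counts, threshold):
    # S2: keep the switches whose request count for the content exceeds T.
    return [sw for sw, n in request_counts.items() if n > threshold]

def min_max_normalize(values):
    # Map a dict of raw metric values onto [0, 1].
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {v: (x - lo) / span for v, x in values.items()}

def rank_cache_candidates(degree, closeness, betweenness,
                          alpha=0.25, beta=0.5, gamma=0.25):
    # S5/S6: S = alpha*ND + beta*NC + gamma*NB, best candidates first.
    nd = min_max_normalize(degree)
    nc = min_max_normalize(closeness)
    nb = min_max_normalize(betweenness)
    score = {v: alpha * nd[v] + beta * nc[v] + gamma * nb[v] for v in degree}
    return sorted(score, key=score.get, reverse=True), score

# Raw centralities of switches 6, 4 and 12 from the NSFNET example.
deg = {6: 8, 4: 6, 12: 4}
clo = {6: 0.5416667, 4: 0.4814815, 12: 0.3714286}
bet = {6: 0.245726496, 4: 0.143162393, 12: 0.014957265}
order, score = rank_cache_candidates(deg, clo, bet)
# order == [6, 4, 12]; score[4] reproduces the table's published 0.587121.
```

Because switches 6 and 12 happen to hold the maximum and minimum of every raw metric, the normalization here matches the full 14-node table, and the score of switch 4 comes out as in the embodiment.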
The beneficial effect of the present invention is that the method effectively mitigates the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms.
Brief description of the drawings
To make the purpose, technical solution and beneficial effects of the present invention clearer, the following drawings are provided:
Fig. 1 is the NSFNET node topology diagram;
Fig. 2 is a flowchart of an embodiment of the present invention.
Detailed description of the invention
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The object of the present invention is to solve the homogeneous caching and low cache hit ratio of existing content-centric network caching mechanisms. In complex network analysis, three elementary metrics measure the importance of a node: degree centrality, closeness centrality and betweenness centrality, each capturing a different aspect of importance.
Degree centrality: the simplest notion of centrality counts the number of edges directly incident to a node, i.e. its degree. A node with a very high degree has many connections to other nodes. In an undirected graph, degree centrality is defined as:
C_D(v) = deg(v)
Closeness centrality: closeness centrality is defined from the shortest distances between a node and the other nodes in the network. A higher closeness centrality means that many nodes in the network are close to this node, i.e. the node lies nearer the center of the network. Its formula is:
C_C(v) = Σ_{t ∈ V\{v}} 2^(−d_G(v,t))
Betweenness centrality: betweenness centrality is determined by how often a node lies on the shortest paths between other nodes. It measures the node's importance for information propagation in the whole network; nodes with high betweenness are generally the key nodes of the network. It is defined as:
C_B(v) = Σ_{s ≠ v ≠ t ∈ V} σ_st(v) / σ_st
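The three definitions can be checked on a toy graph. The sketch below (standard library only; the helper names and the 4-node path graph are my own choices) implements the formulas above, including the 2^(−d) closeness variant; each unordered node pair is counted once in the betweenness sum, and the brute-force path enumeration is only suitable for very small graphs.

```python
from collections import deque
from itertools import combinations

# Toy undirected graph: a path a - b - c - d.
ADJ = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def degree_centrality(adj, v):
    # C_D(v) = deg(v): number of edges incident to v.
    return len(adj[v])

def bfs_distances(adj, s):
    # Hop distances from s to every reachable node (unweighted BFS).
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def closeness_centrality(adj, v):
    # C_C(v) = sum over t != v of 2^(-d_G(v, t)), the variant in the text.
    d = bfs_distances(adj, v)
    return sum(2.0 ** -d[t] for t in adj if t != v)

def all_shortest_paths(adj, s, t):
    # Enumerate all simple s-t paths, keep the shortest ones (tiny graphs only).
    def walk(path):
        if path[-1] == t:
            return [path]
        return [p for w in adj[path[-1]] if w not in path
                for p in walk(path + [w])]
    paths = walk([s])
    shortest = min(len(p) for p in paths)
    return [p for p in paths if len(p) == shortest]

def betweenness_centrality(adj, v):
    # C_B(v) = sum over pairs (s, t) of sigma_st(v) / sigma_st.
    total = 0.0
    for s, t in combinations(adj, 2):
        if v in (s, t):
            continue
        sp = all_shortest_paths(adj, s, t)
        total += sum(v in p for p in sp) / len(sp)
    return total
```

On the path a-b-c-d, node b has degree 2, closeness 2^-1 + 2^-1 + 2^-2 = 1.25, and betweenness 2 (it lies on the unique shortest paths a-c and a-d), while the endpoints have betweenness 0.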
In the metric-based content-centric network cache policy, the controller's global network information and its centralized control of the switches are the key factors that make the policy feasible. By recognizing content request patterns and partitioning the candidates accordingly, the policy can effectively obtain favorable cache locations.
The method comprises the following steps. S1: each switch in the network reports its request counter for a given content A to the controller. S2: the controller selects the switches whose request count for A exceeds a preset threshold T as candidate caching points, which serve as sample points. S3: taking these sample switches as nodes, an undirected graph is built according to the actual switch interconnections. S4: the three metrics of this undirected graph are computed: degree centrality, closeness centrality and betweenness centrality, together with their normalized values ND (Normalized Degree), NC (Normalized Closeness) and NB (Normalized Betweenness). S5: according to the requirements of the particular service, the weights α, β and γ of the three metrics are chosen and a total score is computed by the formula S = α·ND + β·NC + γ·NB. S6: the total scores are sorted; according to the service requirements, switches are chosen in score order as caching nodes, and the controller sends them active-caching instructions over the OpenFlow channel.
Fig. 2 is a flowchart of an embodiment of the present invention. In the example below, control information is carried over an out-of-band link, which may be an Ethernet link or an IP link.
As shown in Fig. 2, the embodiment in which the present invention obtains cache locations with the metric-based content-centric network cache policy comprises the following steps:
Step 1: every switch in the network uploads its request count for content A to the controller over the control channel.
Step 2: the controller collects the counts uploaded by all switches and, according to the preset threshold T, selects the switches whose count exceeds T as candidate caching points and sample points. According to the interconnections of these sample switches in the network topology, an undirected graph is built; the degree centrality, closeness centrality and betweenness of every vertex of this graph are computed, together with their normalized values ND, NC and NB. According to the requirements of the service, the weights α, β and γ of the three metrics are chosen, a total score is computed by the formula S = α·ND + β·NC + γ·NB, and the scores are sorted.
Step 3: according to the service requirements, switches are chosen in score order as cache locations for content A. Here the selected caching nodes are switch 4 and switch 6, and the controller sends them active-caching instructions for content A over the OpenFlow channel (2.1 and 2.2).
Step 4: the requester sends an Interest message for content A (2.3). When a switch receives the Interest, it first checks its Content Store (CS); if the content is present, the data is returned. Otherwise it checks the Pending Interest Table (PIT): if an entry for this Interest exists, the incoming port is recorded in that entry; if not, a new entry is created recording the current incoming port. After the PIT lookup, the switch consults the Forwarding Information Base (FIB): if a match exists, the message is forwarded to the corresponding port, otherwise it is forwarded to all ports except the incoming one. In this example the Interest is matched at switches 1, 4, 5, 6 and 10, and finally reaches the provider (2.4, 2.5, 2.6, 2.7 and 2.8).
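The CS → PIT → FIB lookup sequence of Step 4 can be sketched as follows. This is a minimal model, not the claimed method: the class and field names are my own, it uses exact-match lookup where real CCN forwarders do longest-prefix matching on hierarchical names, and the return value is a simplified (kind, data, out-ports) triple.

```python
class CCNSwitch:
    """Minimal model of Step 4's Interest processing (exact-match only)."""

    def __init__(self, fib):
        self.cs = {}    # Content Store: content name -> cached data
        self.pit = {}   # Pending Interest Table: name -> set of in-ports
        self.fib = fib  # Forwarding Information Base: name -> out-port

    def on_interest(self, name, in_port, all_ports):
        if name in self.cs:                       # 1. CS hit: return the data
            return ("data", self.cs[name], [in_port])
        if name in self.pit:                      # 2. PIT hit: aggregate
            self.pit[name].add(in_port)
            return ("aggregated", None, [])
        self.pit[name] = {in_port}                #    PIT miss: new entry
        if name in self.fib:                      # 3. FIB hit: forward
            return ("forward", None, [self.fib[name]])
        # FIB miss: flood on every port except the incoming one.
        return ("forward", None, [p for p in all_ports if p != in_port])
```

A second Interest for the same name arriving from another port is absorbed into the existing PIT entry rather than forwarded, which is exactly the aggregation behavior described in the step above.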
Step 5: the provider sends content A after receiving the Interest (2.9). Content A is returned to the requester along the PIT trail through switches 10, 6, 5, 4 and 1 (2.10, 2.11, 2.12, 2.13 and 2.14); during the return, switches 4 and 6 execute the pending active-caching order and store content A in their Content Stores.
Fig. 1 shows the NSFNET node topology, which we take as the object of analysis. Assume every node in the figure is a switch whose request count for content A exceeds the threshold, so the cache locations are to be chosen among these 14 nodes. The table below lists the three centralities of each node obtained by complex network analysis, together with their normalized values ND, NC and NB. Suppose service A requires the cache location to be as few hops from the requesters as possible, i.e. it puts a premium on closeness centrality; we therefore set the degree weight α = 0.25, the closeness weight β = 0.5 and the betweenness weight γ = 0.25, and compute the total score S accordingly, with the following result:
| Switch | Degree D | ND | Closeness C | NC | Betweenness B | NB | Total score S |
|--------|----------|------|-------------|----------|---------------|----------|---------------|
| 6 | 8 | 1 | 0.5416667 | 1 | 0.245726496 | 1 | 1 |
| 4 | 6 | 0.5 | 0.4814815 | 0.646465 | 0.143162393 | 0.555556 | 0.587121 |
| 5 | 6 | 0.5 | 0.4814815 | 0.646465 | 0.128205128 | 0.490741 | 0.570918 |
| 11 | 6 | 0.5 | 0.4642857 | 0.545455 | 0.132478632 | 0.509259 | 0.525042 |
| 13 | 6 | 0.5 | 0.4642857 | 0.545455 | 0.117521368 | 0.444444 | 0.508838 |
| 14 | 6 | 0.5 | 0.4642857 | 0.545455 | 0.117521368 | 0.444444 | 0.508838 |
| 3 | 6 | 0.5 | 0.4642857 | 0.545455 | 0.085470085 | 0.305556 | 0.474116 |
| 9 | 6 | 0.5 | 0.4482759 | 0.451411 | 0.11965812 | 0.453704 | 0.464131 |
| 8 | 6 | 0.5 | 0.4482759 | 0.451411 | 0.117521368 | 0.444444 | 0.461816 |
| 1 | 6 | 0.5 | 0.4482759 | 0.451411 | 0.068376068 | 0.231481 | 0.408576 |
| 2 | 6 | 0.5 | 0.4333333 | 0.363636 | 0.068376068 | 0.231481 | 0.364689 |
| 10 | 4 | 0 | 0.4333333 | 0.363636 | 0.02991453 | 0.064815 | 0.198022 |
| 7 | 4 | 0 | 0.40625 | 0.204545 | 0.034188034 | 0.083333 | 0.123106 |
| 12 | 4 | 0 | 0.3714286 | 0 | 0.014957265 | 0 | 0 |
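As a sanity check on the table, the S column is just the stated weighted sum of ND, NC and NB; recomputing a few rows with α = 0.25, β = 0.5, γ = 0.25 reproduces the published scores (the row values below are copied from the table, the variable names are mine):

```python
# (ND, NC, NB, published S) for a few switches, copied from the table above.
rows = {
    6:  (1.0, 1.0,      1.0,      1.0),
    4:  (0.5, 0.646465, 0.555556, 0.587121),
    5:  (0.5, 0.646465, 0.490741, 0.570918),
    10: (0.0, 0.363636, 0.064815, 0.198022),
    12: (0.0, 0.0,      0.0,      0.0),
}
alpha, beta, gamma = 0.25, 0.5, 0.25
recomputed = {sw: alpha * nd + beta * nc + gamma * nb
              for sw, (nd, nc, nb, _) in rows.items()}
# Each recomputed score matches the published S to within rounding.
```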
It can be seen that, according to the service requirements and the network composition, the controller can select switches as caching nodes in order of their final scores and send them active-caching instructions for the content over the OpenFlow channel.
Finally, it should be noted that the above preferred embodiment merely illustrates the technical solution of the present invention and does not limit it. Although the present invention has been described in detail through the above preferred embodiment, those skilled in the art will appreciate that various changes in form and detail can be made without departing from the scope defined by the claims of the present invention.

Claims (3)

1. A content-centric network caching method based on complex network measurement, characterized in that: the controller of a software-defined network counts the number of requests for a given content item at each content switch; a threshold is set at the controller, and all switches whose request count exceeds the threshold are selected as candidate caching points; at the controller, a caching strategy based on complex network metrics selects several preferred caching nodes from the candidates, and the controller sends an active-caching instruction to the selected caching nodes over an OpenFlow channel; as the content is returned from the serving node to the requester, the caching instruction issued by the controller is executed and the content is cached at the nodes the controller selected.
2. The content-centric network caching method based on complex network measurement according to claim 1, characterized in that:
said selecting of several preferred caching nodes from the candidate caching points at the controller by the metric-based caching strategy specifically includes: in complex network analysis, three elementary metrics are adopted to measure the importance of a node, namely degree centrality, closeness centrality and betweenness centrality, each capturing a different aspect of importance:
Degree centrality: the simplest notion of centrality counts the number of edges directly incident to a node, i.e. its degree; a node with a very high degree has many connections to other nodes. In an undirected graph, degree centrality is defined as:
C_D(v) = deg(v)
Closeness centrality: closeness centrality is defined from the shortest distances between a node and the other nodes in the network; a higher closeness centrality means that many nodes in the network are close to this node, i.e. the node lies nearer the center of the network. Its formula is:
C_C(v) = Σ_{t ∈ V\{v}} 2^(−d_G(v,t))
Betweenness centrality: betweenness centrality is determined by how often a node lies on the shortest paths between other nodes; it measures the node's importance for information propagation in the whole network, and nodes with high betweenness are key nodes of the network. It is defined as:
C_B(v) = Σ_{s ≠ v ≠ t ∈ V} σ_st(v) / σ_st;
In the above formulas, G denotes the network and V its node set; v, s and t denote nodes; C_D(v) is the degree of vertex v in the graph, C_C(v) is the closeness centrality of vertex v, and C_B(v) is the betweenness centrality of vertex v; σ_st is the number of shortest paths between the node pair (s, t), and σ_st(v) is the number of those paths that pass through node v.
3. The content-centric network caching method based on complex network measurement according to claim 2, characterized in that the method specifically includes the following steps:
S1: each switch in the network reports its request counter for a given content A to the controller;
S2: the controller selects the switches whose request count for A exceeds a preset threshold T as candidate caching points, which serve as sample points;
S3: taking these sample switches as nodes, an undirected graph is built according to the actual switch interconnections;
S4: the three metrics of this undirected graph are computed: degree centrality, closeness centrality and betweenness centrality, together with their normalized values ND (Normalized Degree), NC (Normalized Closeness) and NB (Normalized Betweenness);
S5: according to the requirements of the particular service, the weights α, β and γ of the three metrics are chosen and a total score is computed by the formula S = α·ND + β·NC + γ·NB;
S6: the total scores are sorted; according to the service requirements, switches are chosen in score order as caching nodes, and the controller sends them active-caching instructions over the OpenFlow channel.
CN201610125099.7A 2016-03-04 2016-03-04 Content-centric network caching method based on complex network measurement Expired - Fee Related CN105721600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610125099.7A CN105721600B (en) 2016-03-04 2016-03-04 A kind of content center network caching method based on complex network measurement


Publications (2)

Publication Number Publication Date
CN105721600A true CN105721600A (en) 2016-06-29
CN105721600B CN105721600B (en) 2018-10-12

Family

ID=56156517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610125099.7A Expired - Fee Related CN105721600B (en) 2016-03-04 2016-03-04 A kind of content center network caching method based on complex network measurement

Country Status (1)

Country Link
CN (1) CN105721600B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101370030A (en) * 2008-09-24 2009-02-18 东南大学 Resource load stabilization method based on contents duplication
CN101431530A (en) * 2007-10-26 2009-05-13 阿尔卡泰尔卢森特公司 Method for caching content data packages in caching nodes
EP2317727A1 (en) * 2009-10-28 2011-05-04 Alcatel Lucent Method for cache management and devices therefore
US20140149533A1 (en) * 2012-11-27 2014-05-29 Fastly Inc. Data storage based on content popularity
CN104885431A (en) * 2012-12-13 2015-09-02 华为技术有限公司 Content based traffic engineering in software defined information centric networks


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109644160A (en) * 2016-08-25 2019-04-16 华为技术有限公司 The mixed method of name resolving and producer's selection is carried out in ICN by being sorted in
CN109644160B (en) * 2016-08-25 2020-12-04 华为技术有限公司 Hybrid method for name resolution and producer selection in ICN by classification
CN110402567B (en) * 2016-12-29 2021-06-01 华为技术有限公司 Centrality-based caching in information-centric networks
CN110402567A (en) * 2016-12-29 2019-11-01 华为技术有限公司 Central caching is based in network centered on information
CN106982248B (en) * 2017-03-01 2019-12-13 中国科学院深圳先进技术研究院 caching method and device for content-centric network
CN106982248A (en) * 2017-03-01 2017-07-25 中国科学院深圳先进技术研究院 The caching method and device of a kind of content center network
CN107105043B (en) * 2017-04-28 2019-12-24 西安交通大学 Content-centric network caching method based on software defined network
CN107105043A (en) * 2017-04-28 2017-08-29 西安交通大学 A kind of content center network caching method based on software defined network
CN108347379A (en) * 2018-02-12 2018-07-31 重庆邮电大学 Based on the centrally stored content center network method for routing in region
CN109218225A (en) * 2018-09-21 2019-01-15 广东工业大学 A kind of data pack buffer method and system
CN109218225B (en) * 2018-09-21 2022-02-15 广东工业大学 Data packet caching method and system
CN110830298A (en) * 2019-11-08 2020-02-21 北京师范大学 Method for measuring targeted propagation capacity on complex network
CN112887943A (en) * 2021-01-27 2021-06-01 福州大学 Cache resource allocation method and system based on centrality
CN112887943B (en) * 2021-01-27 2022-07-08 福州大学 Cache resource allocation method and system based on centrality

Also Published As

Publication number Publication date
CN105721600B (en) 2018-10-12

Similar Documents

Publication Publication Date Title
CN105721600A (en) Content centric network caching method based on complex network measurement
KR101567385B1 (en) A method for collaborative caching for content-oriented networks
CN102710489B (en) Dynamic shunt dispatching patcher and method
EP3206348B1 (en) Method and system for co-operative on-path and off-path caching policy for information centric networks
CN103905332A Method and apparatus for determining a cache policy
Janaszka et al. On popularity-based load balancing in content networks
Le et al. Social caching and content retrieval in disruption tolerant networks (DTNs)
CN108366089B (en) CCN caching method based on content popularity and node importance
CN106533733A (en) CCN collaborative cache method and device based on network clustering and Hash routing
Wang et al. Effects of cooperation policy and network topology on performance of in-network caching
CN108965479B (en) Domain collaborative caching method and device based on content-centric network
Wu et al. MBP: A max-benefit probability-based caching strategy in information-centric networking
Lv et al. ACO-inspired ICN routing mechanism with mobility support
CN105141512A (en) Unified network configuration and control method supporting packet/circuit mixed exchange network
Lau et al. An agent-based dynamic routing strategy for automated material handling systems
CN105657054A Content-centric network caching method based on the K-means algorithm
Pacifici et al. Coordinated selfish distributed caching for peering content-centric networks
Yufei et al. A centralized control caching strategy based on popularity and betweenness centrality in ccn
Zheng et al. Optimal proactive cache management in mobile networks
CN109525494A (en) Opportunistic network routing mechanism implementation method based on message next-hop Dynamic Programming
Cui et al. Design of in-network caching scheme in CCN based on grey relational analysis
CN105188088B (en) Caching method and device based on content popularit and node replacement rate
CN107302571A (en) Information centre's network route and buffer memory management method based on drosophila algorithm
Singh et al. Hybrid information placement in named data networking-Internet of Things system
Cheng et al. A Forwarding Strategy Based on Recommendation Algorithm in Named Data Networking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181012