CN108076144A - Fair caching algorithm and device for content-centric network - Google Patents

Fair caching algorithm and device for content-centric network

Info

Publication number
CN108076144A
Authority
CN
China
Prior art keywords
node
request
data
dtt
itt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711254118.7A
Other languages
Chinese (zh)
Other versions
CN108076144B (en)
Inventor
袁东明
徐亚楠
胡鹤飞
冉静
刘元安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201711254118.7A
Publication of CN108076144A
Application granted
Publication of CN108076144B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a fair caching algorithm and device for a content-centric network. During the interest-packet request phase, the requested interest packets and their request counts are recorded, the interest packets are sorted in descending order of request count and stored in an Interest Times Table (ITT), and the table is transferred to the next node. During the data delivery phase, at the data provider (server), the ITT of the current node is assigned directly to a Data Times Table (DTT), and this DTT is transferred to the server's child node. At each non-provider node, the node obtains the DTT, computes its residual cache space C, caches the first L entries while ensuring that the total data volume of these L entries does not exceed 0.9C, deletes the cached entries from the DTT after caching succeeds, re-sorts the table, and forwards it to the node's child. The invention defines content popularity by request count and provides a caching threshold, thereby reducing data redundancy in the content-centric network, improving the hit rate, reducing the hit-rate variance, and achieving caching fairness.

Description

Fair caching algorithm and device for content-centric network
Technical field
The present invention relates to the field of network communication technology, and in particular to a fair caching algorithm and device for a content-centric network.
Background technology
Content-Centric Networking (CCN) is the most representative of the distributed future network architectures proposed under the four Future Internet architecture support projects launched by the U.S. National Science Foundation in August 2010, and an important outcome of current research on future-oriented Internet architectures. Its core idea is to replace the end-to-end communication mechanism between terminals on today's Internet: content is decoupled from terminal location, and services such as storage and multi-party communication are provided through the publish/subscribe paradigm (Publish/Subscribe Paradigm).
To relieve the severe pressure that rapidly growing traffic places on network bandwidth, the CCN architecture makes ubiquitous use of in-network caching. However, while the caching mechanism improves the content-distribution performance of the network, it can also produce excessive cache redundancy, reducing the utilization and efficiency of network resources. In CCN, the contradiction between the benefit of embedding caches at every node and the under-utilization of cache resources is a pressing open problem in caching research. A typical CCN node mainly comprises a Content Store (CS), a Pending Interest Table (PIT), and a Forwarding Information Base (FIB). The FIB holds the node's next-hop interface toward the content server, the CS holds the content cached at the node, and the PIT records the names of interest packets not yet responded to, together with their arrival interfaces.
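For orientation, the following is a minimal Python sketch of these three tables and of the basic interest-handling path; the class shape, method names, and dict representations are illustrative assumptions made for this description, not structures defined by the patent.

```python
# Illustrative model of a CCN node's three tables (assumed shapes, not the
# patent's implementation).
class CCNNode:
    def __init__(self):
        self.cs = {}   # Content Store: content name -> cached data
        self.pit = {}  # Pending Interest Table: content name -> set of arrival faces
        self.fib = {}  # Forwarding Information Base: name prefix -> next-hop face

    def on_interest(self, name, face):
        """Handle an interest packet arriving on a given face."""
        if name in self.cs:
            return ("data", self.cs[name])           # cache hit: answer directly
        self.pit.setdefault(name, set()).add(face)   # remember who asked
        return ("forward", self.fib.get(name))       # send toward the content server
```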
Caching strategies have been studied for many years. A complete caching strategy consists of two parts: a cache placement policy and a cache replacement policy. For cache placement there are mainly three classical schemes: probabilistic caching (ProbCache), Leave Copy Down (LCD), and cache-everything (ALWAYS). Although these algorithms are simple and practical, they lead to high content redundancy in the network and low node hit rates. To improve in-network caching performance, existing caching methods are mainly based either on content popularity or on node attributes computed from complex-network analysis (betweenness, degree, centrality, and so on). Some existing popularity-estimation schemes are overly complex: all requested data packets and the current interest packets must be fed into a formula, and the resulting heavy computation consumes cache-node memory and increases network latency. Others derive the popularity of interest packets from prediction algorithms, whose accuracy varies with differences in user demand.
In addition, node-attribute algorithms based on complex-network computation tend to cache highly popular content at nodes with high betweenness, degree, centrality, and similar attributes. The content at these nodes is then replaced constantly, degrading node performance; moreover, content is distributed unevenly across the network, the spread in hit rates is excessive, data redundancy is high, and the whole system carries a heavy, unbalanced computational load.
Summary of the invention
In view of this, embodiments of the present invention provide a fair caching algorithm and device for a content-centric network, to address the large computational load, the low hit rate, and the unbalanced utilization and hit rates across nodes of existing caching methods.
Based on the above purpose, an embodiment of the present invention provides a fair caching algorithm for a content-centric network, comprising:
during the data-packet request phase, at an edge node, counting the requested interest packets and their request counts and storing them in an interest request-count table; at the current node, sorting the interest packets in descending order of request count, and transferring the sorted Interest Times Table (ITT) to the next node;
at the next cache node, merging the ITTs transmitted by its child nodes, adding the request counts of identical interest packets to obtain the ITT of the current cache node, sorting the ITT in descending order of request count, and forwarding this table to the next node;
and so on, until all nodes have an ITT.
During the data delivery phase, at the data provider (server), the ITT of the current node is assigned directly to the Data Times Table (DTT), and this DTT is forwarded to the server's child node;
at each non-provider node, the node obtains the DTT, computes its residual cache space C, and caches the first L entries of the table, subject to the total data volume of these L entries being less than or equal to 0.9C; after caching succeeds, the cached entries are deleted from the DTT and the table is re-sorted; the DTT is then forwarded to the node's child.
Further, regarding the computation of the ITT at an edge node: each time the interest receiving port receives an interest packet, it queries the interest tracking table for an existing entry for this interest packet; if one exists, the arrival port is recorded and the corresponding request count is incremented by 1; if not, the interest packet and its corresponding port are recorded and the request count is initialized to 1.
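As an illustration of this per-interest bookkeeping, here is a minimal Python sketch; the dict-based ITT representation and the function names are assumptions made for the example.

```python
def update_itt(itt, ports, interest_name, in_port):
    """Record one interest arrival: bump its request count and note the port.

    itt:   dict mapping interest name -> request count
    ports: dict mapping interest name -> set of requesting ports
    """
    itt[interest_name] = itt.get(interest_name, 0) + 1  # first arrival starts at 1
    ports.setdefault(interest_name, set()).add(in_port)

def sorted_itt(itt):
    """Return ITT entries ordered by request count, highest first."""
    return sorted(itt.items(), key=lambda kv: kv[1], reverse=True)
```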
Further, during ITT transmission, starting from the edge node, the node first searches its cache for the data packet requested by the user; if it is present, the data is sent to the user directly. If it is not, the user's requested interest packet and its request count are recorded in the PIT, forming the interest request-count table, which is sorted in descending order of request count; this table is then forwarded via the data forwarding table to the next node, B.
Node B first aggregates the interest request-count tables of its child nodes in its interest tracking table, then queries the CS for cached content matching the interest packets; any matched interest packet is deleted from the table, and the table is passed to the next node according to the FIB.
And so on, until all cache nodes have an ITT.
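The merge step can be sketched as follows; the list-of-pairs representation is an assumption, and counts for the same interest are summed as in the k1 = k11 + k12 definition of the embodiment below.

```python
from collections import Counter

def merge_child_itts(child_itts):
    """Combine the ITTs received from all child nodes.

    child_itts: iterable of lists of (interest_name, request_count) pairs.
    Counts for the same interest are summed, and the merged table is
    returned sorted by request count, highest first.
    """
    merged = Counter()
    for itt in child_itts:
        for name, count in itt:
            merged[name] += count  # same interest from two children: add counts
    return merged.most_common()

# Example: I1 was requested 3 times at child 1 and 4 times at child 2,
# so the merged table lists it with count 7.
# merge_child_itts([[("I1", 3), ("I2", 5)], [("I1", 4), ("I4", 1)]])
# -> [("I1", 7), ("I2", 5), ("I4", 1)]
```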
Further, when data is delivered to the current cache node, the node first queries the PIT for a port that requested this data packet; if one exists, the data is forwarded to that port; if not, the data is discarded.
Further, the value 0.9 is chosen to achieve a uniform distribution of popular content. On the one hand, it leaves the current cache node some spare cache space, lightening the node's load; on the other hand, it prevents the most popular content from being concentrated at one or a few nodes, so that the content replacement frequency and the hit rate of the cache nodes stay balanced.
An embodiment of the present invention provides a caching device for a content-centric network, comprising:
an Interest Times Table (ITT) for recording the popularity of interest packets; and
a Data Times Table (DTT), so that user-requested data can be cached more fairly at each node, improving the node hit rate, reducing network latency, and reducing the variance of hit rates across nodes.
Further, the ITT records the data packets requested by users and their request counts, from which the popularity of each interest packet is judged. It is arranged from high to low by user request count. It is realized inside the PIT by adding ITT entries: when a request interest packet is first received, its request count is set to 1; within a given period, each further request for that interest packet increments the count by 1.
Further, the DTT records the data a cache node needs to cache and the number of times each data item has been requested; it is an ordered table arranged from high to low by user request count, and it exists as a standalone list in the CS. From the data packets of the content, according to the residual cache capacity C of the current node, content totaling at most 0.9C is selected from this DTT and cached; the corresponding entries are deleted from the DTT after caching, and the new DTT is forwarded to the next node.
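A sketch of this caching decision follows, assuming the DTT arrives already sorted by popularity and that the node knows each packet's size; the function and its representations are illustrative, not the patent's code.

```python
def cache_from_dtt(dtt, sizes, residual_capacity):
    """Cache the most popular DTT entries whose total size fits within 0.9*C.

    dtt:   list of (data_name, request_count), sorted by count, high to low
    sizes: dict mapping data_name -> packet size, in the same units as capacity
    Returns (names_to_cache, remaining_dtt); the remaining DTT is what gets
    forwarded to the child node after the cached entries are removed.
    """
    budget = 0.9 * residual_capacity  # the fairness threshold fixed by the patent
    cached, used = [], 0
    for name, _count in dtt:
        if used + sizes[name] <= budget:
            cached.append(name)
            used += sizes[name]
        else:
            break  # stop at the first entry that would exceed the budget
    cached_set = set(cached)
    remaining = [(n, c) for n, c in dtt if n not in cached_set]
    return cached, remaining
```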
Description of the drawings
Fig. 1 is a flow diagram of the caching method for a content-centric network according to one embodiment of the invention;
Fig. 2 is a schematic diagram of the caching method for a content-centric network according to one embodiment of the invention;
Fig. 3 is a structural diagram of the caching device for a content-centric network according to one embodiment of the invention.
Detailed description of embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly, completely, and in detail with reference to the accompanying drawings. Evidently, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention fall within the protection scope of the present invention.
Fig. 1 is a flow diagram of the method of one embodiment of the invention. As shown in Fig. 1, the method of this embodiment includes:
S101: at an edge node, count the requested interest packets and their request counts and store them in the interest request-count table; at the current node, sort the interest packets in descending order of request count, and transfer the sorted Interest Times Table (ITT) to the next node;
S102: merge the ITTs transmitted by the child nodes, adding the request counts of identical interest packets to obtain the ITT of the current cache node; sort this ITT in descending order of request count and forward it to the next node;
S103: confirm that all nodes have an ITT;
S104: at the data provider (server), assign the ITT of the current node directly to the Data Times Table (DTT), and forward this DTT to the node's child;
S105: obtain the DTT, compute the residual cache space C of the current node, and cache the first L entries, subject to the total data volume of these L entries being less than or equal to 0.9C; after caching succeeds, delete the cached entries from the DTT and re-sort it; forward the DTT to the node's child;
S106: confirm that all nodes have obtained a DTT and have cached data according to it.
The detailed flow of the caching method for a content-centric network according to the embodiment of the present invention is illustrated below, taking Fig. 2 as an example:
S201: edge node A counts the received request interest packets I1, I2, I3, I4, whose corresponding request counts are k1, k2, k3, k4, yielding the ITT shown in Table 1:
Table 1: ITT of the current node

Requested interest packet    Request count
I1                           k1
I2                           k2
I3                           k3
I4                           k4
S202: sort the interest packets in descending order of request count to obtain the sorted ITT, as shown in Table 2:
Table 2: ITT after sorting

Requested interest packet    Request count
I3                           k3
I2                           k2
I1                           k1
I4                           k4
S203: transmit the sorted ITT to the node's parent.
S204: the current node (the parent) merges the ITTs of its child nodes, defining k1 = k11 + k12, where k1 denotes the request count of interest packet I1 at the current node, k11 the request count of I1 in the ITT of child node 1, and k12 the request count of I1 in the ITT of child node 2. This yields the ITT of the current node, as shown in Table 3:
Table 3: ITT after merging

Requested interest packet    Request count
I3                           k3
I2                           k2
I1                           k1
I4                           k4
I5                           k5
I6                           k6
S205: re-sort the merged ITT in descending order of request count, obtaining Table 4:
Table 4: ITT after re-sorting

Requested interest packet    Request count
I3                           k3
I6                           k6
I1                           k1
I5                           k5
I2                           k2
I4                           k4
S206: determine whether the current node is the data source (i.e., the server); if so, stop propagating the ITT to other nodes; if not, continue passing it to the node's parent.
S207: for simplicity of description, this embodiment takes the current node to be the data source; at this node the ITT is assigned directly to the DTT of the current node, yielding Table 5:
Table 5: DTT of the current node

Data packet    Request count
D3             k3
D6             k6
D1             k1
D5             k5
D2             k2
D4             k4
S208: pass the DTT to the node's child.
S209: after the current node obtains the DTT, compute the node's residual cache capacity, defined as C.
S210: cache data in the order in which the packets appear in the DTT. Compute the size of each data packet Di, defined as ci. Taking Table 5 as an example: if c3 < 0.9C, cache D3; then, if c6 < (0.9C - c3), cache D6; if c6 > (0.9C - c3), do not cache D6 and stop comparing.
It should be particularly noted that the 0.9 threshold here is set to achieve caching fairness, so that highly popular content does not concentrate at a single cache node; this reduces that node's load and narrows the hit-rate differences among nodes. The threshold was determined through extensive experiments.
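A worked numeric instance of this check, with an invented residual capacity and invented packet sizes (the numbers are for illustration only):

```python
# Assumed numbers: residual capacity C = 100 units, so the budget is 0.9*C = 90.
# The DTT order follows Table 5: D3, D6, D1, D5, D2, D4.
sizes = {"D3": 40, "D6": 35, "D1": 30, "D5": 10, "D2": 5, "D4": 5}
C = 100

budget = 0.9 * C  # 90
used = 0
for name in ["D3", "D6", "D1", "D5", "D2", "D4"]:
    if used + sizes[name] <= budget:
        used += sizes[name]      # D3 (total 40) and D6 (total 75) fit under 90
        print("cache", name)
    else:
        print("stop at", name)   # D1 would raise the total to 105 > 90
        break
```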
S211: delete the cached data Di, obtaining the new DTT shown in Table 6:
Table 6: DTT after caching is complete

Data packet    Request count
D1             k1
D5             k5
D2             k2
D4             k4
S212: determine whether the current node is an edge node; if so, end the flow; if not, forward the newly generated DTT to the next node.
For the cache-placement problem in content-centric networks (CCN), most prior techniques have high computational complexity and severe data redundancy. The embodiment of the present invention innovatively takes the user request count as the measure of content popularity, sets up the interest request-count table, combines it with the node's own cache capacity, and defines a caching threshold, arriving at a simple and effective fair caching policy. The method proposed in the embodiment is easy to operate and can readily be implemented in a network by practitioners, giving it high practicality. It not only guarantees the quality of service of highly popular content but also increases the diversity of content in the network and effectively improves the utilization of node storage resources.
Fig. 3 is a structural diagram of the caching device for a content-centric network according to one embodiment of the invention. As shown in Fig. 3, the ITT (interest request-count table) is placed inside the PIT, merged with the pending interest table, to reduce data redundancy; the DTT is placed inside the CS to facilitate caching data at the node.
Exemplary embodiments have been disclosed above; it should be noted, however, that many modifications and changes may be made without departing from the scope of the disclosure as defined by the claims. The functions, steps, and/or actions of the method claims of the disclosed embodiments need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The sequence numbers of the embodiments disclosed above are for description only and do not indicate the relative merit of the embodiments.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the disclosure (including the claims) is limited to these examples. Under the idea of the embodiments of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, and many other variations of different aspects of the embodiments of the present invention as described above exist, which are not detailed here for brevity. Therefore, any omission, modification, equivalent substitution, or improvement made within the spirit and principles of the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.

Claims (9)

1. A fair caching algorithm for a content-centric network, characterized by comprising:
a data-packet request procedure:
at an edge node, counting the requested interest packets and their request counts and storing them in an interest request-count table; at the current node, sorting the interest packets in descending order of request count, and transferring the sorted Interest Times Table (ITT) to the next node;
at the next cache node, merging the ITTs transmitted by its child nodes, adding the request counts of identical interest packets to obtain the ITT of the current cache node, sorting this ITT in descending order of request count, and forwarding the table to the next node;
continuing in this way until all nodes have an ITT; and
a data delivery procedure:
at the data provider (server), assigning the ITT of the current node directly to a Data Times Table (DTT), and forwarding this DTT to the server's child node;
at each non-provider node, obtaining the DTT, computing the residual cache space C of the current node, and caching the first L entries of the table subject to the total data volume of these L entries being less than or equal to 0.9C; after caching succeeds, deleting the cached entries from the DTT and re-sorting it; and forwarding the DTT to the node's child.
2. The fair caching algorithm according to claim 1, wherein the computation of the ITT of the current cache node at an edge node is characterized in that, each time the edge cache node receives an interest packet, it queries the interest tracking table for an existing entry for this interest packet; if one exists, the arrival port is recorded and the corresponding request count is incremented by 1; if not, the interest packet and its corresponding port are recorded and the request count is initialized to 1.
3. The fair caching algorithm according to claim 1, wherein the obtaining of the ITT at a non-edge node is characterized in that, after aggregating all the ITTs of its child nodes, the node queries the CS for cached content matching the interest packets; if a match exists, that interest packet is deleted from the ITT and the table is passed to the next node according to the FIB; if not, the ITT is sorted.
4. The fair caching algorithm according to claim 1, wherein the DTT obtained at the server during data delivery is characterized in that, at the server, the DTT is equal to the ITT.
5. The fair caching algorithm according to claim 1, wherein the DTT obtained at a non-server node during data delivery is characterized in that, after obtaining the DTT of the previous level, the node computes its remaining space C and, after caching data of at most 0.9C capacity, deletes the corresponding data entries from the DTT to obtain the DTT of this node.
6. The fair caching algorithm according to claim 5, characterized in that the threshold 0.9 is set to achieve caching fairness, so that highly popular content is not concentrated at a single cache node, thereby reducing the load of that node and narrowing the hit-rate differences among nodes; this threshold was determined through extensive experiments.
7. A caching device for a content-centric network, characterized by comprising:
an ITT added during interest-packet transmission, for recording the popularity of interest packets; and a DTT added during data delivery, so that user-requested data can be cached more fairly at each node, improving the node hit rate, reducing network latency, and reducing the variance of hit rates across nodes.
8. The Interest Times Table (ITT) according to claim 7, characterized by:
recording the data packets requested by users and their request counts, from which the popularity of interest packets is judged; being an ordered table arranged from high to low by user request count; and being realized in the PIT as added ITT entries, wherein, on first receiving a request interest packet, its request count is set to 1, and within a given period each further request for that interest packet increments the count by 1.
9. The data-packet cache ranking table (DTT) according to claim 7, characterized by:
recording the data a cache node needs to cache and the number of times each data item has been requested; being an ordered table arranged from high to low by user request count; existing as a standalone list in the CS; and, in delivering the data packets of the content, selecting from this DTT, according to the residual cache capacity C of the current node, content of at most 0.9C capacity for caching, deleting the corresponding entries from the DTT after caching, and forming a new DTT.
CN201711254118.7A 2017-12-03 2017-12-03 Fair caching algorithm and device for content-centric network Active CN108076144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711254118.7A CN108076144B (en) 2017-12-03 2017-12-03 Fair caching algorithm and device for content-centric network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711254118.7A CN108076144B (en) 2017-12-03 2017-12-03 Fair caching algorithm and device for content-centric network

Publications (2)

Publication Number Publication Date
CN108076144A true CN108076144A (en) 2018-05-25
CN108076144B CN108076144B (en) 2020-09-11

Family

ID=62157554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711254118.7A Active CN108076144B (en) 2017-12-03 2017-12-03 Fair caching algorithm and device for content-centric network

Country Status (1)

Country Link
CN (1) CN108076144B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100008A (en) * 2014-05-09 2015-11-25 华为技术有限公司 Method and related device for distributing contents in content-centric network
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking
WO2016150502A1 (en) * 2015-03-25 2016-09-29 Nec Europe Ltd. Method and device of processing icn interest messages in a dtn scenario
CN105577537A (en) * 2015-12-25 2016-05-11 中国科学院信息工程研究所 Multipath forwarding method and system of history record based information centric network
CN107135271A (en) * 2017-06-12 2017-09-05 浙江万里学院 A kind of content center network caching method of Energy Efficient

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111556514A (en) * 2020-04-14 2020-08-18 北京航空航天大学 Decentralized mobile edge computing resource discovery and selection method and system
CN111556514B (en) * 2020-04-14 2021-09-21 北京航空航天大学 Decentralized mobile edge computing resource discovery and selection method and system
CN112261128A (en) * 2020-10-21 2021-01-22 重庆邮电大学 Active push cache method for content source movement in CCN
CN112261128B (en) * 2020-10-21 2022-08-12 重庆邮电大学 Active push caching method for content source movement in CCN

Also Published As

Publication number Publication date
CN108076144B (en) 2020-09-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant