CN111628933A - Path caching method based on content relevance in information center network

Path caching method based on content relevance in information center network

Info

Publication number
CN111628933A
Authority
CN
China
Prior art keywords
node
content
packet
path
psi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010438358.8A
Other languages
Chinese (zh)
Inventor
杨武
苘大鹏
吕继光
王巍
玄世昌
卢琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202010438358.8A priority Critical patent/CN111628933A/en
Publication of CN111628933A publication Critical patent/CN111628933A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/563: Data redirection of data network streams
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/02: Topology update or discovery
    • H04L45/302: Route determination based on requested QoS
    • H04L45/306: Route determination based on the nature of the carried application

Abstract

The invention belongs to the technical field of cache decision in information-centric networking, and in particular relates to a path caching method based on content relevance in an information-centric network. The invention identifies the content most strongly correlated with historically popular content, determines the correlation between the target content and the content stored at a node, and makes the caching decision while also taking the node's position on the path into account. The method achieves a better cache hit rate: it makes the caching decision comprehensively, from both the position offset of the node on the forwarding path and the correlation between the content and the other popular content at the current node, and it tracks changes in the content frequently accessed at a node in a timely manner. The invention has good adaptability and elasticity and can accommodate changes in network topology.

Description

Path caching method based on content relevance in information center network
Technical Field
The invention belongs to the technical field of cache decision in information-centric networking, and in particular relates to a path caching method based on content relevance in an information-centric network.
Background
Information-Centric Networking (ICN) is a disruptive new communication network model that has recently become a focus of research on future network architectures. ICN is information-centric: it uses information names directly for data identification, retrieval, and route forwarding. ICN treats caching as a built-in mechanism; by default, nodes store all data flowing through them so that subsequent requests can be answered as early as possible. In a real network, the spread and popularity of a piece of information change continuously over time, and its popularity differs across time periods. Some existing studies consider only the attributes of nodes, the network structure, or the caching method, while ignoring the characteristics of the content itself (classical examples include LCE and LCD); others use techniques such as deep learning to analyze content popularity trends, jointly considering node state and popularity. However, the caching methods proposed in these studies do not consider the association among the contents requested by interest packets passing through the same node, i.e., the fact that contents frequently accessed at the same node at different times exhibit a certain correlation.
Disclosure of Invention
The invention aims to provide a path caching method based on content relevance in an information-centric network. It addresses the neglect of inter-content relevance in the prior art, achieves a better cache hit rate, and makes the caching decision comprehensively from both the position offset of a node on the forwarding path and the correlation between the content and the other popular content at the current node.
The purpose of the invention is achieved by the following technical scheme, which comprises the following steps:
Step 1: a requesting node generates an interest packet, with the hop field in the interest packet initialized to 0; the requesting node forwards the interest packet along the path to the next routing node.
Step 2: after receiving the interest packet, the node increments the hop field in the interest packet by one.
Step 3: if the current node is a content source node or an intermediate cache node, the interest packet has reached its transmission endpoint; the node stores the hop field value of the interest packet in a variable maxhop, constructs a data packet containing maxhop, and returns it to the requesting node along the original path.
If the current node is neither a content source node nor an intermediate cache node, it forwards the interest packet out of the port given by the FIB to the next routing node, and the method returns to step 2.
Step 4: the data packet is returned from the content source node or intermediate cache node to the requesting node. In addition to completing routing and forwarding of the data packet, each node on the data packet's path must decide whether to store the data it carries. The specific decision method is as follows:
Step 4.1: after receiving the data packet, the node analyzes the relevance between the content in the data packet and the locally popular content, and computes the correlation ξ between the target content and the cached content in the local CS.
Step 4.2: the node extracts the relative position of the current node on the forwarding path from the hop field of the data packet, and computes, by normalization, the position offset h of the current node relative to the requesting node.
Step 4.3: compute the caching priority ψ of the current node on the path for the target content:
ψ = ξ + h + p
where p is a random factor.
Step 4.4: compare the caching priority ψ of the current node on the path for the target content with the set threshold ψ′:
if ψ < ψ′, the current node does not store the data in the data packet;
if ψ ≥ ψ′, the current node stores the content of the data packet in the local CS.
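The priority computation and threshold test in steps 4.3 and 4.4 can be sketched directly (a minimal sketch; the function names are illustrative):

```python
def cache_priority(xi: float, h: float, p: float) -> float:
    """Caching priority of the current node for the target content (step 4.3):
    xi is the content correlation, h the normalized position offset,
    p a random factor."""
    return xi + h + p

def should_cache(psi: float, psi_threshold: float) -> bool:
    """Step 4.4: the node stores the packet's content iff psi >= psi'."""
    return psi >= psi_threshold
```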
The invention has the beneficial effects that:
the invention finds the correlation between the target content and the node storage content by finding out the content with the strongest correlation with the historical popular content, and simultaneously considers the position of the node in the path to carry out caching decision. The method has better cache hit rate performance, can comprehensively make cache decision according to the position offset of the node on a forwarding path and the correlation between the content and other popular contents of the current node, and can timely change the frequently accessed content of the node. The invention has better adaptability and elasticity and can adapt to the change of network topology.
Drawings
FIG. 1 is a diagram of the interest packet and data packet formats of the present invention.
FIG. 2 is a comparative experimental analysis chart.
FIG. 3 is a pseudocode diagram of interest packet processing in the present invention.
FIG. 4 is a pseudocode diagram of data packet processing in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Aimed at the neglect of inter-content correlation in existing research, and at the characteristic that contents accessed successively by a node may be correlated, the invention designs an on-path caching method based on content correlation, OPCCC (On-Path Caching based on Content Correlation). It finds the content most strongly correlated with historically popular content, determines the correlation between the target content and the content stored at a node, and makes the caching decision while taking the node's position on the path into account.
The method achieves a better cache hit rate: it makes the caching decision comprehensively, from both the position offset of the node on the forwarding path and the correlation between the content and the other popular content at the current node, and it tracks changes in the content frequently accessed at a node in a timely manner. The system access delay is essentially the same as that of typical caching methods such as LCD (Leave Copy Down), while the cache hit rate is improved. The invention has good adaptability and elasticity and can accommodate changes in network topology.
In the invention, interest packets and data packets are forwarded hop by hop along the path, and each node on the path performs a correlation analysis to obtain the degree of correlation between the content and the popular content at the current node. The cache decision method then computes the caching priority of the content at the current node from this correlation and the node's relative position on the path. The caching priority is compared with a system-set threshold to decide whether the current node should store the target content: if the caching priority is not less than the threshold, the target content is added to the current node's CS so that subsequent interest packets can be answered nearby; otherwise, the current node does not store the content.
The correlation analysis is performed by a correlation analysis module in the routing node. Before association mining, a node first constructs a request record set R locally, then generates frequent 2-itemsets with the Apriori algorithm, and uses these frequent 2-itemsets to compute the correlation between the target content and the node's other content. Apriori is a classical association rule mining algorithm: it finds frequent itemsets iteratively and then derives the desired rules from them, thereby extracting associations among data to guide decision making. It is widely used in market-basket analysis for consumer markets, network intrusion detection, scientific data analysis, and related fields. The Apriori algorithm has two goals: (1) mining frequent itemsets; (2) discovering association rules. To generate frequent itemsets, it first builds from the record set the list of all single contents, i.e., the candidate 1-itemsets; it then scans the record set and prunes the itemsets that do not meet the minimum support. The remaining itemsets are joined to generate the candidate 2-itemsets. These operations are repeated, generating the frequent k-itemsets layer by layer, until no new candidate itemsets can be generated. Apriori therefore has the drawback of scanning the entire record set repeatedly and producing a large number of intermediate results. In the content correlation analysis of the present invention, frequent itemsets are generated only for computing the correlation, and no association rules are generated.
To reduce the algorithm's iterations, reduce the number of scans over the whole content request record set, and avoid generating a large number of unnecessary candidate sets, this section computes the correlation using frequent 2-itemsets only: the mining stops once the frequent 2-itemsets have been generated, and these are then used to compute the correlation.
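As a sketch of this mining step, the following Python code generates frequent 2-itemsets from a local request record set and derives a correlation value from them. The record-set representation, the `min_support` parameter, and the use of rule confidence as the correlation ξ are assumptions for illustration; the patent does not give the exact formula for ξ.

```python
from collections import Counter
from itertools import combinations

def frequent_2_itemsets(records, min_support):
    """Mine frequent 2-itemsets from a request record set, stopping at k=2.

    records: list of sets, each holding the content names requested at this
             node within one observation window (an assumed representation).
    min_support: minimum fraction of records an itemset must appear in.
    """
    n = len(records)
    # Candidate 1-itemsets, pruned by minimum support (Apriori pruning step).
    item_counts = Counter(item for r in records for item in r)
    frequent_1 = {i for i, c in item_counts.items() if c / n >= min_support}
    # Join surviving items into candidate 2-itemsets and count their support.
    pair_counts = Counter()
    for r in records:
        for pair in combinations(sorted(frequent_1 & r), 2):
            pair_counts[pair] += 1
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

def correlation(target, cached, itemsets, item_support):
    """Correlation xi between the target content and one cached content,
    approximated here by the confidence of the rule cached -> target
    (an assumption; the patent does not state the exact measure)."""
    pair = tuple(sorted((target, cached)))
    if pair not in itemsets or item_support.get(cached, 0) == 0:
        return 0.0
    return itemsets[pair] / item_support[cached]
```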
Analyzing the correlation between the target content and the other popular content at the current node lets the system sense users' access preferences in time and adjust the caching decision, so as to store content with high relevance as accurately as possible; at the same time, taking the node's position offset on the path into account selects nodes closer to the user as cache deployment nodes, which theoretically yields better caching performance.
When a node receives an interest packet from another node in the network, in addition to processing it according to the NDN execution flow and default routing rules, it initializes the correlation analysis module, which includes constructing the request record set and generating the frequent 2-itemsets; at the same time it updates the hop field in the interest packet, collecting the node's position information on the path for the caching decision.
The hop field of an interest packet is initialized to 0 by the requesting node that generates it. Each time the interest packet reaches a routing node along the path, the node increments this field by one and forwards the packet out of the port given by the FIB to the next-hop node. On receiving an interest packet, the node first determines whether it can provide the requested content. If the interest packet reaches a content source node or an intermediate cache node, it has reached its transmission endpoint, and the hop field value equals the total number of nodes the interest packet traversed along the path. The content source node or intermediate cache node stores this hop value in a variable maxhop and adds it to the data packet it constructs as the response.
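The hop-field handling described above can be sketched as follows. The packet classes and the node interface (`cs`, `is_source`, `forward_via_fib`) are illustrative assumptions, not the patent's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Interest:
    name: str
    hop: int = 0          # initialized to 0 by the requesting node

@dataclass
class Data:
    name: str
    payload: bytes
    maxhop: int           # hop count recorded when the interest hit a provider

def on_interest(node, interest):
    """Per-hop interest handling (sketch)."""
    interest.hop += 1                     # step 2: increment hop on receipt
    if interest.name in node.cs or node.is_source(interest.name):
        # Transmission endpoint reached: record maxhop in the response.
        return Data(interest.name, node.cs.get(interest.name, b""), interest.hop)
    node.forward_via_fib(interest)        # otherwise forward per the FIB
    return None
```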
When a data packet is returned from the content source node or an intermediate cache node, each node along the way must, in addition to completing routing and forwarding of the data packet, decide according to the cache decision mechanism whether to store the currently forwarded data. This comprises the following four steps:
(a) Analyze the relevance between the currently forwarded data and the locally popular content, and compute the correlation ξ between the target content and the cached content in the local CS.
(b) Extract the relative position of the current node on the forwarding path from the hop field carried with the data packet, and compute by normalization the position offset h of the current node relative to the requesting node:
h = (max_{1≤i≤pl}{hop_i} − hop) / (max_{1≤i≤pl}{hop_i} − min_{1≤i≤pl}{hop_i})
where hop is the hop value recorded at the current node; max_{1≤i≤pl}{hop_i} denotes the maximum hop value on the path along which the data packet is forwarded; min_{1≤i≤pl}{hop_i} denotes the minimum hop value on that path; and pl is the number of nodes on the path.
(c) Compute, from the correlation and position offset obtained above, the caching priority ψ of the current node on the path for the target content.
(d) Compare the caching priority ψ computed in step (c) with the threshold ψ′. If ψ ≥ ψ′, store the content carried by the currently forwarded data packet in the CS; otherwise the current node does not store the content. Finally, query the PIT and forward the data packet out of the port on which the corresponding interest packet arrived.
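Steps (a) through (d) can be sketched as a single data-packet handler. The exact normalization in `position_offset`, the way the node learns its own hop value (for example, recorded when the interest passed through), and the node interface are assumptions for illustration:

```python
import random

def position_offset(hop_cur, maxhop, minhop=1):
    """Normalized position offset h of the current node relative to the
    requesting node; nodes nearer the requester get values closer to 1.
    (Reconstructed from the description; the normalization direction and
    the default minhop=1 are assumptions.)"""
    if maxhop == minhop:
        return 1.0
    return (maxhop - hop_cur) / (maxhop - minhop)

def on_data(node, data, hop_cur, threshold, rng=random.random):
    """Cache decision on the returning data packet: psi = xi + h + p."""
    xi = node.max_correlation(data.name)   # correlation with local popular content
    h = position_offset(hop_cur, data.maxhop)
    p = rng()                              # random factor p
    psi = xi + h + p
    if psi >= threshold:                   # store iff psi >= psi'
        node.cs[data.name] = data.payload
    node.forward_via_pit(data)             # forward out the PIT-recorded port
    return psi
```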
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. A path caching method based on content relevance in an information-centric network, characterized by comprising the following steps:
Step 1: a requesting node generates an interest packet, with the hop field in the interest packet initialized to 0; the requesting node forwards the interest packet along the path to the next routing node.
Step 2: after receiving the interest packet, the node increments the hop field in the interest packet by one.
Step 3: if the current node is a content source node or an intermediate cache node, the interest packet has reached its transmission endpoint; the node stores the hop field value of the interest packet in a variable maxhop, constructs a data packet containing maxhop, and returns it to the requesting node along the original path.
If the current node is neither a content source node nor an intermediate cache node, it forwards the interest packet out of the port given by the FIB to the next routing node, and the method returns to step 2.
Step 4: the data packet is returned from the content source node or intermediate cache node to the requesting node. In addition to completing routing and forwarding of the data packet, each node on the data packet's path must decide whether to store the data it carries. The specific decision method is as follows:
Step 4.1: after receiving the data packet, the node analyzes the relevance between the content in the data packet and the locally popular content, and computes the correlation ξ between the target content and the cached content in the local CS.
Step 4.2: the node extracts the relative position of the current node on the forwarding path from the hop field of the data packet, and computes, by normalization, the position offset h of the current node relative to the requesting node.
Step 4.3: compute the caching priority ψ of the current node on the path for the target content:
ψ = ξ + h + p
where p is a random factor.
Step 4.4: compare the caching priority ψ of the current node on the path for the target content with the set threshold ψ′:
if ψ < ψ′, the current node does not store the data in the data packet;
if ψ ≥ ψ′, the current node stores the content of the data packet in the local CS.
CN202010438358.8A 2020-05-22 2020-05-22 Path caching method based on content relevance in information center network Pending CN111628933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010438358.8A CN111628933A (en) 2020-05-22 2020-05-22 Path caching method based on content relevance in information center network


Publications (1)

Publication Number Publication Date
CN111628933A 2020-09-04

Family

ID=72272699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010438358.8A Pending CN111628933A (en) 2020-05-22 2020-05-22 Path caching method based on content relevance in information center network

Country Status (1)

Country Link
CN (1) CN111628933A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104753797A (en) * 2015-04-09 2015-07-01 清华大学深圳研究生院 Content center network dynamic routing method based on selective caching
CN106101223A (en) * 2016-06-12 2016-11-09 北京邮电大学 A kind of caching method mated with node rank based on content popularit
EP3206348A1 (en) * 2016-02-15 2017-08-16 Tata Consultancy Services Limited Method and system for co-operative on-path and off-path caching policy for information centric networks
CN108366089A (en) * 2018-01-08 2018-08-03 南京邮电大学 A kind of CCN caching methods based on content popularit and pitch point importance
CN108900618A (en) * 2018-07-04 2018-11-27 重庆邮电大学 Content buffering method in a kind of information centre's network virtualization


Similar Documents

Publication Publication Date Title
Zhong et al. A deep reinforcement learning-based framework for content caching
KR102100710B1 (en) Method for transmitting packet of node and content owner in content centric network
CN107454562B (en) ICN (Integrated Circuit network) architecture-oriented D2D mobile content distribution method
CN109905480B (en) Probabilistic cache content placement method based on content centrality
CN109873869B (en) Edge caching method based on reinforcement learning in fog wireless access network
CN103107945B (en) A kind of system and method for fast finding IPV6 route
CN111107000B (en) Content caching method in named data network based on network coding
CN113783779B (en) Hierarchical random caching method in named data network
CN107977160B (en) Method for data access of exchanger
Man et al. On-path caching based on content relevance in information-centric networking
CN101840417B (en) UID query method for internet of things based on correlation
CN111628933A (en) Path caching method based on content relevance in information center network
CN108521373B (en) Multipath routing method in named data network
Sun et al. Predict-then-prefetch caching strategy to enhance QoE in 5G networks
Kulkarni et al. Model and machine learning based caching and routing algorithms for cache-enabled networks
Zhou et al. Popularity and age based cache scheme for content-centric network
Guan et al. A classification-based wisdom caching scheme for content centric networking
Gulati et al. AdCaS: Adaptive caching for storage space analysis using content centric networking
CN111327532A (en) Method for realizing capacity of super-large forwarding policy table of network equipment
Herouala et al. NBCC: Simulation of a new Caching strategy using Naive Bayes Classifier in NDN
Ahmad et al. Intelligent Stretch Reduction in Information-CentricNetworking towards 5G-Tactile Internet realization
Mei et al. Csa: A credibility search algorithm based on different query in unstructured peer-to-peer networks
Kurniawan et al. Modified-LRU Algorithm for Caching in Named Data Network on Mobile Network
CN110011918A (en) A kind of the website safety detection method and system of router cooperation
CN111625565B (en) Multi-attribute cooperative caching method for information center network cache privacy protection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200904