CN105743975A - Cache placing method and system based on data access distribution - Google Patents


Info

Publication number
CN105743975A
Authority
CN
China
Prior art keywords
data
access
mobile node
cache
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610057645.8A
Other languages
Chinese (zh)
Other versions
CN105743975B (en)
Inventor
Fan Xiaopeng (范小朋)
Xu Chengzhong (须成忠)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201610057645.8A
Publication of CN105743975A
Application granted
Publication of CN105743975B
Current legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a cache placement method and system based on data access distribution. The method comprises the following steps: partitioning the cache space of each mobile node into a selfish space and an altruistic space, so as to make full use of the node's cache space; selecting the T data items with the highest access frequencies according to the probability distribution of the data accessed by the mobile node, and caching them in the selfish space; and selecting R data items from the remaining data and caching them in the altruistic space. In this way, the most frequently accessed data are cached effectively according to each user's data access distribution, while data expected by other mobile nodes are cached on their behalf. The two principal factors of user data access, namely its frequency distribution and its distance, are fully considered, the cache space of the mobile nodes is fully utilized, and the overall overhead of user data access is effectively reduced.

Description

Cache placement method and system based on data access distribution
Technical field
The present invention relates to the field of data caching, and in particular to a cache placement method and system based on data access distribution.
Background technology
An Internet-based Mobile Ad Hoc Network (IMANET) lets mobile users access data resources on the Internet from mobile devices via multi-hop communication, and is a very convenient data access mode. In practice, vehicular networks (Vehicular Networks) can serve as mobile wireless ad hoc networks for distributing, forwarding, storing and sharing data; especially since the arrival of the big-data era, sharing data through vehicular networks has become increasingly attractive. In addition, wireless sensor networks (Wireless Sensor Networks) have found ever wider application in recent years, in military and civil areas such as environmental monitoring, mobile multimedia, logistics management, traffic control, target tracking and smart homes. However, wireless ad hoc networks also face problems such as scarce wireless communication bandwidth, limited storage capacity on mobile devices, unstable wireless links, and the limited energy of the mobile devices themselves. To improve the efficiency of data access, it is therefore very important to find a way to share data.
Wireless network data caching (Data Caching) is widely used to achieve efficient data distribution and sharing, and can effectively improve data access efficiency. The basic approach is for a data source to place copies of its data on cache nodes; other mobile nodes can then access the copies on the cache nodes, saving the time and overhead of going back to the data source. The key problem in data caching is how to select cache nodes in the network at which to place the data copies, because placing a copy means the data source must deliver the latest version of the data to the cache node in time, which itself produces new overhead.
When considering the overhead of user data access, existing methods mainly weigh two factors: the frequency with which users access data, and the distance over which they access it. In practice, however, the characteristics of user data access cannot be described by a simple access frequency alone; such methods therefore cannot effectively reduce the overall overhead of user data access, nor make maximal use of each mobile node's cache space.
Summary of the invention
In view of this, to address the problems that the above data cache placement cannot effectively reduce the overall overhead of user data access and cannot make maximal use of each mobile node's cache space, a cache placement method based on data access distribution is provided.
In addition, the present invention also provides a cache placement system based on data access distribution.
The cache placement method based on data access distribution provided by the invention comprises the following steps:
S10: divide the cache space of each mobile node into two parts, a selfish space and an altruistic space, where the selfish space is used to cache the data that the mobile node itself accesses most frequently, and the altruistic space is used to cache data demanded by other mobile nodes;
S20: obtain the frequency and content of the data accessed by each mobile node and derive the probability distribution of each mobile node's data accesses; according to that probability distribution, cache the T data items with the highest access frequencies in the selfish space, and cache R data items selected from the remaining data in the altruistic space.
In one embodiment, the step of obtaining the frequency and content of the data accessed by each mobile node and deriving the probability distribution of each node's data accesses specifically comprises:
recording the frequency and content of the data accessed by the mobile node, and describing the probability distribution of the user's data accesses by means of a histogram, thereby obtaining each mobile node's data access distribution.
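The histogram step above can be sketched as follows; this is a minimal illustration, and the log format and function name are illustrative rather than taken from the patent:

```python
from collections import Counter

def access_distribution(access_log):
    # Build a histogram of the data items a node has accessed and
    # normalize it into an empirical probability distribution.
    # access_log: sequence of data-item identifiers, one per access.
    counts = Counter(access_log)
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}

# A node that accessed item 'a' three times and 'b' once:
dist = access_distribution(['a', 'a', 'b', 'a'])
# dist == {'a': 0.75, 'b': 0.25}
```

The resulting distribution is what the later steps rank to pick the T most frequently accessed items.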
In one embodiment, the step of selecting R data items from the remaining data and caching them in the altruistic space specifically comprises:
selecting R data items from the remaining data according to a Poisson distribution and caching them in the altruistic space.
In one embodiment, the step of caching the T most frequently accessed data items in the selfish space according to the probability distribution, and caching R items selected from the remaining data in the altruistic space, specifically comprises:
each mobile node selects the T data items with the highest access frequencies according to the probability distribution, then sends cache requests to the data sources, requesting to become a cache node for those data;
each mobile node selects R data items from the remaining data according to a Poisson distribution, then sends cache requests to the data sources;
once a data source has collected enough (nR/(m-T)) cache requests, it sends copies of the data to the requesting mobile nodes, where n is the number of mobile nodes and m is the number of data items.
In one embodiment, the method further comprises:
defining an average access hop count, and using it to assess the average overhead for each mobile node to access each data item, where the average access hop count is defined as:
$h = \frac{1}{2} \times \frac{\log n}{\log\left(\frac{nR}{2}\right) - \log(m - T)}$
where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, obtained from the probability distribution.
The cache placement system based on data access distribution provided by the invention includes:
a cache space division module, which divides the cache space of each mobile node into two parts, a selfish space and an altruistic space, where the selfish space is used to cache the data that the mobile node itself accesses most frequently, and the altruistic space is used to cache data demanded by other mobile nodes; and
a data caching module, which obtains the frequency and content of the data accessed by each mobile node, derives the probability distribution of each mobile node's data accesses, caches the T data items with the highest access frequencies in the selfish space according to that probability distribution, and caches R data items selected from the remaining data in the altruistic space.
In one embodiment, the data caching module records the frequency and content of the data accessed by the mobile node, describes the probability distribution of the user's data accesses by means of a histogram, and thereby obtains each mobile node's data access distribution.
In one embodiment, when choosing from the remaining data, the data caching module selects R data items according to a Poisson distribution and caches them in the altruistic space.
In one embodiment, the data caching module controls each mobile node to select the T data items with the highest access frequencies according to the probability distribution, then to send cache requests to the data sources, requesting to become a cache node for those data;
controls each mobile node to select R data items from the remaining data according to a Poisson distribution, then to send cache requests to the data sources; and,
once a data source has collected enough (nR/(m-T)) cache requests, controls the data source to send copies of the data to the requesting mobile nodes, where n is the number of mobile nodes and m is the number of data items.
In one embodiment, the data caching module defines an average access hop count and uses it to assess the average overhead for each mobile node to access each data item, where the average access hop count is defined as:
$h = \frac{1}{2} \times \frac{\log n}{\log\left(\frac{nR}{2}\right) - \log(m - T)}$
where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, obtained from the probability distribution.
With the cache placement method and system based on data access distribution of the present invention, the cache space of each mobile node is divided into a selfish space and an altruistic space, making full use of the node's cache; the T data items with the highest access frequencies are cached in the selfish space according to each mobile node's data access distribution, and R data items selected from the remaining data are cached in the altruistic space. The most frequently accessed data are thus cached effectively according to the user's data access distribution, while the data expected by other mobile nodes are cached on their behalf. The two principal factors of user data access, namely its frequency distribution and its distance, are fully considered, the cache space of the mobile nodes is maximally used, and the overall overhead of user data access is effectively reduced.
Brief description of the drawings
Fig. 1 is a flow chart of the cache placement method based on data access distribution in one embodiment;
Fig. 2 is a schematic comparison of the simulated overall message overhead in one embodiment;
Fig. 3 is a schematic comparison of the simulated average data access delay in one embodiment;
Fig. 4 is a structural diagram of the cache placement system based on data access distribution in one embodiment.
Detailed description of the invention
To make the purpose, technical solution and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
To address the problems that the data cache placement described above cannot effectively reduce the overall overhead of user data access and cannot make maximal use of each mobile node's cache space, the present invention provides a cache placement method based on data access distribution. By analysing the probability distribution of user data accesses in a wireless network (which may be the mobile Internet), rationally dividing the cache space of the mobile nodes, and adopting different cache placement modes for different types of data, the method makes maximal use of the mobile nodes' cache space, improves data access efficiency, and effectively reduces the overall overhead of user data access.
Specifically, as shown in Fig. 1, the method includes the following steps:
S10: divide the cache space of each mobile node into two parts, a selfish space and an altruistic space, where the selfish space is used to cache the data that the mobile node itself accesses most frequently, and the altruistic space is used to cache data demanded by other mobile nodes.
To make full use of the cache space of the mobile nodes and reduce the overall overhead of user data access, in this embodiment the cache space of each mobile node is divided into two parts, a selfish space and an altruistic space, used respectively to cache the data the node itself accesses most frequently and the data in which other mobile nodes are interested. Caching the data that other mobile nodes are interested in and demand, on top of the node's own access-frequency distribution, takes full account of the two principal factors of user data access, namely its frequency and its distance, makes maximal use of the mobile nodes' cache space, and reduces the overall overhead of user data access.
S20: obtain the frequency and content of the data accessed by each mobile node and derive the probability distribution of each mobile node's data accesses; according to that probability distribution, cache the T data items with the highest access frequencies in the selfish space, and cache R data items selected from the remaining data in the altruistic space.
After the cache space of each mobile node has been divided into a selfish space and an altruistic space, the data to be cached in the two parts must be determined. First the frequency and content of the data accessed by each mobile node are obtained; from these, the probability distribution of each node's data accesses can be derived. This probability distribution determines the node's data access distribution, so that data can be placed in the cache accordingly.
In a further embodiment, the step of obtaining the access frequency and content of each mobile node's data and deriving its access probability distribution specifically comprises: recording the frequency and content of the data accessed by the mobile node and describing the probability distribution of the user's data accesses by means of a histogram, thereby obtaining each mobile node's data access distribution. Once the probability distribution has been obtained, the node's data access pattern can be clearly understood, so that the data to be cached in the selfish and altruistic spaces can be further determined.
In this embodiment, so that the R data items chosen from the remaining data are well placed in the altruistic space, the R items are selected from the remaining data according to a Poisson distribution.
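As a sketch of the two-part placement, the following illustration ranks a node's access distribution, keeps the top T items for the selfish space, and approximates the Poisson-distributed altruistic selection by picking each remaining item independently with probability R/(m-T) (so the number picked has mean R, and across n nodes each remaining item receives about nR/(m-T) cache requests). The function name, data identifiers, and the Bernoulli approximation are illustrative assumptions, not taken from the patent:

```python
import random

def place_cache(dist, T, R, rng=None):
    # Selfish space: the T items this node itself accesses most often.
    # Altruistic space: roughly R of the remaining items, each picked
    # independently with probability R/(m - T).
    rng = rng or random.Random(0)
    ranked = sorted(dist, key=dist.get, reverse=True)
    selfish = ranked[:T]
    remaining = ranked[T:]
    p = R / len(remaining)
    altruistic = [d for d in remaining if rng.random() < p]
    return selfish, altruistic

dist = {'d1': 0.4, 'd2': 0.3, 'd3': 0.2, 'd4': 0.1}
selfish, altruistic = place_cache(dist, T=2, R=1)
# selfish == ['d1', 'd2']; altruistic is a subset of ['d3', 'd4']
```

The independent picks are one simple way to realise an approximately Poisson-distributed number of altruistic items per node.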
Further, caching the T most frequently accessed data items in the selfish space according to the probability distribution, and caching R items chosen from the remaining data in the altruistic space, proceeds as follows:
each mobile node selects the T data items with the highest access frequencies according to the probability distribution, then sends cache requests to the data sources, requesting to become a cache node for those data;
each mobile node selects R data items from the remaining data according to a Poisson distribution, then sends cache requests to the data sources;
once a data source has collected enough (nR/(m-T)) cache requests, it sends copies of the data to the requesting mobile nodes, where n is the number of mobile nodes and m is the number of data items.
Further, a nearest-cache-node list is created to maintain, for each data item that a mobile node needs to access, the information of the nearest cache node;
if a requesting mobile node receives a data copy sent by the corresponding data source, it updates the data; if a data copy merely passes through a node, that node updates its local nearest-cache-node list.
If a mobile node receives the corresponding data within the validity period of its cache request, it uses the data and the cache request is counted as successful; otherwise it discards the data or, if it still has storage space, stores the data as a copy.
If an intermediate mobile node receives a data cache request and happens to hold a copy of the requested data, it responds to the requesting mobile node immediately; if it holds no copy, it forwards the request to the nearest cache node it knows of.
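The forwarding rule above can be sketched with a minimal stand-in for a mobile node; the class, its attribute names, and the hop counter are illustrative assumptions, not part of the patent:

```python
class Node:
    # Minimal stand-in for a mobile node: a local copy store plus a
    # table pointing at the nearest known cache node for each item.
    def __init__(self, name):
        self.name = name
        self.copies = {}          # item id -> data
        self.nearest_cache = {}   # item id -> Node holding a copy

    def request(self, item, hops=0):
        # A node holding a copy responds immediately; otherwise it
        # forwards the request toward the nearest cache node it knows.
        if item in self.copies:
            return self.copies[item], hops
        return self.nearest_cache[item].request(item, hops + 1)

a, b = Node("a"), Node("b")
b.copies["d1"] = "payload"
a.nearest_cache["d1"] = b
# a.request("d1") -> ("payload", 1): one hop to the nearest cache node
```

Returning the hop count alongside the data makes it easy to relate this sketch to the average access hop count defined below in the patent's own analysis.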
Meanwhile, in order to reasonably assess the overhead for a mobile node in the wireless ad hoc network to access data, to optimize the overall overhead of user data access, and to adjust the number of data items in the selfish and altruistic spaces, the method further comprises:
defining an average access hop count (Average Access Hops) and using it to assess the average overhead for each mobile node to access each data item, where the average access hop count is defined as:
$h = \frac{1}{2} \times \frac{\log n}{\log\left(\frac{nR}{2}\right) - \log(m - T)}$
where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, obtained from the probability distribution.
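A direct numeric reading of the formula can be sketched as below. Note the original equation is garbled in this text; it is read here as log(nR/2) in the denominator, and the parameter values are illustrative, not taken from the patent:

```python
import math

def average_access_hops(n, m, R, T):
    # h = (1/2) * log(n) / (log(nR/2) - log(m - T)),
    # reading the reconstructed formula with nR/2 in the first log.
    return 0.5 * math.log(n) / (math.log(n * R / 2) - math.log(m - T))

h = average_access_hops(n=100, m=50, R=10, T=10)
# With these illustrative parameters h comes out just under one hop.
```

Larger R (more altruistic caching across the network) widens the denominator and drives the expected hop count down, which matches the intuition behind splitting the cache.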
Further, the average access hop count is derived as follows: using the theory of random bipartite graphs from random graph theory, the set of mobile nodes and the set of data items are taken as the two parts of a random bipartite graph, which establishes an access-relation graph between mobile nodes and data. This reduces the impact of mobile-network topology changes on the access relation.
The average access hop count measures the distance and overhead for a mobile node to access any data item; T and R are optimized and adjusted accordingly so as to maximally reduce the overall overhead of user data access.
As for selecting the number T of most frequently accessed data items according to the user data access distribution: in a further embodiment, the user data accesses are assumed to follow a Zipf-like distribution, and the optimal T is given by the following formula:
$T_{\mathrm{opt}} \approx 62.5 \times \theta - 10$
where $\theta$ is the key parameter of the Zipf-like distribution $Z(\theta)$:
$p(k) = \frac{1/k^{\theta}}{\sum_{i=1}^{m} 1/i^{\theta}}$
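The Zipf-like distribution and the fitted rule for the optimal T can be sketched numerically; the function names and the sample θ are illustrative:

```python
def zipf_pmf(m, theta):
    # p(k) = (1/k^theta) / sum_{i=1}^{m} 1/i^theta,
    # with k = 1 the most popular of the m data items.
    norm = sum(1.0 / i ** theta for i in range(1, m + 1))
    return [(1.0 / k ** theta) / norm for k in range(1, m + 1)]

def optimal_T(theta):
    # The patent's fitted formula: T_opt ~= 62.5 * theta - 10.
    return 62.5 * theta - 10

probs = zipf_pmf(100, 0.8)
# probs sums to 1 and decreases with rank; optimal_T(0.8) is about 40.
```

A larger θ means a more skewed access pattern, so more of the cache is worth devoting to the node's own top items, which is what the linear T_opt rule captures.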
To test the effect of the inventive method, it was compared with the existing BDC method (Benefit-based Data Caching, BDC for short). The inventive method is named AACC. A simulation environment was constructed (using NS2), and two key indicators of data caching were compared: the average access delay and the overall message overhead of the caching system. The simulation used the wireless communication protocol IEEE 802.11 between mobile nodes and DSDV as the routing protocol. 100 mobile nodes were placed at random in a 2000 m × 500 m area. The communication radius of each mobile node is 250 m; mobile node 0 serves as the data source; the data size is 1500 bytes; and the frequency with which each mobile node accesses data is generated at random. In addition, to guarantee data consistency between the data source and the cache nodes, a TTL-based cache consistency algorithm was used, with a data validity period of 8 seconds.
The simulation results are shown in Figs. 2 and 3; in both figures the abscissa is the mean interval at which each mobile node generates data requests. The ordinate of Fig. 2 is the overall message overhead, which includes data request messages, data return messages, data update messages and all other messages in the caching system, but excludes routing messages. The ordinate of Fig. 3 is the average access delay of the cached data.
From the simulation comparison in Figs. 2 and 3 it is evident that both the overall message overhead and the average cached-data access delay of the inventive method are clearly better than those of the BDC method.
With this cache placement method based on data access distribution, the cache space of each mobile node is divided into a selfish space and an altruistic space, making full use of the node's cache; the T data items with the highest access frequencies are cached in the selfish space according to each mobile node's data access distribution, and R data items selected from the remaining data are cached in the altruistic space. The most frequently accessed data are thus cached effectively according to the user's data access distribution, while the data expected by other mobile nodes are cached on their behalf. The two principal factors of user data access, namely its frequency distribution and its distance, are fully considered, the cache space of the mobile nodes is maximally used, and the overall overhead of user data access is effectively reduced.
Meanwhile, the present invention also provides a cache placement system based on data access distribution; as shown in Fig. 4, the system includes:
a cache space division module 100, which divides the cache space of each mobile node into two parts, a selfish space and an altruistic space, where the selfish space is used to cache the data that the mobile node itself accesses most frequently, and the altruistic space is used to cache data demanded by other mobile nodes.
To make full use of the cache space of the mobile nodes and reduce the overall overhead of user data access, in this embodiment the cache space division module 100 divides the cache space of each mobile node into two parts, a selfish space and an altruistic space, used respectively to cache the data the node itself accesses most frequently and the data in which other mobile nodes are interested. Caching the data that other mobile nodes are interested in and demand, on top of the node's own access-frequency distribution, takes full account of the two principal factors of user data access, namely its frequency and its distance, makes maximal use of the mobile nodes' cache space, and reduces the overall overhead of user data access.
A data caching module 200 obtains the frequency and content of the data accessed by each mobile node, derives the probability distribution of each mobile node's data accesses, caches the T data items with the highest access frequencies in the selfish space according to that probability distribution, and caches R data items selected from the remaining data in the altruistic space.
After the cache space of each mobile node has been divided into a selfish space and an altruistic space, the data to be cached in the two parts must be determined. First the data caching module 200 obtains the frequency and content of the data accessed by each mobile node; from these, the probability distribution of each node's data accesses can be derived. This probability distribution determines the node's data access distribution, so that data can be placed in the cache accordingly.
In a further embodiment, the data caching module 200 records the frequency and content of the data accessed by the mobile node and describes the probability distribution of the user's data accesses by means of a histogram, thereby obtaining each mobile node's data access distribution. Once the probability distribution has been obtained, the data caching module 200 can clearly understand the node's data access pattern and thus further determine the data to be cached in the selfish and altruistic spaces.
In this embodiment, so that the R data items chosen from the remaining data are well placed in the altruistic space, the data caching module 200 selects the R items from the remaining data according to a Poisson distribution.
Further, the data caching module 200 controls each mobile node to select the T data items with the highest access frequencies according to the probability distribution, then to send cache requests to the data sources, requesting to become a cache node for those data;
controls each mobile node to select R data items from the remaining data according to a Poisson distribution, then to send cache requests to the data sources; and,
once a data source has collected enough (nR/(m-T)) cache requests, controls the data source to send copies of the data to the requesting mobile nodes, where n is the number of mobile nodes and m is the number of data items.
Further, the data caching module 200 creates a nearest-cache-node list to maintain, for each data item that a mobile node needs to access, the information of the nearest cache node;
if a requesting mobile node receives a data copy sent by the corresponding data source, it updates the data; if a data copy merely passes through a node, that node updates its local nearest-cache-node list.
If a mobile node receives the corresponding data within the validity period of its cache request, it uses the data and the cache request is counted as successful; otherwise it discards the data or, if it still has storage space, stores the data as a copy.
If an intermediate mobile node receives a data cache request and happens to hold a copy of the requested data, it responds to the requesting mobile node immediately; if it holds no copy, it forwards the request to the nearest cache node it knows of.
Meanwhile, in order to reasonably assess the overhead for a mobile node in the wireless ad hoc network to access data, to optimize the overall overhead of user data access, and to adjust the number of data items in the selfish and altruistic spaces, the data caching module 200 defines an average access hop count (Average Access Hops) and uses it to assess the average overhead for each mobile node to access each data item, where the average access hop count is defined as:
$h = \frac{1}{2} \times \frac{\log n}{\log\left(\frac{nR}{2}\right) - \log(m - T)}$
where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, obtained from the probability distribution.
Further, the data caching module 200 adopts the theory of random bipartite graphs from random graph theory, taking the set of mobile nodes and the set of data items as the two parts of a random bipartite graph and thereby establishing an access-relation graph between mobile nodes and data. This reduces the impact of mobile-network topology changes on the access relation.
The average access hop count measures the distance and overhead for a mobile node to access any data item; T and R are optimized and adjusted accordingly so as to maximally reduce the overall overhead of user data access.
As for selecting the number T of most frequently accessed data items according to the user data access distribution: in a further embodiment, the data caching module 200 assumes that user data accesses follow a Zipf-like distribution, and the optimal T is given by the following formula:
$T_{\mathrm{opt}} \approx 62.5 \times \theta - 10$
where $\theta$ is the key parameter of the Zipf-like distribution $Z(\theta)$:
$p(k) = \frac{1/k^{\theta}}{\sum_{i=1}^{m} 1/i^{\theta}}$
With this cache placement system based on data access distribution, the cache space division module divides the cache space of each mobile node into a selfish space and an altruistic space, making full use of the node's cache; the data caching module caches the T data items with the highest access frequencies in the selfish space according to each mobile node's data access distribution and caches R data items selected from the remaining data in the altruistic space. The most frequently accessed data are thus cached effectively according to the user's data access distribution, while the data expected by other mobile nodes are cached on their behalf. The two principal factors of user data access, namely its frequency distribution and its distance, are fully considered, the cache space of the mobile nodes is maximally used, and the overall overhead of user data access is effectively reduced.
With the cache placement method and system based on data access distribution of the present invention, each mobile node's cache space is divided into two parts, a selfish space and an altruistic space, so as to make full use of the node's cache space. The T data items with the highest access frequency, chosen according to each mobile node's data access probability distribution, are cached in the selfish space, and R data items chosen from the remaining data are cached in the altruistic space. The most frequently accessed data is thereby cached according to the user data access distribution, while each mobile node also contributes to the others by caching data they are expected to need. By taking full account of the two principal factors of user data access, its frequency distribution and its access distance, the invention maximizes the use of the mobile nodes' cache space and effectively reduces the overall overhead of user data access.
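The placement scheme summarized above can be sketched for a single node as follows; the helper names, the uniform choice among the remaining items, and the hand-rolled Poisson sampler are assumptions for illustration, not details from the patent:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for drawing a Poisson-distributed integer (small lambda).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def place_caches(access_prob, T, R_mean, seed=0):
    """Split one node's cache: the T most frequently accessed items go to
    the selfish space; R items (R ~ Poisson(R_mean)) drawn from the
    remaining data go to the altruistic space."""
    rng = random.Random(seed)
    # Rank items by this node's access probability, highest first.
    ranked = sorted(access_prob, key=access_prob.get, reverse=True)
    selfish = ranked[:T]
    remaining = ranked[T:]
    # Draw the altruistic count, capped by how many items are left.
    r = min(poisson_sample(R_mean, rng), len(remaining))
    altruistic = rng.sample(remaining, r)
    return selfish, altruistic

# Example: 10 items with Zipf-ish access weights and T = 3 selfish slots.
probs = {f"d{k}": 1.0 / k for k in range(1, 11)}
selfish, altruistic = place_caches(probs, T=3, R_mean=2)
```

The two spaces never overlap: an item either earns a selfish slot through this node's own access frequency or is randomly picked into the altruistic space on behalf of other nodes.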
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A cache placement method based on data access distribution, characterized by comprising the following steps:
S10: dividing each mobile node's cache space into two parts, a selfish space and an altruistic space, wherein the selfish space is used to cache data that the mobile node itself accesses frequently and the altruistic space is used to cache data that other mobile nodes need;
S20: obtaining the frequency and content of each mobile node's data accesses to derive each mobile node's data access probability distribution, caching in the selfish space the T data items with the highest access frequency chosen according to that probability distribution, and caching in the altruistic space R data items chosen from the remaining data.
2. The cache placement method based on data access distribution according to claim 1, characterized in that the step of obtaining the frequency and content of each mobile node's data accesses to derive each mobile node's data access probability distribution specifically comprises:
recording the frequency and content of each mobile node's data accesses and describing the user data access distribution as a histogram, thereby obtaining each mobile node's data access probability distribution.
3. The cache placement method based on data access distribution according to claim 1, characterized in that the step of caching in the altruistic space R data items chosen from the remaining data specifically comprises:
when choosing from the remaining data, selecting R data items according to a Poisson distribution and caching them in the altruistic space.
4. The cache placement method based on data access distribution according to claim 3, characterized in that the step of caching in the selfish space the T data items with the highest access frequency chosen according to the probability distribution and caching in the altruistic space R data items chosen from the remaining data specifically comprises:
each mobile node selecting the T data items with the highest access frequency according to the probability distribution, then sending cache requests to the data sources to request to become a cache node for those data items;
each mobile node selecting R data items from the remaining data according to a Poisson distribution, then sending cache requests to the data sources;
after a data source has collected enough (nR/(m - T)) cache requests, sending a copy of the data to each mobile node that issued a corresponding cache request, where n is the number of mobile nodes and m is the number of data items.
5. The cache placement method based on data access distribution according to claim 1, characterized in that the method further comprises the step of:
defining an average access hop count, by which the average overhead for each mobile node to access each data item is evaluated, the average access hop count being defined as:

h = (1/2) × log n / (log(nR/2) - log(m - T))

where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, derived from the probability distribution.
6. A cache placement system based on data access distribution, characterized by comprising:
a cache space dividing module, which divides each mobile node's cache space into two parts, a selfish space and an altruistic space, wherein the selfish space is used to cache data that the mobile node itself accesses frequently and the altruistic space is used to cache data that other mobile nodes need; and
a data cache module, which obtains the frequency and content of each mobile node's data accesses to derive each mobile node's data access probability distribution, caches in the selfish space the T data items with the highest access frequency chosen according to that probability distribution, and caches in the altruistic space R data items chosen from the remaining data.
7. The cache placement system based on data access distribution according to claim 6, characterized in that the data cache module records the frequency and content of each mobile node's data accesses and describes the user data access distribution as a histogram, thereby obtaining each mobile node's data access probability distribution.
8. The cache placement system based on data access distribution according to claim 6, characterized in that, when choosing from the remaining data, the data cache module selects R data items according to a Poisson distribution and caches them in the altruistic space.
9. The cache placement system based on data access distribution according to claim 8, characterized in that the data cache module controls each mobile node to select the T data items with the highest access frequency according to the probability distribution and then send cache requests to the data sources, requesting to become a cache node for those data items;
controls each mobile node to select R data items from the remaining data according to a Poisson distribution and then send cache requests to the data sources; and
after a data source has collected enough (nR/(m - T)) cache requests, controls the data source to send a copy of the data to each mobile node that issued a corresponding cache request, where n is the number of mobile nodes and m is the number of data items.
10. The cache placement system based on data access distribution according to claim 6, characterized in that the data cache module defines an average access hop count, by which the average overhead for each mobile node to access each data item is evaluated, the average access hop count being defined as:

h = (1/2) × log n / (log(nR/2) - log(m - T))

where h is the average access hop count, n is the number of mobile nodes, m is the number of data items, R is the expected number of data items cached in the altruistic space, and T is the optimal number of data items cached in the selfish space, derived from the probability distribution.
CN201610057645.8A 2016-01-28 2016-01-28 Cache placement method and system based on data access distribution Active CN105743975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610057645.8A CN105743975B (en) 2016-01-28 2016-01-28 Cache placement method and system based on data access distribution

Publications (2)

Publication Number Publication Date
CN105743975A true CN105743975A (en) 2016-07-06
CN105743975B CN105743975B (en) 2019-03-05

Family

ID=56246820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610057645.8A Active CN105743975B (en) 2016-01-28 2016-01-28 Caching laying method and system based on data access distribution

Country Status (1)

Country Link
CN (1) CN105743975B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102354301A (en) * 2011-09-23 2012-02-15 浙江大学 Cache partitioning method
CN103052114A (en) * 2012-12-21 2013-04-17 中国科学院深圳先进技术研究院 Data cache placement system and data caching method
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method
US8706971B1 (en) * 2012-03-14 2014-04-22 Netapp, Inc. Caching and deduplication of data blocks in cache memory
US20140215156A1 (en) * 2013-01-30 2014-07-31 Electronics And Telecommunications Research Institute Prioritized dual caching method and apparatus
CN104516952A (en) * 2014-12-12 2015-04-15 华为技术有限公司 Memory partition deployment method and device
CN104714753A (en) * 2013-12-12 2015-06-17 中兴通讯股份有限公司 Data access and storage method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HYOTAEK SHIM; BON-KEUN SEO; JIN-SOO KIM; SEUNGRYOUL MAENG: "Adapting Cache Partitioning Algorithms to Pseudo-LRU Replacement Policies", 2010 IEEE 26TH SYMPOSIUM ON MASS STORAGE SYSTEMS AND TECHNOLOGIES *
ZENG Wenfeng; XU Yinlong: "A P2P Video-on-Demand System Using a Partitioned Cache Scheduling Strategy", Computer Engineering *
YANG Feifei; CHEN Zhiyun; ZENG Qiumei: "A Streaming Media Caching Algorithm Based on Popularity and Segment Adaptivity", Computer Applications and Software *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106933511A (en) * 2017-02-27 2017-07-07 武汉大学 GML data storage and organization method and system considering load balancing and disk efficiency
CN106933511B (en) * 2017-02-27 2020-02-14 武汉大学 Space data storage organization method and system considering load balance and disk efficiency

Also Published As

Publication number Publication date
CN105743975B (en) 2019-03-05

Similar Documents

Publication Publication Date Title
Zeng et al. Directional routing and scheduling for green vehicular delay tolerant networks
Hail et al. Caching in named data networking for the wireless internet of things
Yasser et al. VANET routing protocol for V2V implementation: A suitable solution for developing countries
Chen et al. Cross-layer design for data accessibility in mobile ad hoc networks
Glass et al. Leveraging MANET-based cooperative cache discovery techniques in VANETs: A survey and analysis
Chithaluru et al. ARIOR: adaptive ranking based improved opportunistic routing in wireless sensor networks
Teng et al. Adaptive transmission range based topology control scheme for fast and reliable data collection
Haillot et al. A protocol for content-based communication in disconnected mobile ad hoc networks
Tiwari et al. Cooperative gateway cache invalidation scheme for internet-based vehicular ad hoc networks
CN105357278A (en) Guandu cache strategy for named-data mobile ad hoc network
Li et al. An optimized content caching strategy for video stream in edge-cloud environment
US10291474B2 (en) Method and system for distributed optimal caching of content over a network
Li et al. Joint perception data caching and computation offloading in MEC-enabled vehicular networks
Elayoubi et al. Optimal D2D Content Delivery for Cellular Network Offloading: Special Issue on Device-to-Device Communication in 5G Networks
Leira et al. Context-based caching in mobile information-centric networks
Liu et al. Real-time search-driven caching for sensing data in vehicular networks
CN105743975A (en) Cache placing method and system based on data access distribution
Mishra et al. An efficient content replacement policy to retain essential content in information-centric networking based internet of things network
de Moraes Modesto et al. Utility-gradient implicit cache coordination policy for information-centric ad-hoc vehicular networks
Caetano et al. A cluster based collaborative cache approach for MANETs
Pack et al. Proxy-based wireless data access algorithms in mobile hotspots
Ashraf et al. Dynamic cooperative cache management scheme based on social and popular data in vehicular named data network
CN101170384A (en) Method for maintaining data consistency of mobile devices in ad hoc networks
Oualil et al. A personalized learning scheme for internet of vehicles caching
González-Cañete et al. A cross layer interception and redirection cooperative caching scheme for MANETs

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant