CN112637273B - Intelligent hot spot data prediction and cache method - Google Patents

Intelligent hot spot data prediction and cache method Download PDF

Info

Publication number: CN112637273B (application CN202011412624.6A)
Authority: CN (China)
Prior art keywords: intelligent entity, data, local server
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112637273A
Inventors: 吴大鹏, 李学芳, 张普宁, 王汝言
Current assignee: Chongqing University of Posts and Telecommunications
Original assignee: Chongqing University of Posts and Telecommunications
Application filed by Chongqing University of Posts and Telecommunications; published as CN112637273A, granted and published as CN112637273B

Classifications

    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G06N3/045: Combinations of networks
    • G06N3/048: Activation functions
    • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08: Learning methods


Abstract

The invention relates to an intelligent hot spot data prediction and caching method, which belongs to the field of the Internet of Things and comprises the following steps: S1: each sensor senses the state data of the intelligent entities and periodically uploads the acquired state data to the local server; S2: the search system records the search requests submitted by local users per fixed time interval and counts how many times different intelligent entities are searched; S3: the local server mines the hidden temporal correlations in the intelligent entity data from the users' historical search records and establishes a corresponding hot spot intelligent entity prediction model; S4: the local server actively caches the state data of the hot intelligent entities through a designed dynamic caching strategy; S5: the local server quickly checks whether intelligent entity state content matching a search request exists; if not, it issues the search request to the sensor, which collects the data and returns it to the user through the local server.

Description

Intelligent hot spot data prediction and cache method
Technical Field
The invention belongs to the field of Internet of things, and relates to an intelligent hot spot data prediction and caching method.
Background
With the deepening application of the Internet of Things (IoT) in fields such as intelligent transportation, smart homes, telemedicine, and environmental monitoring, more and more devices with sensing functions, such as massive numbers of sensors and RFID tags, are being connected to the IoT. Over the next 20 years, trillions of IoT sensing devices are expected to appear in our lives, realizing a world of information sharing and the interconnection of everything. The IoT is an important component of the new generation of information technology and an important stage of the information age, and has been called the third wave of the world information industry after the computer and the Internet. Its applications now span logistics management, warehousing, intelligent transportation, smart homes, environmental monitoring, public safety, and other fields. As IoT applications deepen and people's quality-of-life requirements rise, the demand for real-time, effective, and reliable access to physical-world entity information grows ever stronger, for example searching for a nearby vacant parking space, a quiet coffee shop, an empty meeting room, or a park with better air quality.
In recent years, Internet of Things search has attracted growing attention; research on IoT search technology is vital to sharing IoT sensing resources, promoting IoT applications, and fusing the physical and information spaces. IoT search can reduce the size and scope of collected data, users can obtain intelligent entity information of the physical world through a search engine, and typical IoT applications such as smart cities have clear needs for intelligent search. At present, researchers at home and abroad have studied IoT search in depth. Xie L., Wang Z., and Wang Y. proposed a multi-keyword search mechanism for private cloud platforms in "New Multi-Keyword graphical Search Method for Sensor Network Cloud Platforms" (IEEE Sensors, pp. 3047-3058, 2018): encrypted sensor data are stored centrally in the cloud, and a hierarchical clustering method is adopted to search the encrypted sensor data efficiently and securely. M. Shen, B. Ma, L. Zhu, X. Du, and K. Xu proposed a cloud-based IoT search scheme in "Secure Phrase Search for Intelligent Processing of Encrypted Data in Cloud-Based IoT" (IEEE Internet of Things Journal, pp. 1998-2008, 2019): after a user issues an entity search request, the request is forwarded through a local server to the cloud for matching, and the search result is then returned to the user via the local server.
Disclosure of Invention
In view of this, the present invention provides an intelligent hot spot data prediction and caching method, addressing the problems that physical intelligent entities are numerous, local-server cache resources are limited, and the observations of all intelligent entities are difficult to cache. First, a hot intelligent entity prediction method is designed to predict, based on an LSTM model, the set of hot intelligent entities that attract the most attention from the user group; then, a dynamic caching strategy for hot intelligent entities is designed to cache and update them in real time; finally, the hot intelligent entity data is cached on a local server close to the user.
In order to achieve the purpose, the invention provides the following technical scheme:
an intelligent hot spot data prediction and cache method comprises the following steps:
s1: data acquisition: various types of sensors sense the state data of the intelligent entity and periodically upload the acquired state data of the intelligent entity to a local server covering the sensing range of the intelligent entity;
s2: search record sorting: the search system records search requests submitted by local users by taking a fixed time interval as a unit and records the times of searching different intelligent entities;
s3: hot intelligent entity prediction: the local server mines the hidden temporal correlations in the intelligent entity data from the users' historical search records and establishes a corresponding hot spot intelligent entity prediction model based on a Long Short-Term Memory (LSTM) network model;
s4: hot intelligent entity caching: after completing the prediction of the hot intelligent entity according to the step S3, the local server implements active caching of the state data of the hot intelligent entity through the designed dynamic caching strategy;
s5: user search: after a user issues a search request, the local server quickly checks whether intelligent entity state content matching the request exists; if it does, the entity the user is searching for is a hot intelligent entity and the result is returned directly; if not, the local server issues the search request to the sensor associated with the intelligent entity, and the sensor collects the data and returns it to the user through the local server.
Further, step S3 specifically includes the following steps:
S31: the input of the LSTM model is the vector of the number of times each intelligent entity was searched at the previous time t-1, x(t-1) = {x_1(t-1), x_2(t-1), ..., x_q(t-1), ..., x_Q(t-1)}, where q denotes a searched intelligent entity, x_q(t-1) denotes the number of times entity q was searched by users at time t-1, and Q denotes the total number of intelligent entities searched at time t-1;
S32: the LSTM network contains four gate structures that maintain and update the cell state: a forget gate f_t, an input gate i_t, a cell-state update C_t, and an output gate o_t, where t denotes the current time and f, i, C, o denote the vectors of the corresponding structures;
S33: the forget gate layer reads the output h_{t-1} of the previous step and the current input x_t, outputs the value f_t, and applies it to the previous cell state C_{t-1}; f_t is computed as f_t = σ(W_f x_t + U_f h_{t-1} + b_f), where h denotes the hidden state, x_t the input vector of the LSTM at the current time, C the cell-state vector, f_t the activation vector of the forget gate, and W_f, U_f, b_f the input weight, recurrent weight, and bias of the forget gate;
S34: the input gate layer comprises two parts: the first decides which values to write via a sigmoid function, i_t = σ(W_i x_t + U_i h_{t-1} + b_i); the other creates a new candidate vector via tanh, to be added to the state C_t, computed as C̃_t = tanh(W_C x_t + U_C h_{t-1} + b_C), where W_C, U_C, b_C are the input weight, recurrent weight, and bias of the update;
S35: the update layer refreshes the old cell state, replacing C_{t-1} with C_t, as C_t = f_t · C_{t-1} + i_t · C̃_t, where i_t denotes the input-gate vector at the current time t;
S36: the output gate layer emits a value based on the cell state; a sigmoid layer first decides which part of the cell state to output, o_t = σ(W_o x_t + U_o h_{t-1} + b_o); the cell state is then processed through tanh and multiplied by the sigmoid output, h_t = o_t · tanh(C_t); here W, U, b are the input weight, recurrent weight, and bias of each gate structure, and σ(·) and tanh(·) are activation functions;
S37: the network is trained with the back-propagation-through-time (BPTT) algorithm, iteratively correcting the weight parameters according to a predefined loss function so as to minimize the error between the predicted and actual search counts of the intelligent entities; the output is the vector of predicted search counts at time t, x*(t) = {x*_1(t), x*_2(t), ..., x*_q(t), ..., x*_Q(t)}, where x*_q(t) denotes the predicted number of times entity q is searched by users at time t;
S38: the elements of x*(t) are sorted to obtain the rank index o = {o_1, o_2, ..., o_q, ..., o_Q};
S39: o = {o_1, o_2, ..., o_q, ..., o_Q} is used as the input of a Zipf's-law model to compute the probability that each intelligent entity is searched at time t: p = [p_1, p_2, ..., p_q, ..., p_Q], where p_q denotes the probability that entity q is searched at time t.
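The sorting in step S38 turns the predicted search counts into the 1-based popularity ranks that the Zipf model then consumes. A minimal sketch in Python; the function name is an illustrative assumption, not from the patent:

```python
def rank_index(predicted_counts):
    """Step S38: sort predicted search counts in descending order and
    return each intelligent entity's 1-based popularity rank i_q."""
    order = sorted(range(len(predicted_counts)),
                   key=lambda q: predicted_counts[q], reverse=True)
    ranks = [0] * len(predicted_counts)
    for position, q in enumerate(order, start=1):
        ranks[q] = position  # the most-searched entity receives rank 1
    return ranks
```

For example, `rank_index([5, 20, 1])` yields `[2, 1, 3]`: the entity with the highest predicted count gets rank 1.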
Further, step S4 specifically includes the following steps:
s41: the local server is based on p ═ p 1 ,p 2 ,...,p q ,...,p Q ]Creating a popularity list caching hot intelligent entities, wherein the popularity represents the probability of the intelligent entities being searched;
s42: when data arrives at the local server, if the cache space is not full, it is cached directly in popularity-list order; if the cache space is full, the local server matches the names of the intelligent entity data already cached against the arriving intelligent entity data, and on a match the cached data is replaced directly by the arriving data;
s43: if no name matches, the popularity of the arriving intelligent entity is computed; if it is greater than the minimum popularity in the popularity list, the cached data corresponding to that minimum popularity is replaced by the arriving entity's data; if it is smaller, the arriving intelligent entity data is not cached;
s44: the cache system starts a task for monitoring the cache data to be out of date at the background, detects the out-of-date cache data regularly, and informs the local server to update the data once the cache is detected to be out of date.
Further, step S5 specifically includes the following steps:
s51: the hot intelligent entity state data is actively cached in a local server close to the user so as to meet the search requirement of the user;
s52: when a user submits a command to search for an intelligent entity in a given state, the search system sends the search request to the local server; after receiving the request message, the local server quickly checks whether matching intelligent entity state data exists, so as to determine the type of intelligent entity being searched;
s53: if the local server matches intelligent entity state data relevant to the search request, the user is searching for a hot intelligent entity, and the search result is returned to the user directly, reducing search latency and improving search precision;
s54: if the state data of the intelligent entity searched by the user is not on the local server, the entity is judged to be an ordinary intelligent entity; the local server issues the search request to the sensor associated with that entity, and the sensor collects the data and returns it to the user through the local server.
The invention has the beneficial effects that: the proposed hot spot intelligent entity prediction and caching method effectively reduces frequent communication between users and data sources and reduces the remote center's involvement in answering local users' search requests, thereby greatly lowering the cost and latency of acquiring intelligent entity state data and improving search accuracy.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a system architecture diagram of an intelligent hot spot data prediction and caching method according to the present invention;
FIG. 2 is a flowchart of an intelligent hot spot data prediction and caching method according to the present invention;
FIG. 3 is a diagram of a hot spot intelligent entity prediction model according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
As shown in fig. 1 and fig. 2, an intelligent hot spot data prediction and caching method preferably includes the following steps:
step one, data acquisition: various types of sensors sense the state data of the intelligent entity and periodically upload the acquired state data of the intelligent entity to a local server covering the sensing range of the intelligent entity.
Step two, search record sorting: the search system records search requests submitted by local users in a fixed time interval unit and records the times of searching different intelligent entities so that the local server can predict a hot intelligent entity set.
Step three, hot intelligent entity prediction: the local server mines the hidden temporal correlations in the intelligent entity data from the users' historical search records and establishes a corresponding hot spot intelligent entity prediction model. The preferred method specifically comprises the following steps:
Step three (one): the input of the LSTM model is the vector of the number of times each intelligent entity was searched at the previous time t-1, x(t-1) = {x_1(t-1), x_2(t-1), ..., x_q(t-1), ..., x_Q(t-1)}, where q denotes a searched intelligent entity, x_q(t-1) denotes the number of times entity q was searched by users at time t-1, and Q denotes the total number of intelligent entities searched at time t-1;
Step three (two): the LSTM network has four gate structures that maintain and update the cell state: a forget gate f_t, an input gate i_t, a cell-state update C_t, and an output gate o_t, where f, i, C, o denote the vectors of the corresponding structures and t denotes the current time.
Step three (three): the forget gate layer reads the output h_{t-1} of the previous step and the current input x_t, outputs the value f_t, and applies it to the previous cell state C_{t-1}. Here f_t = σ(W_f x_t + U_f h_{t-1} + b_f), where h denotes the hidden state, x_t the input vector of the LSTM at the current time t, C the cell-state vector, f_t the activation vector of the forget gate at time t, W_f, U_f, b_f the input weight, recurrent weight, and bias of the forget gate, and σ(·) an activation function;
Step three (four): the input gate layer comprises two parts. The first part decides which values to write via a sigmoid function; the other creates a new candidate vector via tanh, to be added to the state C_t. The input gate is computed as i_t = σ(W_i x_t + U_i h_{t-1} + b_i) and the new candidate vector as C̃_t = tanh(W_C x_t + U_C h_{t-1} + b_C), where W_i, U_i, b_i are the input weight, recurrent weight, and bias of the input gate, W_C, U_C, b_C those of the update, and tanh(·) is an activation function;
Step three (five): the update layer refreshes the old cell state, replacing C_{t-1} with C_t, as C_t = f_t · C_{t-1} + i_t · C̃_t, where i_t denotes the input-gate vector at the current time t;
Step three (six): the output gate layer emits a value based on the cell state. A sigmoid layer is first run to determine which part of the cell state will be output; the cell state is then processed through tanh and multiplied by the output of the sigmoid gate; the selected part is finally output. The output of this gate layer is o_t = σ(W_o x_t + U_o h_{t-1} + b_o) and the hidden state is h_t = o_t · tanh(C_t), where W_o, U_o, b_o are the input weight, recurrent weight, and bias of the output gate layer, and σ(·) and tanh(·) are activation functions, as shown in FIG. 3;
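The cell computations of steps three (three) through three (six) can be sketched in plain Python with nested lists; the function names and the keys of the `params` dictionary are illustrative assumptions, and the BPTT training described in step three (eight) is omitted:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(W, v):
    """Matrix-vector product for plain nested lists."""
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in W]

def gate(W, U, b, x_t, h_prev, act):
    """Generic gate: act(W x_t + U h_{t-1} + b), applied elementwise."""
    Wx, Uh = matvec(W, x_t), matvec(U, h_prev)
    return [act(a + c + d) for a, c, d in zip(Wx, Uh, b)]

def lstm_step(params, x_t, h_prev, C_prev):
    """One LSTM cell step following steps three (three) to three (six)."""
    f_t = gate(params["Wf"], params["Uf"], params["bf"], x_t, h_prev, sigmoid)      # forget gate
    i_t = gate(params["Wi"], params["Ui"], params["bi"], x_t, h_prev, sigmoid)      # input gate
    C_cand = gate(params["Wc"], params["Uc"], params["bc"], x_t, h_prev, math.tanh) # candidate vector
    # cell-state update: C_t = f_t * C_{t-1} + i_t * C~_t
    C_t = [f * c_old + i * c_new for f, c_old, i, c_new in zip(f_t, C_prev, i_t, C_cand)]
    o_t = gate(params["Wo"], params["Uo"], params["bo"], x_t, h_prev, sigmoid)      # output gate
    h_t = [o * math.tanh(c) for o, c in zip(o_t, C_t)]                              # hidden state
    return h_t, C_t
```

With all weights and biases zero, every gate evaluates to σ(0) = 0.5 and the candidate to tanh(0) = 0, so the cell state simply halves, a convenient sanity check.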
Step three (seven): thanks to its four gate structures, the LSTM can realize a selective memory function and is therefore good at processing time-series data. Exploiting this advantage, the LSTM is used to predict the number of times each intelligent entity will be searched at the next moment, so that the intelligent entities users search for can be cached more accurately;
Step three (eight): training uses the back-propagation-through-time (BPTT) algorithm, iteratively correcting the weight parameters of the network according to a predefined loss function so as to minimize the error between the predicted and actual search counts of the intelligent entities. The output is the vector of predicted search counts at time t, x*(t) = {x*_1(t), x*_2(t), ..., x*_q(t), ..., x*_Q(t)}, where x*_q(t) denotes the predicted number of times entity q is searched by users at time t;
Step three (nine): the elements of x*(t) are sorted to obtain the rank index o = {o_1, o_2, ..., o_q, ..., o_Q};
Step three (ten): o = {o_1, o_2, ..., o_q, ..., o_Q} is used as the input of a Zipf's-law model:

p_q = (1 / i_q^α) / Σ_{k=1}^{Q} (1 / i_k^α)

where i_q is the rank of intelligent entity q's search-request count within the entire local server, the denominator Σ_{k=1}^{Q} (1 / i_k^α) sums this quantity over the ranks of all intelligent entities' search-request counts, and α is the parameter of the Zipf model that characterizes the popularity distribution of the intelligent entities. As α increases, search requests concentrate more heavily on a few hot intelligent entities. The probability that each intelligent entity is searched at time t can therefore be computed: p = [p_1, p_2, ..., p_q, ..., p_Q], where p_q denotes the probability that entity q is searched at time t.
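The Zipf's-law popularity model of step three (ten) takes only a few lines of Python; the function name and the default α = 0.8 are illustrative assumptions (the patent does not fix a value for α):

```python
def zipf_probabilities(ranks, alpha=0.8):
    """Zipf's-law popularity: p_q proportional to 1 / i_q**alpha,
    normalised over all Q intelligent entities."""
    weights = [1.0 / (r ** alpha) for r in ranks]
    total = sum(weights)
    return [w / total for w in weights]
```

Larger values of `alpha` skew the distribution further toward the top-ranked (hot) entities, matching the observation above.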
Step four, hot spot intelligent entity caching: after predicting the hot intelligent entity set with high local user group attention according to the steps, the local server realizes active cache of the state data of the hot intelligent entities through the designed dynamic cache strategy. The preferable method specifically comprises the following steps:
step four (one), the local server is according to p ═ p 1 ,p 2 ,...,p q ,...,p Q ]Creating a popularity list caching hot intelligent entities, wherein the popularity represents the probability of the intelligent entities being searched;
Step four (two): when data arrives at the local server, if the cache space is not full, it is cached directly in popularity-list order; if the cache space is full, the local server matches the names of the intelligent entity data already cached against the arriving intelligent entity data, and on a match the cached data is replaced directly by the arriving data, since Internet of Things data is time-sensitive;
Step four (three): if no name matches, the popularity of the arriving intelligent entity is computed as

p_k = n_k / Σ_{j=1}^{N} n_j

where N denotes the number of all intelligent entities within the coverage of the local server, k denotes the name of the arriving intelligent entity, n_k denotes the total number of times entities named k were searched at the local server in a unit period, and Σ_{j=1}^{N} n_j denotes the total number of search requests the local server received in that period. At the end of each period, the local server clears the counts n_k and starts counting again.
Step four (four): if the popularity of the arriving intelligent entity is greater than the minimum popularity in the popularity list, the cached data corresponding to that minimum popularity is replaced by the arriving entity's data; if it is smaller, the arriving intelligent entity data is not cached.
Step four (five): since Internet of Things data has freshness and life-cycle characteristics, the cache system also starts a background task that monitors cache expiry, periodically detects expired cache data, and notifies the local server to update the data as soon as expiry is detected, so as to guarantee the validity of the data.
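The arrival and expiry rules of steps four (one) through four (five) can be sketched as a small Python class; the class name, attribute names, and the TTL default are illustrative assumptions, as the patent does not specify concrete data structures:

```python
import time

class HotEntityCache:
    """Sketch of the dynamic caching strategy: name-match replacement,
    eviction of the least popular entry, and TTL-based expiry."""

    def __init__(self, capacity, ttl_seconds=60.0, clock=time.monotonic):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.clock = clock            # injectable for testing
        self.entries = {}             # name -> (popularity, data, stored_at)

    def offer(self, name, popularity, data):
        """Apply the arrival rules; return True if the data was cached."""
        now = self.clock()
        if name in self.entries:      # name match: replace the stale copy
            self.entries[name] = (popularity, data, now)
            return True
        if len(self.entries) < self.capacity:  # cache not full: store directly
            self.entries[name] = (popularity, data, now)
            return True
        # full cache: compare against the minimum popularity in the list
        victim = min(self.entries, key=lambda n: self.entries[n][0])
        if popularity > self.entries[victim][0]:
            del self.entries[victim]
            self.entries[name] = (popularity, data, now)
            return True
        return False                  # less popular than everything cached

    def purge_expired(self):
        """Background expiry check: drop entries past their TTL so the
        local server can be asked to refresh them."""
        now = self.clock()
        expired = [n for n, (_, _, t) in self.entries.items() if now - t > self.ttl]
        for n in expired:
            del self.entries[n]
        return expired
```

The clock is injectable so the expiry path can be exercised without waiting; a real deployment would keep the `time.monotonic` default.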
Step five, user search: after a user issues a search request, the local server quickly checks whether intelligent entity state content matching the request exists. If it does, the entity the user is searching for is a hot intelligent entity and the result is returned directly; if not, the entity is an ordinary intelligent entity, and the local server issues the search request to the sensor associated with it; the sensor collects the data and returns it to the user through the local server, completing the whole search process. The preferred method specifically comprises the following steps:
step five (one): the state data of hot intelligent entities is actively cached in a local server close to the user so as to satisfy the user's search demands;
step five (two): unlike the traditional search mode, when a user submits a command to search for an intelligent entity in a given state, the search system sends a search request to the local server; after receiving the request message, the local server quickly checks whether matching intelligent entity state data exists, so as to distinguish the type of the searched intelligent entity;
step five (three): if the local server matches intelligent entity state data related to the search request, the user is searching for a hot intelligent entity, and the search result is returned to the user directly, which reduces search latency and improves search precision;
step five (four): if the state data of the intelligent entity searched by the user is not on the local server, the searched entity is judged to be an ordinary intelligent entity; the local server issues the search request to the sensor associated with that intelligent entity, and after the sensor collects the data it is returned to the user through the local server.
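The search flow of step five can be sketched as follows. This is a minimal illustrative sketch, not part of the patented method; all names (LocalServer, query_sensor, the entity ids) are hypothetical.

```python
# Hypothetical sketch of the step-five search flow: a cache hit means the
# searched entity is "hot" and is answered locally; a miss falls back to
# querying the sensor associated with the entity.

import time

class LocalServer:
    def __init__(self, sensor_query_fn):
        self.cache = {}                  # entity id -> (state_data, expiry_time)
        self.query_sensor = sensor_query_fn

    def search(self, entity_id):
        entry = self.cache.get(entity_id)
        if entry is not None and entry[1] > time.time():
            return entry[0], "hot"       # hot entity: answer from local cache
        # ordinary entity: forward the request to the associated sensor
        data = self.query_sensor(entity_id)
        return data, "ordinary"

# usage
server = LocalServer(sensor_query_fn=lambda eid: f"fresh-state-of-{eid}")
server.cache["lamp-3"] = ("on", time.time() + 60)
print(server.search("lamp-3"))   # answered from cache
print(server.search("door-7"))   # answered via the sensor
```

The split mirrors the claim: only entities predicted to be hot ever reach the cache, so a miss is treated as an ordinary entity rather than triggering an on-demand cache fill.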
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (2)

1. An intelligent hot spot data prediction and cache method is characterized in that: the method comprises the following steps:
s1: data acquisition: various types of sensors sense state data of the intelligent entity and periodically upload the acquired state data of the intelligent entity to a local server covering the sensing range of the intelligent entity;
s2: search record sorting: the search system records search requests submitted by local users by taking a fixed time interval as a unit and records the times of searching different intelligent entities;
S3: hot intelligent entity prediction: the local server mines the hidden time-domain correlations in the intelligent entity data from the users' historical search records and establishes a corresponding hot-spot intelligent entity prediction model based on a long short-term memory (LSTM) network model; the method specifically comprises the following steps:
S31: the input of the LSTM model is the vector of search counts at the previous time t-1, x(t-1) = {x_1(t-1), x_2(t-1), ..., x_q(t-1), ..., x_Q(t-1)}, where q denotes a particular searched intelligent entity, x_q(t-1) denotes the number of times intelligent entity q was searched by users at the previous time t-1, and Q denotes the total number of intelligent entities searched at time t-1;
S32: the LSTM network comprises four gate structures for maintaining and updating the cell state: a forget gate f_t, an input gate i_t, an update gate C_t, and an output gate o_t, where t denotes the current time and f, i, C and o denote the vectors corresponding to the four different gate structures;
S33: the forget gate layer reads the output h_{t-1} of the previous step and the input x_t at the current time, and outputs a value f_t that is applied to the previous cell state C_{t-1}; f_t is computed as f_t = σ(W_f x_t + U_f h_{t-1} + b_f), where h denotes the hidden state, x_t denotes the input vector of the LSTM at the current time, C denotes the cell state vector, f_t denotes the activation vector of the forget gate, and W_f, U_f and b_f denote the input weight, recurrent weight and bias of the forget gate, respectively;
S34: the input gate layer comprises two parts: the first decides which values to input through a sigmoid function; the second creates a new candidate vector C̃_t through a tanh function, which is added to the cell state C_t; the new candidate vector is computed as C̃_t = tanh(W_C x_t + U_C h_{t-1} + b_C), where W_C, U_C and b_C denote the input weight, recurrent weight and bias of the update gate, respectively;
S35: the update gate layer updates the old cell state, updating C_{t-1} to C_t as C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t, where i_t denotes the input gate activation vector at the current time t and ⊙ denotes element-wise multiplication;
S36: the output gate layer outputs a value based on the cell state; first, a sigmoid layer determines which part of the cell state is to be output; the cell state is then passed through tanh and multiplied by the output of the sigmoid gate; finally, the selected part is output; here W, U and b denote the input weight, recurrent weight and bias of each gate structure, respectively, and σ(·) and tanh(·) are activation functions;
S37: training is performed with the back-propagation through time (BPTT) algorithm, which unrolls the network in time and iteratively corrects the weight parameters according to a predefined loss function so as to minimize the error between the predicted and actual numbers of times the intelligent entities are searched; the output is the vector of predicted search counts of each intelligent entity at time t, x*(t) = {x*_1(t), x*_2(t), ..., x*_q(t), ..., x*_Q(t)}, where x*_q(t) denotes the predicted number of times intelligent entity q is searched by users at time t;
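The gate equations of steps S33 through S35 can be sketched as a single LSTM cell step. This is a minimal NumPy illustration under assumed weight shapes, not the patent's implementation; the output gate formula o_t = σ(W_o x_t + U_o h_{t-1} + b_o) and hidden state h_t = o_t ⊙ tanh(C_t) follow the standard LSTM formulation that S36 describes in prose.

```python
# One LSTM cell step matching S33 (forget gate), S34 (input gate and
# candidate), S35 (cell update), and the standard output gate of S36.
# Weight shapes and random initialisation are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, U, b):
    """W, U, b are dicts keyed by gate name: 'f', 'i', 'C', 'o'."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate (S33)
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate (S34)
    C_tilde = np.tanh(W['C'] @ x_t + U['C'] @ h_prev + b['C'])  # candidate (S34)
    C_t = f_t * C_prev + i_t * C_tilde                          # cell update (S35)
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate (S36)
    h_t = o_t * np.tanh(C_t)                                    # hidden state
    return h_t, C_t

# usage with Q = 4 searched entities and a hidden size of 8
rng = np.random.default_rng(0)
Q, H = 4, 8
W = {g: rng.normal(size=(H, Q)) * 0.1 for g in 'fiCo'}
U = {g: rng.normal(size=(H, H)) * 0.1 for g in 'fiCo'}
b = {g: np.zeros(H) for g in 'fiCo'}
h, C = lstm_step(rng.normal(size=Q), np.zeros(H), np.zeros(H), W, U, b)
```

In the claimed model, x_t would be the Q-dimensional search-count vector x(t-1), and a final projection of h_t (not shown) would produce the prediction x*(t).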
s38: x is to be * Elements in (t)Sorting and obtaining a sorting index o ═ o 1 ,o 2 ,...,o q ,...,o Q };
S39: o = {o_1, o_2, ..., o_q, ..., o_Q} is taken as the input of the Zipf (Zipf's law) model, and the probability that each intelligent entity is searched at time t is calculated as p = [p_1, p_2, ..., p_q, ..., p_Q], where p_q denotes the probability that intelligent entity q is searched at time t;
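Steps S38 and S39 can be illustrated as follows. This is an assumed rank-based form of the Zipf model, p_q ∝ 1 / rank_q^s; the skew exponent s is a hypothetical parameter, since the claim does not fix a concrete formula.

```python
# Sort the predicted search counts x*(t) to get a rank per entity (S38),
# then map ranks to search probabilities with Zipf's law (S39).

def zipf_popularity(predicted_counts, s=1.0):
    Q = len(predicted_counts)
    # rank 1 = most-searched entity (the sorting index of S38)
    order = sorted(range(Q), key=lambda q: predicted_counts[q], reverse=True)
    rank = [0] * Q
    for r, q in enumerate(order, start=1):
        rank[q] = r
    norm = sum(1.0 / (r ** s) for r in range(1, Q + 1))  # so p sums to 1
    return [1.0 / (rank[q] ** s) / norm for q in range(Q)]

# usage: predicted x*(t) for Q = 4 entities
p = zipf_popularity([12, 40, 7, 25])
# the entity with the highest predicted count (index 1) gets the highest p_q
```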
s4: hot intelligent entity caching: the local server realizes active caching of the state data of the hot intelligent entity through a designed dynamic caching strategy; the method specifically comprises the following steps:
S41: based on p = [p_1, p_2, ..., p_q, ..., p_Q], the local server creates a popularity list of hot intelligent entities to cache, where popularity denotes the probability that an intelligent entity is searched;
S42: when data arrives at the local server, if the cache space is not full, the data is cached directly in the order of the popularity list; if the cache space is full, the local server matches the names of the intelligent entity data already cached against the arriving intelligent entity data, and if a match is found, the cached data is directly replaced by the arriving intelligent entity data;
S43: if no name match is found, the popularity of the arriving intelligent entity is calculated; if it is greater than the minimum popularity in the popularity list, the cached data corresponding to that minimum popularity is replaced by the arriving intelligent entity data; if it is smaller, the arriving intelligent entity data is not cached;
S44: the cache system starts a background task that monitors cache data expiration, periodically detects expired cache data, and notifies the local server to update the data once cache expiration is detected;
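The dynamic caching strategy of S41 through S44 can be sketched as follows. Class and method names, the lifetime value, and the dictionary layout are illustrative assumptions, not the patent's implementation.

```python
# Popularity-based cache: arriving data refreshes a cached item on name
# match (S42); otherwise it evicts the least-popular cached item only if
# it is more popular (S43); entries carry a lifetime so an expiry sweep
# (S44) can flag them for refresh by the local server.

import time

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}   # name -> {"data", "popularity", "expires"}

    def insert(self, name, data, popularity, lifetime=60.0):
        expires = time.time() + lifetime
        if name in self.items:                       # S42: name match -> refresh
            self.items[name].update(data=data, popularity=popularity,
                                    expires=expires)
            return True
        if len(self.items) < self.capacity:          # S42: space free -> cache
            self.items[name] = dict(data=data, popularity=popularity,
                                    expires=expires)
            return True
        victim = min(self.items, key=lambda n: self.items[n]["popularity"])
        if popularity > self.items[victim]["popularity"]:  # S43: evict minimum
            del self.items[victim]
            self.items[name] = dict(data=data, popularity=popularity,
                                    expires=expires)
            return True
        return False                                 # S43: too unpopular

    def expired(self, now=None):                     # S44: expiry sweep
        now = time.time() if now is None else now
        return [n for n, it in self.items.items() if it["expires"] <= now]

# usage
cache = PopularityCache(capacity=2)
cache.insert("sensor-a", "23.5C", popularity=0.48)
```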
S5: user search: after a user issues a search request, the local server quickly checks whether it holds intelligent entity state content matching the request; if so, the entity the user is searching for is a hot intelligent entity and the result is returned directly; if not, the local server issues the search request to the sensor associated with the intelligent entity, and the sensor collects the data and returns it to the user through the local server.
2. The intelligent hotspot data prediction and caching method of claim 1, wherein: step S5 specifically includes the following steps:
S51: the state data of hot intelligent entities is actively cached in a local server close to the user so as to satisfy the user's search demands;
S52: when a user submits a command to search for an intelligent entity in a given state, the search system sends a search request to the local server; after receiving the request message, the local server quickly checks whether matching intelligent entity state data exists, so as to judge the type of the searched intelligent entity;
S53: if the local server matches intelligent entity state data related to the search request, the user is searching for a hot intelligent entity, and the search result is returned to the user directly, which reduces search latency and improves search precision;
S54: if the state data of the intelligent entity searched by the user is not on the local server, the searched entity is judged to be an ordinary intelligent entity; the local server issues the search request to the sensor associated with the intelligent entity, and the sensor collects the data and returns it to the user through the local server.
CN202011412624.6A 2020-12-04 2020-12-04 Intelligent hot spot data prediction and cache method Active CN112637273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011412624.6A CN112637273B (en) 2020-12-04 2020-12-04 Intelligent hot spot data prediction and cache method


Publications (2)

Publication Number Publication Date
CN112637273A CN112637273A (en) 2021-04-09
CN112637273B true CN112637273B (en) 2022-08-02

Family ID: 75308075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011412624.6A Active CN112637273B (en) 2020-12-04 2020-12-04 Intelligent hot spot data prediction and cache method

Country Status (1)

Country Link
CN (1) CN112637273B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114553963B (en) * 2022-02-24 2023-07-25 重庆邮电大学 Multi-edge node collaborative caching method based on deep neural network in mobile edge calculation
CN115378963A (en) * 2022-08-24 2022-11-22 重庆邮电大学 Edge data service method
CN115914388A (en) * 2022-12-14 2023-04-04 广东信通通信有限公司 Resource data fresh-keeping method based on monitoring data acquisition

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102880617A (en) * 2011-07-15 2013-01-16 无锡物联网产业研究院 Internet-of-things entity searching method and system
CN110674230A (en) * 2019-09-25 2020-01-10 重庆邮电大学 Intelligent edge data classification storage method
CN111711681A (en) * 2020-06-10 2020-09-25 重庆邮电大学 Edge processing method for intelligent entity

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN103412903B (en) * 2013-07-31 2017-10-03 无锡安拓思科技有限责任公司 The Internet of Things real-time searching method and system predicted based on object of interest
US10713716B2 (en) * 2017-08-04 2020-07-14 Airbnb, Inc. Verification model using neural networks


Non-Patent Citations (4)

Title
Hanlin Mou, Yuhong Liu, Li Wang. LSTM for Mobility Based Content Popularity Prediction in Wireless Caching Networks. 2019 IEEE Globecom Workshops (GC Wkshps), 2020. *
Shang Kangyu. Research on Content Popularity Prediction and Base Station Caching Strategy in LTE Mobile Communication Networks. China Master's Theses Full-text Database (Information Science and Technology), 2019-05-15, Chapters 3 and 4. *
Marica Amadeo, Giuseppe Ruggeri, Claudia Campolo. Caching Popular and Fresh IoT Contents at the Edge via Named Data Networking. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2020. *
Wang Ruyan. Edge-Cloud Collaborative Entity Search Method for the Internet of Things. Computer Engineering, August 2020, Chapters 0-3. *

Also Published As

Publication number Publication date
CN112637273A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112637273B (en) Intelligent hot spot data prediction and cache method
Yin et al. Joint modeling of user check-in behaviors for real-time point-of-interest recommendation
Chen et al. Improved DV-Hop node localization algorithm in wireless sensor networks
CN104995870B (en) Multiple target server arrangement determines method and apparatus
Bao et al. A survey on recommendations in location-based social networks
KR20090085498A (en) A method for searching optimum hub locations based on a prediction about logistic cost
CN110674230B (en) Intelligent edge data classification storage method
Zhang et al. Dynamic scholarly collaborator recommendation via competitive multi-agent reinforcement learning
Iwami et al. A bibliometric approach to finding fields that co-evolved with information technology
Rahimi et al. Behavior-based location recommendation on location-based social networks
CN108804870A (en) Key protein matter recognition methods based on Markov random walks
CN113703688A (en) Distributed storage node load adjustment method based on big data and file heat
CN102178511A (en) Disease prevention warning system and implementation method
CN111711681B (en) Edge processing method for intelligent entity
Liu et al. POI Recommendation Method Using Deep Learning in Location‐Based Social Networks
Xu et al. In-network query processing in mobile P2P databases
Amagata et al. Efficient processing of top-k dominating queries in distributed environments
CN109471971A (en) A kind of semantic pre-fetching system and method for oriented towards education Domain resources cloud storage
Soulier et al. On ranking relevant entities in heterogeneous networks using a language‐based model
Mo et al. EPT-GCN: Edge propagation-based time-aware graph convolution network for POI recommendation
Xu et al. A new self-adaptive hybrid Markov topic model POI recommendation in social networks
Bai RETRACTED ARTICLE: Data cleansing method of talent management data in wireless sensor network based on data mining technology
Huang Personalized travel route recommendation model of intelligent service robot using deep learning in big data environment
CN109104466B (en) WoT resource management method based on P2P
CN111581420B (en) Flink-based medical image real-time retrieval method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant