CN117544513B - Novel Internet of things customized service providing method and device based on fog resources - Google Patents


Info

Publication number
CN117544513B
CN117544513B (application CN202410015996.7A)
Authority
CN
China
Prior art keywords
node
service path
service
load
delay
Prior art date
Legal status
Active
Application number
CN202410015996.7A
Other languages
Chinese (zh)
Other versions
CN117544513A (en)
Inventor
王滨
赵海涛
王星
王琴
宁洛函
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202410015996.7A
Publication of CN117544513A
Application granted
Publication of CN117544513B


Classifications

    • H04L 41/0895 — Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements (under H04L 41/08, Configuration management of networks or network elements)
    • H04L 41/044 — Network management architectures or arrangements comprising hierarchical management structures (under H04L 41/04, Network management architectures or arrangements)
    • H04L 67/2866 — Architectures; Arrangements (under H04L 67/00, Network arrangements or protocols for supporting network services or applications)
    • H04L 67/60 — Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources (under H04L 67/50, Network services)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a novel Internet of things customized service providing method and device based on fog resources. The method includes: receiving a service request sent by an Internet of things terminal, where the service request carries a delay tolerance and a resource requirement and corresponds to a group of ordered virtual network functions (VNFs), the delay tolerance indicating the maximum tolerated delay; determining a service path for the service request according to the delay tolerance and the resource requirement, on the principle of minimizing delay and load, where the service path takes the primary fog node that received the service request as its source node and a secondary fog node as its target node, and the total delay of the service path is less than or equal to the maximum tolerated delay; and mapping the VNFs corresponding to the service request according to the service path. The method improves the computing performance of the fog architecture and the flexibility of data processing.

Description

Novel Internet of things customized service providing method and device based on fog resources
Technical Field
The application relates to the field of internet of things service provision, in particular to a novel internet of things customized service provision method and device based on fog resources.
Background
Internet of things service provision refers to the industry of providing various services for internet of things devices and systems. These services include data collection, data storage, data analysis, device management, security assurance, and the like. Common internet of things service fields include cloud platform services, which provide cloud computing and storage resources for internet of things devices and systems and support data acquisition, processing and analysis; and data analysis services, which analyze the data collected from internet of things devices and extract valuable information to help users make decisions.
Fog Computing is an emerging computing model that pushes data processing and storage toward the network edge, i.e., closer to the data source. The core idea of fog computing is to extend cloud services to the network edge by deploying small servers on network nodes such as devices, routers and switches. These servers can process and store data and provide local services such as real-time analysis, decision making and control. At the same time, they can communicate with the cloud to acquire more resources and support. Fog computing can therefore reduce data transmission delay, improve data security and privacy protection, and better support internet of things devices and applications.
A VNF (Virtual Network Function) is network function software running on a virtualized platform, such as a firewall, gateway or load balancer. NFV (Network Functions Virtualization) is a technology that separates these network functions from dedicated hardware devices and deploys them on general-purpose hardware. The VNF is the core component of NFV, which can improve the flexibility, scalability and efficiency of network functions.
Disclosure of Invention
In view of the above, the present application provides a novel internet of things customized service providing method and device based on fog resources.
Specifically, the application is realized by the following technical scheme:
according to a first aspect of embodiments of the present application, a novel internet of things customized service providing method based on fog resources is provided, and the novel internet of things customized service providing method is applied to a multi-layer fog architecture network system, where the multi-layer fog architecture network system includes an internet of things terminal layer, at least two fog node layers and a cloud core layer, and the at least two fog node layers include a primary fog node layer and a secondary fog node layer, and the method includes:
receiving a service request sent by an Internet of things terminal; the service request carries a delay tolerance and a resource requirement, and corresponds to a group of ordered virtual network functions (VNFs); the delay tolerance is used for indicating the maximum tolerated delay;
determining a service path of the service request according to the delay tolerance and the resource requirement; the service path takes the primary fog node that receives the service request sent by the Internet of things terminal as a source node and a secondary fog node as a target node; the total delay of the service path is less than or equal to the maximum tolerated delay; the available resources of the service path meet the resource requirement;
and mapping the VNF corresponding to the service request according to the service path.
According to a second aspect of the embodiments of the present application, a novel internet of things customized service providing device based on fog resources is provided, deployed in a multi-layer fog architecture network system, where the multi-layer fog architecture network system includes an internet of things terminal layer, at least two fog node layers and a cloud core layer, the at least two fog node layers including a primary fog node layer and a secondary fog node layer; the device includes:
a receiving unit, configured to receive a service request sent by an Internet of things terminal; the service request carries a delay tolerance and a resource requirement, and corresponds to a group of ordered virtual network functions (VNFs); the delay tolerance is used for indicating the maximum tolerated delay;
a determining unit, configured to determine a service path of the service request according to the delay tolerance and the resource requirement; the service path takes the primary fog node that receives the service request sent by the Internet of things terminal as a source node and a secondary fog node as a target node; the total delay of the service path is less than or equal to the maximum tolerated delay; the available resources of the service path meet the resource requirement;
and the mapping unit is used for mapping the VNF corresponding to the service request according to the service path.
According to a third aspect of embodiments of the present application, there is provided an electronic device comprising a processor and a memory, wherein,
a memory for storing a computer program;
and a processor configured to implement the method provided in the first aspect when executing the program stored in the memory.
The embodiments of the present application provide a novel Internet of things customized service providing method based on fog resources, together with a multi-layer fog architecture network system comprising an Internet of things terminal layer, at least two fog node layers and a cloud core layer, the at least two fog node layers including a primary fog node layer and a secondary fog node layer. When a primary fog node receives a service request sent by an Internet of things terminal, the service path of the service request can be determined according to the delay tolerance and the resource requirement carried in the service request, and the VNFs corresponding to the service request can be mapped according to the determined service path. By applying at least two fog node layers, the computing performance of the fog architecture and the flexibility of data processing are improved.
Drawings
Fig. 1 is a schematic flow chart of a method for providing a customized service of a novel internet of things based on fog resources according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-layer fog architecture network system according to an exemplary embodiment of the present application;
fig. 3 is a schematic structural diagram of a novel internet of things customized service providing device based on fog resources according to an exemplary embodiment of the present application;
fig. 4 is a schematic structural diagram of a novel internet of things customized service providing device based on fog resources according to an exemplary embodiment of the present application;
fig. 5 is a schematic hardware structure of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to better understand the technical solutions provided by the embodiments of the present application and make the above objects, features and advantages of the embodiments of the present application more obvious, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
It should be noted that the sequence numbers of the steps in the embodiments of the present application do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not limit the implementation process of the embodiments of the present application in any way.
Referring to fig. 1, which is a flow chart of a novel Internet of things customized service providing method based on fog resources according to an embodiment of the present application, the method may be applied to a multi-layer fog architecture network system and, as shown in fig. 1, may include the following steps:
For example, to improve the computing performance of the multi-layer fog architecture and the flexibility of data processing, the multi-layer fog architecture may include at least two layers of fog nodes, that is, the multi-layer fog architecture network system includes an internet of things terminal layer, at least two layers of fog node layers, and a cloud core layer, where the at least two layers of fog node layers include a primary fog node layer and a secondary fog node layer.
Illustratively, the primary fog node layer is closer to the Internet of things terminal layer and the secondary fog node layer is closer to the cloud core layer; a secondary fog node has more resources than a primary fog node.
Step S100, receiving a service request sent by an Internet of things terminal; wherein the service request carries a delay tolerance and a resource requirement, and corresponds to a group of ordered VNFs; the delay tolerance is used to indicate the maximum tolerated delay.
It should be noted that the execution body of steps S100 to S120 may be the source node of the service request, and the source node may be the primary fog node that receives the service request sent by the Internet of things terminal.
In the embodiments of the present application, a fog node refers to a device or node in fog computing that is responsible for tasks such as data processing, storage and transmission, and can help optimize network performance and user experience.
Illustratively, the foggy node may include, but is not limited to, a router or gateway, or the like.
By way of example, internet of things terminal devices may include internet-of-things-enabled devices such as sensors, desktops and mobile stations that generate internet of things data, producing traffic for various real-time applications with different latency and resource requirements.
A service request is a request generated by an Internet of things terminal; one service request may correspond to an ordered group of VNFs.
For example, in Network Function Virtualization (NFV), a service request may correspond to an ordered set of VNFs.
A service request may need to traverse multiple network functions to accomplish a particular task. These network functions may be connected in a particular order to form an SFC (Service Function Chain). The SFC defines the path of the service request in the network, and each node on the path corresponds to a specific VNF.
An ordered set of VNFs corresponding to service requests may meet different requirements, such as security checking of network traffic, load balancing, traffic optimization, etc. By virtualizing network functions, VNFs may be flexibly combined and configured to meet the requirements of different service requests.
For example, an ordered set of VNFs for one service request may include:
1) Firewall (Firewall): as the first VNF of the service function chain, firewalls are used to examine and filter traffic entering the network to ensure the security of the network.
2) Load Balancer (Load Balancer): as a second VNF of the service function chain, a load balancer is used to distribute traffic over multiple servers to achieve load balancing and improve service availability and performance.
3) WAN Optimizer (WAN Optimizer): as a third VNF of the service function chain, the WAN optimizer may compress, cache and accelerate data flowing through the network to reduce latency and improve bandwidth utilization.
4) Deep packet inspection (Deep Packet Inspection): as a fourth VNF of the service function chain, deep packet inspection may analyze and inspect network traffic in detail to implement advanced security policies and application identification.
5) Network monitoring and analysis (Network Monitoring and Analytics): as the last VNF of the service function chain, network monitoring and analysis may collect and analyze network traffic data to monitor network performance, troubleshoot, and optimize network configuration.
In actual deployment, different service function chains may be designed according to specific service requirements and network architecture, and service requests are guided to corresponding VNFs for processing. Thus, the flexibility and the expandability of the network can be improved, and the dependence and the management cost of hardware equipment are reduced.
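As a hedged illustration of the ordered-VNF chain described above, a service request and its SFC can be modeled as a small data structure. The field names and values here are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch only: model a service request carrying a delay
# tolerance, resource requirements, and an ordered VNF chain (SFC).
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    delay_tolerance_ms: float              # maximum tolerated end-to-end delay
    cpu_demand: float                      # required instantaneous processing capability
    mem_demand: float                      # required memory
    sfc: list = field(default_factory=list)  # ordered VNFs the request must traverse

req = ServiceRequest(
    delay_tolerance_ms=50.0,
    cpu_demand=2.0,
    mem_demand=1.5,
    sfc=["firewall", "load_balancer", "wan_optimizer",
         "deep_packet_inspection", "monitoring"],
)
print(req.sfc[0])  # the first VNF in the chain
```

The order of the `sfc` list matters: traffic must pass through the VNFs in exactly this sequence.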
In the embodiments of the present application, the service request of an Internet of things terminal may be received by a primary fog node. The service request may carry a delay tolerance (indicating the maximum tolerated delay, i.e., the actual processing delay of the service request cannot exceed it), resource requirements (including the required instantaneous processing capability, memory resources, and the like), and so on.
Step S110, determining a service path of the service request according to the delay tolerance and the resource requirement; the service path takes the primary fog node that receives the service request sent by the Internet of things terminal as a source node and a secondary fog node as a target node; the total delay of the service path is less than or equal to the maximum tolerated delay, and the available resources of the service path meet the resource requirement.
In this embodiment of the present application, when the master fog node receives a service request sent by an internet of things terminal, a service path of the service request may be determined according to the delay tolerance and the resource requirement.
The total delay of the service path determined by the primary fog node for the service request is less than or equal to the maximum tolerated delay, and the available resources of the service path meet the resource requirement.
Illustratively, the available resources of the service path may be characterized by the available resources of the target node or the available resources of the hop node.
A hop node is a fog node in the service path other than the source node and the target node.
Illustratively, the available resources of a node may be characterized by the instantaneous processing power of the node as well as the memory.
In order to improve the processing efficiency of the service request and improve the utilization rate of system resources, in the process of determining the service path of the service request, the service path of the service request may be determined on the basis of minimizing delay and/or load.
By way of example, delay refers to the time required to transmit data from a sender to a receiver to receive data. Minimizing the delay may improve data transmission efficiency.
Illustratively, minimizing the load may include the process of minimizing the load of individual nodes in the system through reasonable resource allocation and load scheduling policies. The method aims to optimize the performance and resource utilization of the system and avoid that the resource utilization rate of part of nodes is too low and the load of other nodes is too high.
For example, for a received service request, the available VNF instances may be determined according to the VNFs corresponding to the service request: a list of available VNF instances may be built from information about already deployed VNF instances; the dependency relationships between VNFs may be determined; the resource constraints of the nodes (such as computing, storage and network resources) must be satisfied by the selected VNF instances; and an optimal mapping policy may then be selected on the principle of minimizing delay and load, thereby determining the service path.
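The selection steps just described can be sketched as follows. The field names, scoring weights and instance data are illustrative assumptions, not the patent's actual policy:

```python
# Illustrative sketch: filter deployed VNF instances by node resource
# constraints, then pick the instance minimizing a delay+load score.
def feasible_instances(instances, cpu_demand, mem_demand):
    """Keep only instances whose host node satisfies the resource demand."""
    return [i for i in instances
            if i["cpu_free"] >= cpu_demand and i["mem_free"] >= mem_demand]

def best_instance(instances, cpu_demand, mem_demand, alpha=0.5, beta=0.5):
    """Among feasible instances, minimize a weighted delay+load score."""
    candidates = feasible_instances(instances, cpu_demand, mem_demand)
    if not candidates:
        return None  # no node can host this VNF; escalate toward the cloud core
    return min(candidates, key=lambda i: alpha * i["delay"] + beta * i["load"])

instances = [
    {"id": "fw-1", "cpu_free": 1.0, "mem_free": 4.0, "delay": 2.0, "load": 0.3},
    {"id": "fw-2", "cpu_free": 4.0, "mem_free": 8.0, "delay": 5.0, "load": 0.1},
    {"id": "fw-3", "cpu_free": 4.0, "mem_free": 8.0, "delay": 3.0, "load": 0.2},
]
print(best_instance(instances, cpu_demand=2.0, mem_demand=2.0)["id"])
```

Here `fw-1` is filtered out by the resource constraint, and the remaining candidates are ranked by the combined score.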
For example, for a primary fog node that receives a service request, the primary fog node is a source node and a secondary fog node is a target node in a service path determined by the service request.
For example, the service path may include at least one primary fog node as a hop node.
Step S120, mapping the VNF corresponding to the service request according to the determined service path.
In this embodiment of the present application, when a service path is determined in the foregoing manner, a VNF corresponding to the service request may be mapped according to the determined service path, that is, a VNF corresponding to the service request is deployed in the determined service path, and the service request is processed by a node in which the VNF is deployed.
For example, the service request may be mapped to the corresponding VNF instances according to the selected mapping policy. It must be ensured that each VNF instance meets the requirements of the service request and that the dependency relationships among the VNFs are satisfied; the selected VNF instances are then deployed on the corresponding computing nodes according to the mapping result, and the necessary configuration and connections are made, realizing the mapping from the service request to the VNFs.
It can be seen that the method flow shown in fig. 1 provides a multi-layer fog architecture network system including an Internet of things terminal layer, at least two fog node layers and a cloud core layer, the at least two fog node layers including a primary fog node layer and a secondary fog node layer. When the primary fog node receives a service request sent by an Internet of things terminal, the service path of the service request can be determined according to the delay tolerance and the resource requirement carried in the service request, and the VNFs corresponding to the service request can be mapped according to the determined service path. By applying at least two fog node layers, the computing performance of the fog architecture and the flexibility of data processing are improved.
In some embodiments, determining the service path of the service request according to the delay tolerance and the resource requirement may include:
determining an optimal service path, on the principle of minimizing delay and load, according to the communication delay between the source node and the target node, the processing delay of the target node, the instantaneous processing capability and memory of the target node, and the available bandwidth of each link in the service path;
and under the condition that the optimal service path meets the time delay and the load requirements, determining the optimal service path as the service path of the service request.
Illustratively, the secondary fog node has more resources than the primary fog node and generally has stronger processing capability, so that the processing efficiency of the service request can be improved by processing the service request by the secondary fog node.
Accordingly, the delay corresponding to the service path may be characterized in terms of the communication delay between the source node and the target node and the processing delay of the target node; the load corresponding to the service path is represented by the instantaneous processing capacity and the memory of the target node and the available bandwidth of each link in the service path, and the optimal service path is determined on the basis of minimizing delay and load.
In the case where the optimal service path is determined, it may be determined whether the optimal service path satisfies the delay and load requirements, and in the case where the optimal service path satisfies the delay and load requirements, the optimal service path is determined as the service path of the service request.
Illustratively, meeting the load requirement of the service path may include the target node meeting the load requirement or the hop node meeting the load requirement.
In one example, the determining the optimal service path according to the communication delay between the source node and the target node, the processing delay of the target node, the instantaneous processing capability and the memory of the target node, and the available bandwidth of each link in the service path based on the principle of minimizing the delay and the load may include:
the determination of the optimal service path is achieved by an objective function; the exact formula is not legible in the source, but one form consistent with the stated variables (weighing the delay terms against the reciprocals of the available resources, so that larger resources yield a smaller load term) is:

min F(d) = α · ( D(src, d) + P(d) ) + β · ( 1/C_max(d) + 1/M_max(d) + 1/B_max )

where src is the source node, d is the target node, D(src, d) is the communication delay between the source node and the target node, P(d) is the processing delay of the target node, C_max(d) is the maximum instantaneous processing capability of the target node, M_max(d) is the maximum memory of the target node, B_max is the maximum available bandwidth of the service path, and α and β are preset weights.
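As a numerical illustration of this objective, the sketch below assumes a weighted sum of total delay and the reciprocals of the available resources; the α and β values and all node figures are illustrative assumptions, not values from the patent:

```python
# Sketch of the minimum-delay-and-load objective: lower score is better.
def path_score(comm_delay, proc_delay, cpu_max, mem_max, bw_max,
               alpha=0.6, beta=0.4):
    delay_term = comm_delay + proc_delay          # D(src, d) + P(d)
    # larger available resources -> smaller load term -> lower (better) score
    load_term = 1.0 / cpu_max + 1.0 / mem_max + 1.0 / bw_max
    return alpha * delay_term + beta * load_term

# Two candidate target nodes with equal total delay but different resources:
a = path_score(comm_delay=4.0, proc_delay=1.0, cpu_max=8.0, mem_max=16.0, bw_max=100.0)
b = path_score(comm_delay=2.0, proc_delay=3.0, cpu_max=2.0, mem_max=4.0, bw_max=10.0)
print(a < b)  # the better-resourced candidate scores lower
```

With equal total delay, the candidate with more spare CPU, memory and bandwidth wins, which is exactly the tie-breaking the load term is meant to provide.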
In some embodiments, determining the service path of the service request according to the delay tolerance and the resource requirement may include:
the method comprises the steps of taking a source node as a current node, determining an optimal adjacent node of the current node from adjacent nodes of the current node, taking the optimal adjacent node as a jump node of a service path, and determining the optimal adjacent node as a new current node; for any node, the adjacent node of the node is other node with the distance of 1 from the node; the best neighbor node is the neighbor node that produces the smallest delay for that node;
Continuously determining the optimal adjacent node of the new current node until the determined optimal adjacent node is a secondary fog node, and determining the secondary fog node as a target node of the service path;
and determining the service path as the service path of the service request under the condition that the determined service path meets the time delay and the load requirement.
For example, considering that the processing delay of a service request has a more obvious influence on user experience, the service path may be determined with priority given to the principle of minimum delay.
For a primary fog node that receives a service request sent by an Internet of things terminal, that node can serve as the source node of the service path, and the neighboring nodes of the source node are then identified.
A neighboring node of the source node is another node whose distance from the source node is 1.
The source node may select, from the neighboring nodes, a neighboring node with the smallest delay generated for the source node as an optimal neighboring node according to the delay generated for the source node by the neighboring nodes, and use the optimal neighboring node as a next hop of the service path.
Once the best neighboring node of the source node has been determined in the above manner, it is taken as the current node, and the best neighboring node of the current node is determined in the same way, and so on, until the determined best neighboring node is a secondary fog node; that secondary fog node is taken as the target node of the service path, yielding the service path.
In the case where a service path is determined, it may also be determined whether the service path meets latency and load requirements, and in the case where the service path meets latency and load requirements, the service path may be determined as the service path of the service request.
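The greedy, delay-first construction above can be sketched as follows; the graph representation, node names and delays are illustrative assumptions:

```python
# Illustrative greedy path construction: from the source primary fog node,
# repeatedly move to the unvisited neighbour with the smallest link delay
# until a secondary fog node (the target) is reached.
def greedy_path(graph, delays, source, is_secondary, max_hops=32):
    """graph: node -> neighbours (distance-1 nodes); delays: (u, v) -> delay."""
    path, current, visited = [source], source, {source}
    for _ in range(max_hops):
        neighbours = [n for n in graph[current] if n not in visited]
        if not neighbours:
            return None  # dead end: no feasible service path from here
        best = min(neighbours, key=lambda n: delays[(current, n)])
        path.append(best)
        visited.add(best)
        if is_secondary(best):
            return path  # target (secondary fog node) reached
        current = best
    return None

graph = {"src": ["f1", "f2"], "f1": ["src", "s1"], "f2": ["src", "s1"], "s1": ["f1", "f2"]}
delays = {("src", "f1"): 1.0, ("src", "f2"): 3.0, ("f1", "s1"): 2.0, ("f2", "s1"): 1.0}
print(greedy_path(graph, delays, "src", lambda n: n == "s1"))
```

Note the greedy choice: the first hop goes to `f1` (link delay 1.0) even though the route through `f2` would end up cheaper overall; the patent's scheme then checks the resulting path against the delay and load requirements.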
In some embodiments, the mapping the VNF corresponding to the service request according to the determined service path may include:
under the condition that the load of the target node of the service path meets the requirement, mapping the VNF corresponding to the service request to the target node;
and under the condition that the load of the target node of the service path does not meet the requirement, mapping each VNF corresponding to the service request to each hop node in the service path in turn until each VNF is mapped.
In the multi-layer fog architecture network system, the secondary fog nodes have more resources than the primary fog nodes, and the processing capacity of the secondary fog nodes is generally higher, so that in order to improve the processing efficiency of the service request, whether the load of the target node of the service path meets the requirement can be determined firstly under the condition that the service path is determined.
In one example, the load meets the requirement when the available resources exceed the resource requirements of the service request, or when, with the VNFs corresponding to the service request mapped, the load does not exceed a preset load threshold.
The load does not meet the requirement when the available resources are lower than the resource requirements of the service request, or when, with the VNFs corresponding to the service request mapped, the load exceeds a preset load threshold.
The load threshold may be determined, for example, from the total available resources of the node and the load utilization threshold.
Illustratively, the total available resources of the node include the instantaneous processing power and memory resources of the node.
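A minimal sketch of the threshold rule above follows; the 0.8 utilisation ceiling and the resource figures are assumed illustrative values, not figures from the patent:

```python
# The load threshold is the node's total available resources scaled by a
# load utilisation threshold (0.8 here is an assumption for illustration).
def load_threshold(cpu_total, mem_total, utilisation=0.8):
    return {"cpu": cpu_total * utilisation, "mem": mem_total * utilisation}

def meets_load(cpu_used, mem_used, cpu_total, mem_total):
    """True when the node's used resources stay within its load threshold."""
    t = load_threshold(cpu_total, mem_total)
    return cpu_used <= t["cpu"] and mem_used <= t["mem"]

print(meets_load(cpu_used=6.0, mem_used=10.0, cpu_total=8.0, mem_total=16.0))
```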
For example, in a case where the load of the target node of the service path meets the requirement, VNFs corresponding to the service request may be mapped to the target node, and the target node is responsible for processing the service request.
Therefore, when the load of the target node meets the requirement, the target node is responsible for processing the service request. This makes effective use of the greater resources and stronger processing capability of the secondary fog node, improving the processing efficiency of the service request and the resource utilization rate; and because the service request is processed by the target node, the load on the primary fog nodes in the service path is reduced, congestion at the primary fog nodes is avoided, and the service provisioning delay can therefore be reduced.
For example, in a case where the load of the target node of the service path does not meet the requirement, each VNF corresponding to the service request may be mapped to each hop node in the service path in sequence until each VNF is mapped.
For example, a first VNF of the VNFs corresponding to the service request may be mapped to a first hop node of the service path, a second VNF may be mapped to a second hop node of the service path, and so on until each VNF is mapped.
Therefore, when the load of the target node does not meet the requirement, the hop nodes in the service path are responsible for processing the service request, which ensures the success rate of processing the service request.
It should be noted that, in this embodiment of the present application, for a hop node to which any VNF is to be mapped, when the available resources of that hop node are insufficient for the VNF mapping, it may be determined that the service request cannot complete VNF mapping, that is, neither the primary fog nodes nor the secondary fog node can process the service request; in this case, the core layer may respond to the service request.
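The mapping decision above (group mapping to the target node, falling back to per-hop single mapping, and finally rejecting to the core layer) can be sketched as follows. This is a hypothetical illustration: the names `map_request`, `Node`, and `VNF`, and the scalar resource model, are assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    available: float   # free processing + memory resources
    load: float        # instantaneous load

@dataclass(frozen=True)
class VNF:
    name: str
    demand: float      # resources the VNF needs on its hosting node

def map_request(path, target, vnfs, load_threshold):
    """Group mapping to the target node when its load meets the requirement;
    otherwise single mapping, one VNF per hop node (nearest src first).
    Returns a VNF -> Node placement, or None when neither fog layer can
    serve the request and the cloud core layer must respond."""
    total = sum(v.demand for v in vnfs)
    if target.available >= total and target.load + total <= load_threshold:
        return {v: target for v in vnfs}          # group mapping
    if len(path) < len(vnfs):
        return None                               # not enough hop nodes
    placement = {}
    for vnf, hop in zip(vnfs, path):              # single mapping
        if hop.available < vnf.demand:
            return None                           # hop node lacks resources
        placement[vnf] = hop
    return placement
```

Returning `None` corresponds to the case where the request must be handled by the core layer.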
In one example, after mapping each VNF corresponding to the service request to each hop node in the service path in sequence, the method may further include:
for any one of the hop nodes, in a case where the hop node meets the load migration condition, migrating the VNF mapped on the hop node to another hop node in the service path whose load is lower than the average load;
The hop node meets the load migration condition when its load exceeds the average load, or when its load exceeds both the average load and a preset load threshold; the average load is the average of the loads of all hop nodes in the service path.
For example, load balancing may be performed among the hop nodes of an established path in order to improve the resource utilization of nodes in the system.
For any one of the hop nodes in the service path, in the case that the hop node satisfies the load migration condition, the VNF mapped on the hop node may be migrated to other hop nodes in the service path with a load lower than the average load.
In one example, the hop node meeting the load migration condition includes the load of the hop node exceeding an average load.
In another example, the hop node meeting the load migration condition includes the load of the hop node exceeding an average load and exceeding a preset load threshold to avoid unnecessary load migration.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below in conjunction with specific scenarios.
In this embodiment, a multi-layer fog network architecture is provided, as shown in fig. 2, which decomposes the network into four layers: an internet of things terminal layer (which may also be referred to as an end user layer), a primary fog node layer, a secondary fog node layer, and a cloud core layer. The end user layer is the source of service requests; the primary fog node layer and the secondary fog node layer process the service requests; and the cloud core layer responds to a service request when neither the primary fog nodes nor the secondary fog nodes can process it. This architecture can be modeled as an undirected graph with nodes representing devices or layer nodes and links representing connections.
The internet of things terminal may include all devices supporting the internet of things, such as sensors, desktops, and mobile stations, which generate internet of things data, generate traffic for various real-time applications with different delays and resource requirements, and the traffic is first received by the master fog node.
The second layer is the primary fog node layer, a layer of homogeneous gateway nodes at the edge of the network that communicates directly with the end user layer and, despite limited resources, helps relieve network saturation and congestion.
The third layer is the secondary fog node layer, which is assumed to have more resources than the primary fog nodes. Secondary fog nodes act as cluster heads managing multiple primary fog nodes, forming an intermediate layer between the primary fog node layer and the cloud core. This layer combines the advantages of the two adjacent layers in terms of higher resources, at the cost of slightly increased latency.
The second and third layers are responsible for accepting and processing service requests from the end user layer's internet of things devices.
The fourth layer is a cloud core layer and is responsible for responding to the service request of the Internet of things from the terminal user layer under the condition that the service request cannot be processed by the primary fog node and the secondary fog node.
In this embodiment, the above-described multi-layer fog network architecture may be modeled as an undirected graph G = (N, E), where N and E represent the set of physical nodes and the set of physical links, respectively.
The set N consists of all internet of things terminal devices, primary fog nodes, and secondary fog nodes; where n_i^k denotes the i-th node (i is a positive integer) in the k-th layer, i.e., k=1 corresponds to the end user layer, k=2 corresponds to the primary fog node layer, and k=3 corresponds to the secondary fog node layer.
Each node n_i^k in the set has instantaneous processing capability c(n_i^k) and memory m(n_i^k); the third layer of secondary fog nodes has more resources than the second layer of primary fog nodes.
Illustratively, the instantaneous processing capability of a node may be characterized by the number of processors available to the node.
The set E consists of the links connecting the various layers; the available bandwidth of each link e is denoted b(e), and the propagation delay on the link is denoted D(e).
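The model above can be captured with plain dictionaries. This is an illustrative sketch: the node names, attribute keys, and resource values are invented for the example, and a `frozenset` key is one way (an assumption, not the patent's) to make links direction-independent.

```python
# k = 1: end-user layer, k = 2: primary fog layer, k = 3: secondary fog layer.
nodes = {
    ("u1", 1): {"cpu": 0,  "mem": 0},    # IoT terminal (request source)
    ("p1", 2): {"cpu": 4,  "mem": 8},    # primary fog node
    ("p2", 2): {"cpu": 4,  "mem": 8},
    ("s1", 3): {"cpu": 16, "mem": 32},   # secondary fog node: more resources
}
# Undirected links E with available bandwidth b(e) and propagation delay D(e).
links = {
    frozenset({("u1", 1), ("p1", 2)}): {"b": 100.0, "D": 1.0},
    frozenset({("p1", 2), ("p2", 2)}): {"b": 100.0, "D": 2.0},
    frozenset({("p2", 2), ("s1", 3)}): {"b": 200.0, "D": 3.0},
}

def link(u, v):
    """Look up b(e) and D(e) for the undirected link between u and v."""
    return links[frozenset({u, v})]
```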
Based on the above-mentioned multilayer fog architecture network system, the novel service providing scheme of the internet of things based on fog resources provided by the embodiment of the application may include the following steps:
1. Generate service requests r according to the delay tolerance and resource requirements of the terminals; all service requests r are received at the primary fog nodes of the second layer, and the source node (the primary fog node that received the service request) directs the service request to a target node, forming a service path.
Illustratively, a service request r may be represented by a four-tuple.
Illustratively, the internet of things terminal devices of the first layer generate service requests r according to their delay tolerance and resource requirements. All service requests are received at a primary fog node of the second layer, which may be referred to as the source node src; src directs the service request to the target node dst, establishing a service path between src and dst. The service path needs to map VNFs, which may be denoted F_r = {f_1, f_2, ..., f_T}, where T is the total number of VNFs corresponding to the service request.
Wherein the VNFs are ordered according to the SFC (Service Function Chain), i.e., f_1 → f_2 → ... → f_T.
Here each VNF requires a fixed amount of processor and memory resources on the fog node (primary or secondary) to which it is mapped.
In this embodiment, the link bandwidth is primarily affected by the instantaneous processing power and memory resources of the node.
Illustratively, each service request r may be characterized by a 4-tuple r = (src, F_r, D_max^r, b_r), where D_max^r represents the end-to-end SFC delay bound (i.e., the maximum tolerated delay) and b_r is the bandwidth requested by the interconnected VNFs, i.e., the bandwidth required by the VNF connections.
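The 4-tuple can be made concrete with a small record type. This is a sketch under stated assumptions: the class and field names (`ServiceRequest`, `d_max`, `b_r`) are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceRequest:
    """r = (src, F_r, D_max^r, b_r): the receiving primary fog node, the
    ordered SFC of VNFs f_1 -> ... -> f_T, the end-to-end delay bound
    (maximum tolerated delay), and the bandwidth of the VNF connections."""
    src: str
    vnfs: tuple          # ordered SFC, e.g. ("f1", "f2", "f3")
    d_max: float         # maximum tolerated end-to-end delay
    b_r: float           # bandwidth required by VNF interconnections
```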
2. Determine the service path of the service request according to its delay tolerance and resource requirements, taking minimized delay and load as the guiding principles.
Illustratively, in order to improve the resource utilization of nodes in the system with reduced latency, a down-order hierarchical provisioning method for fog architecture may be employed, with minimal latency and load as principles, to determine the service path.
For example, the service path satisfying the condition may be determined by setting an objective function and based on the set objective function.
Wherein the objective function may be used to constrain the selection of the service path. By setting constraints of the objective function, specific properties of the path, such as minimum delay, minimum load, can be limited. This ensures that the selected path meets certain requirements and constraints.
For example, the following objective function may be used to determine the service path:

min F(dst) = ω1·(D(src, dst) + D_p(dst)) − ω2·(C_max(dst) + M_max(dst) + B_max(p))

wherein D(src, dst) is the communication delay between the source node and the target node, D_p(dst) is the processing delay of the target node, C_max(dst) is the maximum instantaneous processing capacity of the target node, M_max(dst) is the maximum memory of the target node, B_max(p) is the maximum available bandwidth of the service path, and ω1 and ω2 are preset weights.
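Evaluating candidate targets against such an objective can be sketched as follows. Note the hedge: the patent's exact formula is an image and only its terms and weights are listed in the text, so the additive weighted combination, the default weights, and the names `objective` and `best_target` are assumptions.

```python
def objective(d_comm, d_proc, c_max, m_max, b_max, w1=0.7, w2=0.3):
    """Lower is better: w1 penalizes communication + processing delay at the
    target, w2 rewards the target's capacity and the path's bandwidth
    (assumed additive form; w1, w2 are illustrative preset weights)."""
    return w1 * (d_comm + d_proc) - w2 * (c_max + m_max + b_max)

def best_target(candidates):
    """candidates: dict of target name -> metrics dict; pick the minimizer."""
    return min(candidates, key=lambda n: objective(**candidates[n]))
```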
With the primary fog node that received the service request as the source node src, the service request is transferred from src to the target node n* in the third layer, where the VNFs are co-located and mapped, i.e., the VNFs corresponding to the service request are all mapped to the target node. These VNFs will be deployed and executed to meet the requirements of the service request.
It should be noted that, in the embodiment of the present application, delay may be prioritized, and the determination of the service path may be implemented in a manner of minimizing delay.
For example, a shortest path algorithm between a source node and a target node based on minimum node processing delay and link communication delay may be utilized to determine the service path.
The service path p comprises src, a set of hop nodes (i.e., intermediate hop nodes), and dst.
The algorithm first identifies the neighboring nodes (i.e., direct neighbors) U(src) of src, namely the other primary fog nodes in the second layer at a distance of 1 from src.
Each neighboring node introduces a delay, determined by:

D(u) = D(src, u) + D_p(u), for u ∈ U(src)

where D(src, u) is the communication delay between the source node src and its neighboring node u, and D_p(u) is the processing delay of the neighboring node u.
The neighbor that produces the smallest delay with respect to src is referred to as the best neighbor, u* = argmin_{u ∈ U(src)} D(u); this node is taken as a hop node and added to the service path p.
The most recently determined hop node is then taken as the current node, the best neighboring node of the current node is determined, and that best neighbor is added to the service path p as a hop node, until the best neighboring node of the current node is a secondary fog node of the third layer; that secondary fog node is used as the target node of the service path.
The delay D(p) of the established service path p is:

D(p) = Σ_{h ∈ p} D_p(h) + Σ_{e ∈ p} D(e)

where D_p(h) is the processing delay of hop node h, e is a link in the established service path, D(e) is the communication delay over link e, and Σ_{e ∈ p} D(e) is the sum of the communication delays of the links in the established service path. The delay D(p) of the established service path is required to be less than the maximum tolerated delay D_max^r.
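The greedy best-neighbor construction above can be sketched as follows. This is a hypothetical illustration: the function name `build_path`, the callback-based interface, and the visited-set guard against cycles are assumptions beyond the patent text.

```python
def build_path(src, neighbors, d_comm, d_proc, is_secondary, d_max):
    """Greedy best-neighbor path construction: starting at the source primary
    fog node, repeatedly hop to the unvisited neighbor u minimizing
    D(current, u) + D_p(u); stop when a secondary fog node (the target) is
    reached. Returns (path, D(p)), or None when the walk dead-ends or the
    accumulated delay exceeds the bound d_max."""
    path, delay, current, seen = [src], 0.0, src, {src}
    while not is_secondary(current):
        cands = [u for u in neighbors(current) if u not in seen]
        if not cands:
            return None                      # dead end: no usable neighbor
        nxt = min(cands, key=lambda u: d_comm(current, u) + d_proc(u))
        delay += d_comm(current, nxt) + d_proc(nxt)
        path.append(nxt)
        seen.add(nxt)
        current = nxt
    return (path, delay) if delay <= d_max else None
```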
3. Based on the load status, a group and single VNF mapping procedure is proposed.
Illustratively, at the secondary fog node layer, based on the load status of each node in the service path, in a case where the target node n* has enough resources for mapping all VNFs of the service request r, n* can be taken as the host node; that is, c(n*) ≥ Σ_{t=1..T} c(f_t) and m(n*) ≥ Σ_{t=1..T} m(f_t), where c(f_t) and m(f_t) are the processor and memory demands of VNF f_t.
In this case, each VNF corresponding to the service request may be mapped to the host node in sequence. That is, for a given F_r, each function occurs exactly once, and in the ordering a function with a lower index always precedes a function with a higher index: f_i ≠ f_j for i ≠ j, with f_1 → f_2 → ... → f_T. This constraint on the set of functions ensures their uniqueness and ordering relationship.
Thereafter, the resources at n* are updated:

c(n*) ← c(n*) − Σ_{t=1..T} c(f_t), m(n*) ← m(n*) − Σ_{t=1..T} m(f_t)
Illustratively, in a case where the secondary fog node n* does not have sufficient resources for mapping all VNFs of the service request r, a single mapping procedure may be performed, i.e., one VNF is mapped to one hop node in path p.
In this case, the hop node closest to src is selected from path p for a single mapping, then the next closest node, and so on, until all VNFs are mapped.
It should be noted that if the hop node also lacks resources, the request will be deleted from the network.
Furthermore, if the instantaneous load on n* is in a high-traffic state, single mapping is likewise performed. Here, L(n*) > θ indicates that the load exceeds the threshold utilization, i.e., the node is in a high-traffic state, where L(n*) is the instantaneous load, θ is the threshold utilization (i.e., the load threshold described above, determined from the node's total available resources), and α is the utilization threshold used to set θ. This approach enables dynamic mapping decisions that account for the active load state of the host node. At the threshold level, a margin of idle resources is maintained to avoid full saturation of the node load.
4. Perform load balancing among the hop nodes of the established service path.
For example, the average load of the hop nodes may be calculated. In a case where the load of a certain hop node exceeds the average load, part of its load may be migrated to other hop nodes in the path with lower loads.
Here, the deficit load of each underloaded node is calculated as the amount by which its load falls below the average load. The ratio of a node's deficit load to the total excess load is used to determine the extent of load migration.
For example, load balancing can be performed among the hop nodes of the service path, migrating load from overloaded nodes to underloaded nodes, which achieves better node resource utilization and avoids congestion in the primary fog node layer, thereby reducing the blocking rate.
The average load of the hop nodes on service path p is:

L_avg = (1 / |H_p|) Σ_{h ∈ H_p} L(h)

where H_p is the set of hop nodes in path p and L(h) is the load of hop node h.
In a case where the load of a certain hop node exceeds the average load, load migration to the underloaded hop nodes in the service path p can be triggered, where:
the deficit load for each node is:
in the case of load migration, a hop node with a large deficit load may be preferentially selected as a migration target. For example, the load of the overloaded node is preferentially migrated to the hop node with the largest deficit load.
On this basis, the ratio of the deficit load of each hop node to the total excess load can be computed:

ρ(h) = L_def(h) / Σ_{h′: L(h′) > L_avg} (L(h′) − L_avg)
the smaller the ratio, the better the balance among the nodes, and the more fully utilized the node resources.
For example, to reduce unnecessary load migration, load migration may be triggered in the event that the load of a hop node exceeds an average load and exceeds a load threshold.
Load migration is performed only between hop nodes in the same service path; that is, for any service request, the VNFs of that service request may only be migrated among the hop nodes of that request's service path. This keeps the provisioning time within the delay range, ensures that the total delay of the service path remains less than the maximum delay of the service request, and prevents any random load from spreading to other nodes.
The methods provided herein are described above. The apparatus provided in this application is described below:
please refer to fig. 3, which is a schematic structural diagram of a novel internet of things customized service providing device based on fog resources provided in this application, where the device may be deployed in a multi-layer fog architecture network system, the multi-layer fog architecture network system includes an internet of things terminal layer, at least two fog node layers and a cloud core layer, and the at least two fog node layers include a primary fog node layer and a secondary fog node layer, as shown in fig. 3, the novel internet of things customized service providing device based on fog resources may include:
A receiving unit 310, configured to receive a service request sent by an internet of things terminal; the service request carries a delay tolerance and resource requirements, and corresponds to a group of ordered Virtual Network Functions (VNFs); the delay tolerance is used for indicating the maximum tolerated delay;
a determining unit 320, configured to determine a service path of the service request according to the delay tolerance and the resource requirement; the service path takes a main fog node for receiving a service request sent by the terminal of the Internet of things as a source node and takes a secondary fog node as a target node; the total delay of the service path is less than or equal to the maximum tolerant delay; the available resources of the service path meet the resource requirements;
and the mapping unit 330 is configured to map the VNF corresponding to the service request according to the service path.
In some embodiments, the determining unit 320 determines the service path of the service request according to the delay tolerance and the resource requirement, including:
determining an optimal service path according to communication delay between a source node and a target node, processing delay of the target node, instantaneous processing capacity and memory of the target node and available bandwidth of each link in the service path, and taking the minimum delay and load as principles;
And under the condition that the optimal service path meets the time delay and the load requirements, determining the optimal service path as the service path of the service request.
In some embodiments, the determining unit determines the optimal service path according to a communication delay between the source node and the target node, a processing delay of the target node, an instantaneous processing capability and a memory of the target node, and an available bandwidth of each link in the service path, based on a principle of minimizing delay and load, and includes:
the determination of the optimal service path is achieved by the following objective function:
wherein src is the source node, dst is the target node, D(src, dst) is the communication delay between the source node and the target node, D_p(dst) is the processing delay of the target node, C_max(dst) is the maximum instantaneous processing capacity of the target node, M_max(dst) is the maximum memory of the target node, B_max(p) is the maximum available bandwidth of the service path, and ω1 and ω2 are preset weights.
In some embodiments, the determining unit 320 determines the service path of the service request according to the delay tolerance and the resource requirement, including:
the source node is used as a current node, an optimal adjacent node of the current node is determined from adjacent nodes of the current node, the optimal adjacent node is used as a jump node of a service path, and the optimal adjacent node is determined as a new current node; for any node, the adjacent node of the node is other node with the distance of 1 from the node; the best neighbor node is the neighbor node that produces the smallest delay for that node;
Continuously determining the optimal adjacent node of the new current node until the determined optimal adjacent node is a secondary fog node, and determining the secondary fog node as a target node of the service path;
and determining the service path as the service path of the service request under the condition that the determined service path meets the time delay and the load requirement.
In some embodiments, the mapping unit 330 maps, according to the service path, the VNF corresponding to the service request, including:
under the condition that the load of a target node of the service path meets the requirement, mapping the VNF corresponding to the service request to the target node;
the load meeting the requirement includes that the available resources exceed the resource requirements of the service request, or that, in a case where the VNFs corresponding to the service request are mapped, the load does not exceed a preset load threshold.
In some embodiments, the mapping unit 330 maps, according to the service path, the VNF corresponding to the service request, including:
under the condition that the load of a target node of the service path does not meet the requirement, mapping each VNF corresponding to the service request to each hop node in the service path in sequence until each VNF is mapped;
Wherein the load not meeting the requirement includes that the available resources are lower than the resource requirements of the service request, or that, in a case where the VNFs corresponding to the service request are mapped, the load exceeds a preset load threshold.
In some embodiments, as shown in fig. 4, the new internet of things customized service providing apparatus based on fog resources may further include:
a load balancing unit 340, configured to, for any one of the hop nodes, migrate the VNF mapped on the hop node to another hop node with a load lower than the average load in the service path when the hop node meets a load migration condition;
the hop node meets the load migration condition when its load exceeds the average load, or when its load exceeds both the average load and a preset load threshold; the average load is the average of the loads of all hop nodes in the service path.
The embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory is used for storing a computer program; and the processor is used for realizing the novel Internet of things customized service providing method based on the fog resources when executing the program stored in the memory.
Fig. 5 is a schematic hardware structure of an electronic device according to an embodiment of the present application. The electronic device may include a processor 501, a memory 502 storing machine-executable instructions. The processor 501 and the memory 502 may communicate via a system bus 503. And, by reading and executing the machine executable instructions corresponding to the novel internet of things customized service providing logic based on fog resources in the memory 502, the processor 501 can execute the novel internet of things customized service providing method based on fog resources described above.
The memory 502 referred to herein may be any electronic, magnetic, optical, or other physical storage device that may contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., optical disk, DVD, etc.), or a similar storage medium, or a combination thereof.
In some embodiments, there is also provided a machine-readable storage medium, such as memory 502 in fig. 5, having stored therein machine-executable instructions that when executed by a processor implement the novel internet of things customized service providing method based on fog resources described above. For example, the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
The embodiments of the present application also provide a computer program product storing a computer program and when executed by a processor, causing the processor to perform the novel internet of things customized service providing method based on fog resources described above.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. The novel Internet of things customized service providing method based on fog resources is characterized by being applied to a multi-layer fog architecture network system, wherein the multi-layer fog architecture network system comprises an Internet of things terminal layer, at least two fog node layers and a cloud core layer, the at least two fog node layers comprise a main fog node layer and a secondary fog node layer, the main fog node is closer to the Internet of things terminal layer, and the secondary fog node is closer to the cloud core layer; a secondary fog node having more resources than a primary fog node, the method comprising:
receiving a service request sent by an Internet of things terminal; the service request carries delay tolerance and resource requirements, and corresponds to a group of ordered Virtual Network Function (VNF); the delay tolerance is used for indicating the maximum tolerance delay;
determining a service path of the service request according to the delay tolerance and the resource requirement; the service path takes a main fog node for receiving a service request sent by the terminal of the Internet of things as a source node and takes a secondary fog node as a target node; the total delay of the service path is less than or equal to the maximum tolerant delay; the available resources of the service path meet the resource requirements;
Mapping the VNF corresponding to the service request according to the service path;
wherein the mapping the VNF corresponding to the service request according to the service path includes:
mapping the VNF corresponding to the service request to the target node when the load of the target node of the service path meets the requirement;
the load meeting the requirement includes that the available resources exceed the resource requirements of the service request, or that, in a case where the VNFs corresponding to the service request are mapped, the load does not exceed a preset load threshold.
2. The method of claim 1, wherein determining the service path of the service request according to the delay tolerance and the resource requirement comprises:
determining an optimal service path according to communication delay between a source node and a target node, processing delay of the target node, instantaneous processing capacity and memory of the target node and available bandwidth of each link in the service path, and taking the minimum delay and load as principles;
and under the condition that the optimal service path meets the time delay and the load requirements, determining the optimal service path as the service path of the service request.
3. The method of claim 2, wherein determining the optimal service path based on communication delay between the source node and the target node, processing delay of the target node, instantaneous processing power and memory of the target node, and available bandwidth of each link in the service path, based on minimizing delay and load, comprises:
the determination of the optimal service path is achieved by the following objective function:
wherein src is the source node, dst is the target node, D(src, dst) is the communication delay between the source node and the target node, D_p(dst) is the processing delay of the target node, C_max(dst) is the maximum instantaneous processing capacity of the target node, M_max(dst) is the maximum memory of the target node, B_max(p) is the maximum available bandwidth of the service path, and ω1 and ω2 are preset weights.
4. The method of claim 1, wherein determining the service path of the service request according to the delay tolerance and the resource requirement comprises:
the source node is used as a current node, an optimal adjacent node of the current node is determined from adjacent nodes of the current node, the optimal adjacent node is used as a jump node of a service path, and the optimal adjacent node is determined as a new current node; for any node, the adjacent node of the node is other node with the distance of 1 from the node; the best neighbor node is the neighbor node that produces the smallest delay for that node;
Continuously determining the optimal adjacent node of the new current node until the determined optimal adjacent node is a secondary fog node, and determining the secondary fog node as a target node of the service path;
and determining the service path as the service path of the service request under the condition that the determined service path meets the time delay and the load requirement.
5. The method of claim 1, wherein mapping the VNF corresponding to the service request according to the service path further comprises:
under the condition that the load of a target node of the service path does not meet the requirement, mapping each VNF corresponding to the service request to each hop node in the service path in sequence until each VNF is mapped;
wherein the load not meeting the requirement includes that the available resources are lower than the resource requirements of the service request, or that, in a case where the VNFs corresponding to the service request are mapped, the load exceeds a preset load threshold.
6. The method of claim 5, wherein after mapping VNFs corresponding to the service requests to hop nodes in the service path in turn, further comprising:
for any one of the hop nodes, under the condition that the hop node meets the load migration condition, migrating the VNF mapped on the hop node to other hop nodes with loads lower than the average load in the service path;
The hop node meets the load migration condition when its load exceeds the average load, or when its load exceeds both the average load and a preset load threshold; the average load is the average of the loads of all hop nodes in the service path.
7. A novel Internet of things customized service providing device based on fog resources, deployed in a multi-layer fog architecture network system, wherein the multi-layer fog architecture network system comprises an Internet of things terminal layer, at least two fog node layers and a cloud core layer; the at least two fog node layers comprise a main fog node layer and a secondary fog node layer, the main fog node layer is closer to the Internet of things terminal layer, the secondary fog node layer is closer to the cloud core layer, and a secondary fog node has more resources than a main fog node; the device comprises:
a receiving unit, configured to receive a service request sent by an Internet of things terminal; the service request carries a delay tolerance and a resource requirement, and corresponds to a group of ordered virtual network functions (VNFs); the delay tolerance indicates the maximum tolerable delay;
a determining unit, configured to determine a service path of the service request according to the delay tolerance and the resource requirement; the service path takes the main fog node that receives the service request sent by the Internet of things terminal as the source node and takes a secondary fog node as the target node; the total delay of the service path is less than or equal to the maximum tolerable delay, and the available resources of the service path meet the resource requirement; and
a mapping unit, configured to map, according to the service path, the VNFs corresponding to the service request;
wherein the mapping unit mapping the VNFs corresponding to the service request according to the service path comprises:
mapping the VNFs corresponding to the service request to the target node when the load of the target node of the service path meets the requirement;
wherein the load meeting the requirement includes that the available resources exceed the resource requirement of the service request, or that the load does not exceed a preset load threshold after the VNFs corresponding to the service request are mapped.
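As a rough illustration, the target-node admission test of claim 7 could be sketched as follows (the function names, the 0.8 threshold, and the resource units are assumptions, not from the patent):

```python
# Hypothetical sketch of the mapping decision in claim 7: map all VNFs of a
# request onto the target node only when its load "meets the requirement",
# i.e. its available resources exceed the request's resource demand, or its
# load after mapping stays within a preset load threshold.

def load_meets_requirement(available, demand, load_after_mapping,
                           load_threshold=0.8):
    return available > demand or load_after_mapping <= load_threshold

def map_request(target_node, vnfs, available, demand, load_after_mapping):
    """Return the placement {vnf: node} if the target node qualifies."""
    if load_meets_requirement(available, demand, load_after_mapping):
        return {vnf: target_node for vnf in vnfs}
    return None  # the request falls back to per-hop mapping (claim 8)
```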
8. The apparatus of claim 7, wherein the determining unit determining the service path of the service request according to the delay tolerance and the resource requirement comprises:
determining an optimal service path on the principle of minimizing delay and load, according to the communication delay between the source node and the target node, the processing delay of the target node, the instantaneous processing capacity and memory of the target node, and the available bandwidth of each link in the service path; and
determining the optimal service path as the service path of the service request when the optimal service path meets the delay and load requirements;
wherein the determining unit determining the optimal service path on the principle of minimizing delay and load, according to the communication delay between the source node and the target node, the processing delay of the target node, the instantaneous processing capacity and memory of the target node, and the available bandwidth of each link in the service path, comprises:
determining the optimal service path through the following objective function:

min F(dst) = α · (D_comm(src, dst) + D_proc(dst)) + β · (1/C_max(dst) + 1/M_max(dst) + 1/B_max)

wherein src is the source node, dst is the target node, D_comm(src, dst) is the communication delay between the source node and the target node, D_proc(dst) is the processing delay of the target node, C_max(dst) is the maximum instantaneous processing capacity of the target node, M_max(dst) is the maximum memory of the target node, B_max is the maximum available bandwidth of the service path, and α and β are preset weights;
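The original formula images are not reproduced in the text, so the exact functional form is unrecoverable; under the assumption that the delay terms are weighted by one preset weight and reciprocal capacity/memory/bandwidth terms by the other, an evaluation could look like:

```python
# Hypothetical evaluation of the objective described above: delay terms
# (communication + processing) are weighted by alpha, and load terms --
# modeled here as reciprocals of the target node's maximum instantaneous
# processing capacity, memory, and path bandwidth -- are weighted by beta.
# This functional form is one plausible reading, not the patent's formula.

def objective(d_comm, d_proc, c_max, m_max, b_max, alpha=0.5, beta=0.5):
    delay = d_comm + d_proc
    load = 1.0 / c_max + 1.0 / m_max + 1.0 / b_max
    return alpha * delay + beta * load

def best_target(candidates):
    """Pick the candidate target node minimizing the objective."""
    return min(candidates, key=lambda c: objective(**c["metrics"]))

nodes = [
    {"name": "fog_a",
     "metrics": dict(d_comm=4.0, d_proc=2.0, c_max=8.0, m_max=16.0, b_max=100.0)},
    {"name": "fog_b",
     "metrics": dict(d_comm=1.0, d_proc=1.0, c_max=4.0, m_max=8.0, b_max=50.0)},
]
chosen = best_target(nodes)  # fog_b: its much lower delay dominates
```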
and/or,
wherein the determining unit determining the service path of the service request according to the delay tolerance and the resource requirement comprises:
taking the source node as the current node, determining the best adjacent node of the current node from the adjacent nodes of the current node, taking the best adjacent node as a hop node of the service path, and taking the best adjacent node as the new current node; wherein, for any node, the adjacent nodes of the node are the other nodes at a distance of 1 from the node, and the best adjacent node is the adjacent node that produces the smallest delay for the node;
continuing to determine the best adjacent node of each new current node until the determined best adjacent node is a secondary fog node, and determining that secondary fog node as the target node of the service path; and
determining the determined path as the service path of the service request when the determined path meets the delay and load requirements;
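The hop-by-hop construction above can be sketched as a greedy walk (a hypothetical sketch; the visited-set safeguard against revisiting nodes is an addition not stated in the claim):

```python
# Hypothetical sketch of the path construction: starting from the source
# node, repeatedly move to the adjacent node (distance 1) with the smallest
# delay, until a secondary fog node is reached; that node becomes the
# target node of the service path.

def build_service_path(source, neighbors, delay, is_secondary):
    """neighbors: node -> iterable of adjacent nodes;
    delay: (node, neighbor) -> delay; is_secondary: node -> bool.
    Returns the list of nodes from source to target, or None."""
    path = [source]
    visited = {source}  # safeguard (assumption): never revisit a node
    current = source
    while not is_secondary(current):
        candidates = [n for n in neighbors[current] if n not in visited]
        if not candidates:
            return None  # no feasible continuation from this node
        current = min(candidates, key=lambda n: delay(current, n))
        path.append(current)
        visited.add(current)
    return path
```

For example, with main fog nodes m1, m2, m3 and secondary fog node s1, the walk from m1 picks the lowest-delay neighbor at each hop and stops at s1.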
and/or,
wherein the mapping unit mapping the VNFs corresponding to the service request according to the service path further comprises:
mapping each VNF corresponding to the service request to the hop nodes in the service path in turn, until every VNF is mapped, when the load of the target node of the service path does not meet the requirement;
wherein the load not meeting the requirement includes that the available resources are lower than the resource requirement of the service request, or that the load exceeds a preset load threshold after the VNFs corresponding to the service request are mapped;
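A minimal sketch of this fallback (the round-robin cycling when there are more VNFs than hop nodes is an assumption, not stated in the claim):

```python
# Hypothetical sketch of the fallback in claim 8: when the target node's
# load does not meet the requirement, the request's ordered VNFs are mapped
# one by one onto successive hop nodes of the service path until all are
# placed.

def map_vnfs_over_path(vnfs, hop_nodes):
    """Map the ordered VNFs to hop nodes in turn; returns {vnf: node}."""
    if not hop_nodes:
        raise ValueError("service path has no hop nodes")
    # Cycle back to the first hop when VNFs outnumber hop nodes (assumption).
    return {vnf: hop_nodes[i % len(hop_nodes)] for i, vnf in enumerate(vnfs)}
```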
wherein the apparatus further comprises:
a load balancing unit, configured to, for any one of the hop nodes, migrate the VNF mapped on the hop node to another hop node in the service path whose load is lower than the average load when the hop node meets a load migration condition;
wherein the hop node meets the load migration condition when the load of the hop node exceeds the average load, or when the load of the hop node exceeds the average load and exceeds a preset load threshold; the average load is the average value of the loads of all hop nodes in the service path.
9. An electronic device comprising a processor and a memory, wherein,
a memory for storing a computer program;
a processor, configured to implement the method of any one of claims 1 to 6 when executing the program stored in the memory.
CN202410015996.7A 2024-01-02 2024-01-02 Novel Internet of things customized service providing method and device based on fog resources Active CN117544513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410015996.7A CN117544513B (en) 2024-01-02 2024-01-02 Novel Internet of things customized service providing method and device based on fog resources


Publications (2)

Publication Number Publication Date
CN117544513A CN117544513A (en) 2024-02-09
CN117544513B (en) 2024-04-02

Family

ID=89790327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410015996.7A Active CN117544513B (en) 2024-01-02 2024-01-02 Novel Internet of things customized service providing method and device based on fog resources

Country Status (1)

Country Link
CN (1) CN117544513B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101754316A (en) * 2008-12-12 2010-06-23 上海电机学院 QoS energy-saving routing method based on maximization network life cycle
CN110830292A (en) * 2019-11-01 2020-02-21 西安电子科技大学 Medical big data-oriented cloud and mist mixed path determination method
AU2020101430A4 (en) * 2020-07-21 2020-08-20 D. J, Joel Devadass Daniel MR Low delay communication between cyber physical systems of iot applications using fog nodes
CN111614754A (en) * 2020-05-20 2020-09-01 重庆邮电大学 Fog-calculation-oriented cost-efficiency optimized dynamic self-adaptive task scheduling method
CN111641973A (en) * 2020-05-29 2020-09-08 重庆邮电大学 Load balancing method based on fog node cooperation in fog computing network
CN114584627A (en) * 2022-05-09 2022-06-03 广州天越通信技术发展有限公司 Middle station dispatching system and method with network monitoring function
CN114637552A (en) * 2022-03-09 2022-06-17 天津理工大学 Fuzzy logic strategy-based fog computing task unloading method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10243878B2 (en) * 2016-06-16 2019-03-26 Cisco Technology, Inc. Fog computing network resource partitioning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Joint resource allocation algorithm for the Internet of Vehicles based on hybrid cloud-fog computing; Tang Lun; Xiao Jiao; Wei Yannan; Zhao Guofan; Chen Qianbin; Journal of Electronics &amp; Information Technology; 2020-08-15 (No. 08); full text *


Similar Documents

Publication Publication Date Title
US11842207B2 (en) Centralized networking configuration in distributed systems
CN107770096B (en) SDN/NFV network dynamic resource allocation method based on load balancing
US10341208B2 (en) File block placement in a distributed network
US10257266B2 (en) Location of actor resources
US9584369B2 (en) Methods of representing software defined networking-based multiple layer network topology views
US11463511B2 (en) Model-based load balancing for network data plane
US9104492B2 (en) Cloud-based middlebox management system
US8087025B1 (en) Workload placement among resource-on-demand systems
EP2710470B1 (en) Extensible centralized dynamic resource distribution in a clustered data grid
US10660069B2 (en) Resource allocation device and resource allocation method
US8862775B2 (en) Network server and load balancing routing method for networks thereof
CN105515977B (en) Method, device and system for acquiring transmission path in network
Botero et al. A novel paths algebra-based strategy to flexibly solve the link mapping stage of VNE problems
EP3066569A1 (en) Centralized networking configuration in distributed systems
US20220318071A1 (en) Load balancing method and related device
Rankothge et al. On the scaling of virtualized network functions
Elsharkawey et al. Mlrts: multi-level real-time scheduling algorithm for load balancing in fog computing environment
CN117149445B (en) Cross-cluster load balancing method and device, equipment and storage medium
CN113259175B (en) Security service and function service combined arrangement method in edge computing environment
CN110888734A (en) Fog computing resource processing method and device, electronic equipment and storage medium
Masoumi et al. Dynamic online VNF placement with different protection schemes in a MEC environment
CN111405614B (en) Method for calculating APP load sharing at mobile edge
CN117544513B (en) Novel Internet of things customized service providing method and device based on fog resources
CN115361332B (en) Fault-tolerant route processing method and device, processor and electronic equipment
CN114866544B (en) CPU heterogeneous cluster-oriented containerized micro-service load balancing method in cloud edge environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant