CN113438678B - Method and device for distributing cloud resources for network slices - Google Patents

Method and device for distributing cloud resources for network slices Download PDF

Info

Publication number
CN113438678B
CN113438678B CN202110762144.0A CN202110762144A CN113438678B CN 113438678 B CN113438678 B CN 113438678B CN 202110762144 A CN202110762144 A CN 202110762144A CN 113438678 B CN113438678 B CN 113438678B
Authority
CN
China
Prior art keywords
cloud node
cloud
node
edge
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110762144.0A
Other languages
Chinese (zh)
Other versions
CN113438678A (en
Inventor
陈涛
白龙
尹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
China Unicom Online Information Technology Co Ltd
Original Assignee
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
China Unicom Online Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liantong Shike Beijing Information Technology Co ltd, China United Network Communications Group Co Ltd, China Unicom Online Information Technology Co Ltd filed Critical Liantong Shike Beijing Information Technology Co ltd
Priority to CN202110762144.0A priority Critical patent/CN113438678B/en
Publication of CN113438678A publication Critical patent/CN113438678A/en
Application granted granted Critical
Publication of CN113438678B publication Critical patent/CN113438678B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/08Testing, supervising or monitoring using real traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]

Abstract

The application provides a method and a device for distributing cloud resources for a network slice. According to the method, the central cloud node and the edge cloud nodes in the cloud resources are reasonably distributed to each virtual network in the network slice according to the communication delay between each edge cloud node and the central cloud node, the data processing delay between each edge cloud node and the central cloud node and the computing capacity, balance between the computing efficiency and delay of each virtual network when the virtual network runs on the distributed cloud nodes can be guaranteed, and the utilization rate of the cloud resources can be improved.

Description

Method and device for distributing cloud resources for network slices
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for allocating cloud resources to a network slice.
Background
To satisfy 5G (5) th generation, 5G) communication network has a high transmission rate, a low delay, and a high network performance, and the communication technology for deploying network slices in a network infrastructure on which 5G depends is proposed in the communication field.
Currently, in a 5G communication network, available cloud resources are dynamically distributed to network slices mainly according to instantaneous user demands and the current load of the network.
A network slice consists of a series of virtual networks, and the virtual networks on each network slice can be split. Therefore, the communication field further proposes that each virtual network in the network slice can be deployed to a suitable cloud unit, that is, cloud resources can be allocated to the virtual network of the network slice by taking the virtual network of the network slice as a unit, so as to improve the utilization rate of the cloud resources. The field of communications does not specifically suggest how to allocate cloud resources for a virtual network of a network slice.
Therefore, how to allocate cloud resources to a virtual network of a network slice becomes a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides a method and a device for allocating cloud resources to a network slice, which fully consider the delay requirement of a virtual network, the requirement of computing resources and the distribution condition of nodes, determine the optimal scheme of virtual network deployment, realize the balance between delay and computing efficiency, fully utilize the cloud resources and improve the utilization rate of the cloud resources.
In a first aspect, the present application provides a method for allocating cloud resources to a network slice, where the network slice includes a plurality of virtual networks, and the cloud resources include a center cloud node and a plurality of edge cloud nodes, the method including: obtaining a first communication delay between each edge cloud node of the plurality of edge cloud nodes and the central cloud node; acquiring first data processing time delay when each virtual network in the plurality of virtual networks is deployed at each edge cloud node; acquiring a second data processing time delay of each virtual network in the plurality of virtual networks when the virtual network is deployed at the central cloud node; determining a target cloud node allocated to each virtual network according to the first communication delay, the first data processing delay, the second data processing delay, the computing capacity of each edge cloud node and the computing capacity of the center cloud node, wherein the target cloud node is a cloud node in the cloud resources, and when each virtual network in the virtual networks runs on the corresponding target cloud node, the total computing resources required by the virtual networks are minimum.
With reference to the first aspect, in a first possible implementation manner, the total computing resources required by the multiple virtual networks satisfy the following relation:
Figure BDA0003149405360000021
Figure BDA0003149405360000022
Figure BDA0003149405360000023
Figure BDA0003149405360000024
wherein l a Representing an a-th link in a link set A including links between each edge node and the central node, I representing the plurality of virtual networks, I representing an ith virtual network in the plurality of virtual networks,
Figure BDA0003149405360000025
represents l a The computing rate of the edge cloud node of (a),
Figure BDA0003149405360000026
representing the computing rate of the central cloud node, E representing the sum of the maximum computing capacity which can be carried by the plurality of edge cloud nodes, F representing the sum of the maximum computing capacity which can be carried by the central cloud node,
Figure BDA0003149405360000027
is represented by a The first data processing delay corresponding to the edge cloud node of (1),
Figure BDA0003149405360000028
representing the second data processing latency corresponding to the central cloud node,
Figure BDA0003149405360000029
is represented by a Corresponding to the first communication delay, beta is a preset value,
Figure BDA00031494053600000210
is a value of 1 or 0,
Figure BDA00031494053600000211
representing the deployment of the ith virtual network at a On the edge cloud node of (1) and (2),
Figure BDA00031494053600000212
representing that the ith virtual network is deployed on the central cloud node.
In the method, the central cloud node and the plurality of edge cloud nodes distributed for the virtual network are determined by obtaining the communication delay, the data processing delay and the node computing capacity between each edge cloud node and the central cloud node in the plurality of edge cloud nodes, so that the delay limit of the existing mobile communication system is improved, the balance between the computing efficiency and the transmission delay is realized, and the utilization rate of cloud resources is improved.
With reference to the first aspect, in a second possible implementation manner, the obtaining a first communication delay between each edge cloud node of the multiple edge cloud nodes and the center cloud node includes: acquiring a first distance between each edge cloud node in the plurality of edge cloud nodes and the center cloud node; calculating the first communication delay as a function of the first distance, the first communication delay being equal to a ratio of the first distance to a transmission rate of a fiber link between the each edge cloud node and the center cloud node.
With reference to the first aspect, in a third possible implementation manner, the method further includes: and calculating the computing capacity of each cloud node according to the actual computing rate of each cloud node in the cloud resources and the maximum bearable computing amount, wherein the actual computing rate of each cloud node is determined by the computing rate of the floating point operation of each cloud node and the frequency of a CPU (central processing unit), and the maximum bearable computing amount of each cloud node is determined by the computing resources occupied by each cloud node.
With reference to the first aspect or any one of the foregoing possible implementation manners, in a fourth possible implementation manner, the determining, according to the first communication delay, the first data processing delay, and the second data processing delay, a target cloud node allocated to each virtual network includes: and determining the target cloud nodes distributed to each virtual network according to the first communication delay, the first data processing delay and the second data processing delay by using a heuristic algorithm.
In a second aspect, the present application provides an apparatus for allocating cloud resources to a network slice, where the network slice includes a plurality of virtual networks, and the cloud resources include a center cloud node and a plurality of edge cloud nodes, the apparatus including: an obtaining module, configured to obtain a first communication delay between each edge cloud node of the plurality of edge cloud nodes and the center cloud node; the obtaining module is further configured to obtain a first data processing delay of each virtual network in the multiple virtual networks when the virtual network is deployed at each edge cloud node; the obtaining module is further configured to obtain a second data processing delay of each of the multiple virtual networks when the virtual network is deployed at the central cloud node; a determining module, configured to determine, according to the first communication delay, the first data processing delay, the second data processing delay, the computing capability of each edge cloud node, and the computing capability of the center cloud node, a target cloud node allocated to each virtual network, where the target cloud node is a cloud node in the cloud resources, and when each virtual network in the multiple virtual networks runs on the corresponding target cloud node, a total computing resource required by the multiple virtual networks is minimum.
With reference to the second aspect, in a first possible implementation manner, the total computing resources required by the multiple virtual networks satisfy the following relation:
Figure BDA0003149405360000031
Figure BDA0003149405360000032
Figure BDA0003149405360000033
Figure BDA0003149405360000034
wherein l a Representing an a-th link in a link set A, the link set A including links between each edge node and the central node, I representing the plurality of virtual networks, I representing an ith virtual network in the plurality of virtual networks,
Figure BDA0003149405360000041
represents l a The computing rate of the edge cloud node of (c),
Figure BDA0003149405360000042
representing the computing rate of the central cloud node, E representing the sum of the maximum computing capacity which can be carried by the plurality of edge cloud nodes, F representing the sum of the maximum computing capacity which can be carried by the central cloud node,
Figure BDA0003149405360000043
represents l a The first data processing delay corresponding to the edge cloud node of (a),
Figure BDA0003149405360000044
representing the second data processing latency corresponding to the central cloud node,
Figure BDA0003149405360000045
is represented by a Corresponding to the first communication delay, beta is a preset value,
Figure BDA0003149405360000046
is a value of 1 or 0,
Figure BDA0003149405360000047
representing the deployment of the ith virtual network at a On the edge cloud node of (1) and (2),
Figure BDA0003149405360000048
representing that the ith virtual network is deployed on the central cloud node.
With reference to the second aspect, in a second possible implementation manner, the obtaining module is specifically configured to: obtaining a first distance between each edge cloud node of the plurality of edge cloud nodes and the center cloud node; calculating the first communication delay as a function of the first distance, the first communication delay being equal to a ratio of the first distance to a transmission rate of a fiber link between the each edge cloud node and the center cloud node.
With reference to the second aspect, in a third possible implementation manner, the apparatus further includes a computing module, where the computing module is configured to compute a computing capacity of each cloud node according to an actual computing rate of each cloud node in the cloud resources and a maximum bearable computing amount, the actual computing rate of each cloud node is determined by a computing rate of a floating point operation of each cloud node and a frequency of a CPU, and the maximum bearable computing amount of each cloud node is determined by computing resources occupied by each cloud node.
With reference to the second aspect or any one of the foregoing possible implementation manners, in a fourth possible implementation manner, the determining module is specifically configured to: and the determining module determines the target cloud nodes distributed to each virtual network according to the first communication delay, the first data processing delay and the second data processing delay by using a heuristic algorithm.
In a third aspect, the present application provides an apparatus for allocating cloud resources for a network slice, including: a plurality of memories and a plurality of processors; the memory is to store program instructions; the processor is configured to invoke program instructions in the memory to perform a method according to the first aspect or any one of its possible implementations.
When the system is a computing device, in some implementations, the system can also include a transceiver or a communication interface for communicating with other devices.
Where the system is a chip for a computing device, in some implementations the system may also include a communication interface for communicating with other apparatus in the computing device, such as for communicating with a transceiver of the computing device.
In a fourth aspect, the present application provides a computer-readable medium storing program code for execution by a computer, the program code comprising instructions for performing the method according to the first aspect or any one of its possible implementations.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on a processor, cause the processor to carry out the method of the first aspect or any one of its implementations.
Drawings
Fig. 1 is a schematic diagram of cloud resource distribution according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for allocating cloud resources for a network slice according to an embodiment of the present application;
fig. 3 is a schematic diagram of an apparatus for allocating cloud resources for a network slice according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of allocating cloud resources for a network slice according to an embodiment of the present application.
Detailed Description
For the sake of understanding, the relevant terms referred to in this application will first be described.
1. Network slicing
The network slice is a networking mode according to needs, an operator can separate a plurality of virtual end-to-end networks on a unified infrastructure, and each network slice is logically isolated from a wireless access network bearer network to a core network so as to adapt to various types of applications. In one network slice, at least three parts of a wireless network sub-slice, a bearer network sub-slice and a core network sub-slice can be divided.
Network Functions Virtualization (NFV) is a core of a network slicing technology, and the NFV separates hardware and software parts from a conventional network, the hardware is deployed by a unified server, and the software is born by different Network Functions (NF), so that a demand for flexibly assembling services is met.
Network slicing is a logic-based concept, which is the reorganization of resources, i.e., virtual machines and physical resources required for a specific communication service type are selected according to a Service Level Agreement (SLA).
2. Edge clouds
The currently widely accepted definition of cloud computing is: cloud computing is a model for provisioning and managing scalable, elastic, shared pools of physical and virtual resources in an on-demand, self-service manner, and providing network access. The cloud computing mode comprises key features, cloud computing roles and activities, capability types and cloud service categories, a cloud deployment model and cloud computing common concerns. At present, the concept of cloud computing is provided based on centralized resource management and control, and all software and hardware resources are still regarded as unified resources to be managed, scheduled and sold even if a plurality of data center interconnection and intercommunication forms are adopted. With the arrival of the age of 5G and the internet of things and the gradual increase of cloud computing applications, centralized clouds cannot meet the cloud resource requirements of large connection, low time delay and large bandwidth on the terminal side. By combining the concept of edge computing, cloud computing is inevitably developed to the next technical stage, namely, the capability of cloud computing is expanded to an edge side closer to a terminal, the cloud computing service is sunk through the unified management and control of the cloud edge side, end-to-end cloud service is provided, and the concept of edge cloud computing is generated.
Edge cloud computing, edge cloud for short, is a cloud computing platform constructed on an edge infrastructure based on the core of cloud computing technology and the capability of edge computing. An elastic cloud platform with comprehensive computing, network, storage, safety and other capabilities at the edge position is formed, an end-to-end technical framework of 'cloud edge end three-body cooperation' is formed with a central cloud and an internet of things terminal, and by putting network forwarding, storage, computing, intelligent data analysis and other works on the edge for processing, response delay is reduced, cloud pressure is relieved, bandwidth cost is reduced, and cloud services such as whole network scheduling and computing power distribution are provided. The infrastructure of edge cloud computing includes, but is not limited to: distributed IDC, operator communication network edge infrastructure, edge devices such as edge side customer nodes (e.g., edge gateways, home gateways, etc.), and their corresponding network environments.
Edge cloud computing is essentially based on cloud computing technology, and provides distributed cloud services with low time delay, self-organization, definable, schedulable, high security and open standards for terminals interconnected by everything. The edge cloud and the central cloud can adopt a unified architecture, a unified interface and unified management to the greatest extent, so that the development, operation and maintenance cost of a user can be reduced to the greatest extent, the cloud computing scope is really expanded to a place closer to a data source, and the defects of the cloud computing of the traditional architecture in certain application scenes are overcome.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of cloud resource distribution according to an embodiment of the present application. As shown in fig. 1, one central cloud may extend multiple edge clouds, the central cloud having storage, computing, networking, artificial Intelligence (AI), big data, and security functions, and the edge clouds serving as extensions of the central cloud to extend part of the cloud services or capabilities (including but not limited to storage, computing, networking, AI, big data, security, etc.) onto the edge infrastructure, including but not limited to: edge devices such as a distributed Internet Data Center (IDC), an operator communication network edge infrastructure, and edge side customer nodes (e.g., edge gateways, home gateways, etc.), and their corresponding network environments. The central cloud and the edge cloud are mutually matched, so that the capabilities of central and edge cooperation, whole-network computing power scheduling, whole-network unified management and control and the like are realized, and ubiquitous cloud is really realized.
The edge cloud is also called edge cloud computing, is essentially based on a cloud computing technology, and provides distributed cloud services with low time delay, self-organization, definable, schedulable, high security and open standards for terminals interconnected by everything. The edge cloud and the central cloud can adopt a unified architecture, a unified interface and unified management to the greatest extent, so that the development, operation and maintenance cost of a user can be reduced to the greatest extent, the cloud computing scope is really expanded to a place closer to a data source, and the defects of the cloud computing of the traditional architecture in certain application scenes are overcome.
In the embodiment of the present application, a node in a center cloud is referred to as a center cloud node, and a node in an edge cloud is referred to as an edge cloud node.
Fig. 2 is a flowchart of a method for allocating cloud resources for a network slice according to an embodiment of the present application. As shown in fig. 2, the method may include S201, S202, S203, and S204.
S201, obtaining a first communication delay between each edge cloud node of the edge cloud nodes and the center cloud node.
As an example, the distribution positions of the edge cloud nodes and the center cloud node may be obtained, where the distribution positions include the distances between the nodes and the spatial positions; then determining a first distance between each edge cloud node of the plurality of edge cloud nodes and the center cloud node according to the acquired distribution position; calculating a first communication delay from the derived first distance, the first communication delay being equal to a ratio of the first distance to a transmission rate of the optical fiber link between each of the edge cloud nodes and the center cloud node.
Optionally, in this embodiment, a bitmap between the edge cloud node and the center cloud node may be simulated and drawn according to the obtained distribution positions of the edge cloud node and the center cloud node, and after the first communication delay is obtained through calculation, the first communication delay may be marked on the drawn bitmap, so that the data is more intuitive.
S202, acquiring first data processing time delay when each virtual network in the plurality of virtual networks is deployed at each edge cloud node.
As an example, the first data processing delay of each virtual network when deployed at each edge cloud node may be determined according to the resource requirement of each virtual network and the resource situation of each edge cloud node.
S203, acquiring a second data processing time delay of each virtual network in the plurality of virtual networks when the virtual network is deployed at the central cloud node.
As an example, the second data processing delay of each virtual network when deployed at the central cloud node may be determined according to the resource requirement of each virtual network and the resource condition of the central cloud node.
S204, determining a target cloud node allocated to each virtual network according to the first communication delay, the first data processing delay, the second data processing delay, the computing capacity of each edge cloud node and the computing capacity of the center cloud node, wherein the target cloud node is a cloud node in the cloud resources.
In some implementations, the cloud resources include a central cloud node and a plurality of edge cloud nodes, and the computing capacity of each cloud node is calculated according to the actual computing rate of each cloud node in the cloud resources and the maximum bearable computing amount, wherein the actual computing rate of each cloud node is determined by the computing rate of the floating point operation of each cloud node and the frequency of the CPU; the maximum bearable computing amount of each cloud node is determined by computing resources occupied by each cloud node, for example, if the computing resources are more, the bearable computing amount is large, the computing amount can be carried out in a mode of carrying out pressure test on the node, namely, in a mode of continuously pressurizing, and the maximum bearable pressure of the node is determined to be the maximum computing amount under the condition that the node meets the performance condition of the node.
In some implementation manners of this embodiment, a target cloud node, that is, a central cloud node and a plurality of edge cloud nodes, allocated to each virtual network is determined according to the first communication delay, the first data processing delay, the second data processing delay, the computing capability of each edge cloud node, and the computing capability of the central cloud node, and when each virtual network in the plurality of virtual networks runs on the corresponding target cloud node, total computing resources required by the plurality of virtual networks may satisfy the following exemplary relationship:
Figure BDA0003149405360000081
Figure BDA0003149405360000082
Figure BDA0003149405360000083
Figure BDA0003149405360000084
wherein l a Representing an a-th link in a link set A including links between each edge node and the central node, I representing the plurality of virtual networks, I representing an ith virtual network in the plurality of virtual networks,
Figure BDA0003149405360000085
represents l a The computing rate of the edge cloud node of (a),
Figure BDA0003149405360000086
representing the computing rate of the central cloud node, E representing the sum of the maximum computing capacity which can be carried by the plurality of edge cloud nodes, F representing the sum of the maximum computing capacity which can be carried by the central cloud node,
Figure BDA0003149405360000087
is represented by a The first data processing delay corresponding to the edge cloud node of (1),
Figure BDA0003149405360000088
representing the second data processing latency corresponding to the central cloud node,
Figure BDA0003149405360000089
represents l a Corresponding to the first communication delay, beta is a preset value,
Figure BDA00031494053600000810
is a value of 1 or 0,
Figure BDA00031494053600000811
representing the deployment of the ith virtual network at a On the edge cloud node of (1) and (2),
Figure BDA00031494053600000812
representing that the ith virtual network is deployed on the central cloud node.
In the present embodiment of the present invention,
Figure BDA00031494053600000813
which may be referred to as an objective function,
Figure BDA0003149405360000091
a resource allocation constraint function that may be referred to as an edge cloud node,
Figure BDA0003149405360000092
the resource allocation preset function can be called as a central cloud node.
That is to say, in the method of this embodiment, when at least one of the resource allocation constraint function and the delay constraint function of the plurality of cloud nodes and the constraint function corresponding to the computing capability of the cloud node is satisfied, the optimal solver is used to determine the optimal deployment scheme of the virtual network that minimizes the value of the objective function.
Optionally, the optimal solver may be based on a particle swarm algorithm, a heuristic algorithm, a genetic algorithm, and the like, for example, when the heuristic algorithm is specifically used for solving, under the condition that constraint conditions are met, the virtual network is deployed on some of the cloud nodes, and is regarded as an optimal scheme, where the cloud nodes may be selected after being sorted in an ascending order or a descending order according to the delay time of the cloud nodes.
In some implementation manners, objective function values of total calculation resources required by a plurality of virtual networks in the scheme are regarded as optimal values, all possible schemes are traversed, objective function values of other possible schemes are calculated one by one, if the objective function value of one scheme is lower than the optimal value, the scheme is used for replacing the previously determined optimal scheme, and the traversal is terminated until the traversal time reaches preset time; and deploying the plurality of virtual networks by using the determined optimal deployment scheme, namely segmenting the virtual networks to each edge node as much as possible, so that the available computing resources are optimally utilized.
In the embodiment, the central cloud node and the plurality of edge cloud nodes distributed for the virtual network are determined by obtaining the communication delay, the data processing delay and the computing capacity of the nodes between the central cloud node and each edge cloud node, so that the delay limit of the existing mobile communication system is improved, the balance between the computing efficiency and the transmission delay is realized, and the utilization rate of cloud resources is improved.
As an implementation manner, in this embodiment of the application, the second communication delay between each edge cloud node in the multiple edge cloud nodes and each other edge cloud node in the multiple edge cloud nodes may also be obtained.
As an example, the distribution positions of the edge cloud nodes and other edge cloud nodes may be obtained, where the distribution positions include distances and spatial positions between the nodes; obtaining a second distance between each edge cloud node in the plurality of edge cloud nodes and each other edge cloud node in the plurality of edge cloud nodes according to the obtained distribution position; and calculating a second communication delay according to the obtained second distance, wherein the second communication delay is equal to the ratio of the second distance to the transmission rate of the optical fiber link between each edge cloud node and each other edge cloud node.
Optionally, in this embodiment, a bitmap between the edge cloud node and the other edge cloud nodes may be simulated and drawn according to the obtained distribution positions of the edge cloud node and the other cloud edge nodes, and after the second communication delay is obtained through calculation, the second communication delay may be marked on the drawn bitmap, so that the data is more intuitive.
In this implementation, the link set may further include links between each edge cloud node and each other edge cloud node.
Fig. 3 is a schematic diagram of an apparatus for allocating cloud resources for a network slice according to an embodiment of the present application. The apparatus shown in fig. 3 may be used to perform the method described in any of the previous embodiments. As shown in fig. 3, the apparatus 300 of the present embodiment may include: an acquisition module 301, a determination module 302 and a calculation module 303.
In one example, the apparatus 300 may be configured to perform the method described in fig. 2. For example, the obtaining module 301 may be configured to perform S201, S202, and S203, the determining module 302 may be configured to perform S204, and the calculating module 303 may be configured to perform S201 and S204.
Fig. 4 is a schematic structural diagram of allocating cloud resources for a network slice according to an embodiment of the present application. The apparatus shown in fig. 4 may be used to perform the method described in any of the previous embodiments.
As shown in fig. 4, the apparatus 400 of the present embodiment includes: memory 401, processor 402, communication interface 403, and bus 404. The memory 401, the processor 402 and the communication interface 403 are connected to each other by a bus 404.
The memory 401 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 401 may store a program and the processor 402 is adapted to perform the steps of the method shown in fig. 2 when the program stored in the memory 401 is executed by the processor 402.
The processor 402 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the methods of the embodiments of the present application.
The processor 402 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method of the embodiments of the present application may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 402.
The processor 402 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory 401, and a processor 402 reads information in the memory 401, and performs functions required by units included in the apparatus according to the present application in combination with hardware thereof, for example, to perform various steps/functions of the embodiment shown in fig. 2.
The communication interface 403 may enable communication between the apparatus 400 and other devices or communication networks using, but not limited to, a transceiver device such as a transceiver.
Bus 404 may include a path that transfers information between various components of apparatus 400 (e.g., memory 401, processor 402, communication interface 403).
It should be understood that the apparatus 400 shown in the embodiment of the present application may be a computing device, or may also be a chip configured in a computing device.
It will also be appreciated that the memory in the embodiments of the subject application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of Random Access Memory (RAM) are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct bus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. The procedures or functions described in accordance with the embodiments of the present application are produced in whole or in part when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more collections of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with particular reference to the former and latter text.
In this application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of allocating cloud resources for a network slice, the network slice comprising a plurality of virtual networks, the cloud resources comprising a central cloud node and a plurality of edge cloud nodes, the method comprising:
obtaining a first communication delay between each edge cloud node of the plurality of edge cloud nodes and the center cloud node;
acquiring first data processing time delay when each virtual network in the plurality of virtual networks is deployed at each edge cloud node;
acquiring a second data processing time delay when each virtual network in the plurality of virtual networks is deployed at the central cloud node;
determining a target cloud node allocated to each virtual network according to the first communication delay, the first data processing delay, the second data processing delay, the computing capacity of each edge cloud node and the computing capacity of the center cloud node, wherein the target cloud node is a cloud node in the cloud resources, and when each virtual network in the plurality of virtual networks runs on the corresponding target cloud node, the total computing resources required by the plurality of virtual networks are minimum;
the total computing resources required by the plurality of virtual networks satisfy the following relation:
Figure FDA0003851826470000011
Figure FDA0003851826470000012
Figure FDA0003851826470000013
Figure FDA0003851826470000014
wherein l a Representing an a-th link in a link set A, the link set A including a link between each edge cloud node and the center cloud node, I representing the plurality of virtual networks, I representing an ith virtual network in the plurality of virtual networks,
Figure FDA0003851826470000015
is represented by a The computing rate of the edge cloud node of (a),
Figure FDA0003851826470000016
representing the computing rate of the central cloud node, E representing the sum of the maximum computing capacity which can be carried by the plurality of edge cloud nodes, F representing the sum of the maximum computing capacity which can be carried by the central cloud node,
Figure FDA0003851826470000017
represents the said l a The first data processing delay corresponding to the edge cloud node of (1),
Figure FDA0003851826470000018
representing the second data processing latency corresponding to the central cloud node,
Figure FDA0003851826470000019
is represented by a Corresponding to the first communication delay, beta is a preset value,
Figure FDA00038518264700000110
is a value of 1 or 0,
Figure FDA00038518264700000111
representing the i-th virtual network deployment at a On the edge cloud node of (1) a,
Figure FDA00038518264700000112
indicating that the ith virtual network is deployed on the central cloud node, and S indicates total computing resources required by the plurality of virtual networks.
2. The method of claim 1, wherein obtaining the first communication delay between each edge cloud node of the plurality of edge cloud nodes and the central cloud node comprises:
obtaining a first distance between each edge cloud node of the plurality of edge cloud nodes and the center cloud node;
calculating the first communication delay as a function of the first distance, the first communication delay being equal to a ratio of the first distance to a transmission rate of a fiber link between each of the edge cloud nodes and the center cloud node.
3. The method according to any one of claims 1 to 2, wherein the determining the target cloud node allocated to each virtual network according to the first communication delay, the first data processing delay and the second data processing delay comprises:
and determining a target cloud node distributed to each virtual network according to the first communication delay, the first data processing delay and the second data processing delay by using a heuristic algorithm.
4. An apparatus for allocating cloud resources for a network slice, the network slice comprising a plurality of virtual networks, the cloud resources comprising a central cloud node and a plurality of edge cloud nodes, the apparatus comprising:
an obtaining module, configured to obtain a first communication delay between each edge cloud node of the plurality of edge cloud nodes and the center cloud node;
the obtaining module is further configured to obtain a first data processing delay of each virtual network in the multiple virtual networks when the virtual network is deployed at each edge cloud node;
the obtaining module is further configured to obtain a second data processing delay of each of the multiple virtual networks when the virtual network is deployed at the central cloud node;
a determining module, configured to determine, according to the first communication delay, the first data processing delay, the second data processing delay, the computing capability of each edge cloud node, and the computing capability of the center cloud node, a target cloud node allocated to each virtual network, where the target cloud node is a cloud node in the cloud resources, and when each virtual network in the multiple virtual networks runs on the corresponding target cloud node, a total computing resource required by the multiple virtual networks is minimum;
the total computing resources required by the plurality of virtual networks satisfy the following relation:
Figure FDA0003851826470000021
Figure FDA0003851826470000022
Figure FDA0003851826470000031
Figure FDA0003851826470000032
wherein l a Representing an a-th link in a link set A, wherein the link set A comprises links between each edge cloud node and the center cloud node, I represents the multiple virtual networks, I represents the ith virtual network in the multiple virtual networks,
Figure FDA0003851826470000033
represents l a The computing rate of the edge cloud node of (c),
Figure FDA0003851826470000034
representing the computing rate of the central cloud node, E representing the sum of the maximum computing capacity which can be carried by the plurality of edge cloud nodes, F representing the sum of the maximum computing capacity which can be carried by the central cloud node,
Figure FDA0003851826470000035
is represented by a The first data processing delay corresponding to the edge cloud node of (a),
Figure FDA0003851826470000036
representing the second data processing latency corresponding to the central cloud node,
Figure FDA0003851826470000037
represents l a Corresponding to the first communication delay, beta is a preset value,
Figure FDA0003851826470000038
is a value of 1 or 0, and,
Figure FDA0003851826470000039
representing the deployment of the ith virtual network at a On the edge cloud node of (1) a,
Figure FDA00038518264700000310
indicating that the ith virtual network is deployed on the central cloud node, and S indicates total computing resources required by the plurality of virtual networks.
5. The apparatus of claim 4, wherein the obtaining module is specifically configured to:
obtaining a first distance between each edge cloud node of the plurality of edge cloud nodes and the center cloud node;
calculating the first communication delay as a function of the first distance, the first communication delay being equal to a ratio of the first distance to a transmission rate of a fiber link between each of the edge cloud nodes and the center cloud node.
6. The apparatus according to any one of claims 4 to 5, wherein the determining module is specifically configured to:
and determining the target cloud nodes distributed to each virtual network according to the first communication delay, the first data processing delay and the second data processing delay by using a heuristic algorithm.
7. An apparatus for allocating cloud resources for a network slice, comprising: a plurality of memories and a plurality of processors;
the memory is to store program instructions;
the processor is configured to invoke program instructions in the memory to perform the method of any of claims 1 to 3.
8. A computer-readable medium, characterized in that the computer-readable medium stores program code for computer execution, the program code comprising instructions for performing the method of any of claims 1 to 3.
CN202110762144.0A 2021-07-06 2021-07-06 Method and device for distributing cloud resources for network slices Active CN113438678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110762144.0A CN113438678B (en) 2021-07-06 2021-07-06 Method and device for distributing cloud resources for network slices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110762144.0A CN113438678B (en) 2021-07-06 2021-07-06 Method and device for distributing cloud resources for network slices

Publications (2)

Publication Number Publication Date
CN113438678A CN113438678A (en) 2021-09-24
CN113438678B true CN113438678B (en) 2022-11-22

Family

ID=77759239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110762144.0A Active CN113438678B (en) 2021-07-06 2021-07-06 Method and device for distributing cloud resources for network slices

Country Status (1)

Country Link
CN (1) CN113438678B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826900B (en) * 2022-04-22 2024-03-29 阿里巴巴(中国)有限公司 Service deployment processing method and device for distributed cloud architecture

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112087332A (en) * 2020-09-03 2020-12-15 哈尔滨工业大学 Virtual network performance optimization system under cloud edge cooperation
WO2020258920A1 (en) * 2019-06-26 2020-12-30 华为技术有限公司 Network slice resource management method and apparatus

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019278B2 (en) * 2014-06-22 2018-07-10 Cisco Technology, Inc. Framework for network technology agnostic multi-cloud elastic extension and isolation
US11005750B2 (en) * 2016-08-05 2021-05-11 Huawei Technologies Co., Ltd. End point to edge node interaction in wireless communication networks
WO2018224151A1 (en) * 2017-06-08 2018-12-13 Huawei Technologies Co., Ltd. Device and method for providing a network slice
KR102133814B1 (en) * 2017-10-31 2020-07-14 에스케이텔레콤 주식회사 Application distribution excution system based on network slicing, apparatus and control method thereof using the system
US10530645B2 (en) * 2018-06-02 2020-01-07 Verizon Patent And Licensing Inc. Systems and methods for localized and virtualized radio access networks
US10833951B2 (en) * 2018-11-06 2020-11-10 Telefonaktiebolaget Lm Ericsson (Publ) System and method for providing intelligent diagnostic support for cloud-based infrastructure
US11711267B2 (en) * 2019-02-25 2023-07-25 Intel Corporation 5G network slicing with distributed ledger traceability and resource utilization inferencing
CN111800283B (en) * 2019-04-08 2023-03-14 阿里巴巴集团控股有限公司 Network system, service providing and resource scheduling method, device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020258920A1 (en) * 2019-06-26 2020-12-30 华为技术有限公司 Network slice resource management method and apparatus
CN112087332A (en) * 2020-09-03 2020-12-15 哈尔滨工业大学 Virtual network performance optimization system under cloud edge cooperation

Also Published As

Publication number Publication date
CN113438678A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN109218355B (en) Load balancing engine, client, distributed computing system and load balancing method
CN112153700B (en) Network slice resource management method and equipment
CN111176792B (en) Resource scheduling method and device and related equipment
CN112752302A (en) Power service time delay optimization method and system based on edge calculation
CN111966496B (en) Data processing method, device, system and computer readable storage medium
KR102326830B1 (en) Methods, devices and devices for determining transport block size
CN107070709B (en) NFV (network function virtualization) implementation method based on bottom NUMA (non uniform memory Access) perception
CN115460216A (en) Calculation force resource scheduling method and device, calculation force resource scheduling equipment and system
CN112698952A (en) Unified management method and device for computing resources, computer equipment and storage medium
CN113438678B (en) Method and device for distributing cloud resources for network slices
CN114816738A (en) Method, device and equipment for determining calculation force node and computer readable storage medium
CN109547356A (en) A kind of data transmission method of electrical energy measurement, system, equipment and computer storage medium
Zhang et al. Optimal server resource allocation using an open queueing network model of response time
EP3398304B1 (en) Network service requests
CN112488563A (en) Determination method and device for force calculation parameters
CN110958666A (en) Network slice resource mapping method based on reinforcement learning
US20200322431A1 (en) Selective instantiation of a storage service for a mapped redundant array of independent nodes
JP2014154107A (en) Binary decision graph processing system and method
CN113472591B (en) Method and device for determining service performance
CN114443293A (en) Deployment system and method for big data platform
CN110113269B (en) Flow control method based on middleware and related device
CN111694670A (en) Resource allocation method, device, equipment and computer readable medium
CN111585784A (en) Network slice deployment method and device
CN116566992B (en) Dynamic collaboration method, device, computer equipment and storage medium for edge calculation
CN111885625B (en) Method and device for determining resource utilization rate

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant