CN112506648A - Traffic stateless migration method of virtual network function instance and electronic equipment - Google Patents

Info

Publication number
CN112506648A
Authority
CN
China
Prior art keywords: instance; allocated; traffic; CPU resource; target
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011310178.8A
Other languages: Chinese (zh)
Other versions: CN112506648B (en)
Inventors
李清
黄河
江勇
段经璞
吴宇
Current Assignee
Southwest University of Science and Technology
Shenzhen International Graduate School of Tsinghua University
Peng Cheng Laboratory
Original Assignee
Southwest University of Science and Technology
Shenzhen International Graduate School of Tsinghua University
Peng Cheng Laboratory
Application filed by Southwest University of Science and Technology, Shenzhen International Graduate School of Tsinghua University, and Peng Cheng Laboratory
Priority to CN202011310178.8A
Publication of CN112506648A
Application granted
Publication of CN112506648B
Status: Active

Classifications

    • G06F9/505 — Allocation of resources, e.g. of the CPU, to service a request, considering the load
    • G06F9/45558 — Hypervisor-specific management and integration aspects
    • G06F9/5077 — Logical partitioning of resources; management or configuration of virtualized resources
    • H04L47/125 — Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • G06F2009/4557 — Distribution of virtual machine instances; migration and load balancing
    • G06F2009/45591 — Monitoring or debugging support
    • G06F2009/45595 — Network integration; enabling network access in virtual machine instances
    • G06F2209/5021 — Priority
    • G06F2209/5022 — Workload threshold
    • G06F2209/508 — Monitor


Abstract

The invention discloses a traffic stateless migration method for a virtual network function instance, and electronic equipment, comprising the following steps: acquiring, at preset time intervals, CPU resource usage information corresponding to an allocated instance of a server and the load of the allocated instance; when the CPU resource usage information and the load meet preset conditions, installing a target rule and a temporary rule in the switch and deleting the original rule corresponding to the allocated instance; and forwarding the first traffic corresponding to the allocated instance to the target instance for processing through the target rule, and forwarding the second traffic corresponding to the allocated instance for processing through the temporary rule. The invention migrates the traffic of the allocated instance to the target instance statelessly, avoids interruption during traffic transfer, saves the time overhead of instance state synchronization, reduces the traffic delay caused by traffic interruption, and can meet the requirements of delay-sensitive traffic.

Description

Traffic stateless migration method of virtual network function instance and electronic equipment
Technical Field
The invention relates to the technical field of the internet, and in particular to a traffic stateless migration method for a virtual network function instance and electronic equipment.
Background
The trend of Network Function Virtualization (NFV) is to replace hardware network function (NF) devices with virtual network function (VNF) instances that run as virtual machines or containers on servers. Tenants typically manage these instances on an NFV platform spanning multiple servers in order to deploy a service chain composed of multiple VNF instances for complex network services. The usual approach is to create the service chain within a server and connect the deployed VNF instances with a high-performance software SDN switch for traffic handling.
To better utilize server resources, operators typically pack different service chains subscribed to by different cloud tenants onto the same server. Correctly and timely scheduling the VNF instances occupied by different service chains is therefore crucial to the overall performance of the NFV system. To meet the different QoS indicators of traffic (delay, jitter, packet loss, etc.), the traffic on VNF instances needs to be migrated dynamically. However, existing VNF traffic migration methods can interrupt traffic processing, making it difficult to meet the requirements of delay-sensitive traffic.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the above defects in the prior art, the invention provides a traffic stateless migration method for a virtual network function instance and electronic equipment, aiming to solve the problem that existing VNF traffic migration methods interrupt traffic processing and struggle to meet the requirements of delay-sensitive traffic.
The technical solution adopted by the invention to solve this technical problem is as follows:
in a first aspect, an embodiment of the present invention provides a method for stateless traffic migration of a virtual network function instance, where the method includes:
acquiring, at preset time intervals, CPU resource usage information corresponding to an allocated instance of a server and the load of the allocated instance;
when the CPU resource usage information and the load meet preset conditions, installing a target rule and a temporary rule in the switch, and deleting the original rule corresponding to the allocated instance; wherein the temporary rule has a higher priority than the target rule;
forwarding first traffic corresponding to the allocated instance to a target instance for processing through the target rule, and forwarding second traffic corresponding to the allocated instance for processing through the temporary rule, so as to migrate the traffic of the allocated instance to the target instance statelessly; wherein the first traffic is traffic the allocated instance has not yet processed, and the second traffic is traffic the allocated instance is currently processing.
In the traffic stateless migration method of the virtual network function instance, the preset conditions are that the CPU resource is used by a plurality of instances and the load is smaller than a preset first threshold.
In the traffic stateless migration method of the virtual network function instance, the step of forwarding the second traffic corresponding to the allocated instance for processing through the temporary rule includes:
collecting the second traffic corresponding to the allocated instance through the temporary rule, and generating a plurality of sub-rules according to the traffic information corresponding to the second traffic;
forwarding the second traffic to the allocated instance for processing through the plurality of sub-rules.
The traffic stateless migration method of the virtual network function instance, wherein the step of generating a plurality of sub-rules according to the traffic information corresponding to the second traffic includes:
generating a plurality of optimal prefixes according to the flow information corresponding to the second flow;
and generating a plurality of sub-rules according to the plurality of optimal prefixes.
The method for stateless migration of traffic of the virtual network function instance, wherein the step of generating a plurality of optimal prefixes according to the traffic information corresponding to the second traffic includes:
constructing a binary tree according to the traffic information corresponding to the second traffic;
and generating a plurality of optimal prefixes according to the binary tree.
The traffic stateless migration method of the virtual network function instance, wherein the method further comprises:
when an instance to be allocated is received, acquiring the idle CPU resources of the server and the quantity of CPU resources required by the instance to be allocated;
when the idle CPU resources are larger than a preset second threshold, determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources and the required quantity of CPU resources, and allocating the determined target CPU resources to the instance to be allocated;
when the idle CPU resources are smaller than or equal to the preset second threshold, determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources, the required quantity of CPU resources, and the load of the instance to be allocated, and allocating the determined target CPU resources to the instance to be allocated.
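The allocation decision described in the steps above can be sketched as a small helper. The threshold values, function name, and data layout below are illustrative assumptions, not details from the patent:

```python
def allocate_cpu(idle_cpus, required, inst_load, allocated_cpus,
                 idle_threshold=4, load_threshold=0.5):
    """Pick target CPUs for an instance to be allocated (hypothetical sketch).

    idle_cpus / allocated_cpus: lists of CPU ids that are free / already
    hosting instances; idle_threshold plays the role of the "second
    threshold" and load_threshold is an assumed per-instance load cutoff.
    """
    if len(idle_cpus) > idle_threshold:
        # Plenty of free CPUs: serve the request from the idle pool alone.
        return idle_cpus[:required]
    if required <= len(idle_cpus) and inst_load <= load_threshold:
        # Free CPUs are scarce and the instance is light: prefer sharing
        # CPUs that already host other instances, preserving the idle pool.
        return allocated_cpus[:required]
    # Otherwise fall back to whatever idle CPUs are available.
    return idle_cpus[:required]
```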
In the traffic stateless migration method of the virtual network function instance, the step of determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources, the quantity of CPU resources, and the load of the instance to be allocated includes:
when the required quantity of CPU resources is less than or equal to the idle CPU resources, acquiring the load of the instance to be allocated;
when the load of the instance to be allocated is smaller than or equal to a preset third threshold, obtaining the allocated instances corresponding to each of the allocated CPU resources;
determining the target CPU resources corresponding to the instance to be allocated according to the allocated instances corresponding to each CPU resource; wherein the target CPU resources are included within the allocated CPU resources.
The method for stateless migration of traffic of a virtual network function instance, wherein the step of determining a target CPU resource corresponding to the to-be-allocated instance according to the allocated instance corresponding to each CPU resource comprises:
determining candidate CPU resources corresponding to the instance to be allocated according to the allocated instances corresponding to each CPU resource; wherein the candidate CPU resources are included in the allocated CPU resources, and the allocated instances corresponding to the candidate CPU resources do not form a service chain with the instance to be allocated;
determining the target CPU resources corresponding to the instance to be allocated according to the candidate CPU resources.
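The candidate-selection constraint above, that a CPU qualifies only if none of its resident instances shares a service chain with the instance being placed, can be sketched as follows; the data layout is an assumption for illustration:

```python
def candidate_cpus(allocated, new_chain):
    """Return CPU ids whose resident instances do not form a service
    chain with the instance to be allocated (hypothetical sketch).

    allocated: {cpu_id: set of service-chain ids hosted on that CPU}
    new_chain: service-chain id of the instance to be allocated
    """
    return [cpu for cpu, chains in allocated.items()
            if new_chain not in chains]
```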
A server, comprising: a processor and a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to call the instructions in the storage medium to perform the steps of the traffic stateless migration method of the virtual network function instance described above.
A computer-readable storage medium storing a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of the traffic stateless migration method of the virtual network function instance described above.
Beneficial effects: the invention forwards the traffic not yet processed by the allocated instance to the target instance through the target rule, forwards the traffic being processed by the allocated instance back to the allocated instance through the temporary rule, and thereby migrates the traffic of the allocated instance to the target instance statelessly. This avoids interruption during traffic transfer, saves the time overhead of instance state synchronization, reduces the traffic delay caused by traffic interruption, and can meet the requirements of delay-sensitive traffic.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the following drawings depict only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of a traffic stateless migration method for a virtual network function instance according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of flow statistics on instance 1 provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of migrating traffic from instance 1 to instance 2 according to an embodiment of the present invention;
FIG. 4 is the pseudocode of the K-prefix cover algorithm for constructing a binary tree from traffic information provided in an embodiment of the present invention;
FIG. 5 is a diagram of the binary tree constructed by the K-prefix cover algorithm of FIG. 4 in an embodiment of the present invention;
fig. 6 is a flowchart of an embodiment of a specific application of a traffic stateless migration method for a virtual network function instance according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of an internal structure of a server according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
It should be noted that any directional indications (such as up, down, left, right, front, and back) in the embodiments are only used to explain the relative positional relationship and movement of components in a specific posture (as shown in the drawings); if that posture changes, the directional indications change accordingly.
In the conventional traffic migration method for VNF instances, the SDN controller first caches all traffic of the instance to be migrated in a buffer on the controller, interrupting traffic processing, and synchronizes the state information of the instance to be migrated; once synchronization completes, the controller redistributes the traffic across the two instances to balance their loads. Throughout this VNF traffic migration process, traffic processing may be interrupted, which is intolerable for critical traffic such as video playback. Processing delay also rises sharply for a period after the traffic is transferred, causing QoS indicators (delay, jitter, packet loss, etc.) to degrade steeply, so the requirements of delay-sensitive traffic are difficult to meet.
To solve the problems in the prior art, this embodiment provides a traffic stateless migration method for a virtual network function instance. In a specific implementation, CPU resource usage information corresponding to an allocated instance of a server and the load of the allocated instance are obtained at preset time intervals; when the CPU resource usage information and the load meet preset conditions, a target rule and a temporary rule are installed in the switch and the original rule corresponding to the allocated instance is deleted; finally, the first traffic corresponding to the allocated instance is forwarded to a target instance for processing through the target rule, and the second traffic corresponding to the allocated instance is forwarded for processing through the temporary rule, so that the traffic of the allocated instance is migrated to the target instance statelessly. In this way, traffic the allocated instance has not yet processed is forwarded to the target instance through the target rule, while traffic the allocated instance is still processing is forwarded back to it through the temporary rule. The migration thus avoids interruption during traffic transfer, saves the time overhead of instance state synchronization, reduces the traffic delay caused by traffic interruption, and can meet the requirements of delay-sensitive traffic.
Exemplary method
This embodiment provides a traffic stateless migration method for a virtual network function instance, which may be applied to a server. As shown in fig. 1, the method includes:
S100, obtaining, at preset time intervals, CPU resource usage information corresponding to an allocated instance of the server and the load of the allocated instance.
An allocated instance is a virtual network function instance that has been allocated CPU resources, and the CPU resource corresponding to an allocated instance is the CPU resource allocated to it. During instance CPU allocation, a CPU resource may be shared by several virtual network function instances or used by only one. The CPU resource usage information indicates how many virtual network function instances use the CPU resource, and the load of the allocated instance indicates its workload. Generally, when balancing the workload among VNF instances of the same type, instances with lower load whose CPU resources are shared by multiple instances are reclaimed preferentially. This embodiment therefore obtains, at preset time intervals, the CPU resource usage information corresponding to the allocated instance of the server and the load of the allocated instance, in order to decide in subsequent steps whether to migrate the traffic of the allocated instance. For example, as shown in fig. 2 and fig. 3, after large-flow statistics are performed on the traffic with IP prefix 192.168.1.0/24 in instance 1, the traffic is forwarded to the destination server through the gateway. At this point, since most of the traffic on instance 1 has departed and its load is low, the traffic on instance 1 is migrated to instance 2 and the CPU resources allocated to instance 1 are released, balancing the workload among VNF instances of the same type.
S200, when the CPU resource usage information and the load meet preset conditions, installing a target rule and a temporary rule in the switch, and deleting the original rule corresponding to the allocated instance; wherein the temporary rule has a higher priority than the target rule.
Specifically, to measure the load condition of the allocated instance, a first threshold is preset in this embodiment. After the CPU resource usage information corresponding to the allocated instance and its load are obtained, it is determined whether the CPU resource corresponding to the allocated instance is used by multiple instances and whether the load is smaller than the preset first threshold; if so, the allocated instance meets the reclamation requirements, a target rule and a temporary rule are installed in the switch, and the original rule corresponding to the allocated instance is deleted. The preset first threshold may be set as needed; in a specific embodiment, it is a medium throughput, i.e. a medium load of the instance, expressed in terms of the maximum load c of the instance, where the value of c can be measured by monitoring the overload status of the instance.
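The reclamation check in step S200 reduces to a two-part predicate; the function and parameter names below are illustrative assumptions:

```python
def should_migrate(cpu_user_count, load, first_threshold):
    """An allocated instance qualifies for traffic migration only when its
    CPU resource is shared by several instances AND its own load falls
    below the preset first threshold (hypothetical sketch)."""
    return cpu_user_count > 1 and load < first_threshold
```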
The original rule corresponding to the allocated instance forwards traffic to the allocated instance for processing, while the target rule forwards traffic not yet processed by the allocated instance to the target instance. In this embodiment, before the target rule is installed, traffic is forwarded to the allocated instance through the original rule; after a target rule with a higher priority than the original rule is installed and the original rule is deleted, both the traffic not yet processed by the allocated instance and the traffic it is still processing would be forwarded to the target instance through the target rule.
Considering that most stateful instances must maintain per-flow state information to guarantee correct processing, and that the target instance has no record of the flows the allocated instance is processing, directly migrating those flows to the target instance could produce incorrect processing results, while synchronizing instance state as in the prior art would interrupt traffic processing and fail to meet processing requirements. In this embodiment, therefore, before the old rule is deleted, a temporary rule with a priority higher than that of the target rule is installed on the switch; the temporary rule forwards the traffic being processed by the allocated instance back to the allocated instance.
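The priority relationship among the rules can be modelled with a tiny highest-priority-match table; the packet representation, priority values, and instance names below are assumptions for illustration:

```python
# Minimal model of the switch rule table in S200: the temporary rule
# outranks the target rule, so packets of flows the allocated instance
# is still processing keep reaching it, while all other traffic matches
# the lower-priority target rule and goes to the target instance.
rules = []  # list of (priority, match_fn, action)

def install(priority, match, action):
    rules.append((priority, match, action))
    rules.sort(key=lambda r: -r[0])  # highest priority matched first

def forward(src_ip):
    for _, match, action in rules:
        if match(src_ip):
            return action
    return "drop"

active_flows = {"10.0.0.1"}  # flows instance 1 is currently processing
install(10, lambda ip: ip in active_flows, "instance1")  # temporary rule
install(5, lambda ip: True, "instance2")                 # target rule
```

With this table, a packet from an in-flight flow such as 10.0.0.1 hits the temporary rule and stays on instance 1, while any other packet falls through to the target rule and reaches instance 2.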
S300, forwarding the first traffic corresponding to the allocated instance to the target instance for processing through the target rule, and forwarding the second traffic corresponding to the allocated instance for processing through the temporary rule, so as to migrate the traffic of the allocated instance to the target instance statelessly; wherein the first traffic is traffic the allocated instance has not yet processed, and the second traffic is traffic the allocated instance is currently processing.
Specifically, after the target rule and the temporary rule are installed in the switch and the original rule corresponding to the allocated instance is deleted, the first traffic corresponding to the allocated instance, i.e. the traffic it has not yet processed, is forwarded to the target instance through the target rule, and the second traffic, i.e. the traffic it is still processing, is forwarded back to the allocated instance through the temporary rule. The traffic of the allocated instance thus migrates to the target instance statelessly, which avoids interruption during traffic transfer, saves the time overhead of instance state synchronization, reduces the traffic delay caused by traffic interruption, and can meet the requirements of delay-sensitive traffic.
In a specific embodiment, the step of forwarding the second traffic corresponding to the allocated instance for processing through the temporary rule in step S300 includes:
s310, collecting second flow corresponding to the distributed instances through the temporary rules, and generating a plurality of sub-rules according to flow information corresponding to the second flow;
s320, collecting second flow corresponding to the distributed instances through the temporary rules, and generating a plurality of sub-rules according to flow information corresponding to the second flow.
Specifically, when the second traffic corresponding to the allocated instance is forwarded to the allocated instance through the temporary rule, the second traffic is first collected through the temporary rule, and completion of collection can be judged by the flow heartbeat interruption time. The flow heartbeat interruption time is the maximum interval between two consecutive packets of a flow; if no packet of the flow arrives within the heartbeat interruption time, the flow is considered expired and its collection complete. After the flow heartbeat ends, a plurality of sub-rules are generated according to the traffic information corresponding to the second traffic; the controller then installs the sub-rules with a priority higher than that of the target rule, deletes the temporary rule, and forwards the second traffic to the allocated instance for processing through the sub-rules.
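The heartbeat-based expiry test can be sketched as follows; the timeout value, function name, and data layout are assumptions, not details from the patent:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # assumed maximum gap between packets of a live flow

def expired_flows(last_seen, now=None):
    """Flows whose last packet is older than the heartbeat interruption
    time are considered finished, so their sub-rules can be deleted
    (hypothetical sketch; last_seen maps flow id -> last packet time)."""
    if now is None:
        now = time.monotonic()
    return [f for f, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]
```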
In an embodiment, step S310 specifically includes:
s311, generating a plurality of optimal prefixes according to the flow information corresponding to the second flow;
s312, generating a plurality of sub-rules according to the optimal prefixes.
In this embodiment, when generating the sub-rules according to the traffic information corresponding to the second traffic, a binary tree is first constructed from that traffic information using the algorithm shown in fig. 4; a plurality of optimal prefixes are then determined from the binary tree, and a plurality of sub-rules covering the second traffic are generated from these optimal prefixes.
In order to better understand the process of generating the plurality of sub-rules according to the traffic information corresponding to the second traffic in this embodiment, the whole flow of updating the prefix rules is further explained through fig. 4 and fig. 5. For example, given six IP addresses: 192.168.1.129, 192.168.1.130, 192.168.1.135, 192.168.1.138, 192.168.1.150 and 192.168.1.153, lines 1-10 of the algorithm in fig. 4 construct a binary tree as shown in fig. 5 and initialize the leaf nodes a, b, c, d, e and f. Each of the nodes a-f starts as a tree of height 0 with IS = [<1, 0, 0>], i.e. a root node with no children. Lines 12-14 of the algorithm in fig. 4 then calculate the number of addresses (2^height) that each merged prefix can cover, merging IPs pairwise; line 16 sets the last node to the maximum prefix coverage space by default, so that all IPs can ultimately be integrated into one IP prefix. Lines 18-27 of the algorithm begin the iterative prefix normalization: based on a comparison of the coverage spaces of two nodes, the node with the smaller value is merged into the one with the larger value and the prefix with the smaller coverage space is deleted. When two nodes have the same prefix space, the algorithm moves on to the next node, and the iteration in fig. 4 ensures that all nodes eventually merge into one prefix. Taking nodes a and b in fig. 5 as an example: when the height of node a is calculated (line 13 of the algorithm), nodes a and b are considered together; since node a and node b can be merged into the prefix 192.168.1.128/30, the height is calculated to be 2. When node b is calculated, node b and node c can be combined into the prefix 192.168.1.128/29, giving a height of 3. Following lines 18-27 of the algorithm in fig. 4, since the height value of a is less than the height value of b, a is merged into b with the merged prefix 192.168.1.128/29, and the algorithm then moves to the next cycle. The calculation formula can be expressed as follows:
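The pairwise IP merging described above can be illustrated with a small sketch; `common_prefix` is an invented name, not from the patent, and the function simply computes the smallest CIDR prefix covering two addresses, which is the coverage-space quantity the algorithm compares:

```python
import ipaddress

def common_prefix(a: str, b: str) -> ipaddress.IPv4Network:
    """Return the smallest CIDR prefix covering both IPv4 addresses."""
    x = int(ipaddress.IPv4Address(a))
    y = int(ipaddress.IPv4Address(b))
    plen = 32
    while (x >> (32 - plen)) != (y >> (32 - plen)):
        plen -= 1                     # shorten the prefix until both share it
    network = (x >> (32 - plen)) << (32 - plen)
    return ipaddress.IPv4Network((network, plen))

# As in the text: .129 and .130 merge into 192.168.1.128/30, while merging
# across .130 and .135 widens the covering prefix to 192.168.1.128/29.
print(common_prefix("192.168.1.129", "192.168.1.130"))  # 192.168.1.128/30
print(common_prefix("192.168.1.130", "192.168.1.135"))  # 192.168.1.128/29
```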
IS[k].s = min_{j+m=k} ( IS1[j].s + IS2[m].s )

wherein k represents the number of prefixes allocated to the subtree rooted at the current node (with K the total number of prefixes, k ranges from 1 to K), j represents the number of prefixes assigned to the left subtree and m the number assigned to the right subtree, IS1[j].s and IS2[m].s represent the minimum spaces that j and m prefixes can cover in the left and right subtrees respectively, and the minimum over all splits with j + m = k gives IS[k].s, the minimum space accommodated by the binary tree at the current node.
The algorithm in fig. 4 also calculates the temporary prefix of the current stage when merging nodes at each stage. As shown in fig. 5, the leaf nodes a, b, c, d, e and f have height 0 and IS = [<1, 0, 0>], meaning that the space of each of these nodes is 1 and they have no children. By combining nodes a and b, node Q is obtained with height 2 and IS = [<4, 0, 0>, <2, 1, 1>]. The table of node Q means that if 1 prefix is allocated, the prefix is Q itself; if 2 prefixes are allocated, the prefixes are a and b. To calculate IS[3] for the root R, two prefixes may be assigned to the left subtree E and one prefix to the right subtree T, or the opposite; the allocation with the minimal space is then taken. After the binary tree is constructed, K prefixes are obtained from top to bottom. For the prefix rules to expire as soon as possible, K should be as large as possible, i.e. each leaf gets its own 32-bit mask prefix, so the total K is 6. In a practical large-scale scenario, however, it is difficult for K to give every leaf node its own 32-bit mask prefix, so the optimal allocation scheme is: the combination of K prefixes covers all nodes while containing as many leaf nodes as possible. Assuming K is 3, the assignment is obtained from IS[3] at the root R, which assigns one prefix to the left subtree node and two prefixes to the right subtree node; finally, the prefixes E, e and f are obtained.
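The dynamic program described above can be sketched as follows. This is an illustrative reconstruction rather than the patent's exact algorithm in fig. 4, and the names (`Node`, `build`) are invented: each node keeps a table IS where IS[k] is the smallest address space that k prefixes can cover in that subtree, combined with the recurrence IS[k].s = min over j+m=k of (IS1[j].s + IS2[m].s).

```python
import ipaddress

class Node:
    """IS[k] = minimum address space coverable by k prefixes in this subtree."""
    def __init__(self, IS, space):
        self.IS = IS
        self.space = space

def build(addrs):
    """Build the aggregation tree over sorted, distinct /32 addresses (ints)."""
    if len(addrs) == 1:
        return Node({1: 1}, 1)            # a leaf is covered by one /32
    # split at the highest bit where the smallest and largest address differ
    bit = (addrs[0] ^ addrs[-1]).bit_length() - 1
    left = [a for a in addrs if not (a >> bit) & 1]
    right = [a for a in addrs if (a >> bit) & 1]
    L, R = build(left), build(right)
    space = 1 << (bit + 1)                # one prefix covering this whole node
    IS = {1: space}
    for j, sj in L.IS.items():            # IS[k] = min_{j+m=k}(IS1[j] + IS2[m])
        for m, sm in R.IS.items():
            k = j + m
            IS[k] = min(IS.get(k, space), sj + sm)
    return Node(IS, space)

ips = sorted(int(ipaddress.IPv4Address(s)) for s in
             ["192.168.1.129", "192.168.1.130", "192.168.1.135",
              "192.168.1.138", "192.168.1.150", "192.168.1.153"])
root = build(ips)
# root.IS[6] == 6: six /32 prefixes cover exactly the six addresses, while
# root.IS[3] chooses one wide prefix on one side and two /32s on the other.
print(root.IS)
```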
In a specific embodiment, the method further comprises:
S410, when the instance to be allocated is received, obtaining the idle CPU resources of the server side and the quantity of CPU resources required by the instance to be allocated.
S420, when the idle CPU resources are larger than a preset second threshold value, determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources and the quantity of CPU resources, and allocating the determined target CPU resources to the instance to be allocated.
S430, when the idle CPU resources are smaller than or equal to the preset second threshold value, determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources, the quantity of CPU resources and the load of the instance to be allocated, and allocating the determined target CPU resources to the instance to be allocated.
Specifically, when an instance runs on the server, CPU resources need to be allocated to it; such an instance is an instance to be allocated. The CPU resources on the server are divided into idle CPU resources and allocated CPU resources: idle CPU resources have not been allocated to any instance, while allocated CPU resources have been allocated to one or more instances. When the server receives an instance to be allocated, it may give the instance either idle CPU resources or already-allocated CPU resources. Existing CPU resource allocation methods for virtual network function instances mainly fall into two types. In the first, a single VNF instance is bound to a fixed number of CPU cores that are not shared with other VNF instances. The second type may start a new VNF instance on an underutilized CPU core to achieve fine-grained resource sharing, but it causes unacceptable packet processing delay, because when multiple VNF instances share the same CPU core the scheduler continuously interrupts each instance's processing.
In order to improve the utilization rate of CPU resources while avoiding packet processing delay, in this embodiment, when an instance to be allocated is received, the idle CPU resources of the server and the quantity of CPU resources required by the instance to be allocated are obtained, so that CPU resources can be allocated to the instance in the subsequent steps based on these two quantities. The CPU resources required by the instances to be allocated are determined by the number of instances to be allocated and the number of threads each instance needs to start. For example, if there are 3 instances to be allocated, two of which each need to start 2 threads and one of which needs to start 1 thread, then the quantity of CPU resources needed by the instances to be allocated is 5.
Specifically, the target CPU resources are the CPU resources allocated to the instance to be allocated; they are CPU resources of the server, drawn either from the idle CPU resources or from the allocated CPU resources, and their quantity equals the quantity of CPU resources required by the instance to be allocated. When the idle CPU resources of the server are sufficient, determining the target CPU resources from the idle CPU resources keeps the data processing delay low; when the idle CPU resources are insufficient, determining the target CPU resources from the allocated CPU resources improves the utilization rate of the CPU resources in the server. Therefore, in this embodiment, after the idle CPU resources and the quantity of CPU resources required by the instance to be allocated are obtained, the target CPU resources corresponding to the instance to be allocated are determined based on both, and the determined target CPU resources are allocated to the instance to be allocated.
In this embodiment, whether the idle CPU resources are sufficient is measured by a preset second threshold, after the idle CPU resources of the server are obtained, the idle CPU resources are compared with the preset second threshold, and when the idle CPU resources are greater than the preset second threshold, it is indicated that the CPU resources of the server are sufficient, at this time, the CPU resources may be allocated to the instances to be allocated from the idle CPU resources, or the CPU resources may be allocated to the instances to be allocated from the allocated CPU resources. However, considering that allocating the CPU resource to the to-be-allocated instance from the allocated CPU resource may cause a data packet processing delay, in this embodiment, when it is determined that the idle CPU resource is greater than the preset second threshold, according to the number of the CPU resources required by the to-be-allocated instance, the target CPU resource corresponding to the to-be-allocated instance is determined from the idle CPU resource, and the determined target CPU resource is allocated to the to-be-allocated instance. The preset second threshold may be set as needed, and in a specific embodiment, the preset second threshold is 20% of the total CPU resources in the server, for example, when the total CPU resource amount is 20, the preset second threshold is 4, and when the idle CPU resource amount is greater than 4, the target CPU resource corresponding to the instance to be allocated is determined from the idle CPU resource according to the CPU resource amount required by the instance to be allocated, and the determined target CPU resource is allocated to the instance to be allocated.
Specifically, when the idle CPU resources are judged to be less than or equal to the preset second threshold value, which indicates that the CPU resources of the server are insufficient, the target CPU resources corresponding to the to-be-allocated instance are determined according to the idle CPU resources, the number of the CPU resources, and the load of the to-be-allocated instance, and the determined target CPU resources are allocated to the to-be-allocated instance.
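The decision flow of steps S410-S430 (refined further by the steps S431-S435 and M431 described below) might be sketched as follows. This is a minimal sketch under assumed names; the function, the pool labels and the 20% threshold value (one embodiment's choice) are illustrative, not the patent's implementation:

```python
def choose_pool(free_cpus: int, total_cpus: int, needed: int,
                instance_load: float, load_threshold: float) -> str:
    """Return which pool the target CPU resources should come from."""
    free_threshold = 0.2 * total_cpus          # preset second threshold
    if free_cpus > free_threshold:
        return "free"                          # S420: idle cores are plentiful
    if needed > free_cpus:
        return "allocated"                     # S434-S435: idle pool too small
    if instance_load <= load_threshold:
        return "allocated"                     # S432-S433: low load, share cores
    return "free"                              # M431: high load, keep isolation

# With 20 cores total and only 4 free, a low-load instance shares
# already-allocated cores, while a high-load one gets the free ones.
print(choose_pool(4, 20, 2, 0.3, 0.5))  # allocated
print(choose_pool(4, 20, 2, 0.8, 0.5))  # free
```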
In a specific embodiment, the step of determining, according to the idle CPU resources, the number of CPU resources, and the load of the to-be-allocated instance, a target CPU resource corresponding to the to-be-allocated instance in step S430 includes:
S431, when the quantity of CPU resources is less than or equal to the idle CPU resources, acquiring the load of the instance to be allocated;
S432, when the load of the instance to be allocated is smaller than or equal to a preset third threshold value, obtaining the allocated instances corresponding to each CPU resource in the allocated CPU resources;
S433, determining the target CPU resources corresponding to the instance to be allocated according to the allocated instances corresponding to each CPU resource; wherein the target CPU resources are included within the allocated CPU resources.
Specifically, when the idle CPU resources are less than or equal to the preset second threshold value, there are two cases. In the first, although the idle CPU resources are low, they can still meet the quantity of CPU resources required by the instance to be allocated; in this case the target CPU resources may be determined either from the idle CPU resources or from the allocated CPU resources. In the second, the idle CPU resources cannot meet the required quantity, and obviously the target CPU resources can then only be determined from the allocated CPU resources.
Since determining the target CPU resources from the allocated CPU resources means the instance to be allocated will share CPU cores that are already in use, the load of the instance to be allocated may affect the packet processing delay. In this embodiment, when it is determined that the quantity of CPU resources required by the instance to be allocated is less than or equal to the idle CPU resources, the target CPU resources corresponding to the instance to be allocated are determined according to the load of the instance to be allocated together with the allocated CPU resources or the idle CPU resources.
In order to measure the load of the instance to be allocated, a third threshold value is also preset in this embodiment. When it is determined that the quantity of CPU resources required by the instance to be allocated is less than or equal to the idle CPU resources, the load of the instance to be allocated is further compared with the preset third threshold value. When that load is less than or equal to the preset third threshold value, the load of the instance to be allocated is low, and in order to improve the utilization rate of the CPU resources, the target CPU resources corresponding to the instance to be allocated are determined according to the allocated CPU resources.
The preset third threshold value may be set as needed. In a specific embodiment, the preset third threshold value is the medium throughput, that is, the medium load, of the instance to be allocated, and may be represented as c/2, where c is the maximum load of the instance to be allocated; the value of c can be measured by monitoring the overload status of the instance.
Deploying a service chain on a single CPU resource reduces the throughput performance of that CPU resource, owing to time-slice rotation and the resulting latency while the instances process traffic. Therefore, in this embodiment, when determining the target CPU resources corresponding to the instance to be allocated from the allocated CPU resources, the allocated instances corresponding to each CPU resource in the allocated CPU resources are first obtained. The instance to be allocated is then compared in turn with the allocated instances corresponding to each CPU resource, to judge whether the instance to be allocated forms a service chain with any of them. If the instance to be allocated does not form a service chain with the allocated instances corresponding to a certain CPU resource, that CPU resource is taken as a candidate CPU resource for the instance to be allocated, and the comparison continues with the next CPU resource until every CPU resource in the allocated CPU resources has been compared, yielding all the candidate CPU resources for the instance to be allocated. The target CPU resources corresponding to the instance to be allocated are then determined from these candidate CPU resources.
For example, suppose the allocated CPU resources of the server comprise CPU resource 1, CPU resource 2 and CPU resource 3, where CPU resource 1 carries instance 1 and instance 2, CPU resource 2 carries instance 3 and instance 4, CPU resource 3 carries instance 5, and the instance to be allocated is instance 6. First, it is judged whether instance 6 forms a service chain with instance 1 or instance 2; if not, CPU resource 1 is a candidate CPU resource for instance 6. Next, it is judged whether instance 6 forms a service chain with instance 3 or instance 4; if instance 6 forms a service chain with instance 3 or instance 4, CPU resource 2 is not a candidate. Finally, it is judged whether instance 6 forms a service chain with instance 5; if not, CPU resource 3 is also a candidate. CPU resource 1 and CPU resource 3 are thus determined to be the candidate CPU resources for the instance to be allocated, and the target CPU resources corresponding to the instance to be allocated are then determined from CPU resource 1 and CPU resource 3.
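The candidate-selection walk in the example above might look like the following sketch. `candidate_cpus`, `forms_chain` and the numeric identifiers are stand-ins (the service-chain lookup itself is whatever the system actually maintains), not the patent's implementation:

```python
def candidate_cpus(allocated, new_instance, forms_chain):
    """allocated: {cpu_id: [instance, ...]}; keep CPUs whose instances
    do not form a service chain with the arriving instance."""
    return [cpu for cpu, instances in allocated.items()
            if not any(forms_chain(new_instance, i) for i in instances)]

# Mirror of the text: instance 6 chains with instance 3, so CPU 2 is out.
chains = {(6, 3)}                                   # assumed chain pairs
in_chain = lambda a, b: (a, b) in chains or (b, a) in chains
allocated = {1: [1, 2], 2: [3, 4], 3: [5]}
print(candidate_cpus(allocated, 6, in_chain))        # [1, 3]
```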
In a specific embodiment, step S431 is followed by:
M431, when the load of the instance to be allocated is larger than the preset third threshold value, determining the target CPU resources corresponding to the instance to be allocated according to the idle CPU resources; wherein the target CPU resources are included in the idle CPU resources.
Specifically, when the quantity of CPU resources is less than or equal to the idle CPU resources, the load of the instance to be allocated is obtained. When this load is greater than the preset third threshold value, the load of the instance to be allocated is high; in order to maintain a stable packet processing delay, in this embodiment the target CPU resources corresponding to the instance to be allocated are then determined according to the idle CPU resources, and the determined target CPU resources are allocated to the instance to be allocated.
In a specific embodiment, the step of determining, according to the idle CPU resources, the number of CPU resources, and the load of the to-be-allocated instance, the target CPU resources corresponding to the to-be-allocated instance in step S430 further includes:
S434, when the quantity of CPU resources is larger than the idle CPU resources, acquiring the allocated CPU resources;
S435, determining the target CPU resources corresponding to the instance to be allocated according to the allocated CPU resources; wherein the target CPU resources are included within the allocated CPU resources.
Specifically, when the idle CPU resources are less than or equal to the preset second threshold value and the quantity of CPU resources required is greater than the idle CPU resources, the idle CPU resources are not enough for the instance to be allocated. At this point the allocated CPU resources are obtained, and the target CPU resources corresponding to the instance to be allocated are determined from the allocated CPU resources, thereby improving the utilization rate of the CPU resources.
In order to better understand the technology of the present invention, the present invention further provides a specific application example, as shown in fig. 6, which specifically includes the following steps:
Step 601, acquiring the CPU resource usage information corresponding to an allocated instance of the server and the load of the allocated instance at preset time intervals;
Step 602, judging whether the CPU resource usage information and the load meet the preset conditions; if yes, executing step 603; if not, returning to step 601;
Step 603, installing a target rule and a temporary rule in the switch;
Step 604, deleting the original rule corresponding to the allocated instance;
Step 605, forwarding the first traffic corresponding to the allocated instance to the target instance for processing through the target rule;
Step 606, collecting the second traffic corresponding to the allocated instance through the temporary rule;
Step 607, constructing a binary tree according to the traffic information corresponding to the second traffic;
Step 608, generating a plurality of optimal prefixes according to the binary tree;
Step 609, generating a plurality of sub-rules according to the plurality of optimal prefixes;
Step 610, forwarding the second traffic to the allocated instance for processing through the plurality of sub-rules.
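The rule swap at the heart of steps 603-605 can be illustrated with a toy priority-matching simulation (all names and flow identifiers invented): the higher-priority temporary rule pins in-flight flows to the old instance, while every other flow falls through to the lower-priority target rule and reaches the new instance statelessly.

```python
def forward(flow, rules):
    """Match against rules sorted by descending priority; first hit wins."""
    for priority, match, dest in sorted(rules, reverse=True):
        if match(flow):
            return dest
    return None

in_flight = {"10.0.0.1", "10.0.0.2"}    # flows the old instance is processing
rules = [
    (20, lambda f: f in in_flight, "old_instance"),    # temporary rule
    (10, lambda f: True,           "target_instance"), # target rule
]
print(forward("10.0.0.1", rules))   # old_instance: stateful flow drains in place
print(forward("10.0.0.9", rules))   # target_instance: new flow is migrated
```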
Exemplary device
Based on the above embodiments, the present invention further provides a server, and a schematic block diagram thereof may be as shown in fig. 7. The server includes a processor, a memory, a network interface, and a temperature sensor connected by a system bus. Wherein the processor of the server is configured to provide computing and control capabilities. The memory of the server comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the server is used for communicating with an external terminal through network connection. The computer program is executed by a processor to implement a method of traffic stateless migration of virtual network function instances. The temperature sensor of the server is arranged in the server in advance and used for detecting the current operating temperature of the internal equipment.
It will be appreciated by those skilled in the art that the block diagram of fig. 7 is a block diagram of only a portion of the structure associated with the inventive arrangements and is not intended to limit the servers to which the inventive arrangements may be applied, and that a particular server may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a server is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor when executing the computer program implementing at least the following steps:
acquiring CPU resource use information corresponding to an allocated instance of a server and the load of the allocated instance at preset time intervals;
when the CPU resource use information and the load meet preset conditions, installing a target rule and a temporary rule in the switch, and deleting the original rule corresponding to the allocated instance; wherein the temporary rule is higher in priority than the target rule;
forwarding a first flow corresponding to the allocated instance to a target instance for processing through the target rule, and forwarding a second flow corresponding to the allocated instance for processing through the temporary rule, so as to migrate the flow of the allocated instance to the target instance in a stateless manner; wherein the first traffic is the traffic that the allocated instance is not processing, and the second traffic is the traffic that the allocated instance is processing.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In summary, the present invention discloses a traffic stateless migration method of a virtual network function instance and an electronic device, the method including: acquiring the CPU resource usage information corresponding to an allocated instance of a server and the load of the allocated instance at preset time intervals; when the CPU resource usage information and the load meet preset conditions, installing a target rule and a temporary rule in the switch and deleting the original rule corresponding to the allocated instance; and forwarding the first traffic corresponding to the allocated instance to a target instance for processing through the target rule, and forwarding the second traffic corresponding to the allocated instance for processing through the temporary rule. The invention forwards the traffic that the allocated instance has not yet processed to the target instance through the target rule, forwards the traffic that the allocated instance is still processing back to the allocated instance through the temporary rule, and thus migrates the traffic of the allocated instance to the target instance in a stateless manner, avoiding interruption during traffic transfer, saving the time overhead of instance state synchronization, reducing the traffic delay that interruption would cause, and meeting the requirements of delay-sensitive traffic.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A traffic stateless migration method of a virtual network function instance is characterized by comprising the following steps:
acquiring CPU resource use information corresponding to an allocated instance of a server and the load of the allocated instance at preset time intervals;
when the CPU resource use information and the load meet preset conditions, installing a target rule and a temporary rule in the switch, and deleting an original rule corresponding to the allocated instance; wherein the temporary rule is higher in priority than the target rule;
forwarding a first flow corresponding to the allocated instance to a target instance for processing through the target rule, and forwarding a second flow corresponding to the allocated instance for processing through the temporary rule, so as to migrate the flow of the allocated instance to the target instance in a stateless manner; wherein the first traffic is the traffic that the allocated instance is not processing, and the second traffic is the traffic that the allocated instance is processing.
2. The method according to claim 1, wherein the predetermined condition is that the CPU resource is used by multiple instances and the load is less than a predetermined first threshold.
3. The method according to claim 1, wherein the step of forwarding the second traffic corresponding to the allocated instance for processing by the temporary rule comprises:
collecting the second traffic corresponding to the allocated instance through the temporary rule, and generating a plurality of sub-rules according to the traffic information corresponding to the second traffic;
forwarding the second traffic to the allocated instance for processing through the plurality of sub-rules.
4. The traffic stateless migration method of the virtual network function instance according to claim 3, wherein the step of generating a plurality of sub-rules according to the traffic information corresponding to the second traffic includes:
generating a plurality of optimal prefixes according to the flow information corresponding to the second flow;
and generating a plurality of sub-rules according to the plurality of optimal prefixes.
5. The traffic stateless migration method of the virtual network function instance according to claim 4, wherein the step of generating a plurality of optimal prefixes according to the traffic information corresponding to the second traffic includes:
constructing a binary tree according to the traffic information corresponding to the second traffic;
and generating a plurality of optimal prefixes according to the binary tree.
6. The method of traffic stateless migration of a virtual network function instance according to claim 1, further comprising:
when an example to be allocated is received, acquiring idle CPU resources of a server side and the quantity of the CPU resources required by the example to be allocated;
when the idle CPU resource is larger than a preset second threshold value, determining a target CPU resource corresponding to the instance to be allocated according to the idle CPU resource and the quantity of the CPU resources, and allocating the determined target CPU resource to the instance to be allocated;
when the idle CPU resource is smaller than or equal to a preset second threshold value, determining a target CPU resource corresponding to the instance to be allocated according to the idle CPU resource, the number of the CPU resources and the load of the instance to be allocated, and allocating the determined target CPU resource to the instance to be allocated.
7. The traffic stateless migration method of the virtual network function instance according to claim 6, wherein the step of determining the target CPU resource corresponding to the to-be-allocated instance according to the idle CPU resource, the number of CPU resources, and the load of the to-be-allocated instance includes:
when the number of the CPU resources is less than or equal to the idle CPU resources, acquiring the load of the instance to be allocated;
when the load of the instance to be allocated is smaller than or equal to a preset third threshold value, obtaining an allocated instance corresponding to each CPU resource in the allocated CPU resources;
determining a target CPU resource corresponding to the instance to be allocated according to the allocated instance corresponding to each CPU resource; wherein the target CPU resource is included within the allocated CPU resource.
8. The traffic stateless migration method of the virtual network function instance according to claim 7, wherein the step of determining the target CPU resource corresponding to the to-be-allocated instance according to the allocated instance corresponding to each CPU resource comprises:
determining candidate CPU resources corresponding to the to-be-allocated instances according to the allocated instances corresponding to the CPU resources respectively; the candidate CPU resources are included in the allocated CPU resources, and the allocated instances corresponding to the candidate CPU resources and the instances to be allocated do not form a service chain;
and determining the target CPU resource corresponding to the to-be-distributed instance according to the candidate CPU resource.
9. A server, comprising: a processor, a storage medium communicatively coupled to the processor, the storage medium adapted to store a plurality of instructions; the processor is adapted to invoke instructions in the storage medium to perform the steps of implementing a traffic stateless migration method of a virtual network function instance of any of the preceding claims 1-8.
10. A computer readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of implementing a traffic stateless migration method of a virtual network function instance according to any of the preceding claims 1-8.
CN202011310178.8A 2020-11-20 2020-11-20 Traffic stateless migration method of virtual network function instance and electronic equipment Active CN112506648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011310178.8A CN112506648B (en) 2020-11-20 2020-11-20 Traffic stateless migration method of virtual network function instance and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011310178.8A CN112506648B (en) 2020-11-20 2020-11-20 Traffic stateless migration method of virtual network function instance and electronic equipment

Publications (2)

Publication Number Publication Date
CN112506648A true CN112506648A (en) 2021-03-16
CN112506648B CN112506648B (en) 2022-05-03

Family

ID=74959075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011310178.8A Active CN112506648B (en) 2020-11-20 2020-11-20 Traffic stateless migration method of virtual network function instance and electronic equipment

Country Status (1)

Country Link
CN (1) CN112506648B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10055246B2 (en) * 2014-03-31 2018-08-21 Huawei Technologies Co., Ltd. Method and device for data flow migration during virtual machine migration
CN105117280A (en) * 2015-08-24 2015-12-02 用友网络科技股份有限公司 Virtual machine migration device and method
US20190089814A1 (en) * 2016-03-24 2019-03-21 Alcatel Lucent Method for migration of virtual network function
CN105978952B (en) * 2016-04-28 2019-04-30 中国科学院计算技术研究所 A traffic migration method and system for network function virtualization scenarios
CN105915467B (en) * 2016-05-17 2019-06-18 清华大学 A software-defined data center network traffic balancing method and device
US20190215332A1 (en) * 2018-01-11 2019-07-11 Perspecta Labs, Inc. Migration of traffic flows
US20190268269A1 (en) * 2019-04-26 2019-08-29 Intel Corporation Migration from a legacy network appliance to a network function virtualization (nfv) appliance
CN110275758A (en) * 2019-05-09 2019-09-24 重庆邮电大学 An intelligent virtual network function migration method

Non-Patent Citations (2)

Title
GENGBIAO SHEN, ET AL.: "A four-stage adaptive scheduling scheme for service function chain in NFV", 《COMPUTER NETWORKS》 *
WANG JINWEN, ET AL.: "Research Progress on Network Function Virtualization Technology", 《CHINESE JOURNAL OF COMPUTERS》 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113452756A (en) * 2021-06-02 2021-09-28 鹏城实验室 Service chain flow migration method and device, terminal equipment and storage medium
CN113452756B (en) * 2021-06-02 2022-02-25 鹏城实验室 Service chain flow migration method and device, terminal equipment and storage medium
CN115914405A (en) * 2022-11-30 2023-04-04 支付宝(杭州)信息技术有限公司 Service processing method and device

Also Published As

Publication number Publication date
CN112506648B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
JP7083901B2 (en) Dark launch realization method, apparatus, computation node and system
CN112506648B (en) Traffic stateless migration method of virtual network function instance and electronic equipment
EP2803168B1 (en) Network device control in a software defined network
US20150309842A1 (en) Core Resource Allocation Method and Apparatus, and Many-Core System
CN111614746B (en) Load balancing method and device of cloud host cluster and server
WO2012027907A1 (en) Method for parallelizing automatic control programs and compiler
CN107070709B (en) NFV (network function virtualization) implementation method based on bottom NUMA (non uniform memory Access) perception
US11438271B2 (en) Method, electronic device and computer program product of load balancing
WO2020100581A1 (en) Evaluation device, evaluation method and evaluation program
WO2024120205A1 (en) Method and apparatus for optimizing application performance, electronic device, and storage medium
CN114500355B (en) Routing method, network-on-chip, routing node and routing device
US20160179584A1 (en) Virtual service migration method for routing and switching platform and scheduler
CN113904923A (en) Service function chain joint optimization method based on software defined network
WO2012119436A1 (en) Method, device and system for migrating resources
CN113032102A (en) Resource rescheduling method, device, equipment and medium
CN111555987B (en) Current limiting configuration method, device, equipment and computer storage medium
CN110618946A (en) Stack memory allocation method, device, equipment and storage medium
CN111045819A (en) Resource request method, device, equipment and storage medium of distributed system
CN115361349A (en) Resource using method and device
KR101558807B1 (en) Processor scheduling method for the cooperation processing between host processor and cooperation processor and host processor for performing the method
CN109145052B (en) Data partition storage method, device, system, storage medium and electronic device
CN114064226A (en) Resource coordination method and device for container cluster and storage medium
CN117544513B (en) Novel Internet of things customized service providing method and device based on fog resources
US20240073094A1 (en) Dynamic virtual network function placement in edge computing environments
CN113452756B (en) Service chain flow migration method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant