CN116528255B - Network slice migration method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116528255B
CN116528255B (application CN202310791136.8A)
Authority
CN
China
Prior art keywords: network slice, bandwidth, prediction model, value, slice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310791136.8A
Other languages
Chinese (zh)
Other versions
CN116528255A (en)
Inventor
佘蕊
王旭亮
林显成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202310791136.8A
Publication of CN116528255A
Application granted
Publication of CN116528255B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/22 Traffic simulation tools or models
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/04 Arrangements for maintaining operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/16 Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18 Negotiating wireless communication parameters
    • H04W28/20 Negotiating bandwidth
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a network slice migration method, apparatus, device and storage medium, relating to the field of communication technology. The method comprises the following steps: acquiring edge-layer resource occupation information and service switching information at historical moments; processing the historical edge-layer resource occupation information and service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment; obtaining a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function; and, if a preset training stop condition is met, obtaining a target network slice prediction model. The method and apparatus can meet the real-time and reliability requirements of diversified network slice migration, reduce bandwidth occupation, and improve resource utilization.

Description

Network slice migration method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of mobile communication, and in particular to a network slice migration method, a network slice migration apparatus, an electronic device and a computer-readable storage medium.
Background
Network slicing, a key technology for large-scale 5G deployment, can meet the need for relatively closed, vertical management of diversified services, and allows slice resources to be migrated dynamically based on changes in network load and in the states of nodes and links in the infrastructure.
With the large-scale adoption of unmanned driving and other mobility-enhancing technologies, the number of concurrent migrations keeps growing, and how to allocate virtual network resources to slices within a specified time window so as to ensure network stability has become a key challenge for the large-scale application of network slicing in 5G.
In the related art, migration is mainly performed per service function chain (Service Function Chain, SFC) based on real-time network information and traffic prediction. However, when slice dynamic migration is realized through SFCs and the amount of concurrency grows large enough, neither the delay nor the accuracy of migration can be guaranteed, a large amount of bandwidth is occupied, and network resources are wasted.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a network slice migration method, apparatus, device and storage medium, which at least to some extent overcome the problems that existing network slice migration approaches cannot guarantee the delay and accuracy of migration and occupy excessive bandwidth.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a network slice migration method, including: acquiring edge-layer resource occupation information and service switching information at historical moments; processing the historical edge-layer resource occupation information and service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment; obtaining a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function; and, if a preset training stop condition is met, obtaining a target network slice prediction model.
In one embodiment of the disclosure, the network slice prediction model includes a first prediction model and a second prediction model. Processing the historical edge-layer resource occupation information and service switching information through the network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment comprises the following steps: processing the historical edge-layer resource occupation information and service switching information through the first prediction model to obtain a first bandwidth prediction value and a first slice migration duration; processing the same information through the second prediction model to obtain a second bandwidth prediction value and a second slice migration duration; fusing the first and second bandwidth prediction values to obtain the bandwidth prediction value for the next moment; and fusing the first and second slice migration durations to obtain the slice migration duration for the next moment.
In one embodiment of the disclosure, the first prediction model comprises a deep Q-network (DQN) model, and the second prediction model comprises a deep deterministic policy gradient (DDPG) model.
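The embodiment above leaves the fusion rule for the two models' outputs unspecified. As a minimal sketch, assuming a simple weighted average of the DQN and DDPG (bandwidth, duration) predictions (function name and default weights are illustrative):

```python
def fuse_predictions(dqn_out, ddpg_out, w_dqn=0.5, w_ddpg=0.5):
    """Fuse the (bandwidth, migration-duration) outputs of the two models.

    The patent does not state the fusion rule; a weighted average with
    equal default weights is assumed here purely for illustration.
    """
    bw_dqn, dur_dqn = dqn_out
    bw_ddpg, dur_ddpg = ddpg_out
    bandwidth = w_dqn * bw_dqn + w_ddpg * bw_ddpg   # fused bandwidth prediction
    duration = w_dqn * dur_dqn + w_ddpg * dur_ddpg  # fused migration duration
    return bandwidth, duration
```

Other fusion rules, e.g. confidence-weighted averaging, would fit the same interface.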
In one embodiment of the present disclosure, the reward value is obtained by: obtaining a bandwidth reward value from the bandwidth prediction value for the target moment; obtaining a delay reward value from the slice migration duration for the target moment; and computing a weighted sum of the bandwidth reward value and the delay reward value to obtain the reward value.
In one embodiment of the present disclosure, when switching from a computation-intensive service to a delay-sensitive service, the weight of the delay reward value is greater than that of the bandwidth reward value; when switching from a delay-sensitive service to a computation-intensive service, the weight of the delay reward value is smaller than that of the bandwidth reward value.
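The weighting scheme above can be sketched as follows; the negative normalized reward terms and the concrete 0.7/0.3 weights are illustrative assumptions, since the patent fixes only the ordering of the weights in each switching direction:

```python
def reward(bandwidth_pred_gbps, migration_duration_s, switch_direction,
           bw_ref=3.0, delay_ref=1.0):
    """Weighted sum of a bandwidth reward and a delay reward.

    switch_direction: "to_delay_sensitive" (computation-intensive ->
    delay-sensitive) or "to_compute_intensive" (the reverse). The
    normalization constants and the 0.7/0.3 weights are illustrative
    assumptions, not values from the patent.
    """
    # Less bandwidth occupied and shorter migration => higher reward.
    bw_reward = -bandwidth_pred_gbps / bw_ref
    delay_reward = -migration_duration_s / delay_ref

    if switch_direction == "to_delay_sensitive":
        w_delay, w_bw = 0.7, 0.3  # delay weight > bandwidth weight
    else:
        w_delay, w_bw = 0.3, 0.7  # delay weight < bandwidth weight

    return w_bw * bw_reward + w_delay * delay_reward
```

A longer migration is thus penalized more heavily when the switch is toward a delay-sensitive service, matching the ordering stated above.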
In an embodiment of the disclosure, obtaining the target network slice prediction model if the preset training stop condition is met includes: determining that the condition is met if the number of training iterations meets a preset count threshold and/or the reward value meets a preset reward threshold; or determining that the condition is met if the training duration meets a preset duration threshold and/or the reward value meets the preset reward threshold.
In one embodiment of the present disclosure, the method further comprises: and if the training stopping condition is not met, adjusting model parameters of the network slice prediction model to be trained until the training stopping condition is met.
In one embodiment of the present disclosure, the edge-layer resource occupation information includes at least one of remaining network bandwidth resource information and network bandwidth resource information, and the service switching information includes switching between computation-intensive services and delay-sensitive services.
According to another aspect of the present disclosure, there is also provided a network slice migration method, including: acquiring edge-layer resource occupation information and service switching information at the current moment; processing the current edge-layer resource occupation information and service switching information through a target network slice prediction model to obtain a bandwidth prediction value and a slice migration duration for the next moment, wherein the target network slice prediction model is trained by the network slice migration method described above; and generating a slice migration policy according to the bandwidth prediction value for the next moment.
In one embodiment of the present disclosure, the method further comprises: feeding the slice migration policy back to the operations/business support system.
In one embodiment of the present disclosure, the method further comprises: delivering the slice migration policy to the edge layer.
According to another aspect of the present disclosure, there is provided a network slice migration apparatus, including: a first data acquisition module for acquiring edge-layer resource occupation information and service switching information at historical moments; and a first training planning module for processing the historical edge-layer resource occupation information and service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment, obtaining a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function, and obtaining a target network slice prediction model if a preset training stop condition is met.
According to another aspect of the present disclosure, there is provided a network slice migration apparatus, including: a second data acquisition module for acquiring edge-layer resource occupation information and service switching information at the current moment; and a second training planning module for processing the current edge-layer resource occupation information and service switching information through a target network slice prediction model to obtain a bandwidth prediction value and a slice migration duration for the next moment, wherein the target network slice prediction model is trained by the network slice migration apparatus described above, and for generating a slice migration policy according to the bandwidth prediction value for the next moment.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the network slice migration method described above via execution of the executable instructions.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the network slice migration method described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising executable instructions stored in a computer readable storage medium, which are read from the computer readable storage medium by a processor of an electronic device, the executable instructions being executed by the processor, causing the electronic device to perform the above-described network slice migration method.
The embodiments of the disclosure provide a network slice migration method, apparatus, device and storage medium that collect edge-layer resource occupation information and service switching information at historical moments; process them through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment; obtain a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function; and, if a preset training stop condition is met, obtain a target network slice prediction model. This meets the real-time and reliability requirements of diversified slice migration, reduces bandwidth occupation as much as possible, and improves resource utilization.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a network slice migration method or a network slice migration apparatus in an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a network slice migration method in an embodiment of the present disclosure.
Fig. 3 illustrates another network slice migration method flow diagram in an embodiment of the present disclosure.
Fig. 4 illustrates a network slice prediction model training flowchart in an embodiment of the present disclosure.
Fig. 5 shows a network slice prediction model training schematic in an embodiment of the present disclosure.
Fig. 6 shows a flowchart of a reward value obtaining method in an embodiment of the present disclosure.
Fig. 7 shows a flowchart of a network slice migration method in an embodiment of the disclosure.
Fig. 8 illustrates a flowchart of yet another network slice migration method in an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of a network slice migration apparatus in an embodiment of the disclosure.
Fig. 10 shows a schematic diagram of a network slice migration apparatus in an embodiment of the disclosure.
Fig. 11 shows a block diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a network slice migration method or a network slice migration apparatus that may be applied to embodiments of the present disclosure.
As shown in fig. 1, the system architecture may include a user layer, an edge layer, and an orchestration layer.
The user layer serves the underlying services and provides distributed access for the different terminal devices 110. A terminal device 110 may be any of a variety of electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, augmented reality devices, virtual reality devices, and the like.
In one embodiment, the clients of the applications installed in different terminal devices 110 are the same or clients of the same type of application based on different operating systems. The specific form of the application client may also be different based on the different terminal platforms, for example, the application client may be a mobile phone client, a PC client, etc.
The edge layer completes reasonable resource allocation based on the communication requirements of the underlying service migration process. It includes one or more virtualized infrastructure managers (Virtualized Infrastructure Manager, VIM) 120 and a VNF manager (Virtualized Network Function Manager, VNFM) 130. The virtualized infrastructure manager 120 controls and manages the computing, storage and network resources of the network functions virtualization infrastructure (Network Functions Virtualization Infrastructure, NFVI), e.g. starting, stopping, initializing and upgrading hardware and virtual machines; the VNFM manages the life cycle and capacity of service function chains, e.g. it may be used to scale out virtual machines.
As shown in fig. 1, different bandwidth resources and delays may be allocated to different services. For example: for computation-intensive services, the allocated bandwidth resource is W1 = 3 GB/s; for other low-delay services, the allocated bandwidth resource is W2 = 2.5 GB/s with a delay of T = 1-1.2 s, or W3 = 2 GB/s with a delay of T = 1-1.2 s. The values can be determined according to actual requirements and are not specifically limited by this disclosure.
In some embodiments, the edge layer further includes a plurality of mobile edge computing (Mobile Edge Computing, MEC) modules, such as MEC1 and MEC2 in fig. 1. Based on the edge network and edge computing resources, an MEC module provides a combination of connectivity, computing, capability and applications, serving users in close proximity.
The user layer and the edge layer communicate through a network, which is a medium for providing a communication link between the terminal device 110 of the user layer and the edge layer, and the network may be a network device in a wired network or a network device in a wireless network.
Alternatively, the wireless or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a local area network (Local Area Network, LAN), metropolitan area network (Metropolitan Area Network, MAN), wide area network (Wide Area Network, WAN), mobile, wired or wireless network, private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), virtual private networks (Virtual Private Network, VPN), Internet Protocol Security (IPsec), etc. In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the techniques described above.
The orchestration layer is responsible for dynamically orchestrating network resources during slice migration. It includes an operations/business support system 150 and an NFV orchestrator (NFV Orchestrator, NFVO) 140. The operations/business support system 150 covers network management, charging, service provisioning, etc.; the NFV orchestrator 140 is responsible for unified resource provisioning and orchestration across the entire NFV network.
In one embodiment, the orchestration layer further comprises an intelligent network-aware agent that includes a message broker/APIs interface 160, a bandwidth allocator expansion module 170, a training planning module 180 and a request processing unit 190. The agent collects the edge-layer resource occupation and service switching situations in real time downward through the message broker/APIs interface 160, connects upward to the operations/business support system 150 and the NFV orchestrator 140 through the request processing unit 190, and feeds the slice orchestration policy it has optimized back to the NFV orchestrator 140, so as to realize rapid and accurate migration of network slices while occupying as few bandwidth resources as possible.
The training planning module 180 collects the edge-layer resource occupation and service switching situations in real time through the bandwidth allocator expansion module 170 and, combining index requirements such as the bandwidth and delay of the services carried by the slice, trains and optimizes the slice migration policy with the reward function as the objective to determine the final slice migration policy.
The request processing unit 190 feeds the final slice migration policy back to the operations/business support system 150 and simultaneously delivers it to the edge layer through the message broker/APIs interface 160, realizing rapid and accurate migration of network slices under limited bandwidth occupation.
The scheme provided by the embodiments of this application involves technologies such as mobile communication and machine learning, and is a software program applied to a computer. It acquires edge-layer resource occupation information and service switching information at historical moments; processes them through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for a target moment; obtains a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function; and, if a preset training stop condition is met, obtains a target network slice prediction model. A slice migration architecture comprising a user layer, an edge layer and an orchestration layer is constructed, and the introduction of the intelligent network-aware agent further improves the real-time performance and accuracy of the slice migration process, meeting the real-time and reliability requirements of diversified slice migration while reducing bandwidth occupation as much as possible and improving resource utilization. The following embodiments illustrate the application:
First, the embodiments of the present disclosure provide a network slice migration method that may be performed by any system having computing and processing capabilities. The corresponding flow can be executed by a network slice migration apparatus, which may be the intelligent network-aware agent located in the orchestration layer.
Fig. 2 shows a flowchart of a network slice migration method in an embodiment of the present disclosure, as shown in fig. 2, where the network slice migration method provided in the embodiment of the present disclosure includes the following steps:
S202, acquiring edge-layer resource occupation information and service switching information at historical moments.
In one embodiment, the edge-layer resource occupation information includes at least one of remaining network bandwidth resource information and network bandwidth resource information. The service switching information includes switching between computation-intensive services and delay-sensitive services.
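For illustration only, the two input categories could be bundled into one observation record; every field name below is a hypothetical stand-in, as the patent names only the information categories themselves:

```python
from dataclasses import dataclass

@dataclass
class EdgeState:
    """One observation fed to the prediction model.

    All field names are illustrative assumptions; the patent names only
    the information categories (resource occupation and service switching).
    """
    remaining_bandwidth_gbps: float    # remaining network bandwidth resources
    total_bandwidth_gbps: float        # network bandwidth resources
    switched_to_delay_sensitive: bool  # computation-intensive -> delay-sensitive?
```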
The three major 5G application scenarios are enhanced mobile broadband (Enhanced Mobile Broadband, eMBB), massive machine-type communication (Massive Machine Type Communication, mMTC) and ultra-reliable low-latency communication (URLLC).
eMBB covers ultra-high-definition video, virtual reality, augmented reality and the like. These scenarios demand high bandwidth, with key performance indicators including a user-experienced data rate of 100 Mbit/s and peak rates of tens of Gbit/s; applications involving interactive operation are delay-sensitive, e.g. an immersive virtual reality experience requires delays on the order of 10 ms. mMTC covers smart cities, smart homes and the like. URLLC applications include industrial control, unmanned aerial vehicle control, intelligent driving control and the like; such scenarios are delay-sensitive services for which high reliability is also a basic requirement.
In one embodiment, scenarios such as the Internet of vehicles and AR/VR belong to delay-sensitive services, while scenarios such as computation-intensive auxiliary services and Internet-of-things services belong to computation-intensive services.
The historical moment is defined relative to the target moment: a historical moment is any moment before the target moment.
In one embodiment, the edge-layer resource occupation information and service switching information are collected through the message broker/APIs interface, which sends the collected information to the bandwidth allocator expansion module as input for the training planning module.
S204, processing the historical edge-layer resource occupation information and service switching information through the network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration for the target moment.
In one embodiment, the network slice prediction model to be trained represents the correspondence between the historical edge-layer resource occupation information and service switching information on one hand, and the bandwidth prediction value and slice migration duration for the target moment on the other.
The model is trained with the historical edge-layer resource occupation information and service switching information as input and the bandwidth prediction value and slice migration duration for the target moment as output.
When training the network slice prediction model, the historical edge-layer resource occupation information and service switching information, together with the corresponding bandwidth prediction value and slice migration duration at the target moment, are used as training samples. The samples are divided into a training set and a test set in a certain proportion; the training set is used to train and verify the model, so that when the model predicts on the test-set data, the resulting bandwidth prediction value and slice migration duration better match the actual situation, improving the model's accuracy.
The ratio of test samples to training samples may be set according to the actual situation, for example on the order of 2:8 or 3:7.
S206, obtaining a corresponding reward value from the bandwidth prediction value and the slice migration duration based on a pre-constructed reward function.
In one embodiment, the reward function used in optimizing the network slice prediction model is constructed from the delay, reliability and bandwidth occupation of the network slice migration process, so that the effectiveness of migration can be evaluated through the reward function.
The bandwidth prediction value and slice migration duration output by the network slice prediction model are fed into the reward function, and the corresponding reward value is calculated.
S208, if the preset training stop condition is met, obtaining the target network slice prediction model.
It should be noted that the training stop conditions are pre-configured in the training planning module; they include, but are not limited to, a reward threshold, a number of training iterations, a training duration, and the like.
In an embodiment, obtaining the target network slice prediction model if the preset training stop condition is met includes: determining that the condition is met if the number of training iterations meets a preset count threshold and/or the reward value meets a preset reward threshold; or determining that the condition is met if the training duration meets a preset duration threshold and/or the reward value meets the preset reward threshold.
It should be noted that the preset number of times threshold, the preset reward threshold, the preset duration threshold, etc. may be set according to actual situations, and the disclosure is not limited specifically.
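A minimal sketch of the stop-condition check described above (the function name and all threshold values are illustrative assumptions; the patent leaves the exact combination of conditions configurable):

```python
def training_should_stop(iterations, reward, duration_s,
                         max_iterations=10_000, reward_threshold=0.95,
                         max_duration_s=3_600):
    """Return True when a preset training stop condition is met:
    iteration count and/or reward value, or duration and/or reward value."""
    if iterations >= max_iterations and reward >= reward_threshold:
        return True
    if duration_s >= max_duration_s and reward >= reward_threshold:
        return True
    # Depending on configuration, the count or duration limit alone
    # may also be treated as sufficient to stop.
    return iterations >= max_iterations or duration_s >= max_duration_s

stop = training_should_stop(iterations=10_000, reward=0.96, duration_s=120.0)
```

The "and/or" in the patent text is ambiguous; this sketch treats reward-plus-limit as the primary condition and a hard limit as the fallback.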
As shown in fig. 3, in one embodiment, S208 above, obtaining the target network slice prediction model if the preset training stop condition is met, includes:
S2082, judging whether the preset training stop condition is met; if so, executing S2084; if not, executing S2086;
S2084, obtaining the target network slice prediction model;
S2086, adjusting the model parameters of the network slice prediction model to be trained, returning to S204, and retraining the model until the training stop condition is met.
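The S2082–S2086 loop can be sketched as follows (all names are illustrative; the model, training step, and stop check are placeholders standing in for the real prediction model and reward evaluation):

```python
def train_until_stop(model, train_step, stop_condition, max_rounds=1000):
    """S2082-S2086 loop: train one round (S204/S206), check the stop
    condition (S2082); if met, return the model (S2084); otherwise keep
    adjusting parameters and retraining (S2086)."""
    for round_no in range(max_rounds):
        reward = train_step(model)            # S204 + S206
        if stop_condition(round_no, reward):  # S2082
            return model                      # S2084
        # S2086: parameters are adjusted inside the next train_step call
    return model

# Toy example: the "model" is a dict and the reward grows by 1 per round
model = {"w": 0}
def train_step(m):
    m["w"] += 1
    return m["w"]
trained = train_until_stop(model, train_step, lambda r, rew: rew >= 5)
```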
To verify the accuracy of the trained network slice prediction model, 70% of the sample data is randomly selected as training data and the remaining 30% is used as test data. Model training is performed with the training data; after the network slice prediction model is trained, the remaining 30% of test data is input into the model for verification. If the verification passes, the target network slice prediction model is obtained; if it does not pass, the model parameters of the network slice prediction model are adjusted until the training stop condition is met, and the target network slice prediction model is obtained.
It should be noted that, after the network slice prediction model is trained, it is packaged to generate an application program for invocation. The network slice prediction model may also be retrained and updated on a regular basis, for example once a month.
In the embodiment of the disclosure, the historical-moment edge layer resource occupation information and service switching information are collected; the historical-moment edge layer resource occupation information and service switching information are processed by the network slice prediction model to be trained to obtain the bandwidth prediction value and slice migration duration at the target moment; based on the pre-constructed reward function, the corresponding reward value is obtained according to the bandwidth prediction value and slice migration duration; and if the preset training stop condition is met, the target network slice prediction model is obtained. This meets the real-time and reliability requirements of diversified slice migration, reduces the occupied bandwidth as much as possible, and improves resource utilization.
Fig. 4 illustrates a network slice prediction model training flowchart in an embodiment of the present disclosure. As shown in fig. 4, in one embodiment, the network slice prediction model includes a first prediction model and a second prediction model. Step S204 above, processing the historical-moment edge layer resource occupation information and service switching information through the network slice prediction model to be trained to obtain the bandwidth prediction value and slice migration duration at the target moment, includes the following steps:
S402, processing the historical-moment edge layer resource occupation information and service switching information through the first prediction model to obtain a first bandwidth prediction value and a first slice migration duration;
S404, processing the historical-moment edge layer resource occupation information and service switching information through the second prediction model to obtain a second bandwidth prediction value and a second slice migration duration;
S406, fusing the first bandwidth prediction value and the second bandwidth prediction value to obtain the bandwidth prediction value at the target moment; and fusing the first slice migration duration and the second slice migration duration to obtain the slice migration duration at the target moment.
In one embodiment, the first prediction model includes a deep Q-network (DQN) model, and the second prediction model includes a deep deterministic policy gradient (DDPG) model.
The first bandwidth prediction value and the second bandwidth prediction value are fused by weighted summation to obtain the bandwidth prediction value at the target moment, where the weights of the first and second bandwidth prediction values may serve as part of the model parameters of the network slice prediction model. Similarly, the first slice migration duration and the second slice migration duration are fused by weighted summation to obtain the slice migration duration at the target moment, where the weights of the first and second slice migration durations may likewise serve as part of the model parameters of the network slice prediction model.
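The weighted-sum fusion of the two models' outputs can be sketched as below. The weights are learnable model parameters in the patent; here they are fixed, and all numeric values are illustrative:

```python
def fuse(value_dqn, value_ddpg, w_dqn=0.5, w_ddpg=0.5):
    """Weighted-sum fusion of the DQN and DDPG model outputs.
    In the patent the weights are part of the model parameters;
    they are fixed constants here purely for illustration."""
    return w_dqn * value_dqn + w_ddpg * value_ddpg

# Fuse the two bandwidth predictions and the two migration durations
bandwidth_pred = fuse(120.0, 100.0, w_dqn=0.6, w_ddpg=0.4)  # Mbps, illustrative
migration_time = fuse(2.0, 3.0, w_dqn=0.6, w_ddpg=0.4)      # seconds, illustrative
```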
Fig. 5 shows a network slice prediction model training schematic in an embodiment of the present disclosure. As shown in fig. 5, the network slice prediction model includes a first prediction model 510 and a second prediction model 520, wherein the first prediction model 510 may include a DQN model and the second prediction model 520 may include a DDPG model.
The bandwidth allocation extension module 170 sends the edge layer resource occupancy information and the service switching information collected by the message broker/APIs interface to the network slice prediction model as input for model training.
The DQN model uses a discretization module (e.g., a neural network) to extract features from the edge layer resource occupation information and service switching information. The DQN model comprises a DQN network and a DQN target network: the extracted features are input into the DQN network, which outputs the first bandwidth prediction value and the first slice migration duration, and the DQN network is trained iteratively based on gradient updates. The parameters of the DQN target network are kept unchanged during a certain number of iterations, and after a preset number of iterations they are replaced with the parameters of the DQN network.
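The periodic ("hard") target-network synchronization described above can be sketched as follows. `TinyQNet`, the update rule, and the sync interval are illustrative stand-ins for the real DQN networks and gradient updates:

```python
import copy

class TinyQNet:
    """Stand-in for the DQN network; the real model is a neural network."""
    def __init__(self):
        self.params = {"w": 0.0}

q_net = TinyQNet()
target_net = copy.deepcopy(q_net)  # target network starts as a copy

SYNC_EVERY = 100  # preset number of iterations between syncs (illustrative)
for step in range(1, 301):
    q_net.params["w"] += 0.01  # stand-in for a gradient update
    if step % SYNC_EVERY == 0:
        # Hard update: replace the target network's parameters with the
        # DQN network's parameters; between syncs the target stays fixed.
        target_net.params = copy.deepcopy(q_net.params)
```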
"Memory replay" means setting up an experience pool to store state-transition processes: each interaction with the environment is recorded, and at each model training step a small mini-batch of state transitions is randomly sampled from the pool for learning. The purpose is to break the temporal correlation between data in the learning samples, so that learning can draw on a wider range of past experience rather than being limited to the current environment.
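The experience pool above can be sketched as a bounded buffer with random mini-batch sampling (class and method names are ours; the transition tuples are dummies):

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience pool for state transitions (s, a, r, s_next)."""
    def __init__(self, capacity=10_000):
        # deque with maxlen drops the oldest transitions once full
        self.pool = deque(maxlen=capacity)

    def record(self, transition):
        self.pool.append(transition)

    def sample(self, batch_size):
        # Random mini-batch sampling breaks temporal correlation
        return random.sample(list(self.pool), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.record((t, "action", 1.0, t + 1))  # dummy transitions
batch = buf.sample(8)
```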
The DDPG model is based on the actor-critic (AC) architecture. It uses neural networks to extract features from a continuous state space and a high-dimensional action space, and by combining the "memory replay" and "fixed target network" ideas from the DQN algorithm, it can achieve good convergence speed and stability. The DDPG model comprises an Actor and a Critic. The Actor constructs a parameterized policy and outputs actions according to the current state, such as the policy network in fig. 5. The Critic constructs a Q network and evaluates the current policy based on the reward value fed back by the environment, such as the evaluation network in fig. 5; the temporal-difference error is calculated using a gradient loss function to update the parameters of the Actor and the Critic.
Because of the high dimensionality of the state space and action space, neural networks are used in both the Actor and Critic parts to construct the parameterized policy and the action-value function. If the parameters producing the target value and the parameters producing the estimated value change simultaneously, the learning process becomes unstable and may diverge. The "fixed target network" method from the DQN model effectively solves this problem: while one set of neural networks estimates the value, another set is established as the target networks (a target policy network and a target evaluation network). During iteration, the parameters of the policy network and the evaluation network update the parameters of the target policy network and target evaluation network at every step, but with a very small update amplitude, so the learning process is closer to supervised learning and the convergence of the neural networks is more stable.
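The small-amplitude per-step target update described above is the "soft" (Polyak) update commonly used in DDPG; a sketch under that assumption, with single-parameter dicts standing in for network weights:

```python
def soft_update(target_params, source_params, tau=0.005):
    """Soft target-network update: target <- tau*source + (1-tau)*target.
    Applied every step with a very small tau, so the target networks
    track the learned networks slowly and the learning stays stable."""
    return {k: tau * source_params[k] + (1 - tau) * target_params[k]
            for k in target_params}

target = {"w": 0.0}  # target network parameter (illustrative)
source = {"w": 1.0}  # learned network parameter (illustrative)
for _ in range(1000):
    target = soft_update(target, source, tau=0.005)
# After many small steps the target slowly approaches the source network
```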
In the embodiment of the disclosure, the network slice prediction model based on the DQN model and the DDPG model selects different models according to service differences and feeds the optimization result back to the operation/transaction support system and the edge side, thereby realizing rapid and accurate migration of network slices under different service requests.
Fig. 6 shows a flowchart of a reward value obtaining method in an embodiment of the present disclosure. As shown in fig. 6, in one embodiment, the reward value is obtained by:
S602, obtaining a bandwidth reward value according to the bandwidth prediction value at the target moment;
S604, obtaining a delay reward value according to the slice migration duration at the target moment;
S606, performing a weighted summation of the bandwidth reward value and the delay reward value to obtain the reward value.
In one embodiment, the reciprocal of the bandwidth prediction value at the target moment is used as the bandwidth reward value, and the reciprocal of the slice migration duration at the target moment is used as the delay reward value.
It should be noted that the weight of the bandwidth reward value and the weight of the delay reward value may be preconfigured, and both values may be determined according to the actual situation.
In one embodiment, when switching from a computation-intensive service to a delay-sensitive service, the weight corresponding to the slice migration duration is greater than the weight corresponding to the bandwidth prediction value;
when switching from a delay-sensitive service to a computation-intensive service, the weight corresponding to the slice migration duration is smaller than the weight corresponding to the bandwidth prediction value.
Illustratively, when switching from a computation-intensive service to a delay-sensitive service, the weight corresponding to the delay reward value may be configured as 0.6 and the weight corresponding to the bandwidth reward value as 0.4.
Illustratively, when switching from a delay-sensitive service to a computation-intensive service, the weight corresponding to the delay reward value may be configured as 0.4 and the weight corresponding to the bandwidth reward value as 0.6.
For other service situations, the weight corresponding to the delay reward value and the weight corresponding to the bandwidth reward value may both be configured as 0.5.
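Putting S602–S606 together with the reciprocal rewards and the service-dependent weights gives the following sketch (function name, switch-type labels, and input values are illustrative; the weight pairs come from the example configuration above):

```python
def reward(bandwidth_pred, migration_duration, switch_type="other"):
    """Reward = weighted sum of reciprocal bandwidth prediction and
    reciprocal migration duration, with weights chosen by the type of
    service switch (computation-intensive vs. delay-sensitive)."""
    weights = {
        "compute_to_delay": (0.4, 0.6),  # (bandwidth weight, delay weight)
        "delay_to_compute": (0.6, 0.4),
        "other": (0.5, 0.5),
    }
    w_bw, w_delay = weights[switch_type]
    bw_reward = 1.0 / bandwidth_pred         # S602: reciprocal of bandwidth
    delay_reward = 1.0 / migration_duration  # S604: reciprocal of duration
    return w_bw * bw_reward + w_delay * delay_reward  # S606: weighted sum

r = reward(bandwidth_pred=100.0, migration_duration=2.0,
           switch_type="compute_to_delay")
```

Because both rewards are reciprocals, lower predicted bandwidth occupation and shorter migration duration both raise the reward, which is what the optimization maximizes.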
In the embodiment of the disclosure, key factors affecting network slice migration, such as memory pages and storage space, are evaluated according to the differences in service types, bandwidth resources are dynamically adjusted according to service requirements, and the reward function in the optimization process of the network slice prediction model is constructed based on the latency, reliability, and bandwidth occupancy of the slice migration process, thereby achieving effective evaluation of the migration process.
Fig. 7 shows a flowchart of a network slice migration method in an embodiment of the disclosure. As shown in fig. 7, in an embodiment, the embodiment of the present disclosure further provides a network slice migration method, including:
S702, collecting the current-moment edge layer resource occupation information and service switching information;
S704, processing the current-moment edge layer resource occupation information and service switching information through the target network slice prediction model to obtain the bandwidth prediction value and slice migration duration at the next moment, where the target network slice prediction model is obtained by training through the network slice migration method described above;
S706, generating a slice migration strategy according to the bandwidth prediction value at the next moment.
The current time is a time before the next time.
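The inference-time flow S702–S706 can be sketched as follows. The strategy rule at the end is an illustrative placeholder: the patent does not specify the exact form of the slice migration strategy, and all names and the bandwidth budget are assumptions:

```python
def plan_migration(model, edge_resource_info, service_switch_info,
                   bandwidth_budget=1000.0):
    """S702-S706: feed current-moment inputs to the trained model,
    get next-moment predictions, and derive a simple migration strategy."""
    bandwidth_pred, migration_duration = model(edge_resource_info,
                                               service_switch_info)  # S704
    strategy = {  # S706: placeholder strategy derived from the prediction
        "migrate_now": bandwidth_pred <= bandwidth_budget,
        "reserved_bandwidth": bandwidth_pred,
        "expected_duration": migration_duration,
    }
    return strategy

# Dummy trained model returning fixed predictions (Mbps, seconds)
strategy = plan_migration(lambda e, s: (800.0, 2.5),
                          edge_resource_info={}, service_switch_info={})
```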
As shown in fig. 8, in one embodiment, the method further comprises: s708, feeding back the slice migration strategy to the operation/transaction supporting system.
As shown in fig. 8, in one embodiment, the method further comprises: s710, transferring the slice migration strategy to an edge layer.
In the embodiment of the disclosure, the current-moment edge layer resource occupation information and service switching information are collected; the current-moment edge layer resource occupation information and service switching information are processed through the target network slice prediction model to obtain the bandwidth prediction value and slice migration duration at the next moment, where the target network slice prediction model is obtained by training through the network slice migration method described above; and a slice migration strategy is generated according to the bandwidth prediction value at the next moment, realizing rapid and accurate slice migration under different service requests.
Based on the same inventive concept, a network slice migration device is also provided in the embodiments of the present disclosure, as described in the following embodiments. Since the principle of solving the problem of the system embodiment is similar to that of the method embodiment, the implementation of the system embodiment can be referred to the implementation of the method embodiment, and the repetition is omitted.
Fig. 9 shows a schematic diagram of a network slice migration apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the network slice migration apparatus includes a first data acquisition module 910 and a first training planning module 920.
The first data acquisition module 910 is configured to acquire historical moment edge layer resource occupation information and service switching information;
The first training planning module 920 is configured to process the historical-moment edge layer resource occupation information and service switching information through the network slice prediction model to be trained, to obtain the bandwidth prediction value and slice migration duration at the target moment; obtain the corresponding reward value according to the bandwidth prediction value and slice migration duration based on the pre-constructed reward function; and obtain the target network slice prediction model if the preset training stop condition is met.
In one embodiment, the network slice prediction model includes a first prediction model and a second prediction model. The first training planning module 920 is configured to process the historical-moment edge layer resource occupation information and service switching information through the first prediction model to obtain a first bandwidth prediction value and a first slice migration duration; process the historical-moment edge layer resource occupation information and service switching information through the second prediction model to obtain a second bandwidth prediction value and a second slice migration duration; fuse the first bandwidth prediction value and the second bandwidth prediction value to obtain the bandwidth prediction value at the target moment; and fuse the first slice migration duration and the second slice migration duration to obtain the slice migration duration at the target moment.
It should be noted that the first prediction model includes a deep Q-network (DQN) model, and the second prediction model includes a deep deterministic policy gradient (DDPG) model.
It should be noted that the reward value is obtained by: obtaining the bandwidth reward value according to the bandwidth prediction value at the target moment; obtaining the delay reward value according to the slice migration duration at the target moment; and performing a weighted summation of the bandwidth reward value and the delay reward value to obtain the reward value.
It should be noted that, when switching from a computation-intensive service to a delay-sensitive service, the weight corresponding to the slice migration duration is greater than the weight corresponding to the bandwidth prediction value; when switching from a delay-sensitive service to a computation-intensive service, the weight corresponding to the slice migration duration is smaller than the weight corresponding to the bandwidth prediction value.
In one embodiment, the first training planning module 920 is configured to determine that the preset training stop condition is met if the number of training iterations meets a preset count threshold and/or the reward value meets a preset reward threshold; or to determine that the preset training stop condition is met if the training duration meets a preset duration threshold and/or the reward value meets the preset reward threshold.
In one embodiment, the first training planning module 920 is configured to adjust the model parameters of the network slice prediction model to be trained until the training stop condition is met if the training stop condition is not met.
It should be noted that the edge layer resource occupation information includes at least one of network residual bandwidth resource information and network bandwidth resource information; the traffic switching information includes a switching situation between a computationally intensive traffic and a delay sensitive traffic.
The network slice migration device provided by the embodiment of the disclosure collects the historical-moment edge layer resource occupation information and service switching information; processes the historical-moment edge layer resource occupation information and service switching information through the network slice prediction model to be trained to obtain the bandwidth prediction value and slice migration duration at the target moment; obtains the corresponding reward value according to the bandwidth prediction value and slice migration duration based on the pre-constructed reward function; and obtains the target network slice prediction model if the preset training stop condition is met. This meets the real-time and reliability requirements of diversified slice migration, reduces the occupied bandwidth as much as possible, and improves resource utilization.
Fig. 10 illustrates a schematic diagram of a network slice migration apparatus in an embodiment of the present disclosure, as shown in fig. 10, including a second data acquisition module 1010 and a second training planning module 1020.
The second data acquisition module 1010 is configured to acquire current edge layer resource occupation information and service switching information;
the second training planning module 1020 is configured to process the current time edge layer resource occupation information and the service switching information through a target network slice prediction model, to obtain a bandwidth prediction value and a slice migration duration at the next time, where the target network slice prediction model is obtained through training by the network slice migration device; and generating a slice migration strategy according to the bandwidth predicted value of the next moment.
In one embodiment, the apparatus further comprises a request processing unit 1030 for feeding back the slice migration policy to the operation/transaction support system.
In one embodiment, the apparatus further comprises a message proxy/APIs interface and a bandwidth allocator extension module for communicating the slice migration policy to the edge layer via the message proxy/APIs interface.
In the embodiment of the disclosure, the current-moment edge layer resource occupation information and service switching information are collected; the current-moment edge layer resource occupation information and service switching information are processed through the target network slice prediction model to obtain the bandwidth prediction value and slice migration duration at the next moment, where the target network slice prediction model is obtained by training through the network slice migration method described above; and a slice migration strategy is generated according to the bandwidth prediction value at the next moment, realizing rapid and accurate slice migration under different service requests.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module", or "system".
An electronic device 1100 according to this embodiment of the invention is described below with reference to fig. 11. The electronic device 1100 shown in fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 11, the electronic device 1100 is embodied in the form of a general purpose computing device. Components of the electronic device 1100 may include, but are not limited to: at least one processing unit 1110, at least one storage unit 1120, and a bus 1130 connecting the different system components (including the storage unit 1120 and the processing unit 1110).
Wherein the storage unit stores program code that is executable by the processing unit 1110 such that the processing unit 1110 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 1110 may perform acquisition of history time edge layer resource occupancy information and traffic switching information as shown in fig. 2; processing the historical moment edge layer resource occupation information and the service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration of a target moment; based on a pre-constructed rewarding function, obtaining a corresponding rewarding value according to a bandwidth predicted value and a slice migration time length; and if the preset training stopping condition is met, obtaining a target network slice prediction model.
For example, the processing unit 1110 may perform the collection of the current time edge layer resource occupancy information and the service switching information as shown in fig. 7; processing the edge layer resource occupation information and the service switching information at the current moment through a target network slice prediction model to obtain a bandwidth prediction value and a slice migration duration at the next moment, wherein the target network slice prediction model is obtained through training by the network slice migration method; and generating a slice migration strategy according to the bandwidth predicted value of the next moment.
The storage unit 1120 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 11201 and/or a cache memory 11202, and may further include a Read Only Memory (ROM) 11203.
The storage unit 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 1130 represents one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1100 may also communicate with one or more external devices 1140 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the system, and/or any devices (e.g., routers, modems, etc.) that enable the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1150. Also, the system may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 1160. As shown, network adapter 1160 communicates with other modules of electronic device 1100 via bus 1130. It should be appreciated that although not shown in fig. 11, other hardware and/or software modules may be used in connection with electronic device 1100, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read-only memory (CD-ROM) and comprise program code and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (13)

1. A network slice migration method, comprising:
acquiring historical moment edge layer resource occupation information and service switching information;
processing the historical moment edge layer resource occupation information and the service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration of a target moment;
based on a pre-constructed reward function, obtaining a corresponding reward value according to the bandwidth prediction value and the slice migration duration;
if the preset training stopping condition is met, a target network slice prediction model is obtained;
wherein the network slice prediction model comprises a first prediction model and a second prediction model; the first prediction model comprises a deep Q-network (DQN) model and uses a discretization module to extract features from the edge layer resource occupation information and the service switching information; the second prediction model comprises a deep deterministic policy gradient (DDPG) model and uses a neural network to extract features from a continuous state space and a high-dimensional action space;
wherein the processing of the historical moment edge layer resource occupation information and the service switching information through the network slice prediction model to be trained to obtain the bandwidth prediction value and the slice migration duration of the target moment comprises the following steps:
processing the historical moment edge layer resource occupation information and the service switching information through the first prediction model to obtain a first bandwidth predicted value and a first slice migration duration;
processing the historical moment edge layer resource occupation information and the service switching information through the second prediction model to obtain a second bandwidth predicted value and a second slice migration duration;
fusing the first bandwidth predicted value and the second bandwidth predicted value to obtain a bandwidth predicted value of the target moment; and fusing the first slice migration duration and the second slice migration duration to obtain the slice migration duration of the target moment.
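As an illustrative (non-claimed) sketch, the fusion step of claim 1 might look as follows; the weighted-average fusion rule and the 0.5/0.5 weights are assumptions for illustration only, since the claim only requires that the two branches' predictions be fused:

```python
# Illustrative sketch of the fusion step in claim 1: the DQN branch and the
# DDPG branch each emit a (bandwidth, migration-duration) pair, and the two
# pairs are fused into the final prediction for the target moment.
# The averaging rule and the default weights are assumptions.

def fuse_predictions(dqn_pred, ddpg_pred, w_dqn=0.5, w_ddpg=0.5):
    """Fuse (bandwidth, duration) pairs from the two prediction branches."""
    bw_dqn, dur_dqn = dqn_pred
    bw_ddpg, dur_ddpg = ddpg_pred
    bandwidth = w_dqn * bw_dqn + w_ddpg * bw_ddpg    # fused bandwidth prediction
    duration = w_dqn * dur_dqn + w_ddpg * dur_ddpg   # fused slice migration duration
    return bandwidth, duration
```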
2. The network slice migration method of claim 1, wherein the reward value is obtained by:
obtaining a bandwidth reward value according to the bandwidth prediction value of the target moment;
obtaining a delay reward value according to the slice migration duration of the target moment; and
carrying out weighted summation on the bandwidth reward value and the delay reward value to obtain the reward value.
3. The network slice migration method of claim 2, wherein, when switching from a computation-intensive service to a delay-sensitive service, the weight corresponding to the delay reward value is greater than the weight corresponding to the bandwidth reward value; and
when switching from the delay-sensitive service to the computation-intensive service, the weight corresponding to the delay reward value is smaller than the weight corresponding to the bandwidth reward value.
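A minimal sketch of the reward function described in claims 2 and 3; the concrete weights 0.7/0.3 are assumptions, as the claims only constrain their ordering relative to the switching direction:

```python
# Illustrative reward function for claims 2-3: a weighted sum of a bandwidth
# reward and a delay reward, with the weights swapped according to the
# service-switching direction. The weight values 0.7/0.3 are assumptions.

def combined_reward(bandwidth_reward, delay_reward, to_delay_sensitive):
    if to_delay_sensitive:            # compute-intensive -> delay-sensitive
        w_delay, w_bandwidth = 0.7, 0.3   # delay weight dominates
    else:                             # delay-sensitive -> compute-intensive
        w_delay, w_bandwidth = 0.3, 0.7   # bandwidth weight dominates
    return w_bandwidth * bandwidth_reward + w_delay * delay_reward
```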
4. The network slice migration method according to claim 1, wherein the obtaining the target network slice prediction model if the preset training stop condition is satisfied includes:
if the number of training rounds meets a preset count threshold and/or the reward value meets a preset reward threshold, determining that the preset training stopping condition is met; or
if the training duration meets a preset duration threshold and/or the reward value meets a preset reward threshold, determining that the preset training stopping condition is met.
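The stopping test of claim 4 (first alternative) can be sketched as a simple predicate; the threshold values here are illustrative assumptions, not taken from the patent:

```python
# Illustrative stopping check for claim 4: training stops once the round
# count reaches its preset threshold and/or the reward value reaches its
# preset reward threshold. Both threshold values are assumptions.

def should_stop(rounds, reward_value, max_rounds=10_000, reward_threshold=0.95):
    """Return True when the preset training stopping condition is met."""
    return rounds >= max_rounds or reward_value >= reward_threshold
```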
5. The network slice migration method of claim 1, further comprising:
and if the training stopping condition is not met, adjusting model parameters of the network slice prediction model to be trained until the training stopping condition is met.
6. The network slice migration method of claim 1, wherein the edge layer resource occupancy information includes at least one of network bandwidth remaining resource information and network bandwidth resource information;
the service switching information includes a switching condition between a computation-intensive service and a delay-sensitive service.
7. A network slice migration method, comprising:
acquiring the current time edge layer resource occupation information and service switching information;
processing the current time edge layer resource occupation information and the service switching information through a target network slice prediction model to obtain a bandwidth prediction value and a slice migration duration of the next moment, wherein the target network slice prediction model is obtained through training by the network slice migration method according to any one of claims 1-6; and generating a slice migration strategy according to the bandwidth prediction value of the next moment.
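An illustrative sketch of the inference step in claim 7: run the trained predictor on the current edge-layer state and derive a migration decision from the predicted bandwidth. The predictor interface, the bandwidth floor, and the "migrate when bandwidth will be scarce" rule are assumptions for illustration only:

```python
# Illustrative policy generation for claim 7. `predict` stands in for the
# trained target network slice prediction model; its interface, the
# bandwidth_floor value, and the decision rule are hypothetical.

def plan_migration(predict, state, bandwidth_floor=50.0):
    """Predict next-moment bandwidth/duration and derive a migration decision."""
    bandwidth, duration = predict(state)          # trained target model
    return {
        "migrate": bandwidth < bandwidth_floor,   # migrate if bandwidth will be scarce
        "predicted_bandwidth": bandwidth,
        "expected_duration": duration,
    }
```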
8. The network slice migration method of claim 7, further comprising:
feeding the slice migration strategy back to an operation/business support system.
9. The network slice migration method of claim 7, further comprising:
delivering the slice migration strategy to the edge layer.
10. A network slice migration apparatus, comprising:
the first data acquisition module is used for acquiring historical moment edge layer resource occupation information and service switching information;
the first training planning module is used for processing the historical moment edge layer resource occupation information and the service switching information through a network slice prediction model to be trained to obtain a bandwidth prediction value and a slice migration duration of a target moment; obtaining, based on a pre-constructed reward function, a corresponding reward value according to the bandwidth prediction value and the slice migration duration; and obtaining a target network slice prediction model if a preset training stopping condition is met, wherein the network slice prediction model comprises a first prediction model and a second prediction model; the first prediction model comprises a deep Q-network (DQN) model and uses a discretization module to extract features from the edge layer resource occupation information and the service switching information; the second prediction model comprises a deep deterministic policy gradient (DDPG) model and uses a neural network to extract features from a continuous state space and a high-dimensional action space;
The first training planning module is used for processing the historical moment edge layer resource occupation information and the service switching information through the first prediction model to obtain a first bandwidth predicted value and a first slice migration duration; processing the historical moment edge layer resource occupation information and the service switching information through the second prediction model to obtain a second bandwidth predicted value and a second slice migration duration; fusing the first bandwidth predicted value and the second bandwidth predicted value to obtain a bandwidth predicted value of the target moment; and fusing the first slice migration duration and the second slice migration duration to obtain the slice migration duration of the target moment.
11. A network slice migration apparatus, comprising:
the second data acquisition module is used for acquiring the current time edge layer resource occupation information and service switching information;
the second training planning module is used for processing the current time edge layer resource occupation information and the service switching information through a target network slice prediction model to obtain a bandwidth prediction value and a slice migration duration of the next moment, wherein the target network slice prediction model is obtained through training by the network slice migration apparatus according to claim 10; and generating a slice migration strategy according to the bandwidth prediction value of the next moment.
12. An electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the network slice migration method of any one of claims 1-6, or to perform the network slice migration method of any one of claims 7-9, via execution of the executable instructions.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the network slice migration method according to any one of claims 1-6 or implements the network slice migration method according to any one of claims 7-9.
CN202310791136.8A 2023-06-29 2023-06-29 Network slice migration method, device, equipment and storage medium Active CN116528255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310791136.8A CN116528255B (en) 2023-06-29 2023-06-29 Network slice migration method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116528255A (en) 2023-08-01
CN116528255B (en) 2023-10-10

Family

ID=87392477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310791136.8A Active CN116528255B (en) 2023-06-29 2023-06-29 Network slice migration method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116528255B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110275758A (en) * 2019-05-09 2019-09-24 重庆邮电大学 A kind of virtual network function intelligence moving method
CN112153700A (en) * 2019-06-26 2020-12-29 华为技术有限公司 Network slice resource management method and equipment
CN114374605A (en) * 2022-01-12 2022-04-19 重庆邮电大学 Dynamic adjustment and migration method for service function chain in network slice scene

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10506489B2 (en) * 2015-09-18 2019-12-10 Huawei Technologies Co., Ltd. System and methods for network slice reselection
CN109196828A (en) * 2016-06-16 2019-01-11 华为技术有限公司 A kind of method for managing resource and device of network slice
CN112511342B (en) * 2020-11-16 2022-04-15 北京邮电大学 Network slicing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant