CN114489938A - Method for constructing user side QoS prediction model based on cloud edge collaborative mode - Google Patents


Info

Publication number
CN114489938A
Authority
CN
China
Prior art keywords
service
matrix
model
edge
user
Prior art date
Legal status
Granted
Application number
CN202210010687.1A
Other languages
Chinese (zh)
Other versions
CN114489938B (en)
Inventor
许建龙
林健
黎宇森
佘薇薇
Current Assignee
Shantou University
Original Assignee
Shantou University
Priority date
Filing date
Publication date
Application filed by Shantou University
Priority to CN202210010687.1A
Publication of CN114489938A
Application granted
Publication of CN114489938B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45504: Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G06F 9/45508: Runtime interpretation or emulation, e.g. emulator loops, bytecode interpretation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode, comprising the following steps: the cloud server sends corresponding first model configuration files to M user sides; each user side creates a local prediction model; the cloud server cyclically designates M1 randomly chosen user sides to execute a model pre-training instruction; the cloud server sends corresponding second model configuration files to N edge servers and designates M2 user sides to execute iterative training of the current local prediction model; each edge server creates an edge service matrix and updates the associated edge service matrix according to the model training results of the M2 user sides; each edge server cyclically designates K1 user sides, chosen at random from the K user sides it controls, to execute a model fine-tuning instruction; and each user side loads the optimal edge service matrix of the edge server it belongs to and constructs a QoS prediction model. The invention protects user privacy while accelerating the convergence of model training.

Description

Method for constructing user side QoS prediction model based on cloud edge collaborative mode
Technical Field
The invention relates to the technical field of QoS (Quality of Service) prediction application, in particular to a method for constructing a user-side QoS prediction model based on a cloud edge collaborative mode.
Background
The user-side QoS value is a parameter index that helps a user side select high-quality services from functionally similar cloud services. Researchers have proposed collaborative filtering to predict user-side QoS values, but that approach requires the cloud server to collect the user side's historical QoS data, which may compromise user privacy. Building on this, other researchers introduced federated learning into user-side QoS value prediction, so that the cloud server only needs to collect the user side's local model training results, which protects user privacy to some extent; however, because the distribution of user-side QoS data differs greatly across geographic areas, model training converges slowly. How to train models efficiently while protecting user privacy is therefore the technical problem to be solved by the invention.
Disclosure of Invention
The invention provides a method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode, which solves one or more technical problems in the prior art and at least provides a beneficial alternative.
The invention provides a method for constructing a user side QoS prediction model based on a cloud edge collaborative mode, which comprises the following steps:
the cloud server creates N service matrixes and corresponding N temporary service gradient matrixes according to edge servers deployed in an area and the user sides controlled by the edge servers, wherein the number of the edge servers is N (N > 0) and the number of user sides controlled by the N edge servers is M (M ≥ N), and then sends corresponding first model configuration files to the M user sides, wherein the first model configuration files comprise user matrix structure parameters and service matrix structure parameters;
each user side in the M user sides creates a local prediction model according to the received first model configuration file;
the cloud server cyclically and randomly designates M1 (M1 < M) user sides from the M user sides to assist in executing a model pre-training instruction, until the N service matrixes are updated to N optimal service matrixes;
the cloud server sends corresponding second model configuration files to the N edge servers, wherein the second model configuration files comprise the optimal service matrixes and the service matrix structure parameters corresponding to the optimal service matrixes, and then randomly designates M2 (M2 < M) user sides from the M user sides to execute iterative training of the current local prediction model;
each edge server in the N edge servers creates an edge service matrix according to the received second model configuration file, and updates the associated edge service matrix according to the model training results fed back by the M2 user sides, so as to obtain N edge service matrixes to be adjusted;
each edge server in the N edge servers cyclically and randomly designates K1 (K1 < K) user sides from the K (K < M) user sides it controls to assist in executing a model fine-tuning instruction, until its edge service matrix to be adjusted is updated to the optimal edge service matrix;
and each user side in the M user sides loads the optimal edge service matrix sent by the edge server to which the user side belongs, and further constructs a QoS prediction model of each user side.
Further, the model pre-training instruction sequentially comprises a model prediction instruction, a model training instruction and a matrix updating instruction;
the model prediction instruction is used for designating the M1 user sides to perform result prediction and error feedback of the current local prediction model by calling their own historical QoS data;
the model training instruction is used for designating the M1 user sides to perform iterative training and gradient feedback of the current local prediction model by calling their own historical QoS data;
the matrix updating instruction is used for updating the current N service matrixes when the average error is judged to be larger than or equal to a first preset threshold value;
or the matrix updating instruction is used for defining the current N service matrixes as N optimal service matrixes when the average error is judged to be smaller than a first preset threshold value.
Further, before the cloud server specifies that the M1 user terminals execute the model prediction instruction and the model training instruction, the method further includes:
based on the edge server to which each of the M1 user terminals belongs, the cloud server sends a corresponding current service matrix to each of the M1 user terminals.
Further, the matrix update instruction is configured to update the current N service matrices, including:
updating the associated current temporary service gradient matrix by using the model training results fed back by the M1 user terminals, and further obtaining N updated temporary service gradient matrices;
updating the current N service matrixes by using the updated N temporary service gradient matrixes to further obtain updated N service matrixes, wherein any one updated service matrix is as follows:
CS′_i = CS_i + a·Σ_{j≠i} CG′_j + CG′_i

where CS′_i is the updated i-th service matrix, CS_i is the current i-th service matrix, a is a weight value with a < 1, CG′_j is the updated j-th temporary service gradient matrix, and CG′_i is the updated i-th temporary service gradient matrix.
Further, the iterative training of the current local prediction model performed by the M2 (M2 < M) user sides randomly designated among the M user sides comprises:
based on the edge server to which each of the M2 user terminals belongs, the cloud server sends a corresponding optimal service matrix to each of the M2 user terminals;
and each of the M2 user terminals loads the respectively received optimal service matrix to the current local prediction model, and then performs iterative training of the current local prediction model by calling self historical QoS data.
Further, the model fine-tuning instruction sequentially comprises a model re-prediction instruction, a model re-training instruction and a matrix re-updating instruction;
the model re-prediction instruction is used for designating the K1 user sides to perform result prediction and error feedback of the current local prediction model by calling their own historical QoS data;
the model re-training instruction is used for designating the K1 user sides to perform iterative training and gradient feedback of the current local prediction model by calling their own historical QoS data;
the matrix re-updating instruction is used for updating the current edge service matrix to be adjusted when the average error is judged to be larger than or equal to a second preset threshold value;
or, the matrix re-updating instruction is used for defining the current edge service matrix to be adjusted as the optimal edge service matrix when the average error is judged to be smaller than the second preset threshold.
Further, before the K1 user sides that each edge server designates from those it controls execute the model re-prediction instruction and the model re-training instruction, the method further includes:
each edge server sends its current edge service matrix to be adjusted to each of the K1 user sides it controls.
Further, the matrix re-update instruction is used to update the current edge service matrix to be adjusted, and the update is as follows:
ES′ = ES + Σ_{i=1}^{K1} e_{i_g1}

where ES′ is the updated edge service matrix to be adjusted, ES is the current edge service matrix to be adjusted, and e_{i_g1} is the local service gradient matrix fed back by the i-th of the K1 user sides after performing model iterative training.
The invention has at least the following beneficial effects: by adopting the cloud edge cooperation technology, firstly, data interaction between the cloud server and the user side is used as model pre-training operation, and then data interaction between the edge server and the user side in the geographic area to which the edge server belongs is used as model fine-tuning operation, so that local model training results fed back by the user side in the same geographic area can be processed in a centralized manner, and the convergence rate of model training can be effectively improved. The two operation processes adopt the federal learning technology, so that the cloud server and the edge server can only receive the local model training result fed back by the user side, and the user privacy is effectively protected.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a schematic flowchart of a method for building a user-side QoS prediction model based on a cloud-edge collaborative mode in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that although functional block divisions are provided in the system drawings and logical sequences are shown in the flowcharts, in some cases, the steps shown or described may be performed in a different order than the block divisions in the systems or in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Referring to fig. 1, fig. 1 is a schematic flowchart of a method for building a user-side QoS prediction model based on a cloud-edge collaborative mode according to an embodiment of the present invention, where the method includes the following steps:
s101, the cloud server creates N service matrixes and corresponding N temporary service gradient matrixes according to edge servers deployed in an area and user sides controlled by the edge servers, wherein the number of the edge servers is N (N is greater than 0), the number of the user sides controlled by the N edge servers is M (M is greater than or equal to N), and then corresponding first model configuration files are sent to the M user sides.
In the embodiment of the present invention, the cloud server creates, for each edge server, a service matrix and a temporary service gradient matrix with the same structural parameters. The service matrix has p rows × q columns: the q columns represent the q service items the edge server can provide, and the p rows mean that each service item is characterized by p parameters. The internal parameters of a service matrix are randomly assigned when it is created, while the internal parameters of its associated temporary service gradient matrix are initialized to 0.
In the embodiment of the present invention, each of the M user sides is assigned to a corresponding edge server for management according to its geographic area. Since the user-side QoS prediction model is a matrix factorization model formed by multiplying a user matrix and a service matrix, the first model configuration file sent by the cloud server to any user side specifies a user matrix structure of 1 row × p columns and the p rows × q columns service matrix structure associated with the user side's edge server.
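As a minimal illustrative sketch (the patent gives no code; all names, the dimensions chosen, and the use of NumPy are assumptions), the matrix creation of step S101 and the per-client configuration could look like:

```python
import numpy as np

def init_cloud_state(n_edges, p, q, seed=0):
    """S101 sketch: for each of the N edge servers, create a p x q service
    matrix with randomly assigned parameters and a temporary service
    gradient matrix of the same shape initialized to zero."""
    rng = np.random.default_rng(seed)
    service = [rng.random((p, q)) for _ in range(n_edges)]
    temp_grad = [np.zeros((p, q)) for _ in range(n_edges)]
    return service, temp_grad

def first_model_config(p, q):
    """First model configuration file for a user side: a 1 x p user matrix
    structure and the p x q service matrix structure of the edge server
    the user side belongs to."""
    return {"user_matrix_shape": (1, p), "service_matrix_shape": (p, q)}

service, temp_grad = init_cloud_state(n_edges=3, p=2, q=5)
config = first_model_config(p=2, q=5)
```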
S102, each user side of the M user sides establishes a local prediction model according to the received first model configuration file.
In the embodiment of the invention, when any user side receives its first model configuration file, it creates a local user matrix according to the user matrix structure parameters and a local service matrix according to the service matrix structure parameters; the internal parameters of the local user matrix are randomly assigned at creation, while the internal parameters of the local service matrix are uniformly assigned by the cloud server.
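A hedged sketch of S102 (the function names and NumPy usage are assumptions): the user side builds its local matrix-factorization model from the received structure parameters, random-initializing the user matrix and copying the cloud-assigned service matrix:

```python
import numpy as np

def create_local_model(p, service_init, seed=None):
    """S102 sketch: build the local prediction model of one user side.
    The 1 x p user matrix is randomly assigned on creation; the local
    service matrix is the copy uniformly assigned by the cloud server."""
    rng = np.random.default_rng(seed)
    user_mat = rng.random((1, p))
    service_mat = np.array(service_init, dtype=float)  # p x q copy
    return user_mat, service_mat

def predict_qos(user_mat, service_mat):
    """The local model is the product of the two matrices: a 1 x q QoS row."""
    return user_mat @ service_mat

u_mat, s_mat = create_local_model(p=2, service_init=[[0.3, 0.23], [0.5, 0.2]], seed=1)
row = predict_qos(u_mat, s_mat)
```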
S103, the cloud server cyclically and randomly designates M1 (M1 < M) user sides from the M user sides to assist in executing a model pre-training instruction, until the N service matrixes are updated to N optimal service matrixes.
In the embodiment of the present invention, the model pre-training instruction sequentially includes a model prediction instruction, a model training instruction, and a matrix update instruction, where: the model prediction instruction designates the M1 user sides to perform result prediction and error feedback of the current local prediction model by calling their own historical QoS data, the error fed back by each of the M1 user sides being the difference between the predicted value and the true value; the model training instruction designates the M1 user sides to perform iterative training and gradient feedback of the current local prediction model by calling their own historical QoS data, with stochastic gradient descent used in the model training process; the matrix update instruction updates the current N service matrixes when the average error is judged to be greater than or equal to a first preset threshold, or defines the current N service matrixes as the N optimal service matrixes when the average error is judged to be smaller than the first preset threshold.
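The patent states only that stochastic gradient descent is used and that each user side feeds back a prediction error and a gradient; the squared-error loss and the scaling below are assumptions. Pre-scaling the fed-back gradient by −lr is also an assumption, chosen so that the additive server-side updates shown later act as gradient descent:

```python
import numpy as np

def local_round(user_vec, service_mat, observed, lr=0.01):
    """One local round on a user side (illustrative): predict each observed
    QoS value, accumulate the absolute error for error feedback, take one
    SGD step on the user vector, and return the service-matrix gradient
    feedback pre-scaled by -lr so servers can apply it additively.
    observed maps service index -> true QoS value."""
    grad = np.zeros_like(service_mat)
    total_err = 0.0
    for j, true_q in observed.items():
        pred = float(user_vec @ service_mat[:, j])
        err = pred - true_q                       # signed prediction error
        total_err += abs(err)
        grad[:, j] += err * user_vec              # grad of err^2/2 wrt column j
        user_vec = user_vec - lr * err * service_mat[:, j]  # local SGD step
    return total_err / len(observed), -lr * grad

# One service item observed; with these numbers the prediction is exact.
user_vec = np.array([0.1, 0.8])
service_col = np.array([[0.3], [0.5]])
avg_err, grad_fb = local_round(user_vec, service_col, {0: 0.43})
```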
It should be noted that, before the cloud server specifies the M1 user terminals to execute the model prediction instruction and the model training instruction, based on the edge server to which each user terminal of the M1 user terminals belongs, the cloud server needs to send a corresponding current service matrix to each user terminal of the M1 user terminals, so that each user terminal completes a preliminary data loading task.
Specifically, the updating, by the cloud server, of the current N service matrices when executing the matrix update instruction includes:
(1) updating the associated current temporary service gradient matrix by using the model training results fed back by the M1 user terminals, and further obtaining N updated temporary service gradient matrices;
(2) updating the current N service matrixes by using the updated N temporary service gradient matrixes to further obtain the updated N service matrixes, wherein any one updated service matrix is as follows:
CS′_i = CS_i + a·Σ_{j≠i} CG′_j + CG′_i

where CS′_i is the updated i-th service matrix, CS_i is the current i-th service matrix, a is a weight value with a < 1, CG′_j is the updated j-th temporary service gradient matrix, and CG′_i is the updated i-th temporary service gradient matrix.
In step (1), when some user sides among the M1 user sides are associated with the x-th temporary service gradient matrix, the x-th temporary service gradient matrix is updated as:
CG′_x = CG_x + Σ_i e_{i_g2}

where CG′_x is the updated x-th temporary service gradient matrix, CG_x is the current x-th temporary service gradient matrix, and e_{i_g2} is the local service gradient matrix fed back by the i-th of those user sides after performing model iterative training.
Step (1) is exemplified as follows: assuming that the M1 user sides only include user side A1, user side A2, and user side A3, where user side A1 and user side A2 belong to the 1st edge server and user side A3 belongs to the 3rd edge server, the cloud server only needs to update the 1st temporary service gradient matrix associated with the 1st edge server and the 3rd temporary service gradient matrix associated with the 3rd edge server to:
CG′_1 = CG_1 + e_{1_g2} + e_{2_g2},  CG′_3 = CG_3 + e_{3_g2}

where CG′_1 is the updated 1st temporary service gradient matrix, CG_1 is the current 1st temporary service gradient matrix, e_{1_g2} is the local service gradient matrix fed back by user side A1 after performing model iterative training, e_{2_g2} is the local service gradient matrix fed back by user side A2 after performing model iterative training, CG′_3 is the updated 3rd temporary service gradient matrix, CG_3 is the current 3rd temporary service gradient matrix, and e_{3_g2} is the local service gradient matrix fed back by user side A3 after performing model iterative training.
S104, the cloud server sends corresponding second model configuration files to the N edge servers, and then M2(M2 < M) user sides are randomly assigned from the M user sides to execute iterative training of the current local prediction model.
The second model configuration file sent by the cloud server to any edge server comprises the optimal service matrix associated with the edge server and the service matrix structure parameters corresponding to the optimal service matrix.
The model iterative training process for the M2 user terminals includes: firstly, based on an edge server to which each of the M2 user terminals belongs, the cloud server sends a corresponding optimal service matrix to each of the M2 user terminals; secondly, each of the M2 user terminals loads the respectively received optimal service matrix to the current local prediction model, and then performs iterative training of the current local prediction model by calling own historical QoS data.
It should be noted that, in the embodiment of the present invention, in any round of model training the cloud server randomly designates participating user sides from the M user sides in a fixed proportion, that is, M2 is equal to M1.
S105, each edge server in the N edge servers creates an edge service matrix according to the received second model configuration file, and updates the associated edge service matrix according to the model training results fed back by the M2 user sides, so as to obtain N edge service matrixes to be adjusted.
In the embodiment of the present invention, when any edge server receives a corresponding second model configuration file, an edge service matrix is created according to the service matrix structure parameters therein, and the internal parameters of the edge service matrix at the time of just creating are assigned by loading the optimal service matrix in the second model configuration file.
In step S105, when some user sides among the M2 user sides are associated with the x-th edge service matrix, the x-th edge service matrix is updated as:
ES′_x = ES_x + Σ_i e_{i_g3}

where ES′_x is the updated x-th edge service matrix to be adjusted, ES_x is the current x-th edge service matrix, and e_{i_g3} is the local service gradient matrix fed back by the i-th of those user sides after performing model iterative training.
Step S105 is exemplified as follows: assuming that the M2 user sides include user side B1 and user side B2, both belonging to the 2nd edge server, the 2nd edge server is required to update the 2nd edge service matrix stored therein to:
ES′_2 = ES_2 + e_{1_g3} + e_{2_g3}

where ES′_2 is the updated 2nd edge service matrix to be adjusted, ES_2 is the current 2nd edge service matrix, e_{1_g3} is the local service gradient matrix fed back by user side B1 after performing model iterative training, and e_{2_g3} is the local service gradient matrix fed back by user side B2 after performing model iterative training;
in addition, when the M2 user sides further include user side B3 and user side B3 belongs to the 4th edge server, the 4th edge server is also required to update the 4th edge service matrix stored therein to:
ES′_4 = ES_4 + e_{3_g3}

where ES′_4 is the updated 4th edge service matrix to be adjusted, ES_4 is the current 4th edge service matrix, and e_{3_g3} is the local service gradient matrix fed back by user side B3 after performing model iterative training.
S106, each edge server in the N edge servers cyclically and randomly designates K1 (K1 < K) user sides from the K (K < M) user sides it controls to assist in executing a model fine-tuning instruction, until its edge service matrix to be adjusted is updated to the optimal edge service matrix.
In the embodiment of the present invention, the model fine-tuning instruction sequentially includes a model re-prediction instruction, a model re-training instruction, and a matrix re-update instruction, where: the model re-prediction instruction designates the K1 user sides to perform result prediction and error feedback of the current local prediction model by calling their own historical QoS data; the model re-training instruction designates the K1 user sides to perform iterative training and gradient feedback of the current local prediction model by calling their own historical QoS data; the matrix re-update instruction updates the current edge service matrix to be adjusted when the average error is judged to be greater than or equal to a second preset threshold, or defines the current edge service matrix to be adjusted as the optimal edge service matrix when the average error is judged to be smaller than the second preset threshold.
It should be noted that, before each edge server designates the K1 user sides it controls to execute the model re-prediction instruction and the model re-training instruction, it needs to send the current edge service matrix to be adjusted to each of the K1 user sides, so that each user side completes the preliminary data loading task.
Specifically, when executing the matrix re-update instruction, any edge server updates the current edge service matrix to be adjusted therein to:
ES′ = ES + Σ_{i=1}^{K1} e_{i_g1}

where ES′ is the updated edge service matrix to be adjusted, ES is the current edge service matrix to be adjusted, and e_{i_g1} is the local service gradient matrix fed back by the i-th of the K1 user sides after performing model iterative training.
S107, each user side in the M user sides loads the optimal edge service matrix sent by the edge server to which the user side belongs, and then a QoS prediction model of each user side is constructed.
The embodiment of the present invention takes the QoS prediction model of user side U1 as an example of predicting the QoS values that would be generated when not-yet-invoked service items are called, described as follows:
(1) when the edge server to which the client U1 belongs provides 5 service items (S1-S5), and the client U1 only calls the service item S1 and the service item S5, the actual QoS matrix that the client U1 can directly observe is:
[0.43  ?  ?  ?  0.15]
From this actual QoS matrix it can be seen that: the QoS value generated by the client U1 invoking service item S1 is 0.43, the QoS value generated by the client U1 invoking service item S5 is 0.15, and the question mark therein indicates the unknown QoS value to be predicted;
(2) since the optimal edge service matrix loaded from the home edge server by the user side U1 is:
[0.3  0.23  0.56  0.62  0.45]
[0.5  0.2   0.78  0.65  0.14]
and the QoS matrix is obtained by multiplying the user matrix and the optimal edge service matrix, and at this time, the user matrix corresponding to the user side U1 can be determined to be:
[0.1  0.8]
and then obtaining a predicted QoS matrix finally output by the QoS prediction model of the user side U1 as follows:
[0.43  0.18  0.68  0.58  0.15]
from this predicted QoS matrix it can be seen that: the QoS value generated when the user terminal U1 invoked service item S2 was 0.18, the QoS value generated when the user terminal U1 invoked service item S3 was 0.68, and the QoS value generated when the user terminal U1 invoked service item S4 was 0.58.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a central processing unit, digital signal processor or microprocessor, or as hardware, or as integrated circuits. Such software can be distributed on computer readable media, which can include computer storage media (or non-transitory media) and communication media (or transitory media). Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (8)

1. A method for constructing a user side QoS prediction model based on a cloud edge collaborative mode is characterized by comprising the following steps:
the cloud server creates N service matrixes and corresponding N temporary service gradient matrixes according to edge servers deployed in an area and the user sides controlled by the edge servers, wherein the number of the edge servers is N (N > 0) and the number of user sides controlled by the N edge servers is M (M ≥ N), and then sends corresponding first model configuration files to the M user sides, wherein the first model configuration files comprise user matrix structure parameters and service matrix structure parameters;
each user side in the M user sides establishes a local prediction model according to the received first model configuration file;
the cloud server circularly appoints M1(M1 < M) user side auxiliary execution model pre-training instructions from the M user sides at random until the N service matrixes are updated to N optimal service matrixes;
the cloud server sends corresponding second model configuration files to the N edge servers, wherein the second model configuration files comprise optimal service matrixes and corresponding service matrix structure parameters, and then M2(M2 < M) user terminals are randomly appointed from the M user terminals to execute iterative training of a current local prediction model;
each edge server in the N edge servers creates an edge service matrix according to the received second model configuration file, and updates the associated edge service matrix according to the model training results fed back by the M2 clients, so as to obtain N edge service matrices to be adjusted;
each edge server in the N edge servers circularly randomly appoints K1(K1 < K) user sides from K (K < M) user sides controlled by each edge server to assist in executing the model fine-tuning instruction until the edge service matrix to be adjusted is updated to be the optimal edge service matrix;
and each user side in the M user sides loads the optimal edge service matrix sent by the edge server to which the user side belongs, and further constructs a QoS prediction model of each user side.
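The claims above do not fix the form of the local prediction model. A common reading for QoS prediction is matrix factorization, in which each client holds a private user vector and the cloud/edge holds a shared service matrix; only gradients on the service matrix leave the client. The following is a minimal sketch under that assumption (all names here are illustrative, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalModel:
    """Client-side local prediction model (factorization sketch): the QoS
    score for each service is the dot product of the client's private user
    vector with the corresponding row of the shared service matrix."""

    def __init__(self, latent_dim):
        # The user vector never leaves the client; only gradients on the
        # service matrix are fed back, mirroring the claimed protocol.
        self.user_vec = rng.normal(scale=0.1, size=latent_dim)

    def predict(self, service_matrix):
        return service_matrix @ self.user_vec          # one score per service

    def service_gradient(self, service_matrix, observed_qos):
        # Gradient of 0.5 * ||prediction - observed||^2 w.r.t. the matrix.
        err = self.predict(service_matrix) - observed_qos
        return np.outer(err, self.user_vec)
```

Applying one gradient step `S - lr * g` shrinks this client's prediction error, which is the feedback loop that claims 2 and 6 iterate until the average error falls below a preset threshold.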
2. The method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode according to claim 1, wherein the model pre-training instruction sequentially comprises a model prediction instruction, a model training instruction and a matrix update instruction;
the model prediction instruction is used for designating the M1 clients to execute result prediction and error feedback of their current local prediction models using their own historical QoS data;
the model training instruction is used for designating the M1 clients to execute iterative training and gradient feedback of their current local prediction models using their own historical QoS data;
the matrix update instruction is used for updating the current N service matrices when the average error is determined to be greater than or equal to a first preset threshold;
or, the matrix update instruction is used for defining the current N service matrices as the N optimal service matrices when the average error is determined to be smaller than the first preset threshold.
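The predict → train → update cycle of claim 2 can be sketched as a single cloud-side round. The `error()`/`grad()` client callbacks below are hypothetical names standing in for the claimed error and gradient feedback:

```python
import numpy as np

def pretrain_round(service_matrix, clients, threshold, step=0.5):
    """One claim-2 style round (sketch): average the designated clients'
    reported errors; if the average is still >= threshold, apply an
    averaged gradient step and keep looping; otherwise freeze the matrix
    as 'optimal'."""
    avg_err = np.mean([c.error(service_matrix) for c in clients])
    if avg_err < threshold:
        return service_matrix, True                  # optimal: stop looping
    grad = np.mean([c.grad(service_matrix) for c in clients], axis=0)
    return service_matrix - step * grad, False
```

Called in a loop, this terminates exactly when the average error first drops below the first preset threshold.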
3. The method of claim 2, wherein before the cloud server designates the M1 clients to execute the model prediction instruction and the model training instruction, the method further comprises:
the cloud server sends the corresponding current service matrix to each of the M1 clients, based on the edge server to which each of the M1 clients belongs.
4. The method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode according to claim 2, wherein the matrix update instruction being used for updating the current N service matrices comprises:
updating each associated current temporary service gradient matrix with the model training results fed back by the M1 clients, thereby obtaining N updated temporary service gradient matrices;
updating the current N service matrices with the N updated temporary service gradient matrices, thereby obtaining N updated service matrices, where any one updated service matrix is:
CS′_i = CS_i − a·CG′_i − ((1 − a)/(N − 1))·Σ_{j=1, j≠i}^{N} CG′_j
where CS′_i is the updated i-th service matrix, CS_i is the current i-th service matrix, a is a weight value with a < 1, CG′_j is the updated j-th temporary service gradient matrix, and CG′_i is the updated i-th temporary service gradient matrix.
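Reading the claim-4 formula as a convex mix of a matrix's own gradient and the averaged gradients of the other N − 1 matrices is an assumed reconstruction (the granted text publishes the formula only as an image). Under that assumption the update might look like:

```python
import numpy as np

def update_service_matrices(cs, cg, a=0.6):
    """Claim-4 style update (assumed form): CS'_i = CS_i - a*CG'_i
    - ((1 - a)/(N - 1)) * sum of the other matrices' gradients CG'_j.
    `cs` and `cg` are lists of N equally shaped NumPy arrays."""
    n = len(cs)
    total = sum(cg)
    out = []
    for i in range(n):
        # Mean of the other N-1 temporary service gradient matrices.
        others = (total - cg[i]) / (n - 1) if n > 1 else 0.0
        out.append(cs[i] - a * cg[i] - (1 - a) * others)
    return out
```

Mixing in the other edge regions' gradients is what couples the N per-edge service matrices during the cloud's pre-training phase.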
5. The method of claim 1, wherein randomly designating M2 (M2 < M) clients from the M clients to execute iterative training of the current local prediction model comprises:
the cloud server sends the corresponding optimal service matrix to each of the M2 clients, based on the edge server to which each of the M2 clients belongs;
and each of the M2 clients loads the optimal service matrix it receives into its current local prediction model, and then executes iterative training of the current local prediction model using its own historical QoS data.
6. The method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode according to claim 1, wherein the model fine-tuning instruction sequentially comprises a model re-prediction instruction, a model re-training instruction and a matrix re-update instruction;
the model re-prediction instruction is used for designating the K1 clients to execute result prediction and error feedback of their current local prediction models using their own historical QoS data;
the model re-training instruction is used for designating the K1 clients to execute iterative training and gradient feedback of their current local prediction models using their own historical QoS data;
the matrix re-update instruction is used for updating the current edge service matrix to be adjusted when the average error is determined to be greater than or equal to a second preset threshold;
or, the matrix re-update instruction is used for defining the current edge service matrix to be adjusted as the optimal edge service matrix when the average error is determined to be smaller than the second preset threshold.
7. The method according to claim 6, wherein before the K1 clients managed by each edge server execute the model re-prediction instruction and the model re-training instruction, the method further comprises:
each edge server sends its current edge service matrix to be adjusted to each of the K1 clients it manages.
8. The method for constructing a user-side QoS prediction model based on a cloud-edge collaborative mode according to claim 6, wherein the matrix re-update instruction is used for updating the current edge service matrix to be adjusted as:
ES′ = ES − (1/K1)·Σ_{i=1}^{K1} e_{i_g1}
where ES′ is the updated edge service matrix to be adjusted, ES is the current edge service matrix to be adjusted, and e_{i_g1} is the local service gradient matrix fed back by the i-th of the K1 clients after iterative training of its model.
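Claim 8's re-update can similarly be sketched as moving ES against the K1 fed-back local gradients. Averaging them (and the learning-rate parameter) is an assumption here, since the published formula is again only an image:

```python
import numpy as np

def fine_tune_edge_matrix(es, local_grads, lr=1.0):
    """Claim-8 style re-update (assumed averaging): ES' = ES
    - lr * mean of the local service gradient matrices e_{i_g1}
    fed back by the K1 designated clients."""
    mean_grad = sum(local_grads) / len(local_grads)
    return es - lr * mean_grad
```

Because only gradient matrices are aggregated, each client's raw historical QoS data stays local, in line with the federated flavor of the claimed protocol.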
CN202210010687.1A 2022-01-05 2022-01-05 Cloud edge cooperative mode-based user side QoS prediction model construction method Active CN114489938B (en)


Publications (2)

Publication Number Publication Date
CN114489938A true CN114489938A (en) 2022-05-13
CN114489938B CN114489938B (en) 2024-06-25


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314120A (en) * 2020-01-23 2020-06-19 福州大学 Cloud software service resource self-adaptive management framework based on iterative QoS model
CN111416735A (en) * 2020-03-02 2020-07-14 河海大学 Federal learning-based safety QoS prediction method under mobile edge environment
CN112685139A (en) * 2021-01-11 2021-04-20 东北大学 K8S and Kubeedge-based cloud edge deep learning model management system and model training method
CN112700067A (en) * 2021-01-14 2021-04-23 安徽师范大学 Method and system for predicting service quality under unreliable mobile edge environment
US20210258230A1 (en) * 2020-02-13 2021-08-19 Acronis International Gmbh Systems and methods for pattern-based quality of service (qos) violation prediction
CN113839838A (en) * 2021-10-20 2021-12-24 西安电子科技大学 Business type identification method for federal learning based on cloud edge cooperation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
REN LIFANG; WANG WENJIAN: "A prediction method for service QoS in mobile edge computing environments", Journal of Chinese Computer Systems, no. 06, 29 May 2020 (2020-05-29) *
XU JIANLONG et al.: "An adjustable distributed user-privacy-preserving personalized QoS prediction model for cloud services", Chinese Journal of Network and Information Security, 30 April 2023 (2023-04-30) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant