CN111754000A - Quality-aware edge intelligent federal learning method and system - Google Patents

Quality-aware edge intelligent federal learning method and system Download PDF

Info

Publication number
CN111754000A
CN111754000A CN202010590843.7A CN202010590843A CN111754000A CN 111754000 A CN111754000 A CN 111754000A CN 202010590843 A CN202010590843 A CN 202010590843A CN 111754000 A CN111754000 A CN 111754000A
Authority
CN
China
Prior art keywords
learning
quality
node
model
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010590843.7A
Other languages
Chinese (zh)
Other versions
CN111754000B (en
Inventor
张尧学
邓永恒
吕丰
任炬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Central South University
Original Assignee
Tsinghua University
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, Central South University filed Critical Tsinghua University
Priority to CN202010590843.7A priority Critical patent/CN111754000B/en
Publication of CN111754000A publication Critical patent/CN111754000A/en
Application granted granted Critical
Publication of CN111754000B publication Critical patent/CN111754000B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/08Auctions

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a quality-aware edge intelligent federal learning method and a system, wherein the method comprises the following steps: the cloud platform constructs a federated learning quality optimization problem and solves the following problems that the sum of the quality of the aggregation models of a plurality of learning tasks reaches the maximum optimization target in each iteration: in each iteration, predicting the learning quality of the participating nodes by using the historical learning quality records of the participating nodes; the learning quality of the node training data is quantified by using the reduction amount of the loss function value in each iteration; in each iteration, the cloud platform stimulates the nodes with high learning quality to participate in federal learning through a reverse auction mechanism; to perform the distribution of learning tasks and learning rewards; in each iteration, for each learning task, each participating node uploads its local model parameters to the cloud platform to aggregate to obtain a global model. The invention can provide richer data and more calculation power for the training of the model under the condition of protecting the data privacy so as to improve the quality of the model.

Description

Quality-aware edge intelligent federal learning method and system
Technical Field
The invention relates to a performance optimization technology of a large-scale distributed intelligent learning system, in particular to a quality-aware edge intelligent federated learning method and system.
Background
With the rapid development of the Internet of Things (IoT), the edge of the network generates a lot of data continuously, which provides opportunities for implementing intelligent services based on machine learning. Traditionally, a centralized machine learning framework needs to summarize huge training data to a cloud center for model training. Although the centralized machine learning method can achieve satisfactory learning performance, the transmission and centralized storage of data risk privacy disclosure, and the data transmission overhead is a great obstacle to the implementation and use of the system for the number of mobile devices with limited power and the cost of cloud-centric data maintenance. In recent years, with the development of emerging Mobile Edge Computing (MEC) technology, Mobile devices may be equipped with Computing and storage functions to achieve localization of computation and model training, and thus MEC has also promoted the development of federal learning. The federated learning is a distributed machine learning framework, distributed nodes with computing power train models locally by using local data, then update models are uploaded to the cloud for aggregation, the aggregated model updates can continuously improve the quality of global models, and the local training mode well protects network privacy.
While federal learning has considerable potential, there are still two technical challenges. First, the performance of federal learning is highly dependent on the participation of the training nodes, but without satisfactory return, it is conceivable that mobile devices will not be willing to participate in the training of the federal learning model; secondly, the quality of the model updates contributed by the mobile devices varies greatly, subject to factors such as node data volume, data quality, computational power, etc. In situations where the budget is limited, it is not an easy task to choose the appropriate node to participate in federal learning. One possible approach is to select as many participating nodes as possible, but many studies have demonstrated that blindly aggregating too many low-quality model updates can instead degrade the overall model quality and make the model unable to converge.
There have been some studies in the prior art to improve the performance of the federal learning system, but they do not solve the above problems well. For example, Shiqiang Wang et al have devised a control algorithm to determine the optimal model aggregation frequency; zhibo Wang et al are directed to enhancing the security and privacy of the federal learning system. Although these studies have contributed to federal learning, they are based on a general assumption that there are enough volunteer nodes willing to participate in federal learning. However, volunteer participation is impractical in practice because training of the model typically consumes significant resources, including energy, computing, and bandwidth resources, which can be a significant overhead for the mobile device. Recognizing this, recent work has investigated the incentive mechanism in federal learning. For example, Jiawen Kang et al have designed an incentive mechanism based on the reputation of the mobile node; shashi Raj Pandey et al propose an incentive mechanism based on the Stackelberg game to improve communication overhead; yufeng Zhan et al propose an incentive mechanism based on deep reinforcement learning. However, none of them takes into account the quality of the model update, which can seriously affect the quality of the global model. In particular, in federal learning, mobile devices typically have heterogeneous computing power and different amounts of data and quality of data, which can result in large differences in the learning quality of different nodes. In addition, participants may deliberately degrade learning quality to reduce learning costs. Aggregating too many low quality model updates can in turn degrade the global model quality and cause convergence problems. However, no research has been conducted on integrating the learning quality of nodes into federated learning and performing quality-aware incentive mechanisms and model aggregation.
Disclosure of Invention
The invention provides a quality-aware edge intelligent federated learning method and system, which are used for solving the technical problem that the quality of a global model is deteriorated due to too many low-quality model updates gathered because an existing aggregation model does not consider an incentive system of node learning quality.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a quality-aware edge intelligent federated learning method is characterized in that in each iteration, a cloud platform issues a group of learning tasks, and provides a learning budget of each learning task so as to recruit a proper node cooperation training model; downloading a global model by the node, training the model by using local data, and updating and uploading the model to the cloud platform; aggregating the model to update the global model; in the process, the cloud platform constructs a federal learning quality optimization problem according to the condition that the sum of the quality of the aggregation models of a plurality of learning tasks reaches the maximum optimization target in each iteration; and the federate learning quality optimization problem is solved by adopting the following steps:
in each iteration, predicting the learning quality of the participating nodes by using the historical learning quality records of the participating nodes; the learning quality of the node training data is quantified by using the reduction amount of the loss function value in each iteration;
in each iteration, the cloud platform stimulates the nodes with high learning quality to participate in federal learning through a reverse auction mechanism; to perform the distribution of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploads the local model parameters to the cloud platform, and the cloud platform aggregates a plurality of local models to obtain a global model.
Preferably, the cloud platform aggregates the local model parameters to obtain global model parameters, wherein the local model updates with low quality are filtered out according to the training data amount and the quality of the training data of each participating node, and the global model after aggregation is obtained by aggregating the model updates with high quality.
Preferably, with the aggregate model quality for each learning task being at maximum an optimization goal in each iteration, the federal learning quality optimization problem is as follows:
Figure BDA0002555448840000021
Figure BDA0002555448840000022
Figure BDA0002555448840000023
Figure BDA0002555448840000031
Figure BDA0002555448840000032
wherein the content of the first and second substances,
Figure BDA0002555448840000033
representing a set of nodes in the federation, where i represents a set
Figure BDA0002555448840000034
The ith node;
Figure BDA0002555448840000035
Figure BDA0002555448840000036
represents a set of learning tasks published by the cloud platform in each iteration t, wherein
Figure BDA0002555448840000037
To represent
Figure BDA0002555448840000038
The jth learning task;
Figure BDA0002555448840000039
is a set of tasks that node i can participate in the t-th iteration, an
Figure BDA00025554488400000310
Figure BDA00025554488400000311
Learning task for iteration t provided by cloud platform
Figure BDA00025554488400000312
The learning budget of (2);
Figure BDA00025554488400000313
is a binary variable used to mark tasks
Figure BDA00025554488400000314
If iteration t is assigned to node i, a value equal to 1 means that a task is assigned to a node, otherwise, 0 is set;
Figure BDA00025554488400000315
indicating that node i performs a task
Figure BDA00025554488400000316
The remuneration of (a) to (b),
Figure BDA00025554488400000317
Figure BDA00025554488400000318
representing the learning task assignment result in iteration t,
Figure BDA00025554488400000319
representing the result of distribution of learning reward, f () is a model aggregation function;
constraint (3) indicates that for each learning task, the sum of the rewards to the participating nodes cannot exceed the learning budget provided by the task publisher;
constraint (4) represents that only the learning tasks that it can participate in can be assigned to a node;
constraint (5) represents the restriction that each node can only participate in one learning task at most in each iteration.
Preferably, the participation node with high learning quality is stimulated to participate in the federal learning through a reverse auction mechanism, and the method comprises the following steps:
each computing node i submits its own bid information
Figure BDA00025554488400000320
To cloud platform, array
Figure BDA00025554488400000321
Including learning tasks that node i can participate in
Figure BDA00025554488400000322
And bid price for participating in the task
Figure BDA00025554488400000323
The cloud platform then learns the task for each
Figure BDA00025554488400000324
Selecting a set of participating nodes
Figure BDA00025554488400000325
And determine their remuneration
Figure BDA00025554488400000326
The cloud platform optimizes the goal according to the maximum sum of the learning quality predicted values of the selected nodes and provides each learning task with the maximum sum
Figure BDA00025554488400000327
Selecting a set of participating nodes
Figure BDA00025554488400000328
And determine their learning remuneration
Figure BDA00025554488400000329
Preferably, the cloud platform optimizes the target according to the maximum sum of the learning quality predicted values of the selected nodes, and then the optimization problem of the maximized learning quality is as follows:
Figure BDA00025554488400000330
Figure BDA00025554488400000331
Figure BDA00025554488400000332
Figure BDA00025554488400000333
Figure BDA0002555448840000041
Figure BDA0002555448840000042
wherein, the input of the optimization problem of the maximization of the learning quality is the task set which each node i can participate in
Figure BDA0002555448840000043
Bid price
Figure BDA0002555448840000044
Learning budget
Figure BDA0002555448840000045
And learning quality prediction values
Figure BDA0002555448840000046
The output being a binary variable
Figure BDA0002555448840000047
If it is not
Figure BDA0002555448840000048
Node i will be added to the learning task
Figure BDA0002555448840000049
Of participating node sets
Figure BDA00025554488400000410
The preparation method is as follows; node i is assigned to perform a learning task
Figure BDA00025554488400000411
In addition, consideration will be output for each participating node
Figure BDA00025554488400000412
Constraints (14) require that the reward for each participating node be higher than the training cost for that node, other constraints being the same as the federal learning quality optimization problem.
Preferably, the following greedy algorithm is adopted to solve the optimization problem of learning quality maximization:
in each iteration t, the algorithm first learns for each task according to the node's bid information
Figure BDA00025554488400000413
Finding a set of candidate nodes that can participate in the task
Figure BDA00025554488400000414
Then executing the main loop until no node can participate in the learning task or all the learning tasks are distributed to proper nodes for execution;
in the main loop, the algorithm first learns for each learning task
Figure BDA00025554488400000415
Selecting a subset of participating nodes
Figure BDA00025554488400000416
The subset can be approximately maximized
Figure BDA00025554488400000417
Sum of predicted learning qualities of the participating nodes.
Preferably, in the greedy algorithm, the method further comprises:
according to
Figure BDA00025554488400000418
Size descending order of values to nodes
Figure BDA00025554488400000419
Sorting is carried out, and
Figure BDA00025554488400000420
the value of (b) is used as the ranking index of the node i;
greedy adding node i to a set of participating nodes according to ranking
Figure BDA00025554488400000421
Until the total reward to the participating nodes exceeds the budget
Figure BDA00025554488400000422
Wherein the reward to participating node i is
Figure BDA00025554488400000423
Find out having the maximum
Figure BDA00025554488400000424
Task of learning value
Figure BDA00025554488400000425
Will task
Figure BDA00025554488400000426
Assigning the task to the participating node selected for it
Figure BDA00025554488400000427
Marked as allocated and its participating nodes are also marked as allocated, task
Figure BDA00025554488400000428
And its participating nodes no longer participate in the allocation of the next round of the cycle.
Preferably, when the cloud platform aggregates a plurality of local models to obtain a global model, in iteration t, a set of local models is givenTask
Figure BDA00025554488400000429
Participating node of
Figure BDA00025554488400000430
And their model parameters
Figure BDA00025554488400000431
The parameters of the model after polymerization were calculated in the following manner
Figure BDA00025554488400000432
Figure BDA00025554488400000433
Wherein the content of the first and second substances,
Figure BDA0002555448840000051
and
Figure BDA0002555448840000052
respectively representing nodes i for training tasks
Figure BDA0002555448840000053
Data volume and data quality of the model.
Preferably, filtering out low quality local model updates comprises the steps of:
aggregating local model update parameters received from participating nodes using equation (16)
Figure BDA0002555448840000054
Obtaining post-polymerization model parameters
Figure BDA0002555448840000055
And calculating local model parameters of each node
Figure BDA0002555448840000056
And
Figure BDA0002555448840000057
cosine similarity between them
Figure BDA0002555448840000058
Calculating the average value of cosine similarity
Figure BDA0002555448840000059
Median number
Figure BDA00025554488400000510
And standard deviation σd(ii) a And the following comparisons were made:
when in use
Figure BDA00025554488400000511
The cosine similarity is measured
Figure BDA00025554488400000512
Wherein η is a preset threshold for the controllable range;
when in use
Figure BDA00025554488400000513
The cosine similarity is measured
Figure BDA00025554488400000514
Is considered as an unqualified low quality update.
The invention also provides a computer system comprising a cloud platform, wherein the cloud platform comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, and the processor realizes the steps of any one of the methods when executing the computer program.
The invention has the following beneficial effects:
1. the invention relates to a quality-aware edge intelligent federal learning method and a system, which predict the learning quality of a user by using historical learning quality records; within a limited federal learning budget, the system models a reverse auction problem to encourage high quality, low cost users to participate in federal learning. The invention can complete the cooperative machine learning model training of large-scale distributed mobile edge nodes, can sense the learning quality of the nodes when performing distributed machine learning, and can obviously improve the quality of the distributed cooperative model training; more high-quality low-cost mobile edge nodes can be stimulated to participate in model training. The invention can provide richer data and more calculation power for the training of the model under the condition of protecting the data privacy, so as to improve the quality of the model and provide better intelligent service for users.
2. In a preferred scheme, the aggregation algorithm is adopted, the quality of model updating is considered during aggregation, and undesired model updating is filtered out, so that the quality of the aggregated federal learning model is further optimized, and the robustness and efficiency of model aggregation can be improved. Users may be provided with better intelligent services based on machine learning, such as autopilot, speech recognition, etc., which require higher quality for machine learning models.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural diagram of a distributed federated learning system upon which a preferred embodiment of the present invention is based;
FIG. 2 is a flow diagram of a quality-aware edge intelligent federated learning method of a preferred embodiment of the present invention;
FIG. 3 is a schematic illustration of model accuracy achieved with different excitation mechanisms in accordance with a preferred embodiment of the present invention;
FIG. 4 is a schematic illustration of the amount of reduction of the learning task loss function value during an iteration process in accordance with a preferred embodiment of the present invention;
FIG. 5 is a graphical illustration of the performance of a model aggregation algorithm under different scenarios in accordance with a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating the accuracy of the model obtained by the system at different data qualities in accordance with a preferred embodiment of the present invention;
figure 7 is a schematic illustration of model loss function value reductions at different budgets for a preferred embodiment of the present invention.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
FIG. 1 is a typical distributed federated learning system including a cloud platform and mobile computing nodes for use with the mobile computing nodes
Figure BDA0002555448840000061
Representation, where i represents a set
Figure BDA0002555448840000062
The ith node. The system manages interaction between the cloud platform and the computing nodes in a time slot mode, and time is divided into T continuous iterations with equal length. In each iteration t, the cloud platform issues a group of learning tasks for
Figure BDA0002555448840000063
Is shown in which
Figure BDA0002555448840000064
To represent
Figure BDA0002555448840000065
The jth learning task. For learning tasks in iteration t
Figure BDA0002555448840000066
The cloud platform, i.e. the task publisher, provides the learning budget
Figure BDA0002555448840000067
To recruit appropriate computing nodes to collaborate on trainingAnd (5) practicing the model. The node downloads the global model firstly, then trains the model by using local data, updates and uploads the model to the cloud platform, and finally the cloud platform aggregates the model to update the global model. Since computing nodes have a wide variety of data, they can typically participate in a variety of learning tasks, representing the set of tasks that node i can participate in the tth iteration as
Figure BDA0002555448840000068
But each node has limited computational power and therefore each node is limited to participate in only one learning task at one iteration. In each iteration t, the platform needs to determine which compute node performs which training task, and determine the learning reward of the compute node, under a limited budget.
The quality-aware edge intelligent federated learning method of the embodiment is established based on the architecture of a typical distributed federated learning system shown in fig. 1. In each iteration, the cloud platform issues a group of learning tasks, and the cloud platform provides a learning budget of each learning task so as to recruit a proper node cooperation training model; downloading a global model by the node, training the model by using local data, and updating and uploading the model to the cloud platform; the cloud platform aggregates the models to update the global model.
Referring to fig. 2, in the distributed federal learning process, the cloud platform constructs a federal learning quality optimization problem in such a way that the sum of the aggregate model qualities of a plurality of learning tasks reaches the maximum optimization target in each iteration; and the federate learning quality optimization problem is solved by adopting the following steps:
in each iteration, predicting the learning quality of the participating nodes by using the historical learning quality records of the participating nodes; the learning quality of the node training data is quantified by using the reduction amount of the loss function value in each iteration;
in each iteration, the cloud platform stimulates the nodes with high learning quality to participate in federal learning through a reverse auction mechanism; to perform the distribution of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploads the local model parameters to the cloud platform, and the cloud platform aggregates a plurality of local models to obtain a global model. In this embodiment, the cloud platform aggregates the plurality of local model parameters to obtain global model parameters, including filtering out low-quality local model updates according to the training data amount and the quality of the training data of each participating node, and aggregating the high-quality model updates to obtain an aggregated global model. The robustness and efficiency of model aggregation can be improved. Users may be provided with better intelligent services based on machine learning, such as autopilot, speech recognition, etc., which require higher quality for machine learning models.
The steps can complete the cooperative machine learning model training of the large-scale distributed mobile edge nodes, can sense the learning quality of the nodes when performing distributed machine learning, and can obviously improve the quality of the distributed cooperative model training; more high-quality low-cost mobile edge nodes can be stimulated to participate in model training. The invention can provide richer data and more calculation power for the training of the model under the condition of protecting the data privacy, so as to improve the quality of the model and provide better intelligent service for users.
The following is a detailed description:
since the global model is used in each iteration, the aggregate model quality for each learning task should be maximized in each iteration. Therefore, the federal learning quality optimization problem is defined as follows:
Figure BDA0002555448840000071
Figure BDA0002555448840000072
Figure BDA0002555448840000073
Figure BDA0002555448840000074
Figure BDA0002555448840000075
wherein the content of the first and second substances,
Figure BDA0002555448840000076
is a binary variable used to mark tasks
Figure BDA0002555448840000077
If iteration t is assigned to node i, a value equal to 1 means that a task is assigned to a node, otherwise it is set to 0. By using
Figure BDA0002555448840000078
Indicating that node i performs a task
Figure BDA0002555448840000079
The remuneration of (a) to (b),
Figure BDA00025554488400000710
Figure BDA00025554488400000711
representing the learning task assignment result in iteration t,
Figure BDA00025554488400000712
representing the result of the distribution of learning reward, f () is a model aggregation function. Constraint (3) indicates that for each learning task, the sum of the rewards to the participating nodes cannot exceed the learning budget provided by the task publisher; constraint (4) means that a node can only be assigned to learning tasks it can participate in; constraint (5) limits each node to participate in at most one learning task per iteration.
Fig. 2 shows the flow of the quality-aware edge intelligent federated learning method of the present invention. The method mainly comprises three parts of learning quality prediction, quality perception incentive mechanism and model aggregation.
And I, predicting the learning quality. Mainly comprises the quantification of the learning quality and the prediction of the learning quality.
a) And (5) learning quality quantification.
In federal learning, both the amount of data used to train the federal learning model and the quality of the data significantly affect the quality of learning. Quantification of the quality of learning should adequately reflect the contribution of local model updates to global model aggregation. One plausible approach is to test the accuracy of each local model update on a global test data set, with accuracy as the learning quality. However, in this approach, each iteration requires testing on each local model, which can cause significant overhead to the system in terms of test cost and latency. Unlike the accuracy measure, the loss function value is calculated during the training process with no additional overhead for the system. Thus, the present embodiment quantifies the quality of the node training data with the amount of reduction in the loss function value in each iteration.
Suppose that iteration t starts at time
Figure BDA0002555448840000081
End at the time
Figure BDA0002555448840000082
At the time point
Figure BDA0002555448840000083
The cloud platform converges on the received model updates from the compute nodes, whereupon the next iteration begins. Therefore, the compute node should be at
Figure BDA0002555448840000084
Time point of
Figure BDA0002555448840000085
Uploading own model updates, otherwise there will be no learning reward. Hypothesis learning task
Figure BDA0002555448840000086
At a point in time
Figure BDA0002555448840000087
In the test setHas a loss function value of
Figure BDA0002555448840000088
Local model of node i at a point in time
Figure BDA0002555448840000089
Has a loss function value of
Figure BDA00025554488400000810
Data quality for training at iteration t for node i
Figure BDA00025554488400000811
Is defined as:
Figure BDA00025554488400000812
combined with data volume for training
Figure BDA00025554488400000813
Representation), learning quality of node i at the t-th iteration
Figure BDA00025554488400000814
The definition is as follows:
Figure BDA00025554488400000815
b) and learning quality prediction.
After the cloud platform receives the model update contributed by the computing node, the learning quality can be quantified, but is unknown before the node learning. Thus, at each iteration, the platform will first predict the learning quality of the participant to assist in the learning task and the allocation of learning rewards. Since the training of the federated learning model will typically proceed in an iterative fashion, the participant's historical learning quality records may be utilized to estimate its learning quality. Suppose node i is in iteration t0,t1,…,trParticipate in a learning task
Figure BDA00025554488400000816
Then the quality record can be used
Figure BDA00025554488400000817
To predict the node i at iteration t (t)>tr) Quality of learning of
Figure BDA00025554488400000818
The learning quality of a node may change over time, and intuitively, recent quality records are more representative than older quality records. Thus, quality records are weighted by their staleness, with newer records being weighted more heavily than older records. The present embodiment uses an exponential forgetting function to assign weights, with the latest quality record having a weight of 1 and the weights of the other records being determined by their relative positions with respect to the latest record. Quality recording
Figure BDA0002555448840000091
Are given respective weights of
Figure BDA0002555448840000092
Is a forgetting coefficient, learning quality prediction value
Figure BDA0002555448840000093
Can be calculated by the following formula:
Figure BDA0002555448840000094
and II, exciting mechanism of quality perception. Mainly comprising the allocation of learning tasks and the determination of learning remuneration.
After predicting the learning quality of each candidate participant, the federated learning quality optimization problem defined in (1) can be solved in two steps. Firstly, a quality-aware incentive mechanism is designed to encourage high-quality low-cost nodes to participate in the federal learning task, and then an efficient model aggregation algorithm is designed to further improve the quality of the aggregated model. In each iteration, the bookEmbodiments encourage high-quality learning participating nodes to participate in federated learning through a reverse auction mechanism. Each computing node i submits its own bid information
Figure BDA0002555448840000095
To cloud platform, array
Figure BDA0002555448840000096
Including learning tasks that node i can participate in
Figure BDA0002555448840000097
And bid price for participating in the task
Figure BDA0002555448840000098
The cloud platform then learns the task for each
Figure BDA0002555448840000099
Selecting a set of participating nodes
Figure BDA00025554488400000910
And determine their remuneration
Figure BDA00025554488400000911
Thus, the following Learning Quality Maximization (LQM) problem may be defined: in each iteration t, how to learn the task for each time according to the bid information
Figure BDA00025554488400000912
Selecting a set of participating nodes
Figure BDA00025554488400000913
How to determine their learning remuneration
Figure BDA00025554488400000914
Can the sum of the learning quality predictors for the selected node be maximized?
Figure BDA00025554488400000915
Figure BDA00025554488400000916
Figure BDA00025554488400000917
Figure BDA00025554488400000918
Figure BDA00025554488400000919
Figure BDA00025554488400000920
The input to the LQM problem is a set of tasks that each node i can participate in
Figure BDA00025554488400000921
Bid price
Figure BDA00025554488400000922
Learning budget
Figure BDA00025554488400000923
And learning quality prediction values
Figure BDA00025554488400000924
The output being a binary variable
Figure BDA00025554488400000925
If it is not
Figure BDA00025554488400000926
Figure BDA00025554488400000927
Node i will be added to the learning task
Figure BDA00025554488400000928
Of participating node sets
Figure BDA00025554488400000929
In, meaning that node i is assigned to perform a learning task
Figure BDA00025554488400000930
In addition, consideration will be output for each participating node
Figure BDA00025554488400000931
Constraints (14) require that the reward for each participating node be higher than the training cost for that node, other constraints being the same as the federal learning quality optimization problem.
The LQM problem can be proved to be NP-hard, so that a heuristic algorithm is designed to solve the LQM problem, the algorithm has high calculation efficiency, and meanwhile, the calculation node can have honest bid price.
In this embodiment, the algorithm is a greedy algorithm, and the flow is as follows: in each iteration t, the algorithm first learns for each task according to the node's bid information
Figure BDA0002555448840000101
Finding a set of candidate nodes that can participate in the task
Figure BDA0002555448840000102
The main loop is then executed until no nodes can participate in the learning task or all learning tasks have been assigned to the appropriate nodes for execution. In the main loop, the algorithm first learns for each learning task
Figure BDA0002555448840000103
Selecting a subset of participating nodes
Figure BDA0002555448840000104
The subset can be approximately maximized
Figure BDA0002555448840000105
Sum of predicted learning qualities of the participating nodes. Specifically, the algorithm will be as follows
Figure BDA0002555448840000106
Size descending order of values to nodes
Figure BDA0002555448840000107
The sorting is carried out, and the sorting is carried out,
Figure BDA0002555448840000108
can be used as a ranking index for node i. The algorithm then greedily adds node i to the set of participating nodes according to the ranking
Figure BDA0002555448840000109
Until the total reward to the participating nodes exceeds the budget
Figure BDA00025554488400001010
The method for determining the reward of the participating nodes comprises the following steps: assuming that node k represents the highest ranked node of the nodes failing in bidding competition, the highest bid price b 'of successful bidding competition of the nodes is supported'i,jSatisfy the requirement of
Figure BDA00025554488400001011
Therefore, the reward to participating node i is
Figure BDA00025554488400001012
Finally, the algorithm finds the maximum
Figure BDA00025554488400001013
Task of learning value
Figure BDA00025554488400001014
Then the task is processed
Figure BDA00025554488400001015
Assigning the task to the participating node selected for it
Figure BDA00025554488400001016
Marked as allocated and its participating nodes are also marked as allocated, the task and its participating nodes will not participate in the allocation of the next round of the cycle. In this embodiment, the assigned labeling manner is as follows: each node can only participate in one task in each iteration, if the node i is determined to participate in the task l, the node i is marked as allocated (each node has a 01 variable x, the allocated variable x is 1, and all the values of the nodes x in the next iteration are restored to 0). Each round of circulation can only determine the distribution of one task, and the distributed nodes are not distributed any more; and the next round of the task with the budget (allocated) exhausted is not considered.
Model polymerization. Mainly including detection of outlier model updates and aggregation of model updates.
In each iteration t, for each learning task
Figure BDA00025554488400001017
After one or more gradient descent updates (local model updates), each participating node i will have its local model parameters
Figure BDA00025554488400001018
Uploading the parameters to a cloud platform, and then the platform aggregates the parameters to obtain global model parameters
Figure BDA00025554488400001019
The following model aggregation algorithm Federated Averaging (FA) is widely used in recent research work:
Figure BDA00025554488400001020
wherein the content of the first and second substances,
Figure BDA0002555448840000111
representing node i for training task in iteration t
Figure BDA0002555448840000112
The data volume of the model (2). Unlike existing model aggregation algorithms for federated learning, the present embodiment considers not only the amount of training data per node, but also the quality of the training data when aggregating model parameters. In iteration t, a set of tasks is given
Figure BDA0002555448840000113
Participating node of
Figure BDA0002555448840000114
And their model parameters
Figure BDA0002555448840000115
Calculating the model parameters after the polymerization in the following manner
Figure BDA0002555448840000116
Figure BDA0002555448840000117
Wherein the content of the first and second substances,
Figure BDA0002555448840000118
and
Figure BDA0002555448840000119
respectively representing nodes i for training tasks
Figure BDA00025554488400001110
Data volume and data quality of the model. In addition, to avoid the negative impact of low quality model updates, a computationally efficient outlier model update detection algorithm is designed to detect and filter out low quality local model updates: first, parameters are updated using (16) the aggregated local model received from the participating nodes
Figure BDA00025554488400001111
Obtaining post-polymerization model parameters
Figure BDA00025554488400001112
And calculating local model parameters of each node
Figure BDA00025554488400001113
And
Figure BDA00025554488400001114
cosine similarity between them
Figure BDA00025554488400001115
Then, an average value of the similarity is calculated
Figure BDA00025554488400001116
Median number
Figure BDA00025554488400001117
And standard deviation σd. Most model updates received should be of high quality, since the incentive mechanism will filter out most unreliable nodes prior to training. Thus, median
Figure BDA00025554488400001118
The direction of high quality local model updates may be reflected. Specifically, the first comparison
Figure BDA00025554488400001119
And
Figure BDA00025554488400001120
a value of, if
Figure BDA00025554488400001121
Similarity of low quality model updates
Figure BDA00025554488400001122
Should be higher than
Figure BDA00025554488400001123
Therefore, the similarity of the phases is obtained
Figure BDA00025554488400001124
Low quality updates (η is a preset threshold for controllable range) that are deemed unacceptable
Figure BDA00025554488400001125
When it is similar to each other
Figure BDA00025554488400001126
Figure BDA00025554488400001127
Is considered as an unqualified low quality update. In this way, a high quality, qualified model update can be obtained. Finally, the aggregated results are obtained by (16) aggregating these high quality qualified model updates.
The present invention also provides a computer system, including a cloud platform, where the cloud platform includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of any of the above embodiments when executing the computer program.
The invention was verified by simulation experiments as follows:
the simulation system is constructed with a widely used pytorre 1.4.0 software environment, simulating a real federal learning scenario, where the bid price, computational power and data quality of distributed nodes vary. The simulation experiment used 6 common learning models: multi-layer Perceptin (MLP), LeNet, MobileNet, VGG-11, EffectientNet-B0, ResNet-18, and 4 datasets: MNIST, fast-MNIST (FMNIST), CIFAR-10, and Street View House Numbers (SVHN). MNIST is a dataset of handwritten digits and FMNIST is a dataset of zaalando's fashion merchandise pictures, both of which have a training set of 6 ten thousand examples and a test set of 1 ten thousand examples. The CIFAR-10 dataset consists of 5 ten thousand training images and 1 ten thousand test images of class 10. SVHN is a real house number image dataset with 73000 training data and 26000 test data.
Experiment ofParameters are as follows: there are 4 experimental parameter settings, which are: setting I:
Figure BDA0002555448840000121
Figure BDA0002555448840000122
setting II:
Figure BDA0002555448840000123
Figure BDA0002555448840000124
setting III:
Figure BDA0002555448840000125
Figure BDA0002555448840000126
setting IV:
Figure BDA0002555448840000127
Figure BDA0002555448840000128
wherein the content of the first and second substances,
Figure BDA0002555448840000129
and
Figure BDA00025554488400001210
respectively representing the number of nodes and learning tasks in each iteration,
Figure BDA00025554488400001211
is a learning task
Figure BDA00025554488400001212
The learning budget of (2). Bid price per node i in each iteration
Figure BDA00025554488400001213
And amount of training data
Figure BDA00025554488400001214
Subject to uniform distribution within the above range and in aggregate
Figure BDA00025554488400001215
In (1), N iseThe proportion of error label data in the data set of each node is set as Re
FIG. 3 shows a comparison of model accuracy of the quality-aware excitation mechanism (FAIR) designed according to the present invention with other excitation mechanisms, using parameter set I in this experiment. The MLP model and the Le Net model are trained using MNIST and FMNIST datasets, respectively, the Mobile Net and VGG models are trained using a CIFAR-10 dataset, and the Effect Net and Res Net are trained using an SVHN dataset. FAIR, as well as other incentive mechanisms, uses the FA model aggregation algorithm, and after 30 iterations, the accuracy of the model is shown in FIG. 3. It can be seen that the FAIR designed by the present invention can achieve near-optimal (theoretical optimum) performance and has significant performance advantages over the other two excitation mechanisms. For example, for the Res Net model, two mechanisms, knapskackgreedy (knapsack greedy method) and bid price first (bid price first), can reach 45% accuracy, while the FAIR can reach 76% progress, improving performance by 68.9%.
Fig. 4 is a comparison of loss function value reduction amounts of different excitation mechanisms in a multi-learning task scene, and a parameter setting ii is adopted in the experiment. Each iteration has four learning tasks: an MNIST data set trains an MLP model, an FMNIST data set trains an Le Net model, a CIFAR-10 data set trains a Mobile Net model, and an SVHN data set trains an Efficient Net model. It can be seen that after 30 iterations, the FAIR can reach a 40% reduction in the loss function value, while the other two excitation mechanisms can reach 30% and 25%, respectively, which improve the performance of the FAIR by 33.3% and 60%.
FIG. 5 shows the results of comparing the model aggregation algorithm (FAIR) designed by the present invention with the Federated Averaging (FA) algorithm. The experiment simulates 10 computing nodes and four different scenes: a) clean (data label is completely correct), all nodes are normally trained on an unmodified data set, and the training data volume of the nodes
Figure BDA00025554488400001216
Compliance [1000,10000]Uniform distribution within the range; b) noisy (partial node data label error), of 10 nodes, the data set of 5 nodes is clear, and 50% of the data labels of the data sets of the other 5 nodes are error; c) error (partial node data label Error), of which 10 nodes, the data set of 7 nodes is clear and the data set labels of the other 3 nodes are Error; d) attack (malicious nodes exist), wherein one of 10 nodes submits any model parameter update, and other nodes are normally trained. b) c) d) three scenarios are fixed in each iteration
Figure BDA0002555448840000131
As can be seen from fig. 5, after 30 iterations, the FAIR model aggregation algorithm outperforms the FA algorithm in all scenarios of almost all data sets of the model, and is more robust because the model aggregation performance of the FA algorithm drops sharply when the quality of the model update is reduced, but the FAIR algorithm can operate normally. For example, when training Le Net model with MNIST data set, the accuracy of FA algorithm is reduced from 94.42% (Clean) to 12.72% (attach), but FAIR is reduced from 95.6% to 82.71%.
Fig. 6 and 7 show the performance of the FAIR method (including the excitation mechanism and the model aggregation algorithm) designed by the present invention. FIG. 6a) is a comparison of the MLP model's accuracy at different MNIST training data qualities; FIG. 6b) is a comparison of the accuracy of the Le Net model at different FMNIST training data quality; FIG. 6c) is a comparison of the accuracy of the Mobile Net model under different CIFAR-10 training data qualities; fig. 6d) is a comparison of the accuracy of the efficiency Net model at different SVHN training data qualities. This experiment employs parameter setting III. The knapack greedy excitation mechanism uses an FA model aggregation algorithm. The experiment simulates 20 nodes, and the training data volume of each iteration of each node is fixed
Figure BDA0002555448840000132
Figure BDA0002555448840000133
The noise level (noise level) of 10 nodes is continuously changed. The noise level refers to the percentage of the 20 nodes having 50% of the error label data. The learning budget for each iteration is set to 10, and after 30 iterations, the accuracy of the model is shown in fig. 6. It can be seen that in almost all settings, the FAIR outperforms the knapback greedy method, with a significant performance gap when the noise level is in the range of 20% to 80%. Also, although the learning quality of both mechanisms decreases with increasing noise level, the performance of the knapack greedy mechanism drops dramatically at low noise levels, while the performance of the FAIR remains stable at low noise levels.
Fig. 7a), 7b) and 7c) are the reduction of the model loss function values at a budget of 10, 20, 30 per iteration of learning per task, respectively, using the parameter setting iv in this experiment. The experiment simulates 100 distributed nodes and 4 learning tasks in each iteration: the MLP model was trained using the MNIST data set, the LeNet model was trained using the FMNIST data set, the MobileNet model was trained using the CIFAR-10 data set, and the EfficientNet model was trained using the SVHN data set. It can be seen that for all mechanisms, the learning quality can be improved as the learning budget is increased, and the FAIR can achieve better performance over 30 iterations compared to other mechanisms.
In conclusion, the quality of the Federal learning model is improved through the quality-aware incentive mechanism and the model aggregation mechanism. The device mainly comprises three parts: learning quality prediction, quality perception incentive mechanism and efficient model aggregation. For learning quality prediction, predicting the learning quality of the user by using historical learning quality records; for the quality-aware incentive mechanism, within a limited federal learning budget, the system models a reverse auction problem to encourage high-quality low-cost users to participate in federal learning; and during model aggregation, an aggregation algorithm is designed, the quality of model update is considered during aggregation, and undesirable model update is filtered out, so that the quality of the federated learning model after aggregation is further optimized. The invention can provide richer data and more calculation power for the training of the model under the condition of protecting the data privacy, so as to improve the quality of the model and provide better intelligent service for users. Users may be provided with better machine learning based intelligent services such as autopilot, voice recognition, and the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A quality-aware edge intelligent federated learning method is characterized in that in each iteration, a cloud platform issues a group of learning tasks, and provides a learning budget of each learning task so as to recruit a proper node cooperation training model; downloading a global model by the node, training the model by using local data, and updating and uploading the model to the cloud platform; aggregating the model to update the global model; the method is characterized in that in the process, the cloud platform builds a federal learning quality optimization problem by using the sum of the quality of the aggregation models of a plurality of learning tasks to reach the maximum optimization target in each iteration; and the federate learning quality optimization problem is solved by adopting the following steps:
in each iteration, predicting the learning quality of the participating nodes by using the historical learning quality records of the participating nodes; the learning quality of the node training data is quantified by using the reduction amount of the loss function value in each iteration;
in each iteration, the cloud platform stimulates the nodes with high learning quality to participate in federal learning through a reverse auction mechanism; to perform the distribution of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploads the local model parameters to the cloud platform, and the cloud platform aggregates a plurality of local models to obtain a global model.
2. The quality-aware edge intelligent federated learning method of claim 1, wherein the cloud platform aggregates a plurality of local model parameters to obtain global model parameters, including filtering out low-quality local model updates according to the training data amount and the quality of training data of each participating node, and aggregating high-quality model updates to obtain an aggregated global model.
3. The quality-aware edge intelligent federated learning method of claim 1 or 2, wherein with the sum of the aggregated model qualities of multiple learning tasks reaching a maximum optimization goal in each iteration, the federated learning quality optimization problem is as follows:
Figure FDA0002555448830000011
Figure FDA0002555448830000012
Figure FDA0002555448830000013
Figure FDA0002555448830000014
Figure FDA0002555448830000015
wherein the content of the first and second substances,
Figure FDA0002555448830000016
representing a set of nodes in the federation, where i represents a set
Figure FDA0002555448830000017
The ith node;
Figure FDA0002555448830000018
Figure FDA0002555448830000019
represents a set of learning tasks published by the cloud platform in each iteration t, wherein
Figure FDA00025554488300000110
To represent
Figure FDA00025554488300000111
The jth learning task;
Figure FDA00025554488300000112
is a set of tasks that node i can participate in the t-th iteration, an
Figure FDA00025554488300000113
Figure FDA00025554488300000114
Learning task for iteration t provided by cloud platform
Figure FDA00025554488300000115
The learning budget of (2);
Figure FDA00025554488300000116
is a binary variable used to mark tasks
Figure FDA00025554488300000117
If iteration t is assigned to node i, a value equal to 1 means that a task is assigned to a node, otherwise, 0 is set;
Figure FDA0002555448830000021
indicating that node i performs a task
Figure FDA0002555448830000022
The remuneration of (a) to (b),
Figure FDA0002555448830000023
Figure FDA0002555448830000024
representing the learning task assignment result in iteration t,
Figure FDA0002555448830000025
representing the result of distribution of learning reward, f () is a model aggregation function;
constraint (3) indicates that for each learning task, the sum of the rewards to the participating nodes cannot exceed the learning budget provided by the task publisher;
constraint (4) represents that only the learning tasks that it can participate in can be assigned to a node;
constraint (5) represents the restriction that each node can only participate in one learning task at most in each iteration.
4. The quality-aware edge intelligent federated learning method of claim 3, wherein the participation node with high learning quality is motivated to participate in federated learning through a reverse auction mechanism, comprising the following steps:
each computing node i submits its own bid information
Figure FDA0002555448830000026
To cloud platform, array
Figure FDA0002555448830000027
Including learning tasks that node i can participate in
Figure FDA0002555448830000028
And bid price for participating in the task
Figure FDA0002555448830000029
The cloud platform then learns the task for each
Figure FDA00025554488300000210
Selecting a set of participating nodes
Figure FDA00025554488300000211
And determine their remuneration
Figure FDA00025554488300000212
The cloud platform optimizes the goal according to the maximum sum of the learning quality predicted values of the selected nodes and provides each learning task with the maximum sum
Figure FDA00025554488300000213
Selecting a set of participating nodes
Figure FDA00025554488300000214
And determine their learning remuneration
Figure FDA00025554488300000215
5. The quality-aware edge intelligent federated learning method of claim 4, wherein the cloud platform optimizes the problem of learning quality maximization according to that the maximum sum of the learning quality predicted values of the selected nodes is an optimization goal as follows:
Figure FDA00025554488300000216
Figure FDA00025554488300000217
Figure FDA00025554488300000218
Figure FDA00025554488300000219
Figure FDA00025554488300000220
Figure FDA00025554488300000221
wherein, the input of the optimization problem of the maximization of the learning quality is the task set which each node i can participate in
Figure FDA00025554488300000222
Bid price
Figure FDA00025554488300000223
Learning budget
Figure FDA00025554488300000224
And learning quality prediction values
Figure FDA00025554488300000225
The output being a binary variable
Figure FDA00025554488300000226
If it is not
Figure FDA00025554488300000227
Node i will be added to the learning task
Figure FDA0002555448830000031
Of participating node sets
Figure FDA0002555448830000032
The preparation method is as follows; node i is assigned to perform a learning task
Figure FDA0002555448830000033
In addition, consideration will be output for each participating node
Figure FDA0002555448830000034
Constraints (14) require that the reward for each participating node be higher than the training cost for that node, other constraints being the same as the federal learning quality optimization problem.
6. The quality-aware edge intelligent federated learning method of claim 5, wherein the optimization problem of learning quality maximization is solved by a greedy algorithm as follows:
in each iteration t, the algorithm first learns for each task according to the node's bid information
Figure FDA0002555448830000035
Finding a set of candidate nodes that can participate in the task
Figure FDA0002555448830000036
Then executing the main loop until no node can participate in the learning task or all the learning tasks are distributed to proper nodes for execution;
in the main loop, the algorithm first learns for each learning task
Figure FDA0002555448830000037
Selecting a subset of participating nodes
Figure FDA0002555448830000038
The subset can be approximately maximized
Figure FDA0002555448830000039
Sum of predicted learning qualities of the participating nodes.
7. The quality-aware edge intelligent federated learning method of claim 6, wherein in the greedy algorithm, the method further comprises:
according to
Figure FDA00025554488300000310
Size descending order of values to nodes
Figure FDA00025554488300000311
Sorting is carried out, and
Figure FDA00025554488300000312
the value of (b) is used as the ranking index of the node i;
greedy adding node i to a set of participating nodes according to ranking
Figure FDA00025554488300000313
Until the total reward to the participating nodes exceeds the budget
Figure FDA00025554488300000314
Wherein the reward to participating node i is
Figure FDA00025554488300000315
Find out having the maximum
Figure FDA00025554488300000316
Task of learning value
Figure FDA00025554488300000317
Will task
Figure FDA00025554488300000318
Assigning the task to the participating node selected for it
Figure FDA00025554488300000319
Marked as allocated and its participating nodes are also marked as allocated, task
Figure FDA00025554488300000320
And its participating nodes no longer participate inThe next cycle of dispensing.
8. The quality-aware edge intelligent federated learning method of claim 2, wherein when a cloud platform aggregates a plurality of local models to obtain a global model, a set of tasks is given in iteration t
Figure FDA00025554488300000321
Participating node of
Figure FDA00025554488300000322
And their model parameters
Figure FDA00025554488300000323
The parameters of the model after polymerization were calculated in the following manner
Figure FDA00025554488300000324
Figure FDA00025554488300000325
Wherein the content of the first and second substances,
Figure FDA00025554488300000326
and
Figure FDA00025554488300000327
respectively representing nodes i for training tasks
Figure FDA00025554488300000328
Data volume and data quality of the model.
9. The quality-aware edge intelligent federated learning method of claim 8, wherein the filtering out low-quality local model updates comprises the steps of:
aggregating the local models received from the participating nodes using equation (16) is furtherNew parameters
Figure FDA0002555448830000041
Obtaining post-polymerization model parameters
Figure FDA0002555448830000042
And calculating local model parameters of each node
Figure FDA0002555448830000043
And
Figure FDA0002555448830000044
cosine similarity between them
Figure FDA0002555448830000045
Calculating the average value of cosine similarity
Figure FDA0002555448830000046
Median number
Figure FDA0002555448830000047
And standard deviation σd(ii) a And the following comparisons were made:
when in use
Figure FDA0002555448830000048
The cosine similarity is measured
Figure FDA0002555448830000049
Wherein η is a preset threshold for the controllable range;
when in use
Figure FDA00025554488300000410
The cosine similarity is measured
Figure FDA00025554488300000411
Model (2)Low quality updates that are newly deemed to be unacceptable.
10. A computer system comprising a cloud platform including a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
CN202010590843.7A 2020-06-24 2020-06-24 Quality-aware edge intelligent federal learning method and system Active CN111754000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010590843.7A CN111754000B (en) 2020-06-24 2020-06-24 Quality-aware edge intelligent federal learning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010590843.7A CN111754000B (en) 2020-06-24 2020-06-24 Quality-aware edge intelligent federal learning method and system

Publications (2)

Publication Number Publication Date
CN111754000A true CN111754000A (en) 2020-10-09
CN111754000B CN111754000B (en) 2022-10-14

Family

ID=72677167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010590843.7A Active CN111754000B (en) 2020-06-24 2020-06-24 Quality-aware edge intelligent federal learning method and system

Country Status (1)

Country Link
CN (1) CN111754000B (en)

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
CN112216401A (en) * 2020-10-12 2021-01-12 中国石油大学(华东) DF-based new coronary pneumonia federal learning detection method
CN112232519A (en) * 2020-10-15 2021-01-15 成都数融科技有限公司 Joint modeling method based on federal learning
CN112261137A (en) * 2020-10-22 2021-01-22 江苏禹空间科技有限公司 Model training method and system based on joint learning
CN112287990A (en) * 2020-10-23 2021-01-29 杭州卷积云科技有限公司 Model optimization method of edge cloud collaborative support vector machine based on online learning
CN112348204A (en) * 2020-11-05 2021-02-09 大连理工大学 Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
CN112508205A (en) * 2020-12-04 2021-03-16 中国科学院深圳先进技术研究院 Method, device and system for scheduling federated learning
CN112532746A (en) * 2020-12-21 2021-03-19 北京邮电大学 Cloud edge cooperative sensing method and system
CN112637883A (en) * 2020-12-09 2021-04-09 深圳智芯微电子科技有限公司 Federal learning method with robustness to wireless environment change in power Internet of things
CN112668128A (en) * 2020-12-21 2021-04-16 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federated learning system
CN112836822A (en) * 2021-02-26 2021-05-25 浙江工业大学 Federal learning strategy optimization method and device based on width learning
CN112926088A (en) * 2021-03-18 2021-06-08 之江实验室 Federal learning privacy policy selection method based on game theory
CN113139600A (en) * 2021-04-23 2021-07-20 广东安恒电力科技有限公司 Intelligent power grid equipment anomaly detection method and system based on federal learning
CN113157434A (en) * 2021-02-26 2021-07-23 西安电子科技大学 Excitation method and system for user node of horizontal federated learning system
CN113222031A (en) * 2021-05-19 2021-08-06 浙江大学 Photolithographic hot zone detection method based on federal personalized learning
CN113312847A (en) * 2021-06-07 2021-08-27 北京大学 Privacy protection method and system based on cloud-edge computing system
CN113361694A (en) * 2021-06-30 2021-09-07 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113360514A (en) * 2021-07-02 2021-09-07 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
CN113379294A (en) * 2021-06-28 2021-09-10 武汉大学 Task deployment method based on federal learning participation user auction incentive mechanism
CN113435534A (en) * 2021-07-09 2021-09-24 新智数字科技有限公司 Data heterogeneous processing method and device based on similarity measurement, computer equipment and computer readable storage medium
CN113537511A (en) * 2021-07-14 2021-10-22 中国科学技术大学 Automatic gradient quantization federal learning framework and method
CN113537518A (en) * 2021-07-19 2021-10-22 哈尔滨工业大学 Model training method and device based on federal learning, equipment and storage medium
CN113591486A (en) * 2021-07-29 2021-11-02 浙江大学 Forgetting verification method based on semantic data loss in federated learning
CN113642737A (en) * 2021-08-12 2021-11-12 广域铭岛数字科技有限公司 Federal learning method and system based on automobile user data
CN113645197A (en) * 2021-07-20 2021-11-12 华中科技大学 Decentralized federal learning method, device and system
CN113660304A (en) * 2021-07-07 2021-11-16 北京邮电大学 Unmanned aerial vehicle group distributed learning resource control method based on bidirectional auction game
CN113689003A (en) * 2021-08-10 2021-11-23 华东师范大学 Safe mixed federal learning framework and method for removing third party
CN113887748A (en) * 2021-12-07 2022-01-04 浙江师范大学 Online federal learning task allocation method and device, and federal learning method and system
CN114065863A (en) * 2021-11-18 2022-02-18 北京百度网讯科技有限公司 Method, device and system for federal learning, electronic equipment and storage medium
WO2022089507A1 (en) * 2020-10-28 2022-05-05 索尼集团公司 Electronic device and method for federated learning
CN114496274A (en) * 2021-12-08 2022-05-13 杭州趣链科技有限公司 Byzantine robust federated learning method based on block chain and application
CN114513270A (en) * 2022-03-07 2022-05-17 苏州大学 Heterogeneous wireless network spectrum resource sensing method and system based on federal learning
CN114648131A (en) * 2022-03-22 2022-06-21 中国电信股份有限公司 Federal learning method, device, system, equipment and medium
WO2022130098A1 (en) * 2020-12-15 2022-06-23 International Business Machines Corporation Federated learning for multi-label classification model for oil pump management
CN114819183A (en) * 2022-04-15 2022-07-29 支付宝(杭州)信息技术有限公司 Model gradient confirmation method, device, equipment and medium based on federal learning
WO2022217784A1 (en) * 2021-04-15 2022-10-20 腾讯云计算(北京)有限责任公司 Data processing methods and apparatus, device, and medium
CN115640852A (en) * 2022-09-09 2023-01-24 湖南工商大学 Federal learning participation node selection optimization method, and federal learning method and system
CN116520814A (en) * 2023-07-03 2023-08-01 清华大学 Equipment fault prediction method and device based on federal learning under cloud edge cooperative architecture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872058A (en) * 2019-01-31 2019-06-11 南京工业大学 A kind of multimedia intelligent perception motivational techniques for machine learning system
CN110189174A (en) * 2019-05-29 2019-08-30 南京工业大学 A kind of mobile intelligent perception motivational techniques based on quality of data perception
CN110717671A (en) * 2019-10-08 2020-01-21 深圳前海微众银行股份有限公司 Method and device for determining contribution degree of participants
CN111178524A (en) * 2019-12-24 2020-05-19 中国平安人寿保险股份有限公司 Data processing method, device, equipment and medium based on federal learning
CN111222646A (en) * 2019-12-11 2020-06-02 深圳逻辑汇科技有限公司 Design method and device of federal learning mechanism and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872058A (en) * 2019-01-31 2019-06-11 南京工业大学 A kind of multimedia intelligent perception motivational techniques for machine learning system
CN110189174A (en) * 2019-05-29 2019-08-30 南京工业大学 A kind of mobile intelligent perception motivational techniques based on quality of data perception
CN110717671A (en) * 2019-10-08 2020-01-21 深圳前海微众银行股份有限公司 Method and device for determining contribution degree of participants
CN111222646A (en) * 2019-12-11 2020-06-02 深圳逻辑汇科技有限公司 Design method and device of federal learning mechanism and storage medium
CN111178524A (en) * 2019-12-24 2020-05-19 中国平安人寿保险股份有限公司 Data processing method, device, equipment and medium based on federal learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIAWEN KANG等: "Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory", 《IEEE INTERNET OF THINGS JOURNAL》 *

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112216401A (en) * 2020-10-12 2021-01-12 中国石油大学(华东) DF-based new coronary pneumonia federal learning detection method
CN112232519A (en) * 2020-10-15 2021-01-15 成都数融科技有限公司 Joint modeling method based on federal learning
CN112232519B (en) * 2020-10-15 2024-01-09 成都数融科技有限公司 Joint modeling method based on federal learning
CN112261137B (en) * 2020-10-22 2022-06-14 无锡禹空间智能科技有限公司 Model training method and system based on joint learning
CN112261137A (en) * 2020-10-22 2021-01-22 江苏禹空间科技有限公司 Model training method and system based on joint learning
CN112287990A (en) * 2020-10-23 2021-01-29 杭州卷积云科技有限公司 Model optimization method of edge cloud collaborative support vector machine based on online learning
CN112287990B (en) * 2020-10-23 2023-06-30 杭州卷积云科技有限公司 Model optimization method of edge cloud collaborative support vector machine based on online learning
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
WO2022089507A1 (en) * 2020-10-28 2022-05-05 索尼集团公司 Electronic device and method for federated learning
CN112348204A (en) * 2020-11-05 2021-02-09 大连理工大学 Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
CN112508205A (en) * 2020-12-04 2021-03-16 中国科学院深圳先进技术研究院 Method, device and system for scheduling federated learning
CN112637883A (en) * 2020-12-09 2021-04-09 深圳智芯微电子科技有限公司 Federal learning method with robustness to wireless environment change in power Internet of things
CN112637883B (en) * 2020-12-09 2023-04-28 深圳智芯微电子科技有限公司 Federal learning method with robustness to wireless environment change in electric power Internet of things
WO2022130098A1 (en) * 2020-12-15 2022-06-23 International Business Machines Corporation Federated learning for multi-label classification model for oil pump management
CN112532746A (en) * 2020-12-21 2021-03-19 北京邮电大学 Cloud edge cooperative sensing method and system
CN112668128A (en) * 2020-12-21 2021-04-16 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federated learning system
CN112532746B (en) * 2020-12-21 2021-10-26 北京邮电大学 Cloud edge cooperative sensing method and system
CN112668128B (en) * 2020-12-21 2024-05-28 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federal learning system
CN112836822A (en) * 2021-02-26 2021-05-25 浙江工业大学 Federal learning strategy optimization method and device based on width learning
CN112836822B (en) * 2021-02-26 2024-05-28 浙江工业大学 Federal learning strategy optimization method and device based on width learning
CN113157434A (en) * 2021-02-26 2021-07-23 西安电子科技大学 Excitation method and system for user node of horizontal federated learning system
CN113157434B (en) * 2021-02-26 2024-05-07 西安电子科技大学 Method and system for exciting user nodes of transverse federal learning system
CN112926088A (en) * 2021-03-18 2021-06-08 之江实验室 Federal learning privacy policy selection method based on game theory
CN112926088B (en) * 2021-03-18 2024-03-19 之江实验室 Federal learning privacy policy selection method based on game theory
WO2022217784A1 (en) * 2021-04-15 2022-10-20 腾讯云计算(北京)有限责任公司 Data processing methods and apparatus, device, and medium
CN113139600A (en) * 2021-04-23 2021-07-20 广东安恒电力科技有限公司 Intelligent power grid equipment anomaly detection method and system based on federal learning
CN113222031A (en) * 2021-05-19 2021-08-06 浙江大学 Photolithographic hot zone detection method based on federal personalized learning
CN113312847A (en) * 2021-06-07 2021-08-27 北京大学 Privacy protection method and system based on cloud-edge computing system
CN113379294B (en) * 2021-06-28 2022-07-05 武汉大学 Task deployment method based on federal learning participation user auction incentive mechanism
CN113379294A (en) * 2021-06-28 2021-09-10 武汉大学 Task deployment method based on federal learning participation user auction incentive mechanism
CN113361694A (en) * 2021-06-30 2021-09-07 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113361694B (en) * 2021-06-30 2022-03-15 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113360514B (en) * 2021-07-02 2022-05-17 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
CN113360514A (en) * 2021-07-02 2021-09-07 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
CN113660304A (en) * 2021-07-07 2021-11-16 北京邮电大学 Unmanned aerial vehicle group distributed learning resource control method based on bidirectional auction game
CN113435534A (en) * 2021-07-09 2021-09-24 新智数字科技有限公司 Data heterogeneous processing method and device based on similarity measurement, computer equipment and computer readable storage medium
CN113537511B (en) * 2021-07-14 2023-06-20 中国科学技术大学 Automatic gradient quantization federal learning device and method
CN113537511A (en) * 2021-07-14 2021-10-22 中国科学技术大学 Automatic gradient quantization federal learning framework and method
CN113537518A (en) * 2021-07-19 2021-10-22 哈尔滨工业大学 Model training method and device based on federal learning, equipment and storage medium
CN113645197B (en) * 2021-07-20 2022-04-29 华中科技大学 Decentralized federal learning method, device and system
CN113645197A (en) * 2021-07-20 2021-11-12 华中科技大学 Decentralized federal learning method, device and system
CN113591486B (en) * 2021-07-29 2022-08-23 浙江大学 Forgetting verification method based on semantic data loss in federated learning
CN113591486A (en) * 2021-07-29 2021-11-02 浙江大学 Forgetting verification method based on semantic data loss in federated learning
CN113689003A (en) * 2021-08-10 2021-11-23 华东师范大学 Safe mixed federal learning framework and method for removing third party
CN113689003B (en) * 2021-08-10 2024-03-22 华东师范大学 Mixed federal learning framework and method for safely removing third party
CN113642737A (en) * 2021-08-12 2021-11-12 广域铭岛数字科技有限公司 Federal learning method and system based on automobile user data
CN113642737B (en) * 2021-08-12 2024-03-05 广域铭岛数字科技有限公司 Federal learning method and system based on automobile user data
CN114065863B (en) * 2021-11-18 2023-08-29 北京百度网讯科技有限公司 Federal learning method, apparatus, system, electronic device and storage medium
CN114065863A (en) * 2021-11-18 2022-02-18 北京百度网讯科技有限公司 Method, device and system for federal learning, electronic equipment and storage medium
CN113887748A (en) * 2021-12-07 2022-01-04 浙江师范大学 Online federal learning task allocation method and device, and federal learning method and system
CN113887748B (en) * 2021-12-07 2022-03-01 浙江师范大学 Online federal learning task allocation method and device, and federal learning method and system
CN114496274A (en) * 2021-12-08 2022-05-13 杭州趣链科技有限公司 Byzantine robust federated learning method based on block chain and application
CN114513270A (en) * 2022-03-07 2022-05-17 苏州大学 Heterogeneous wireless network spectrum resource sensing method and system based on federal learning
CN114648131A (en) * 2022-03-22 2022-06-21 中国电信股份有限公司 Federal learning method, device, system, equipment and medium
CN114819183A (en) * 2022-04-15 2022-07-29 支付宝(杭州)信息技术有限公司 Model gradient confirmation method, device, equipment and medium based on federal learning
CN115640852B (en) * 2022-09-09 2023-06-09 湖南工商大学 Federal learning participation node selection optimization method, federal learning method and federal learning system
CN115640852A (en) * 2022-09-09 2023-01-24 湖南工商大学 Federal learning participation node selection optimization method, and federal learning method and system
CN116520814A (en) * 2023-07-03 2023-08-01 清华大学 Equipment fault prediction method and device based on federal learning under cloud edge cooperative architecture
CN116520814B (en) * 2023-07-03 2023-09-05 清华大学 Equipment fault prediction method and device based on federal learning under cloud edge cooperative architecture

Also Published As

Publication number Publication date
CN111754000B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN111754000B (en) Quality-aware edge intelligent federal learning method and system
Lu et al. Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning
CN113434212B (en) Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN110189174A (en) A kind of mobile intelligent perception motivational techniques based on quality of data perception
CN113191484A (en) Federal learning client intelligent selection method and system based on deep reinforcement learning
CN111770454B (en) Game method for position privacy protection and platform task allocation in mobile crowd sensing
Wu et al. Mobility-aware deep reinforcement learning with glimpse mobility prediction in edge computing
CN113869528B (en) De-entanglement individualized federated learning method for consensus characterization extraction and diversity propagation
CN113760511B (en) Vehicle edge calculation task unloading method based on depth certainty strategy
CN111626563B (en) Dual-target robust mobile crowd sensing system and excitation method thereof
Pang et al. An incentive auction for heterogeneous client selection in federated learning
Tang et al. Credit and quality intelligent learning based multi-armed bandit scheme for unknown worker selection in multimedia MCS
CN112148986A (en) Crowdsourcing-based top-N service re-recommendation method and system
CN113256335B (en) Data screening method, multimedia data delivery effect prediction method and device
CN111464620B (en) Edge-assisted mobile crowd sensing true value discovery system and excitation method thereof
KR20220150126A (en) Coded and Incentive-based Mechanism for Distributed Training of Machine Learning in IoT
CN111510473B (en) Access request processing method and device, electronic equipment and computer readable medium
CN117202264A (en) 5G network slice oriented computing and unloading method in MEC environment
Wang et al. Social-aware clustered federated learning with customized privacy preservation
CN116009990B (en) Cloud edge collaborative element reinforcement learning computing unloading method based on wide attention mechanism
Singhal et al. Greedy Shapley Client Selection for Communication-Efficient Federated Learning
Yang et al. Asynchronous Federated Learning with Incentive Mechanism Based on Contract Theory
CN117829274B (en) Model fusion method, device, equipment, federal learning system and storage medium
CN116029370B (en) Data sharing excitation method, device and equipment based on federal learning of block chain
Ercetin et al. Yardstick competition regulation for incentive mechanisms in federated learning: balancing cost optimization and fairness

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant