CN111754000A - Quality-aware edge intelligent federated learning method and system - Google Patents
Quality-aware edge intelligent federated learning method and system
- Publication number: CN111754000A (application CN202010590843A, filed as CN202010590843.7)
- Authority
- CN
- China
- Prior art keywords: learning, quality, node, model, task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/08—Auctions
Abstract
The invention discloses a quality-aware edge intelligent federated learning method and system. In the method, the cloud platform formulates a federated learning quality optimization problem whose objective is to maximize, in each iteration, the sum of the aggregate model qualities of a plurality of learning tasks, and solves it as follows: in each iteration, the learning quality of each participating node is predicted from its historical learning quality records, where the learning quality of a node's training data is quantified by the reduction of the loss function value in that iteration; in each iteration, the cloud platform incentivizes nodes with high learning quality to participate in federated learning through a reverse auction mechanism, which performs the distribution of learning tasks and learning rewards; and in each iteration, for each learning task, each participating node uploads its local model parameters to the cloud platform, which aggregates them into a global model. The invention can provide richer data and more computing power for model training while protecting data privacy, so as to improve model quality.
Description
Technical Field
The invention relates to performance optimization techniques for large-scale distributed intelligent learning systems, and in particular to a quality-aware edge intelligent federated learning method and system.
Background
With the rapid development of the Internet of Things (IoT), the network edge continuously generates large amounts of data, which provides opportunities for implementing intelligent services based on machine learning. Traditionally, a centralized machine learning framework must gather huge volumes of training data at a cloud center for model training. Although centralized machine learning can achieve satisfactory learning performance, the transmission and centralized storage of data risk privacy disclosure, and the data transmission overhead is a great obstacle for power-limited mobile devices, as is the cost of cloud-centric data maintenance. In recent years, with the development of emerging Mobile Edge Computing (MEC) technology, mobile devices can be equipped with computing and storage capabilities to localize computation and model training, and MEC has therefore also promoted the development of federated learning. Federated learning is a distributed machine learning framework in which distributed nodes with computing power train models locally using local data and then upload the model updates to the cloud for aggregation; the aggregated model updates continuously improve the quality of the global model, and the local training mode protects data privacy well.
While federated learning has considerable potential, two technical challenges remain. First, the performance of federated learning depends heavily on the participation of training nodes, but without satisfactory rewards, mobile devices will understandably be unwilling to participate in training the federated learning model. Second, the quality of the model updates contributed by mobile devices varies greatly, being subject to factors such as node data volume, data quality, and computational power. Under a limited budget, choosing appropriate nodes to participate in federated learning is not an easy task. One possible approach is to select as many participating nodes as possible, but many studies have demonstrated that blindly aggregating too many low-quality model updates can instead degrade the overall model quality and prevent the model from converging.
Some prior studies have sought to improve the performance of federated learning systems, but they do not solve the above problems well. For example, Shiqiang Wang et al. devised a control algorithm to determine the optimal model aggregation frequency, and Zhibo Wang et al. focused on enhancing the security and privacy of the federated learning system. Although these studies have contributed to federated learning, they rest on a common assumption that there are enough volunteer nodes willing to participate. However, volunteer participation is impractical in practice, because training a model typically consumes significant resources, including energy, computing, and bandwidth, which can be a significant overhead for a mobile device. Recognizing this, recent work has investigated incentive mechanisms for federated learning. For example, Jiawen Kang et al. designed an incentive mechanism based on the reputation of mobile nodes; Shashi Raj Pandey et al. proposed an incentive mechanism based on the Stackelberg game to reduce communication overhead; and Yufeng Zhan et al. proposed an incentive mechanism based on deep reinforcement learning. However, none of these takes into account the quality of model updates, which can seriously affect the quality of the global model. In particular, in federated learning, mobile devices typically have heterogeeneous computing power and differing amounts and quality of data, which can result in large differences in the learning quality of different nodes. In addition, participants may deliberately degrade learning quality to reduce learning costs. Aggregating too many low-quality model updates can in turn degrade the global model quality and cause convergence problems. To date, however, no research has integrated the learning quality of nodes into federated learning with quality-aware incentive mechanisms and model aggregation.
Disclosure of Invention
The invention provides a quality-aware edge intelligent federated learning method and system, which solve the technical problem that the quality of the global model deteriorates because existing aggregation schemes lack an incentive mechanism that accounts for node learning quality and therefore aggregate too many low-quality model updates.
To solve the above technical problem, the technical scheme provided by the invention is as follows:
A quality-aware edge intelligent federated learning method, in which, in each iteration, the cloud platform issues a group of learning tasks and provides a learning budget for each learning task so as to recruit appropriate nodes to collaboratively train the model; each node downloads the global model, trains the model using local data, and uploads its model update to the cloud platform; the cloud platform aggregates the model updates to update the global model. In this process, the cloud platform formulates a federated learning quality optimization problem whose objective is to maximize, in each iteration, the sum of the aggregate model qualities of the learning tasks, and solves it with the following steps:
in each iteration, predicting the learning quality of each participating node using its historical learning quality records, where the learning quality of a node's training data is quantified by the reduction of the loss function value in each iteration;
in each iteration, the cloud platform incentivizing nodes with high learning quality to participate in federated learning through a reverse auction mechanism, which performs the distribution of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploading its local model parameters to the cloud platform, which aggregates the local models to obtain the global model.
Preferably, when the cloud platform aggregates the local model parameters to obtain the global model parameters, low-quality local model updates are filtered out according to each participating node's training data amount and training data quality, and the aggregated global model is obtained by aggregating the high-quality model updates.
Preferably, with the objective of maximizing the sum of the aggregate model qualities of the learning tasks in each iteration, the federated learning quality optimization problem is formulated as follows:

\[ \max_{X^t, P^t} \sum_{s_j^t \in \mathcal{S}^t} f\big(\{ w_{i,j}^t \mid x_{i,j}^t = 1 \}\big) \quad (1) \]

subject to

\[ \sum_{i \in \mathcal{N}} x_{i,j}^t \, p_{i,j}^t \le B_j^t, \quad \forall s_j^t \in \mathcal{S}^t \quad (3) \]
\[ x_{i,j}^t = 0 \ \text{if} \ s_j^t \notin \Gamma_i^t, \quad \forall i \in \mathcal{N} \quad (4) \]
\[ \sum_{s_j^t \in \mathcal{S}^t} x_{i,j}^t \le 1, \quad \forall i \in \mathcal{N} \quad (5) \]

where \(\mathcal{N}\) represents the set of nodes in the federation and i represents the ith node in \(\mathcal{N}\); \(\mathcal{S}^t\) represents the set of learning tasks published by the cloud platform in iteration t, with \(s_j^t\) the jth learning task in \(\mathcal{S}^t\); \(\Gamma_i^t \subseteq \mathcal{S}^t\) is the set of tasks node i can participate in during iteration t; \(B_j^t\) is the learning budget provided by the cloud platform for task \(s_j^t\) in iteration t; \(x_{i,j}^t\) is a binary variable used to mark whether task \(s_j^t\) is assigned to node i in iteration t, a value of 1 meaning the task is assigned to the node and 0 otherwise; \(p_{i,j}^t\) denotes the reward node i receives for performing task \(s_j^t\); \(X^t = \{x_{i,j}^t\}\) represents the learning task assignment result in iteration t, \(P^t = \{p_{i,j}^t\}\) represents the learning reward distribution result, and f(·) is the model aggregation function;
constraint (3) indicates that, for each learning task, the sum of the rewards to the participating nodes cannot exceed the learning budget provided by the task publisher;
constraint (4) indicates that a node can only be assigned learning tasks it can participate in;
constraint (5) restricts each node to participate in at most one learning task in each iteration.
Preferably, nodes with high learning quality are incentivized to participate in federated learning through a reverse auction mechanism, comprising the following steps:
each computing node i submits its own bid information \(\beta_i^t = (\Gamma_i^t, b_i^t)\) to the cloud platform, where the bid includes the set of learning tasks \(\Gamma_i^t\) that node i can participate in and the bid prices \(b_{i,j}^t\) for participating in those tasks; the cloud platform then, for each learning task \(s_j^t\), selects a set of participating nodes \(\mathcal{W}_j^t\) and determines their rewards \(p_{i,j}^t\); taking as its optimization goal maximizing the sum of the learning quality prediction values of the selected nodes, the cloud platform selects, for each learning task \(s_j^t\), a set of participating nodes \(\mathcal{W}_j^t\) and determines their learning rewards \(p_{i,j}^t\).
Preferably, with the cloud platform taking the maximization of the sum of the learning quality prediction values of the selected nodes as the optimization goal, the learning quality maximization problem is as follows:

\[ \max \sum_{s_j^t \in \mathcal{S}^t} \sum_{i \in \mathcal{N}} x_{i,j}^t \, \hat{q}_{i,j}^t \]

where the input of the learning quality maximization problem is, for each node i, the set of tasks \(\Gamma_i^t\) it can participate in, the bid prices \(b_{i,j}^t\), the learning budgets \(B_j^t\), and the learning quality prediction values \(\hat{q}_{i,j}^t\); the output is the binary variables \(x_{i,j}^t\): if \(x_{i,j}^t = 1\), node i is added to the set of participating nodes \(\mathcal{W}_j^t\) of learning task \(s_j^t\), i.e., node i is assigned to perform learning task \(s_j^t\); in addition, a reward \(p_{i,j}^t\) is output for each participating node;
constraint (14) requires that the reward for each participating node be higher than that node's training cost; the other constraints are the same as in the federated learning quality optimization problem.
Preferably, the following greedy algorithm is adopted to solve the learning quality maximization problem:
in each iteration t, the algorithm first finds, for each learning task \(s_j^t\) and according to the nodes' bid information, the set of candidate nodes that can participate in the task; it then executes the main loop until no node can participate in any learning task or all learning tasks have been assigned to appropriate nodes for execution;
in the main loop, the algorithm first selects, for each learning task \(s_j^t\), a subset of participating nodes \(\mathcal{W}_j^t\) that approximately maximizes the sum of the predicted learning qualities of the participating nodes.
Preferably, the greedy algorithm further comprises:
sorting the nodes in descending order of the value \(\hat{q}_{i,j}^t / b_{i,j}^t\), with \(\hat{q}_{i,j}^t / b_{i,j}^t\) used as the ranking index of node i;
greedily adding node i to the set of participating nodes \(\mathcal{W}_j^t\) according to the ranking, until the total reward to the participating nodes exceeds the budget \(B_j^t\), where the reward to participating node i is \(p_{i,j}^t = b'_{i,j}\), the highest bid price with which node i would still win the bidding competition;
finding the task \(s_j^t\) with the maximum learning quality value, assigning that task to the participating nodes \(\mathcal{W}_j^t\) selected for it, and marking the task as allocated; its participating nodes are also marked as allocated, and the task and its participating nodes no longer participate in the allocation in the next round of the loop.
Preferably, when the cloud platform aggregates the local models to obtain the global model, in iteration t, given a learning task \(s_j^t\), its set of participating nodes \(\mathcal{W}_j^t\), and their model parameters \(w_{i,j}^t\), the parameters of the aggregated model \(w_j^{t+1}\) are calculated as

\[ w_j^{t+1} = \sum_{i \in \mathcal{W}_j^t} \frac{d_{i,j}^t \, e_{i,j}^t}{\sum_{i' \in \mathcal{W}_j^t} d_{i',j}^t \, e_{i',j}^t} \, w_{i,j}^t \quad (16) \]

where \(d_{i,j}^t\) and \(e_{i,j}^t\) respectively represent the data volume and the data quality of node i used for training task \(s_j^t\).
Preferably, filtering out low-quality local model updates comprises the following steps:
aggregating the local model update parameters \(w_{i,j}^t\) received from the participating nodes using equation (16) to obtain the aggregated model parameters \(w_j^{t+1}\), and calculating the cosine similarity \(d_i\) between each node's local model parameters \(w_{i,j}^t\) and \(w_j^{t+1}\);
calculating the average value \(\mu_d\), the median \(m_d\), and the standard deviation \(\sigma_d\) of the cosine similarities, and making the following comparisons:
when \(\mu_d \ge m_d\), a model update whose cosine similarity satisfies \(d_i > m_d + \eta\,\sigma_d\) is regarded as an unqualified low-quality update; when \(\mu_d < m_d\), a model update with \(d_i < m_d - \eta\,\sigma_d\) is regarded as an unqualified low-quality update; here \(\eta\) is a preset threshold that controls the filtering range.
The invention also provides a computer system comprising a cloud platform, wherein the cloud platform comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor, when executing the computer program, implements the steps of any one of the above methods.
The invention has the following beneficial effects:
1. The quality-aware edge intelligent federated learning method and system of the invention predict a user's learning quality using historical learning quality records; within a limited federated learning budget, the system models a reverse auction problem to encourage high-quality, low-cost users to participate in federated learning. The invention can complete collaborative machine learning model training across large-scale distributed mobile edge nodes, can perceive the learning quality of nodes during distributed machine learning, and can significantly improve the quality of distributed collaborative model training; it can also incentivize more high-quality, low-cost mobile edge nodes to participate in model training. The invention can provide richer data and more computing power for model training while protecting data privacy, so as to improve model quality and provide users with better intelligent services.
2. In a preferred scheme, the aggregation algorithm considers the quality of model updates during aggregation and filters out undesirable model updates, thereby further optimizing the quality of the aggregated federated learning model and improving the robustness and efficiency of model aggregation. Users may thus be provided with better machine-learning-based intelligent services, such as autonomous driving and speech recognition, which place higher quality requirements on machine learning models.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural diagram of a distributed federated learning system upon which a preferred embodiment of the present invention is based;
FIG. 2 is a flow diagram of a quality-aware edge intelligent federated learning method of a preferred embodiment of the present invention;
FIG. 3 is a schematic illustration of model accuracy achieved with different incentive mechanisms in accordance with a preferred embodiment of the present invention;
FIG. 4 is a schematic illustration of the amount of reduction of the learning task loss function value during an iteration process in accordance with a preferred embodiment of the present invention;
FIG. 5 is a graphical illustration of the performance of a model aggregation algorithm under different scenarios in accordance with a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating the accuracy of the model obtained by the system at different data qualities in accordance with a preferred embodiment of the present invention;
FIG. 7 is a schematic illustration of model loss function value reductions under different budgets for a preferred embodiment of the present invention.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
FIG. 1 shows a typical distributed federated learning system comprising a cloud platform and mobile computing nodes. The set of computing nodes is denoted \(\mathcal{N}\), where i represents the ith node in \(\mathcal{N}\). The system manages the interaction between the cloud platform and the computing nodes in a time-slotted manner, with time divided into T consecutive iterations of equal length. In each iteration t, the cloud platform issues a group of learning tasks denoted \(\mathcal{S}^t\), where \(s_j^t\) represents the jth learning task in \(\mathcal{S}^t\). For each learning task \(s_j^t\) in iteration t, the cloud platform, i.e., the task publisher, provides a learning budget \(B_j^t\) to recruit appropriate computing nodes to collaboratively train the model. A node first downloads the global model, then trains the model using local data and uploads its model update to the cloud platform; finally, the cloud platform aggregates the updates to refresh the global model. Since computing nodes hold a wide variety of data, they can typically participate in a variety of learning tasks; the set of tasks node i can participate in during the tth iteration is denoted \(\Gamma_i^t\). However, each node has limited computational power, so each node is limited to participating in only one learning task per iteration. In each iteration t, the platform must determine, under a limited budget, which computing node performs which training task and what learning reward each computing node receives.
The quality-aware edge intelligent federated learning method of this embodiment is built on the architecture of the typical distributed federated learning system shown in FIG. 1. In each iteration, the cloud platform issues a group of learning tasks and provides a learning budget for each learning task so as to recruit appropriate nodes to collaboratively train the model; each node downloads the global model, trains the model using local data, and uploads its model update to the cloud platform; the cloud platform aggregates the models to update the global model.
Referring to FIG. 2, in the distributed federated learning process, the cloud platform formulates the federated learning quality optimization problem whose objective is to maximize, in each iteration, the sum of the aggregate model qualities of the learning tasks, and solves it with the following steps:
in each iteration, the learning quality of each participating node is predicted using its historical learning quality records; the learning quality of a node's training data is quantified by the reduction of the loss function value in each iteration;
in each iteration, the cloud platform incentivizes nodes with high learning quality to participate in federated learning through a reverse auction mechanism, which performs the distribution of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploads its local model parameters to the cloud platform, and the cloud platform aggregates the local models to obtain the global model. In this embodiment, when the cloud platform aggregates the local model parameters to obtain the global model parameters, it filters out low-quality local model updates according to each participating node's training data amount and training data quality and aggregates the high-quality model updates to obtain the aggregated global model. This improves the robustness and efficiency of model aggregation, so that users may be provided with better machine-learning-based intelligent services, such as autonomous driving and speech recognition, which place higher quality requirements on machine learning models.
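The per-iteration flow described above can be sketched in Python. This is a minimal sketch under stated assumptions: the data structures, the exponential-forgetting default, and the simplified per-task greedy selection are illustrative, not the patent's exact algorithm.

```python
def run_round(nodes, tasks, history, forgetting=0.5):
    """One quality-aware round: predict each node's learning quality from
    its history, then greedily fill each task's participant set by the
    quality/bid ratio within the task budget.

    nodes:   list of {'id', 'tasks': set of task ids, 'bid': {task_id: price}}
    tasks:   {task_id: budget}
    history: {node_id: [quality records, oldest first]}
    """
    def predict(recs):
        # exponential forgetting: the newest record gets weight 1,
        # a record k steps older gets weight forgetting**k
        ws = [forgetting ** (len(recs) - 1 - k) for k in range(len(recs))]
        return sum(w * q for w, q in zip(ws, recs)) / sum(ws)

    pred = {n['id']: predict(history[n['id']]) for n in nodes}
    free = {n['id'] for n in nodes}          # each node joins at most one task
    assignment = {}
    for t_id, budget in tasks.items():
        cands = sorted(
            (n for n in nodes if t_id in n['tasks'] and n['id'] in free),
            key=lambda n: pred[n['id']] / n['bid'][t_id], reverse=True)
        chosen, spent = [], 0.0
        for n in cands:
            if spent + n['bid'][t_id] > budget:   # budget constraint (3)
                break
            chosen.append(n['id'])
            spent += n['bid'][t_id]
        free -= set(chosen)
        assignment[t_id] = chosen
    return assignment
```

One simplification to note: this sketch allocates tasks in dictionary order, whereas the patent's main loop repeatedly allocates the single task with the highest achievable quality first.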
The above steps can complete collaborative machine learning model training across large-scale distributed mobile edge nodes, can perceive the learning quality of nodes during distributed machine learning, and can significantly improve the quality of distributed collaborative model training; they can also incentivize more high-quality, low-cost mobile edge nodes to participate in model training. The invention can provide richer data and more computing power for model training while protecting data privacy, so as to improve model quality and provide better intelligent services for users.
The following is a detailed description:
Since the global model is updated in each iteration, the aggregate model quality of each learning task should be maximized in each iteration. Therefore, the federated learning quality optimization problem is defined as follows:

\[ \max_{X^t, P^t} \sum_{s_j^t \in \mathcal{S}^t} f\big(\{ w_{i,j}^t \mid x_{i,j}^t = 1 \}\big) \quad (1) \]

where \(x_{i,j}^t\) is a binary variable used to mark whether task \(s_j^t\) is assigned to node i in iteration t: a value of 1 means the task is assigned to the node, and otherwise it is set to 0; \(p_{i,j}^t\) denotes the reward node i receives for performing task \(s_j^t\); \(X^t = \{x_{i,j}^t\}\) represents the learning task assignment result in iteration t, \(P^t = \{p_{i,j}^t\}\) represents the learning reward distribution result, and f(·) is the model aggregation function. Constraint (3), \(\sum_{i \in \mathcal{N}} x_{i,j}^t \, p_{i,j}^t \le B_j^t\), indicates that, for each learning task, the sum of the rewards to the participating nodes cannot exceed the learning budget provided by the task publisher; constraint (4), \(x_{i,j}^t = 0\) if \(s_j^t \notin \Gamma_i^t\), means that a node can only be assigned learning tasks it can participate in; constraint (5), \(\sum_{s_j^t \in \mathcal{S}^t} x_{i,j}^t \le 1\), limits each node to participate in at most one learning task per iteration.
FIG. 2 shows the flow of the quality-aware edge intelligent federated learning method of the present invention. The method mainly comprises three parts: learning quality prediction, a quality-aware incentive mechanism, and model aggregation.
I. Learning quality prediction. This mainly comprises the quantification of learning quality and the prediction of learning quality.
a) Learning quality quantification.
In federated learning, both the amount of data used to train the model and the quality of that data significantly affect the learning quality. The quantification of learning quality should adequately reflect the contribution of a local model update to the global model aggregation. One plausible approach is to test the accuracy of each local model update on a global test data set and use the accuracy as the learning quality. However, in this approach each iteration requires testing every local model, which imposes significant overhead on the system in terms of test cost and latency. Unlike the accuracy measure, the loss function value is calculated during the training process with no additional overhead for the system. Thus, this embodiment quantifies the quality of a node's training data by the reduction of the loss function value in each iteration.
Suppose iteration t starts at time \(\tau_s^t\) and ends at time \(\tau_e^t\). At time \(\tau_e^t\), the cloud platform aggregates the model updates received from the computing nodes, whereupon the next iteration begins. Therefore, a computing node should upload its model update before time \(\tau_e^t\); otherwise it receives no learning reward. Suppose learning task \(s_j^t\) has loss function value \(F_j(w_j^t)\) on the test set at time \(\tau_s^t\), and the local model of node i has loss function value \(F_j(w_{i,j}^t)\) at time \(\tau_e^t\). The data quality \(e_{i,j}^t\) of node i for training at iteration t is defined as

\[ e_{i,j}^t = F_j(w_j^t) - F_j(w_{i,j}^t) \]

Combined with the amount of data used for training (denoted \(d_{i,j}^t\)), the learning quality \(q_{i,j}^t\) of node i at the tth iteration is defined as

\[ q_{i,j}^t = d_{i,j}^t \, e_{i,j}^t \]
b) Learning quality prediction.
After the cloud platform receives the model update contributed by a computing node, the learning quality can be quantified, but it is unknown before the node learns. Thus, in each iteration, the platform first predicts the learning quality of each participant to assist in the allocation of learning tasks and learning rewards. Since the training of a federated learning model typically proceeds iteratively, a participant's historical learning quality records may be used to estimate its learning quality. Suppose node i participated in learning task \(s_j\) at iterations \(t_0, t_1, \ldots, t_r\); then the quality records \(q_{i,j}^{t_0}, q_{i,j}^{t_1}, \ldots, q_{i,j}^{t_r}\) can be used to predict node i's learning quality \(\hat{q}_{i,j}^t\) at iteration t (t > \(t_r\)). The learning quality of a node may change over time, and intuitively recent quality records are more representative than older ones. Thus, quality records are weighted by their staleness, with newer records weighted more heavily than older records. This embodiment uses an exponential forgetting function to assign weights: the latest quality record has weight 1, and the weights of the other records are determined by their relative positions with respect to the latest record. Quality record \(q_{i,j}^{t_k}\) is given weight \(\lambda^{r-k}\), where \(\lambda \in (0, 1]\) is the forgetting coefficient, and the learning quality prediction value \(\hat{q}_{i,j}^t\) is calculated by the following formula:

\[ \hat{q}_{i,j}^t = \frac{\sum_{k=0}^{r} \lambda^{r-k} \, q_{i,j}^{t_k}}{\sum_{k=0}^{r} \lambda^{r-k}} \]
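The exponential-forgetting predictor can be written directly from this formula; the text leaves the value of the forgetting coefficient open, so the default below is only an assumption:

```python
def predict_quality(records, lam=0.5):
    """Weighted average of historical quality records (oldest first):
    the newest record gets weight 1, and the record at position k gets
    weight lam**(r - k), lam being the forgetting coefficient in (0, 1]."""
    r = len(records) - 1
    weights = [lam ** (r - k) for k in range(len(records))]
    return sum(w * q for w, q in zip(weights, records)) / sum(weights)
```

With lam = 1 the predictor degenerates to the plain average of the history; smaller lam tracks recent behavior more closely.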
and II, exciting mechanism of quality perception. Mainly comprising the allocation of learning tasks and the determination of learning remuneration.
After predicting the learning quality of each candidate participant, the federated learning quality optimization problem defined in (1) can be solved in two steps. Firstly, a quality-aware incentive mechanism is designed to encourage high-quality low-cost nodes to participate in the federal learning task, and then an efficient model aggregation algorithm is designed to further improve the quality of the aggregated model. In each iteration, the bookEmbodiments encourage high-quality learning participating nodes to participate in federated learning through a reverse auction mechanism. Each computing node i submits its own bid informationTo cloud platform, arrayIncluding learning tasks that node i can participate inAnd bid price for participating in the taskThe cloud platform then learns the task for eachSelecting a set of participating nodesAnd determine their remunerationThus, the following Learning Quality Maximization (LQM) problem may be defined: in each iteration t, how to learn the task for each time according to the bid informationSelecting a set of participating nodesHow to determine their learning remunerationCan the sum of the learning quality predictors for the selected node be maximized?
The input to the LQM problem is a set of tasks that each node i can participate inBid priceLearning budgetAnd learning quality prediction valuesThe output being a binary variableIf it is not Node i will be added to the learning taskOf participating node setsIn, meaning that node i is assigned to perform a learning taskIn addition, consideration will be output for each participating nodeConstraints (14) require that the reward for each participating node be higher than the training cost for that node, other constraints being the same as the federal learning quality optimization problem.
The LQM problem can be proved to be NP-hard, so a heuristic algorithm is designed to solve it; the algorithm is computationally efficient while also ensuring that computing nodes bid truthfully.
In this embodiment, the algorithm is a greedy algorithm, and the flow is as follows: in each iteration t, the algorithm first learns for each task according to the node's bid informationFinding a set of candidate nodes that can participate in the taskThe main loop is then executed until no nodes can participate in the learning task or all learning tasks have been assigned to the appropriate nodes for execution. In the main loop, the algorithm first learns for each learning taskSelecting a subset of participating nodesThe subset can be approximately maximizedSum of predicted learning qualities of the participating nodes. Specifically, the algorithm will be as followsSize descending order of values to nodesThe sorting is carried out, and the sorting is carried out,can be used as a ranking index for node i. The algorithm then greedily adds node i to the set of participating nodes according to the rankingUntil the total reward to the participating nodes exceeds the budgetThe method for determining the reward of the participating nodes comprises the following steps: assuming that node k represents the highest ranked node of the nodes failing in bidding competition, the highest bid price b 'of successful bidding competition of the nodes is supported'i,jSatisfy the requirement ofTherefore, the reward to participating node i isFinally, the algorithm finds the maximumTask of learning valueThen the task is processedAssigning the task to the participating node selected for itMarked as allocated and its participating nodes are also marked as allocated, the task and its participating nodes will not participate in the allocation of the next round of the cycle. 
In this embodiment, the allocated marking works as follows: each node can participate in only one task in each iteration; if node i is determined to participate in task l, node i is marked as allocated (each node has a 0-1 variable x; an allocated node has x = 1, and the x values of all nodes are reset to 0 at the start of the next iteration). Each round of the loop determines the allocation of only one task, allocated nodes are not allocated again, and a task whose budget has been exhausted (i.e., an allocated task) is not considered in the next round.
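The greedy selection and payment rule described above can be sketched in code. This is an illustrative simplification under assumptions, not the patented implementation: the function name `select_nodes` and the tuple layout of a bid are hypothetical, and the node ranked immediately after a winner stands in for the highest-ranked losing bidder when computing the critical payment.

```python
# Illustrative sketch: greedy selection of participating nodes for one
# learning task under a reward budget, with a critical-value payment rule.
# A bid is a (node_id, bid_price, predicted_quality) tuple (assumed layout).

def select_nodes(bids, budget):
    # Rank nodes by predicted learning quality per unit bid price.
    ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    winners, payments = [], {}
    for idx, (node, price, quality) in enumerate(ranked):
        if idx + 1 < len(ranked):
            # Critical payment b': the highest price this node could have
            # bid while keeping its quality/price ratio at least that of
            # the next-ranked competitor.
            _, nxt_price, nxt_quality = ranked[idx + 1]
            pay = quality * nxt_price / nxt_quality
        else:
            pay = price  # no competitor left; pay the bid itself
        if sum(payments.values()) + pay > budget:
            break  # budget would be exceeded: stop adding participants
        winners.append(node)
        payments[node] = pay
    return winners, payments
```

Because each winner's payment is derived from a competitor's bid rather than its own, the payment is never below the winner's own bid (consistent with constraint (14)), and underbidding cannot raise a node's reward, which is what makes truthful bidding rational.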
Model aggregation. This mainly includes the detection of outlier model updates and the aggregation of model updates.
In each iteration t, for each learning task, each participating node i uploads its local model parameters to the cloud platform after one or more gradient descent updates (local model updates), and the platform then aggregates these parameters to obtain the global model parameters. The following model aggregation algorithm, Federated Averaging (FA), is widely used in recent research work:
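For reference, the Federated Averaging rule computes the global parameters as the average of the local parameters weighted by each node's training data volume. A minimal sketch, with plain Python lists standing in for model tensors and all names illustrative:

```python
# Minimal Federated Averaging (FedAvg) sketch: the global parameters are
# the data-volume-weighted average of the local parameters.

def fedavg(local_params, data_sizes):
    """local_params: one parameter vector (list of floats) per node;
    data_sizes: the training data volume of each node."""
    total = float(sum(data_sizes))
    dim = len(local_params[0])
    return [
        sum(w[k] * n for w, n in zip(local_params, data_sizes)) / total
        for k in range(dim)
    ]
```

For example, `fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])` returns `[2.5, 3.5]`: the node with three times the data contributes three times the weight.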
wherein the weight of node i is the volume of data it uses to train the task's model in iteration t. Unlike existing model aggregation algorithms for federated learning, this embodiment considers not only the training data volume of each node but also the quality of its training data when aggregating the model parameters. In iteration t, given the participating nodes of each task in the task set and their model parameters, the aggregated model parameters are calculated in the following manner:
wherein the two weighting factors represent, respectively, the data volume and the data quality with which node i trains the task's model. In addition, to avoid the negative impact of low-quality model updates, a computationally efficient outlier model update detection algorithm is designed to detect and filter out low-quality local model updates. First, the local model update parameters received from the participating nodes are aggregated using equation (16) to obtain the aggregated model parameters, and the cosine similarity between each node's local model parameters and the aggregated parameters is calculated. Then the mean, the median, and the standard deviation σd of these similarities are computed. Because the incentive mechanism filters out most unreliable nodes before training, most of the received model updates should be of high quality, and the median of the similarities therefore reflects the direction of high-quality local model updates. Specifically, the mean is first compared with the median: low-quality model updates pull the mean away from the median, so an update whose similarity deviates from the median, toward the side of the mean, by more than the preset threshold η (which controls the acceptable range) is deemed an unqualified low-quality update. In this way, the high-quality, qualified model updates are obtained. Finally, the aggregated result is obtained by aggregating these high-quality qualified model updates with equation (16).
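The quality-weighted aggregation with outlier filtering described above can be sketched as follows. This is a simplified variant under stated assumptions: weights are data volume times data quality, and a symmetric median-deviation test with threshold `eta` replaces the embodiment's exact mean/median comparison; all function names are hypothetical, and parameter vectors are assumed non-zero.

```python
import math

def cosine(u, v):
    # Cosine similarity between two non-zero vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def quality_aggregate(local_params, sizes, qualities, eta=0.5):
    """Weight each local update by data volume x data quality, filter out
    updates whose cosine similarity to the preliminary aggregate deviates
    from the median by more than eta, then re-aggregate the survivors."""
    def weighted(params, w):
        total = sum(w)
        dim = len(params[0])
        return [sum(p[k] * wi for p, wi in zip(params, w)) / total
                for k in range(dim)]

    w = [n * q for n, q in zip(sizes, qualities)]
    prelim = weighted(local_params, w)              # first-pass aggregate
    sims = [cosine(p, prelim) for p in local_params]
    med = sorted(sims)[len(sims) // 2]              # median similarity
    keep = [i for i, s in enumerate(sims) if abs(s - med) <= eta]
    return weighted([local_params[i] for i in keep], [w[i] for i in keep])
```

With four nodes where one submits an update pointing opposite to the majority, the outlier's similarity falls far below the median and it is excluded from the final aggregate.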
The present invention also provides a computer system, including a cloud platform, where the cloud platform includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of any of the above embodiments when executing the computer program.
The invention was verified by simulation experiments as follows:
The simulation system is built on the widely used PyTorch 1.4.0 software environment and simulates a real federated learning scenario in which the bid price, computing power, and data quality of the distributed nodes vary. The simulation experiments use 6 common learning models: Multi-Layer Perceptron (MLP), LeNet, MobileNet, VGG-11, EfficientNet-B0, and ResNet-18, and 4 datasets: MNIST, Fashion-MNIST (FMNIST), CIFAR-10, and Street View House Numbers (SVHN). MNIST is a dataset of handwritten digits and FMNIST is a dataset of Zalando's fashion merchandise pictures; both have a training set of 60,000 examples and a test set of 10,000 examples. The CIFAR-10 dataset consists of 50,000 training images and 10,000 test images in 10 classes. SVHN is a real-world house number image dataset with 73,000 training images and 26,000 test images.
Experimental parameters: there are 4 experimental parameter settings, denoted Setting I, Setting II, Setting III, and Setting IV. Each setting specifies the number of nodes and the number of learning tasks in each iteration, together with the learning budget of each learning task. The bid price and the training data amount of each node i in each iteration are drawn from uniform distributions over the corresponding ranges, and, among the nodes, Ne nodes have a proportion Re of error-label data in their datasets.
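The error-label proportion Re used in these settings can be reproduced by symmetric label flipping. A minimal sketch under assumed names (`corrupt_labels` is hypothetical and is not part of the patented method):

```python
import random

def corrupt_labels(labels, error_rate, num_classes=10, seed=0):
    """Flip a fraction `error_rate` of labels to a different random class,
    simulating a node whose dataset contains error-label data."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_err = int(len(labels) * error_rate)
    for idx in rng.sample(range(len(labels)), n_err):
        # Draw from num_classes - 1 candidates and skip the true label,
        # so a "flipped" label is always actually wrong.
        wrong = rng.randrange(num_classes - 1)
        noisy[idx] = wrong if wrong < noisy[idx] else wrong + 1
    return noisy
```

Applying this with `error_rate=0.5` to a node's label list yields a dataset in which exactly half of the labels are wrong, matching the "50% error label data" scenarios in the experiments.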
FIG. 3 shows a comparison of the model accuracy of the quality-aware incentive mechanism (FAIR) designed by the present invention with that of other incentive mechanisms; parameter Setting I is used in this experiment. The MLP and LeNet models are trained on the MNIST and FMNIST datasets, respectively, the MobileNet and VGG models are trained on the CIFAR-10 dataset, and the EfficientNet and ResNet models are trained on the SVHN dataset. FAIR and the other incentive mechanisms all use the FA model aggregation algorithm, and the model accuracy after 30 iterations is shown in FIG. 3. It can be seen that the FAIR mechanism designed by the present invention achieves near-optimal (theoretically optimal) performance and has a significant advantage over the other two incentive mechanisms. For example, for the ResNet model, the Knapsack Greedy and Bid Price First mechanisms reach 45% accuracy, while FAIR reaches 76% accuracy, improving performance by 68.9%.
FIG. 4 compares the reduction of the loss function value under different incentive mechanisms in a multi-learning-task scenario; parameter Setting II is used in this experiment. Each iteration has four learning tasks: the MNIST dataset trains an MLP model, the FMNIST dataset trains a LeNet model, the CIFAR-10 dataset trains a MobileNet model, and the SVHN dataset trains an EfficientNet model. It can be seen that after 30 iterations FAIR reaches a 40% reduction in the loss function value, while the other two incentive mechanisms reach 30% and 25%, respectively, so FAIR improves performance by 33.3% and 60%.
FIG. 5 shows the results of comparing the model aggregation algorithm (FAIR) designed by the present invention with the Federated Averaging (FA) algorithm. The experiment simulates 10 computing nodes and four different scenarios: a) Clean (all data labels correct): all nodes train normally on unmodified datasets, and the training data volume of each node follows a uniform distribution over the range [1000, 10000]; b) Noisy (some nodes have label errors): among the 10 nodes, the datasets of 5 nodes are clean and 50% of the labels in the datasets of the other 5 nodes are wrong; c) Error (some nodes have label errors): among the 10 nodes, the datasets of 7 nodes are clean and the dataset labels of the other 3 nodes are wrong; d) Attack (malicious nodes exist): one of the 10 nodes submits arbitrary model parameter updates, and the other nodes train normally. In scenarios b), c), and d) the training data volume of each node is fixed in each iteration. As can be seen from FIG. 5, after 30 iterations the FAIR model aggregation algorithm outperforms the FA algorithm in all scenarios on almost all models and datasets, and it is more robust: when the quality of the model updates decreases, the aggregation performance of the FA algorithm drops sharply, whereas the FAIR algorithm continues to operate normally. For example, when training the LeNet model on the MNIST dataset, the accuracy of the FA algorithm drops from 94.42% (Clean) to 12.72% (Attack), while FAIR only drops from 95.6% to 82.71%.
FIGS. 6 and 7 show the performance of the complete FAIR method (including the incentive mechanism and the model aggregation algorithm) designed by the present invention. FIG. 6a) compares the accuracy of the MLP model under different MNIST training data qualities; FIG. 6b) compares the accuracy of the LeNet model under different FMNIST training data qualities; FIG. 6c) compares the accuracy of the MobileNet model under different CIFAR-10 training data qualities; FIG. 6d) compares the accuracy of the EfficientNet model under different SVHN training data qualities. This experiment uses parameter Setting III, and the Knapsack Greedy incentive mechanism uses the FA model aggregation algorithm. The experiment simulates 20 nodes, fixes the training data volume of each node in each iteration, and continuously varies the noise level of 10 of the nodes, where the noise level is the percentage of the 20 nodes that have 50% error-label data. The learning budget of each iteration is set to 10, and the model accuracy after 30 iterations is shown in FIG. 6. It can be seen that FAIR outperforms the Knapsack Greedy mechanism in almost all settings, with a significant performance gap when the noise level is in the range of 20% to 80%. Moreover, although the learning quality of both mechanisms decreases as the noise level increases, the performance of the Knapsack Greedy mechanism already drops sharply at low noise levels, while the performance of FAIR remains stable at low noise levels.
FIGS. 7a), 7b), and 7c) show the reduction of the model loss function value when the learning budget of each task in each iteration is 10, 20, and 30, respectively; parameter Setting IV is used in this experiment. The experiment simulates 100 distributed nodes and 4 learning tasks in each iteration: the MLP model is trained on the MNIST dataset, the LeNet model on the FMNIST dataset, the MobileNet model on the CIFAR-10 dataset, and the EfficientNet model on the SVHN dataset. It can be seen that, for all mechanisms, the learning quality improves as the learning budget increases, and that over 30 iterations FAIR achieves better performance than the other mechanisms.
In conclusion, the quality of the federated learning model is improved through the quality-aware incentive mechanism and the model aggregation mechanism. The method mainly comprises three parts: learning quality prediction, a quality-aware incentive mechanism, and efficient model aggregation. For learning quality prediction, the learning quality of a user is predicted from its historical learning quality records. For the quality-aware incentive mechanism, within a limited federated learning budget, the system models a reverse auction problem to encourage high-quality, low-cost users to participate in federated learning. For model aggregation, an aggregation algorithm is designed that takes the quality of model updates into account during aggregation and filters out undesirable model updates, so as to further optimize the quality of the aggregated federated learning model. The invention can provide richer data and more computing power for model training while protecting data privacy, thereby improving model quality and providing users with better machine-learning-based intelligent services, such as autonomous driving and speech recognition.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A quality-aware edge intelligent federated learning method, wherein in each iteration a cloud platform publishes a group of learning tasks and provides the learning budget of each learning task so as to recruit suitable nodes to cooperatively train the models; each node downloads the global model, trains the model with its local data, and uploads its model update to the cloud platform; the model updates are aggregated to update the global model; the method is characterized in that, in this process, the cloud platform formulates a federated learning quality optimization problem whose optimization goal is to maximize the sum of the aggregated model qualities of the multiple learning tasks in each iteration, and the federated learning quality optimization problem is solved by the following steps:
in each iteration, predicting the learning quality of the participating nodes from their historical learning quality records, wherein the learning quality of a node's training data is quantified by the reduction of the loss function value in each iteration;
in each iteration, the cloud platform incentivizing nodes with high learning quality to participate in federated learning through a reverse auction mechanism, so as to perform the allocation of learning tasks and learning rewards;
in each iteration, for each learning task, each participating node uploads the local model parameters to the cloud platform, and the cloud platform aggregates a plurality of local models to obtain a global model.
2. The quality-aware edge intelligent federated learning method of claim 1, wherein the cloud platform aggregates the plurality of local model parameters to obtain the global model parameters by filtering out low-quality local model updates according to the training data volume and the training data quality of each participating node, and aggregating the high-quality model updates to obtain the aggregated global model.
3. The quality-aware edge intelligent federated learning method of claim 1 or 2, wherein, with the optimization goal of maximizing the sum of the aggregated model qualities of the multiple learning tasks in each iteration, the federated learning quality optimization problem is as follows:
wherein the symbols denote, respectively: the set of nodes in the federation, with i denoting the i-th node of the set; the set of learning tasks published by the cloud platform in each iteration t, with j denoting the j-th learning task; the set of tasks that node i can participate in in the t-th iteration; and the learning budget, provided by the cloud platform, of each learning task in iteration t; a binary variable is used to mark whether a task is assigned to node i in iteration t, a value of 1 meaning that the task is assigned to the node and 0 otherwise; a further variable denotes the reward node i receives for performing a task; the optimization variables represent the learning task assignment result and the learning reward distribution result in iteration t, and f() is the model aggregation function;
constraint (3) indicates that, for each learning task, the sum of the rewards paid to the participating nodes cannot exceed the learning budget provided by the task publisher;
constraint (4) indicates that only learning tasks a node is able to participate in can be assigned to that node;
constraint (5) restricts each node to participating in at most one learning task in each iteration.
4. The quality-aware edge intelligent federated learning method of claim 3, wherein incentivizing nodes with high learning quality to participate in federated learning through a reverse auction mechanism comprises the following steps:
each computing node i submits its own bid information to the cloud platform, the bid information including the learning tasks that node i can participate in and its bid price for participating in each such task; the cloud platform then, with the optimization goal of maximizing the sum of the learning quality prediction values of the selected nodes, selects a set of participating nodes for each learning task and determines their learning rewards.
5. The quality-aware edge intelligent federated learning method of claim 4, wherein the learning quality maximization problem that the cloud platform optimizes, with the goal of maximizing the sum of the learning quality prediction values of the selected nodes, is as follows:
wherein the input of the learning quality maximization problem is, for each node i, the set of tasks it can participate in, its bid prices, the learning budgets, and the learning quality prediction values; the output is a binary variable for each node-task pair: if the variable equals 1, node i is added to the set of participating nodes of the learning task, that is, node i is assigned to perform that learning task; in addition, a reward is output for each participating node;
constraint (14) requires that the reward for each participating node be higher than that node's training cost; the other constraints are the same as in the federated learning quality optimization problem.
6. The quality-aware edge intelligent federated learning method of claim 5, wherein the learning quality maximization problem is solved by a greedy algorithm as follows:
in each iteration t, the algorithm first finds, for each learning task and according to the nodes' bid information, the set of candidate nodes that can participate in the task, and then executes the main loop until either no node can participate in any learning task or all learning tasks have been assigned to suitable nodes;
7. The quality-aware edge intelligent federated learning method of claim 6, wherein the greedy algorithm further comprises:
sorting the nodes in descending order of the ratio of predicted learning quality to bid price, this ratio serving as the ranking index of node i; and
greedily adding node i to the set of participating nodes according to the ranking until the total reward paid to the participating nodes would exceed the learning budget, wherein the reward paid to participating node i is the highest bid price b'i,j with which node i could still win the bidding competition.
8. The quality-aware edge intelligent federated learning method of claim 2, wherein, when the cloud platform aggregates the plurality of local models to obtain the global model, given in iteration t the participating nodes of the task set and their model parameters, the aggregated model parameters are calculated in the following manner:
9. The quality-aware edge intelligent federated learning method of claim 8, wherein the filtering out low-quality local model updates comprises the steps of:
aggregating the local model update parameters received from the participating nodes using equation (16) to obtain the aggregated model parameters, and calculating the cosine similarity between the local model parameters of each node and the aggregated parameters;
calculating the mean, the median, and the standard deviation σd of the cosine similarities, and making the following comparison:
when the cosine similarity of a local model update deviates from the median of the similarities by more than η, deeming that update an unqualified low-quality update, wherein η is a preset threshold controlling the acceptable range;
10. A computer system comprising a cloud platform including a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010590843.7A CN111754000B (en) | 2020-06-24 | 2020-06-24 | Quality-aware edge intelligent federal learning method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111754000A true CN111754000A (en) | 2020-10-09 |
CN111754000B CN111754000B (en) | 2022-10-14 |
Family
ID=72677167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010590843.7A Active CN111754000B (en) | 2020-06-24 | 2020-06-24 | Quality-aware edge intelligent federal learning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111754000B (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112181971A (en) * | 2020-10-27 | 2021-01-05 | 华侨大学 | Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium |
CN112216401A (en) * | 2020-10-12 | 2021-01-12 | 中国石油大学(华东) | DF-based new coronary pneumonia federal learning detection method |
CN112232519A (en) * | 2020-10-15 | 2021-01-15 | 成都数融科技有限公司 | Joint modeling method based on federal learning |
CN112261137A (en) * | 2020-10-22 | 2021-01-22 | 江苏禹空间科技有限公司 | Model training method and system based on joint learning |
CN112287990A (en) * | 2020-10-23 | 2021-01-29 | 杭州卷积云科技有限公司 | Model optimization method of edge cloud collaborative support vector machine based on online learning |
CN112348204A (en) * | 2020-11-05 | 2021-02-09 | 大连理工大学 | Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology |
CN112508205A (en) * | 2020-12-04 | 2021-03-16 | 中国科学院深圳先进技术研究院 | Method, device and system for scheduling federated learning |
CN112532746A (en) * | 2020-12-21 | 2021-03-19 | 北京邮电大学 | Cloud edge cooperative sensing method and system |
CN112637883A (en) * | 2020-12-09 | 2021-04-09 | 深圳智芯微电子科技有限公司 | Federal learning method with robustness to wireless environment change in power Internet of things |
CN112668128A (en) * | 2020-12-21 | 2021-04-16 | 国网辽宁省电力有限公司物资分公司 | Method and device for selecting terminal equipment nodes in federated learning system |
CN112836822A (en) * | 2021-02-26 | 2021-05-25 | 浙江工业大学 | Federal learning strategy optimization method and device based on width learning |
CN112926088A (en) * | 2021-03-18 | 2021-06-08 | 之江实验室 | Federal learning privacy policy selection method based on game theory |
CN113139600A (en) * | 2021-04-23 | 2021-07-20 | 广东安恒电力科技有限公司 | Intelligent power grid equipment anomaly detection method and system based on federal learning |
CN113157434A (en) * | 2021-02-26 | 2021-07-23 | 西安电子科技大学 | Excitation method and system for user node of horizontal federated learning system |
CN113222031A (en) * | 2021-05-19 | 2021-08-06 | 浙江大学 | Photolithographic hot zone detection method based on federal personalized learning |
CN113312847A (en) * | 2021-06-07 | 2021-08-27 | 北京大学 | Privacy protection method and system based on cloud-edge computing system |
CN113361694A (en) * | 2021-06-30 | 2021-09-07 | 哈尔滨工业大学 | Layered federated learning method and system applying differential privacy protection |
CN113360514A (en) * | 2021-07-02 | 2021-09-07 | 支付宝(杭州)信息技术有限公司 | Method, device and system for jointly updating model |
CN113379294A (en) * | 2021-06-28 | 2021-09-10 | 武汉大学 | Task deployment method based on federal learning participation user auction incentive mechanism |
CN113435534A (en) * | 2021-07-09 | 2021-09-24 | 新智数字科技有限公司 | Data heterogeneous processing method and device based on similarity measurement, computer equipment and computer readable storage medium |
CN113537511A (en) * | 2021-07-14 | 2021-10-22 | 中国科学技术大学 | Automatic gradient quantization federal learning framework and method |
CN113537518A (en) * | 2021-07-19 | 2021-10-22 | 哈尔滨工业大学 | Model training method and device based on federal learning, equipment and storage medium |
CN113591486A (en) * | 2021-07-29 | 2021-11-02 | 浙江大学 | Forgetting verification method based on semantic data loss in federated learning |
CN113642737A (en) * | 2021-08-12 | 2021-11-12 | 广域铭岛数字科技有限公司 | Federal learning method and system based on automobile user data |
CN113645197A (en) * | 2021-07-20 | 2021-11-12 | 华中科技大学 | Decentralized federal learning method, device and system |
CN113660304A (en) * | 2021-07-07 | 2021-11-16 | 北京邮电大学 | Unmanned aerial vehicle group distributed learning resource control method based on bidirectional auction game |
CN113689003A (en) * | 2021-08-10 | 2021-11-23 | 华东师范大学 | Safe mixed federal learning framework and method for removing third party |
CN113887748A (en) * | 2021-12-07 | 2022-01-04 | 浙江师范大学 | Online federal learning task allocation method and device, and federal learning method and system |
CN114065863A (en) * | 2021-11-18 | 2022-02-18 | 北京百度网讯科技有限公司 | Method, device and system for federal learning, electronic equipment and storage medium |
WO2022089507A1 (en) * | 2020-10-28 | 2022-05-05 | 索尼集团公司 | Electronic device and method for federated learning |
CN114496274A (en) * | 2021-12-08 | 2022-05-13 | 杭州趣链科技有限公司 | Byzantine robust federated learning method based on block chain and application |
CN114513270A (en) * | 2022-03-07 | 2022-05-17 | 苏州大学 | Heterogeneous wireless network spectrum resource sensing method and system based on federal learning |
CN114648131A (en) * | 2022-03-22 | 2022-06-21 | 中国电信股份有限公司 | Federal learning method, device, system, equipment and medium |
WO2022130098A1 (en) * | 2020-12-15 | 2022-06-23 | International Business Machines Corporation | Federated learning for multi-label classification model for oil pump management |
CN114819183A (en) * | 2022-04-15 | 2022-07-29 | 支付宝(杭州)信息技术有限公司 | Model gradient confirmation method, device, equipment and medium based on federal learning |
WO2022217784A1 (en) * | 2021-04-15 | 2022-10-20 | 腾讯云计算(北京)有限责任公司 | Data processing methods and apparatus, device, and medium |
CN115640852A (en) * | 2022-09-09 | 2023-01-24 | 湖南工商大学 | Federal learning participation node selection optimization method, and federal learning method and system |
CN116520814A (en) * | 2023-07-03 | 2023-08-01 | 清华大学 | Equipment fault prediction method and device based on federal learning under cloud edge cooperative architecture |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109872058A (en) * | 2019-01-31 | 2019-06-11 | 南京工业大学 | A kind of multimedia intelligent perception motivational techniques for machine learning system |
CN110189174A (en) * | 2019-05-29 | 2019-08-30 | 南京工业大学 | A kind of mobile intelligent perception motivational techniques based on quality of data perception |
CN110717671A (en) * | 2019-10-08 | 2020-01-21 | 深圳前海微众银行股份有限公司 | Method and device for determining contribution degree of participants |
CN111178524A (en) * | 2019-12-24 | 2020-05-19 | 中国平安人寿保险股份有限公司 | Data processing method, device, equipment and medium based on federal learning |
CN111222646A (en) * | 2019-12-11 | 2020-06-02 | 深圳逻辑汇科技有限公司 | Design method and device of federal learning mechanism and storage medium |
Non-Patent Citations (1)
Title |
---|
JIAWEN KANG等: "Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory", 《IEEE INTERNET OF THINGS JOURNAL》 * |
Cited By (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112216401A (en) * | 2020-10-12 | 2021-01-12 | 中国石油大学(华东) | DF-based new coronary pneumonia federal learning detection method |
CN112232519A (en) * | 2020-10-15 | 2021-01-15 | 成都数融科技有限公司 | Joint modeling method based on federal learning |
CN112232519B (en) * | 2020-10-15 | 2024-01-09 | 成都数融科技有限公司 | Joint modeling method based on federal learning |
CN112261137B (en) * | 2020-10-22 | 2022-06-14 | 无锡禹空间智能科技有限公司 | Model training method and system based on joint learning |
CN112261137A (en) * | 2020-10-22 | 2021-01-22 | 江苏禹空间科技有限公司 | Model training method and system based on joint learning |
CN112287990A (en) * | 2020-10-23 | 2021-01-29 | 杭州卷积云科技有限公司 | Model optimization method of edge cloud collaborative support vector machine based on online learning |
CN112287990B (en) * | 2020-10-23 | 2023-06-30 | 杭州卷积云科技有限公司 | Model optimization method of edge cloud collaborative support vector machine based on online learning |
CN112181971A (en) * | 2020-10-27 | 2021-01-05 | 华侨大学 | Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium |
WO2022089507A1 (en) * | 2020-10-28 | 2022-05-05 | 索尼集团公司 | Electronic device and method for federated learning |
CN112348204A (en) * | 2020-11-05 | 2021-02-09 | 大连理工大学 | Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology |
CN112508205A (en) * | 2020-12-04 | 2021-03-16 | 中国科学院深圳先进技术研究院 | Method, device and system for scheduling federated learning |
CN112637883A (en) * | 2020-12-09 | 2021-04-09 | 深圳智芯微电子科技有限公司 | Federal learning method with robustness to wireless environment change in power Internet of things |
CN112637883B (en) * | 2020-12-09 | 2023-04-28 | 深圳智芯微电子科技有限公司 | Federal learning method with robustness to wireless environment change in electric power Internet of things |
WO2022130098A1 (en) * | 2020-12-15 | 2022-06-23 | International Business Machines Corporation | Federated learning for multi-label classification model for oil pump management |
CN112532746A (en) * | 2020-12-21 | 2021-03-19 | 北京邮电大学 | Cloud edge cooperative sensing method and system |
CN112668128A (en) * | 2020-12-21 | 2021-04-16 | 国网辽宁省电力有限公司物资分公司 | Method and device for selecting terminal equipment nodes in federated learning system |
CN112532746B (en) * | 2020-12-21 | 2021-10-26 | 北京邮电大学 | Cloud edge cooperative sensing method and system |
CN112668128B (en) * | 2020-12-21 | 2024-05-28 | 国网辽宁省电力有限公司物资分公司 | Method and device for selecting terminal equipment nodes in federal learning system |
CN112836822A (en) * | 2021-02-26 | 2021-05-25 | 浙江工业大学 | Federal learning strategy optimization method and device based on width learning |
CN112836822B (en) * | 2021-02-26 | 2024-05-28 | 浙江工业大学 | Federal learning strategy optimization method and device based on width learning |
CN113157434A (en) * | 2021-02-26 | 2021-07-23 | 西安电子科技大学 | Excitation method and system for user node of horizontal federated learning system |
CN113157434B (en) * | 2021-02-26 | 2024-05-07 | 西安电子科技大学 | Method and system for exciting user nodes of transverse federal learning system |
CN112926088A (en) * | 2021-03-18 | 2021-06-08 | 之江实验室 | Federal learning privacy policy selection method based on game theory |
CN112926088B (en) * | 2021-03-18 | 2024-03-19 | 之江实验室 | Federal learning privacy policy selection method based on game theory |
WO2022217784A1 (en) * | 2021-04-15 | 2022-10-20 | 腾讯云计算(北京)有限责任公司 | Data processing methods and apparatus, device, and medium |
CN113139600A (en) * | 2021-04-23 | 2021-07-20 | 广东安恒电力科技有限公司 | Intelligent power grid equipment anomaly detection method and system based on federal learning |
CN113222031A (en) * | 2021-05-19 | 2021-08-06 | 浙江大学 | Photolithographic hot zone detection method based on federal personalized learning |
CN113312847A (en) * | 2021-06-07 | 2021-08-27 | 北京大学 | Privacy protection method and system based on cloud-edge computing system |
CN113379294B (en) * | 2021-06-28 | 2022-07-05 | 武汉大学 | Task deployment method based on an auction incentive mechanism for federated learning participants |
CN113379294A (en) * | 2021-06-28 | 2021-09-10 | 武汉大学 | Task deployment method based on an auction incentive mechanism for federated learning participants |
CN113361694A (en) * | 2021-06-30 | 2021-09-07 | 哈尔滨工业大学 | Layered federated learning method and system applying differential privacy protection |
CN113361694B (en) * | 2021-06-30 | 2022-03-15 | 哈尔滨工业大学 | Layered federated learning method and system applying differential privacy protection |
CN113360514B (en) * | 2021-07-02 | 2022-05-17 | 支付宝(杭州)信息技术有限公司 | Method, device and system for jointly updating model |
CN113360514A (en) * | 2021-07-02 | 2021-09-07 | 支付宝(杭州)信息技术有限公司 | Method, device and system for jointly updating model |
CN113660304A (en) * | 2021-07-07 | 2021-11-16 | 北京邮电大学 | Distributed learning resource control method for UAV swarms based on a double auction game |
CN113435534A (en) * | 2021-07-09 | 2021-09-24 | 新智数字科技有限公司 | Data heterogeneous processing method and device based on similarity measurement, computer equipment and computer readable storage medium |
CN113537511B (en) * | 2021-07-14 | 2023-06-20 | 中国科学技术大学 | Automatic gradient quantization federated learning device and method |
CN113537511A (en) * | 2021-07-14 | 2021-10-22 | 中国科学技术大学 | Automatic gradient quantization federated learning framework and method |
CN113537518A (en) * | 2021-07-19 | 2021-10-22 | 哈尔滨工业大学 | Model training method, device, equipment and storage medium based on federated learning |
CN113645197B (en) * | 2021-07-20 | 2022-04-29 | 华中科技大学 | Decentralized federal learning method, device and system |
CN113645197A (en) * | 2021-07-20 | 2021-11-12 | 华中科技大学 | Decentralized federal learning method, device and system |
CN113591486B (en) * | 2021-07-29 | 2022-08-23 | 浙江大学 | Forgetting verification method based on semantic data loss in federated learning |
CN113591486A (en) * | 2021-07-29 | 2021-11-02 | 浙江大学 | Forgetting verification method based on semantic data loss in federated learning |
CN113689003A (en) * | 2021-08-10 | 2021-11-23 | 华东师范大学 | Secure hybrid federated learning framework and method with the third party removed |
CN113689003B (en) * | 2021-08-10 | 2024-03-22 | 华东师范大学 | Hybrid federated learning framework and method for securely removing the third party |
CN113642737A (en) * | 2021-08-12 | 2021-11-12 | 广域铭岛数字科技有限公司 | Federated learning method and system based on automobile user data |
CN113642737B (en) * | 2021-08-12 | 2024-03-05 | 广域铭岛数字科技有限公司 | Federated learning method and system based on automobile user data |
CN114065863B (en) * | 2021-11-18 | 2023-08-29 | 北京百度网讯科技有限公司 | Federated learning method, apparatus, system, electronic device and storage medium |
CN114065863A (en) * | 2021-11-18 | 2022-02-18 | 北京百度网讯科技有限公司 | Federated learning method, apparatus, system, electronic device and storage medium |
CN113887748A (en) * | 2021-12-07 | 2022-01-04 | 浙江师范大学 | Online federal learning task allocation method and device, and federal learning method and system |
CN113887748B (en) * | 2021-12-07 | 2022-03-01 | 浙江师范大学 | Online federal learning task allocation method and device, and federal learning method and system |
CN114496274A (en) * | 2021-12-08 | 2022-05-13 | 杭州趣链科技有限公司 | Byzantine-robust federated learning method based on blockchain and application thereof |
CN114513270A (en) * | 2022-03-07 | 2022-05-17 | 苏州大学 | Heterogeneous wireless network spectrum resource sensing method and system based on federated learning |
CN114648131A (en) * | 2022-03-22 | 2022-06-21 | 中国电信股份有限公司 | Federated learning method, device, system, equipment and medium |
CN114819183A (en) * | 2022-04-15 | 2022-07-29 | 支付宝(杭州)信息技术有限公司 | Model gradient confirmation method, device, equipment and medium based on federated learning |
CN115640852B (en) * | 2022-09-09 | 2023-06-09 | 湖南工商大学 | Federated learning participant node selection optimization method, federated learning method and federated learning system |
CN115640852A (en) * | 2022-09-09 | 2023-01-24 | 湖南工商大学 | Federated learning participant node selection optimization method, and federated learning method and system |
CN116520814A (en) * | 2023-07-03 | 2023-08-01 | 清华大学 | Equipment fault prediction method and device based on federated learning under a cloud-edge collaborative architecture |
CN116520814B (en) * | 2023-07-03 | 2023-09-05 | 清华大学 | Equipment fault prediction method and device based on federated learning under a cloud-edge collaborative architecture |
Also Published As
Publication number | Publication date |
---|---|
CN111754000B (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111754000B (en) | Quality-aware edge intelligent federal learning method and system | |
Lu et al. | Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning | |
CN113434212B (en) | Cache-assisted cooperative task offloading and resource allocation method based on meta reinforcement learning
CN110189174A (en) | Mobile crowd sensing incentive method based on data quality awareness
CN113191484A (en) | Federal learning client intelligent selection method and system based on deep reinforcement learning | |
CN111770454B (en) | Game-theoretic method for location privacy protection and platform task allocation in mobile crowd sensing
Wu et al. | Mobility-aware deep reinforcement learning with glimpse mobility prediction in edge computing | |
CN113869528B (en) | Disentangled personalized federated learning method for consensus representation extraction and diversity propagation
CN113760511B (en) | Vehicular edge computing task offloading method based on a deep deterministic policy
CN111626563B (en) | Dual-objective robust mobile crowd sensing system and incentive method thereof
Pang et al. | An incentive auction for heterogeneous client selection in federated learning | |
Tang et al. | Credit and quality intelligent learning based multi-armed bandit scheme for unknown worker selection in multimedia MCS | |
CN112148986A (en) | Crowdsourcing-based top-N service re-recommendation method and system | |
CN113256335B (en) | Data screening method, multimedia data delivery effect prediction method and device | |
CN111464620B (en) | Edge-assisted mobile crowd sensing truth discovery system and incentive method thereof
KR20220150126A (en) | Coded and Incentive-based Mechanism for Distributed Training of Machine Learning in IoT | |
CN111510473B (en) | Access request processing method and device, electronic equipment and computer readable medium | |
CN117202264A (en) | Computation offloading method for 5G network slicing in an MEC environment
Wang et al. | Social-aware clustered federated learning with customized privacy preservation | |
CN116009990B (en) | Cloud-edge collaborative meta reinforcement learning computation offloading method based on a broad attention mechanism
Singhal et al. | Greedy Shapley Client Selection for Communication-Efficient Federated Learning | |
Yang et al. | Asynchronous Federated Learning with Incentive Mechanism Based on Contract Theory | |
CN117829274B (en) | Model fusion method, device, equipment, federated learning system and storage medium
CN116029370B (en) | Data sharing incentive method, device and equipment based on blockchain federated learning
Ercetin et al. | Yardstick competition regulation for incentive mechanisms in federated learning: balancing cost optimization and fairness |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||