CN116362328A - Federated learning heterogeneous model aggregation method based on fairness feature representation - Google Patents

Info

Publication number
CN116362328A
CN116362328A (application CN202310418128.9A)
Authority
CN
China
Prior art keywords
model
client
filter
layer
heterogeneous
Prior art date
Legal status
Pending
Application number
CN202310418128.9A
Other languages
Chinese (zh)
Inventor
张萌
张盛兵
杨佳莹
杨钊
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202310418128.9A priority Critical patent/CN116362328A/en
Publication of CN116362328A publication Critical patent/CN116362328A/en
Pending legal-status Critical Current

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing arrangements based on specific computational models
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/098: Distributed learning, e.g. federated learning
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in wireless communication networks

Abstract

The invention provides a federated learning heterogeneous model aggregation method based on fairness feature representation, belonging to the technical field of machine learning. It mainly comprises: a feature-matching metric; dynamic selection of communication parameters; and heterogeneous model aggregation. The method quantifies the feature mismatch caused by the differing training processes that heterogeneous data and heterogeneous computing resources produce, uploads local parameters with different feature representations fairly, and greatly reduces the influence of unordered feature extraction across devices through a model aggregation method with dynamic feature matching, thereby avoiding overfitting of the collaboratively aggregated global model.

Description

Federated learning heterogeneous model aggregation method based on fairness feature representation
Technical Field
The invention belongs to the technical field of machine learning, and in particular relates to a federated learning heterogeneous model aggregation method based on fairness feature representation.
Background
Federated learning is an emerging distributed machine learning paradigm in which model parameters are trained locally on each edge device, transmitted to a central server for aggregation, and the aggregated global model parameters are then returned to each device for further training. Sharing training parameters rather than local data protects data privacy while making effective use of the computing resources of the edge devices.
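The local-train / aggregate / redistribute loop described above can be sketched in a few lines (a minimal FedAvg-style round; the function names and the toy `local_update` are illustrative assumptions, not part of the patent):

```python
import numpy as np

def fedavg_round(global_w, client_datasets, local_update):
    """One illustrative federated round: each client starts from the
    current global weights, runs a local update on its private data,
    and only the resulting weights (never the raw data) are sent to
    the server, which averages them into the next global model."""
    client_weights = [local_update(global_w.copy(), data)
                      for data in client_datasets]
    return np.mean(client_weights, axis=0)

# Toy usage: two clients whose "update" just adds their data vector.
new_w = fedavg_round(np.zeros(2),
                     [np.array([1.0, 1.0]), np.array([3.0, 3.0])],
                     lambda w, d: w + d)
```

Only the averaged weights leave the server again, which is what gives federated learning its privacy advantage over centralized training.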
However, when different users use mobile edge devices in different scenarios, the data collected by the devices are statistically heterogeneous and not independent and identically distributed. In addition, mobile edge devices span more than 2,000 different systems-on-chip, their available computing resources vary widely, and device mobility causes the communication bandwidth to fluctuate. This heterogeneity of data and devices hinders the generalization of the global model and poses a challenge to aggregating the heterogeneous models of federated learning nodes.
Therefore, a federated learning heterogeneous model aggregation method based on fairness features is needed to overcome the above shortcomings.
Disclosure of Invention
To solve the problem that the heterogeneity of data and devices hinders the generalization of the global model, the invention provides a federated learning heterogeneous model aggregation method based on fairness feature representation.
In order to achieve the above object, the present invention provides the following technical solutions:
A federated learning heterogeneous model aggregation method based on fairness feature representation comprises the steps described below.
The federated learning heterogeneous model aggregation method based on fairness feature representation provided by the invention has the following beneficial effects:
the method quantifies the feature mismatch caused by the differing training processes that heterogeneous data and heterogeneous computing resources produce, uploads local parameters with different feature representations fairly, and greatly reduces the influence of unordered feature extraction across devices through a model aggregation method with dynamic feature matching, thereby avoiding overfitting of the collaboratively aggregated global model.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some of the embodiments of the present invention and other drawings may be made by those skilled in the art without the exercise of inventive faculty.
FIG. 1 is a schematic block diagram of the federated learning heterogeneous model aggregation method based on fairness feature representation according to Embodiment 1 of the present invention;
FIG. 2 is a flow chart of the federated learning heterogeneous model aggregation method based on fairness feature representation according to Embodiment 1 of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the embodiments, so that those skilled in the art can better understand the technical scheme of the present invention and can implement the same. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
For convenience of understanding, the terms involved in the present invention are explained as follows:
Communication overhead: during network transmission, data format conversion inevitably adds redundant data that is necessary for transmission; the proportion of this redundant data relative to the source data is called the overhead.
Example 1
The method addresses the feature mismatch and feature-representation-range mismatch caused by the heterogeneity of the data collected by mobile edge devices and the heterogeneity of their computing resources and communication capabilities. During training, a local machine learning model based on a convolutional neural network is built for each mobile edge device, and a model parameter aggregation method under heterogeneous conditions is provided, thereby improving the convergence and accuracy of federated learning training and the generalization of the global model.
Specifically, the invention provides a federated learning heterogeneous model aggregation method based on fairness feature representation; an implementation schematic is shown in fig. 1 and a flow chart in fig. 2. The method comprises the following processing steps:
step S1: and taking all mobile edge devices (such as mobile phones, intelligent watches, intelligent doorbell and other devices with information acquisition, transmission and processing capabilities) participating in federal learning as clients, establishing a convolutional neural network model of the same network structure, namely a global neural network model (hereinafter referred to as global model), initializing global neural network model parameters by a central server, wherein the network model parameters refer to convolutional neural network convolutional kernel weights, and constructing a unified loss function.
Step S2: the central server quantifies the feature representation of the current global model.
Specifically, step S2 further includes:
step S200, sequentially selecting the ith layer of the global model.
Step S201, selecting filters as the basic structure of the feature representation, and regarding each filter in the i-th layer as a point in Euclidean space.
Step S202, calculating the geometric center (GM) of the i-th layer from the points of the i-th layer's space, as the feature representation of the i-th layer:

GM_i = \arg\min_{x \in \mathbb{R}^{N_i \times K \times K}} \sum_{j=1}^{N_{i+1}} \| x - F_{i,j} \|_2

where GM_i is the geometric center of the i-th layer of the global model, a three-dimensional matrix of size N_i × K × K; N_i and N_{i+1} are the numbers of input and output channels of the i-th layer; K is the size of the convolution kernel; F_{i,j} is the j-th filter of the i-th layer of the global model; R is Euclidean space and x is a point in Euclidean space.
Step S203, judging whether the last layer of the global model has been reached; if so, proceed to step S204, otherwise select the next layer i+1 and return to step S201.
Step S204, feature representation of each layer of the global model is calculated in sequence, and quantization of the feature representation of the global model is achieved.
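The geometric center computed in step S202 is the point minimizing the total Euclidean distance to all filters of the layer. The patent does not specify a solver, so the sketch below uses the classical Weiszfeld iteration, one standard way to approximate such a point (function and parameter names are illustrative):

```python
import numpy as np

def geometric_median(points, n_iter=100, eps=1e-8):
    """Weiszfeld iteration approximating argmin_x sum_j ||x - p_j||_2.
    `points` is (n_filters, d): each row is a flattened filter of one
    layer; the result serves as the layer's feature representation."""
    x = points.mean(axis=0)               # start from the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, eps)      # guard against zero distance
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x
```

For a symmetric cloud of points the centroid already minimizes the distance sum, so the iteration stops immediately; for skewed clouds it pulls the estimate toward the dense region, which is what makes the GM robust to outlier filters.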
Step S3: and the central server transmits the initialized parameters of the global model and the quantized characteristic representation of the global model to each client.
Step S4: after receiving the parameters and feature representation of the global model issued by the central server, each client trains on the heterogeneous data it collects locally to obtain a trained local model. Heterogeneous data refers to data collected by different clients that is not independent and identically distributed: the clients are in different locations, face different tasks, and have different information-collecting capabilities, so the types and amounts of data they collect differ greatly. For example, camera device 1 in fig. 1 may collect more pictures of puppies, while camera device 2 and the doorbell device may collect more pictures of birds and kittens, respectively. Each participant randomly selects part of its own training data as a batch and performs model training on the batch using stochastic gradient descent (SGD), thereby updating its model parameters W.
Step S5: each client compares the feature representations of its local model and the global model, i.e., measures the relationship between the local and global training results after each round of training, and determines the feature matching degree.
Specifically, step S5 further includes:
step S500, sequentially selecting the ith layer of the local model.
Step S501, sequentially selecting the jth filter of the ith layer of the local model.
Step S502, calculating the similarity between the j-th filter of the i-th layer of the local model and the feature representation of the i-th layer of the global model:

d_{i,j}^k = \| F_{i,j}^k - GM_i \|_2

where F_{i,j}^k is the j-th filter of the i-th layer of the local model of the k-th client, and d_{i,j}^k represents the similarity between the geometric center of the i-th layer of the global model and the j-th filter of the i-th layer of the local model of the k-th client; if a filter of an i-th layer is closer to the geometric center of the i-th layer of the global model, the features extracted by that filter are considered more similar to the common features of the global model.
Step S503, judging whether the last filter of the layer has been reached; if so, proceed to step S504, otherwise select the next filter j+1 and return to step S501.
Step S504, judging whether the last layer of the model has been reached; if so, proceed to step S505, otherwise select the next layer i+1, set j=1, and return to step S501.
Step S505, comparing in turn the similarity between each filter of the local model and the feature representation of the global model; if a filter is closer to the feature representation of the global model, the features extracted by that filter are considered more similar to the common features of the global model.
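The per-filter similarity above reduces to a Euclidean distance between each flattened local filter and the layer's global geometric center; a smaller distance means features closer to the global common features. A minimal sketch (array shapes and names are illustrative):

```python
import numpy as np

def filter_distances(local_filters, gm):
    """Distance of each local filter of one layer to the global
    geometric center of the same layer. Rows of `local_filters` are
    filters; `gm` has the shape of a single filter."""
    flat = local_filters.reshape(local_filters.shape[0], -1)
    return np.linalg.norm(flat - gm.reshape(-1), axis=1)
```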
Step S6: the client clusters the filters of each layer using the similarity of the feature representation of each filter of the local model to the global model.
Specifically, step S6 further includes:
in step S600, n distance threshold ranges τ are set.
In step S601, the filters with similarity within the threshold τ are classified into a group, and the filters in the same group are considered to have more similar feature representations, and the client records the group in which each filter is located.
Step S602, selecting the filters with the same proportion in each group, and selecting the filters with different characteristic representations as the filter candidate set F by using fairness k
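Steps S600 to S602 can be sketched as threshold-bucket grouping followed by proportional sampling from every bucket, so that dissimilar feature representations are uploaded fairly (the threshold values, the fraction, and the RNG handling are illustrative assumptions):

```python
import numpy as np

def group_filters(distances, thresholds):
    """Assign each filter to a distance bucket: bucket b holds filters
    whose distance falls below thresholds[b] but not below the previous
    bound; distances past the last bound form an extra overflow group."""
    return np.searchsorted(thresholds, distances, side='left')

def fair_select(groups, fraction, rng):
    """Pick the same fraction of filters from every group (at least one
    per group) so each kind of feature representation is represented."""
    chosen = []
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        k = max(1, int(round(fraction * idx.size)))
        chosen.extend(rng.choice(idx, size=k, replace=False).tolist())
    return sorted(chosen)
```

Sampling the same proportion from each bucket is what keeps the candidate set "fair": rare feature representations are not crowded out by the common ones.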
Step S7: the client evaluates the communication overhead of uploading each set of filter candidates.
Specifically, step S7 further includes:
step S700, setting delay threshold gamma according to the requirement of mobile edge to calculate service quality th
In step S701, the transmission rate from each client k to the central server S is calculated as follows:

C_k = B_k \log_2 \left( 1 + \frac{T_k |h_k|^2}{\sigma^2} \right)

where T_k is the transmit power, \sigma^2 is the variance of the additive white Gaussian noise (AWGN) at the receiver, h_k is the channel parameter, and B_k is the communication bandwidth of the local device k.
Step S702, calculating the data transmission time required for each mobile edge device to upload the different filter candidate sets F_k, as follows:

t_k^{up} = \frac{U_k}{C_k}

where t_k^{up} is the transmission time required for device k to upload data of volume U_k to the server; U_k is the size of the data uploaded by local device k, in 32-bit floating-point format; C_k is the transmission rate from local device k to the server S.
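The rate in step S701 is the Shannon capacity of an AWGN channel, and the upload time of step S702 follows by dividing the payload by that rate. A small sketch (the 32-bit-per-parameter assumption comes from the text; function names and example values are illustrative):

```python
import math

def transmission_rate(bandwidth_hz, tx_power, channel_gain, noise_var):
    """Shannon capacity C_k = B_k * log2(1 + T_k * |h_k|^2 / sigma^2),
    in bits per second."""
    snr = tx_power * abs(channel_gain) ** 2 / noise_var
    return bandwidth_hz * math.log2(1.0 + snr)

def upload_time(num_params, rate_bps):
    """Seconds needed to upload num_params 32-bit floating-point
    parameters at the given rate."""
    return num_params * 32 / rate_bps

# Toy numbers: 1 MHz bandwidth, SNR of 3 -> log2(4) = 2 bits/s/Hz.
rate = transmission_rate(1e6, 3.0, 1.0, 1.0)
```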
Step S8: the client dynamically optimizes the filter parameter selection, namely, the client determines the category of the feature representation based on the feature matching degree, selects the filters with the same proportion in different categories, and dynamically selects the communication parameters according to heterogeneous network environments.
Specifically, step S8 further includes:
step S800, constraint gamma of communication delay th Under this, an optimization problem is constructed to trade-off the size and parameters of the uploaded dataContribution, ensure that both specific communication requirements can be met and model accuracy can be guaranteed. For each device k, a specific dynamic optimization objective is as follows:
Figure BDA0004185680220000062
wherein F is k For the filter candidate set of device k, S is a sparse term implementing the above filter selection,
Figure BDA0004185680220000063
communication time, gamma, which is the corresponding parameter th Is a delay threshold.
Step S801, embedding the constraint into the optimization objective to simplify the computation yields the Lagrangian function:

L(F_k, \lambda) = \mathrm{Loss}(F_k) + S(F_k) + \lambda \left( t^{up}(F_k) - \gamma_{th} \right)

where \lambda \ge 0 is the Lagrange multiplier.
step S802, introducing an auxiliary variable Z to replace a function S and a variable F in communication constraint, and obtaining an augmented Lagrangian function of the problem:
Figure BDA0004185680220000065
where pi is the Lagrangian multiplier, ρ is the penalty parameter and ρ >0.
Step S803, defining \pi_k = \rho u_k gives the scaled form:

L_\rho(F, Z, u) = \mathrm{Loss}(F) + S(Z) + \lambda \left( t^{up}(Z) - \gamma_{th} \right) + \frac{\rho}{2} \| F - Z + u \|_2^2 - \frac{\rho}{2} \| u \|_2^2
in step S804, based on the augmented lagrangian function, the following iterative execution alternate direction multiplier method of the sub-problem may be utilized to solve the filter candidate set that optimizes the dynamic optimization objective, as the data that needs to be uploaded to the central server by each mobile edge device.
Figure BDA0004185680220000071
Figure BDA0004185680220000072
Figure BDA0004185680220000073
Step S805, within the delay constraint γ_th, the client uploads the filter candidate set that optimizes the dynamic objective, together with the group of each filter, thereby ensuring the quality of the edge computing service.
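The F/Z/u update pattern of steps S802 to S804 is the standard scaled-form ADMM splitting. The patent's own Loss and sparsity terms are not spelled out, so the sketch below applies the same three updates to a toy problem with a known closed-form answer (quadratic loss plus an L1 sparsity term); it illustrates the mechanics only, not the patent's exact objective:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse(a, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min 0.5*||x - a||^2 + lam*||z||_1
    subject to x = z, mirroring the F-update / Z-update / multiplier
    update of steps S802-S804."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)   # scaled dual variable (pi = rho * u)
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)   # smooth-loss step
        z = soft_threshold(x + u, lam / rho)    # sparsity (S) step
        u = u + x - z                           # dual ascent step
    return z
```

For this toy objective the exact minimizer is the soft-thresholding of `a` by `lam`, which the iteration recovers; in the patent's setting the Z-step additionally enforces the communication-delay constraint.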
Step S9: the central server performs classified heterogeneous model aggregation, i.e., the central server aggregates parameters with the same or similar feature representation according to the category of the feature representation.
Specifically, step S9 further includes:
in step S900, the central server waits until all clients receive the filter update upload and the filter group.
In step S901, the central server aggregates the filters of the same group in each layer as follows:

w_i = \frac{1}{K} \sum_{k=1}^{K} w_{i,k}

where w_i is the average parameter, K is the number of clients, and w_{i,k} is a communicated parameter.
in step S902, after the parameters are aggregated, the central server updates all the models by using the fused average parameters.
Step S10: steps S2 to S9 are repeated until the set number of iterations is reached.
In summary, the federated learning heterogeneous model aggregation method based on fairness feature representation addresses the mismatched feature representations and mismatched feature-representation ranges that arise when heterogeneous computing resources train on heterogeneous data and upload over heterogeneous network environments. It quantifies the feature representations (i.e., filters) of each participant, measures their distances to the global model's feature representation, finds common features among different filters, selects filters with different feature representations in a balanced way for upload to the central server, and finally lets the central server aggregate the common features of the heterogeneous models separately. The parameter updates of each client are thus used fairly to improve the generalization of the global model, greatly reducing the influence of unordered feature extraction across devices and improving model accuracy.
The above embodiments are merely preferred embodiments of the present invention, the protection scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention disclosed in the present invention belong to the protection scope of the present invention.

Claims (10)

1. A federated learning heterogeneous model aggregation method based on fairness feature representation, characterized by comprising the following steps:
the central server initializes the parameters of the global model, quantifies the feature representation of the global model, and transmits the parameters and the feature representation to each client;
each client trains the received parameters and feature representation on the heterogeneous data it collects locally, obtaining a trained local model;
each client compares the feature representations of the local model and the global model to determine the feature matching degree;
the client clusters the filters of each layer according to the matching degree between each filter of the local model and the feature representation of the global model, obtaining filter candidate sets;
the client evaluates the communication overhead of uploading each group of filter candidate sets to the central server;
the client determines the category of the feature representation based on the feature matching degree, selects filters in the same proportion from the different categories, and dynamically selects the communication parameters according to the heterogeneous network environment;
the central server aggregates communication parameters with the same or similar feature representations according to the category of the feature representation, performing heterogeneous model aggregation.
2. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 1, wherein the global model is a convolutional neural network model and the parameters of the global model are the convolution kernel weights of the convolutional neural network.
3. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 1, wherein quantifying the feature representation of the global model comprises the following specific steps:
the central server takes the filters of each layer of the global model as the basic structure of the quantified feature representation, regards each filter of the i-th layer of the global model as a point in Euclidean space, and calculates the geometric center GM of the i-th layer as the feature representation of that layer, as shown in formula (1):

GM_i = \arg\min_{x \in \mathbb{R}^{N_i \times K \times K}} \sum_{j=1}^{N_{i+1}} \| x - F_{i,j} \|_2    (1)

where GM_i is the geometric center of the i-th layer of the global model, a three-dimensional matrix of size N_i × K × K; N_i and N_{i+1} are the numbers of input and output channels of the i-th layer; K is the size of the convolution kernel; F_{i,j} is the j-th filter of the i-th layer of the global model; R is Euclidean space and x is a point in Euclidean space;
after the central server calculates the geometric center of each layer of the whole global model, the results are issued to each client.
4. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 3, wherein each client compares the feature representations of the local model and the global model to determine the feature matching degree, with the following specific steps:
the client compares the filters of each layer of the local model with the feature representation of the same layer of the global model, i.e., the geometric center in formula (1), and calculates their similarity as shown in formula (2):

d_{i,j}^k = \| F_{i,j}^k - GM_i \|_2    (2)

where F_{i,j}^k is the j-th filter of the i-th layer of the local model of the k-th client, and d_{i,j}^k represents the similarity between the geometric center of the i-th layer of the global model and the j-th filter of the i-th layer of the local model of the k-th client; if a filter of an i-th layer is closer to the geometric center of the i-th layer of the global model, the features extracted by that filter are considered more similar to the common features of the global model;
the similarity assessment is based on the geometric centers: before each upload of the local model parameters, the geometric centers and similarities in formulas (1) and (2) are recomputed in a loop.
5. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 4, wherein the client clusters the filters of each layer according to the matching degree between each filter of the local model and the feature representation of the global model to obtain the filter candidate set, with the following specific steps:
setting n distance threshold ranges τ;
grouping the filters whose similarity falls within the same threshold range τ, filters in the same group being considered to have more similar feature representations, the client recording the group of each filter;
selecting filters in the same proportion from each group, so that filters with different feature representations are selected fairly as the filter candidate set F_k.
6. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 5, wherein the client evaluates the communication overhead of each group of filter candidate sets as follows:
calculating the data transmission time required for each client to upload the different filter candidate sets F_k, ensuring that the parameter upload can complete within the delay threshold γ_th, as shown in formula (3):

t_k^{up} = \frac{U_k}{C_k}    (3)

where t_k^{up} is the transmission time required for the k-th client to upload data of volume U_k to the central server; U_k is the size of the data uploaded by the k-th client; C_k is the transmission rate from the k-th client to the central server, calculated as shown in formula (4):

C_k = B_k \log_2 \left( 1 + \frac{T_k |h_k|^2}{\sigma^2} \right)    (4)

where T_k is the transmit power, \sigma^2 is the variance of the additive white Gaussian noise at the receiver, h_k is the channel parameter, and B_k is the communication bandwidth of the k-th client.
7. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 6, wherein the client determines the category of the feature representation based on the feature matching degree, selects filters in the same proportion from the different categories, and dynamically selects communication parameters according to the heterogeneous network environment, comprising the steps of:
under the communication delay constraint, constructing an optimization problem to trade off the size of the uploaded data against the contribution of the parameters; for each client, the dynamic optimization objective is as shown in formula (5):

\min_{F_k} \; \mathrm{Loss}(F_k) + S(F_k) \quad \text{s.t.} \quad t^{up}(F_k) \le \gamma_{th}    (5)

where Loss is the loss function of the model, F_k is the filter candidate set of the k-th client, S is a sparsity term implementing the filter selection described above, t^{up}(F_k) is the communication time of the corresponding parameters, and γ_th is the delay threshold;
embedding the constraint into formula (5) yields the Lagrangian function shown in formula (6):

L(F_k, \lambda) = \mathrm{Loss}(F_k) + S(F_k) + \lambda \left( t^{up}(F_k) - \gamma_{th} \right)    (6)

where \lambda is the Lagrange multiplier;
introducing an auxiliary variable Z to take over the function S and the variable F in the communication constraint yields the augmented Lagrangian of the problem;
based on the augmented Lagrangian function, the alternating direction method of multipliers is executed to solve for the filter candidate set that optimizes the dynamic optimization objective, which is the data each client needs to upload to the central server.
8. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 7, wherein the central server aggregates communication parameters with the same or similar feature representations according to the category of the feature representation and performs heterogeneous model aggregation, with the following specific steps:
the central server receives, within the delay threshold γ_th, the filters uploaded by all nodes together with the group of each filter;
the central server aggregates the filters of the same group in each layer as shown in formula (7):

w_i = \frac{1}{K} \sum_{k=1}^{K} w_{i,k}    (7)

where w_i is the average parameter, K is the number of clients, and w_{i,k} is a communicated parameter;
after the parameters are aggregated, the average parameters are sent back to the original convolution kernels in the local models, and in the next round both the aggregated parameters and the parameters that were not communicated are trained on the local data.
9. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 1, wherein the heterogeneous data refers to data collected by different clients that is not independent and identically distributed, is located at different positions, and faces different tasks.
10. The federated learning heterogeneous model aggregation method based on fairness feature representation according to claim 1, further comprising the central server constructing a loss function through which the heterogeneous models are supervised during training.
CN202310418128.9A 2023-04-19 2023-04-19 Federated learning heterogeneous model aggregation method based on fairness feature representation Pending CN116362328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310418128.9A CN116362328A (en) 2023-04-19 2023-04-19 Federated learning heterogeneous model aggregation method based on fairness feature representation


Publications (1)

Publication Number Publication Date
CN116362328A true CN116362328A (en) 2023-06-30

Family

ID=86909154




Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597498A (en) * 2023-07-07 2023-08-15 Jinan University Fair face attribute classification method based on blockchain and federated learning
CN116597498B (en) * 2023-07-07 2023-10-24 Jinan University Fair face attribute classification method based on blockchain and federated learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination