CN114202070A - Power data processing method and device, nonvolatile storage medium and processor - Google Patents


Info

Publication number
CN114202070A
CN114202070A (application CN202111398305.9A)
Authority
CN
China
Prior art keywords
model
power terminal
local
global model
electric power
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111398305.9A
Other languages
Chinese (zh)
Inventor
温明时
赵广怀
尹康
郝佳恺
张丽
徐绍军
海天翔
金明
李俊芹
王萍萍
丰雷
李财云
孙东军
刘晓宸
高鹏
赵鲲翔
宋志鸿
王敏昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
State Grid Beijing Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
State Grid Beijing Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Beijing University of Posts and Telecommunications, State Grid Beijing Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202111398305.9A priority Critical patent/CN114202070A/en
Publication of CN114202070A publication Critical patent/CN114202070A/en
Pending legal-status Critical Current

Classifications

    • G06N 3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
    • G06N 20/00 — Machine learning
    • G06N 3/045 — Neural networks; Architecture; Combinations of networks
    • G06Q 10/04 — Administration; Management; Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/06313 — Resource planning in a project environment
    • G06Q 50/06 — ICT specially adapted for specific business sectors; Energy or water supply


Abstract

The invention discloses a power data processing method and device, a nonvolatile storage medium, and a processor. The method comprises: sequentially issuing a first global model to a first number of power terminal devices, where the first global model is used for processing power data; receiving a second number of first local models uploaded by the power terminal devices, where the second number is smaller than the first number and each first local model is obtained by a power terminal device training the first global model on its local power data; and aggregating the first global model with the second number of first local models to obtain a second global model. The method solves the technical problem of low iteration efficiency of models for processing power system data.

Description

Power data processing method and device, nonvolatile storage medium and processor
Technical Field
The invention relates to the technical field of electric power, and in particular to a power data processing method and device, a nonvolatile storage medium, and a processor.
Background
A huge amount of user data is stored in the data processing center of a smart grid. Under a traditional machine learning approach, data are collected from each user's local equipment to the data processing center, where models are then trained. Such models can be used for power load prediction, power consumption prediction, fault diagnosis of power devices, parameter prediction and deduction, power grid system planning and evaluation, and the like.
In this process, users' power terminals must upload raw data, so the private content contained in the data may be exposed and exploited by malicious actors, threatening enterprise information security, national infrastructure security, and the like. Federated learning decouples model training from data uploading: the data storage and model training stages of machine learning are moved to local devices, and only the model is exchanged with the central server, effectively protecting user privacy. However, traditional synchronous federated learning depends on device reliability and suffers from low training efficiency. On one hand, the training efficiency of the whole federated learning system is closely tied to that of the individual devices; differing device training speeds produce a barrel (bottleneck) effect, so the overall training speed is often determined by the model training time of the worst-performing device. On the other hand, if a central node or device node tampers with a model during aggregation, the entire model may collapse.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a power data processing method and device, a nonvolatile storage medium and a processor, which are used for at least solving the technical problem of low model iteration efficiency for processing power system data.
According to an aspect of an embodiment of the present invention, a power data processing method is provided, comprising: sequentially issuing a first global model to a first number of power terminal devices, where the first global model is used for processing power data; receiving a second number of first local models uploaded by the power terminal devices, where the second number is smaller than the first number and each first local model is obtained by a power terminal device training the first global model on its local power data; and aggregating the first global model with the second number of first local models to obtain a second global model.
Optionally, when the second number is one, after the second global model is obtained by aggregation, the method further includes: receiving a third number of second local models uploaded by the power terminal equipment; aggregating the second global model and the third number of second local models according to a predetermined rule to obtain a third global model, wherein the predetermined rule includes: the second global model is aggregated with one of the second local models at a time.
Optionally, the predetermined rule further includes that the second global model performs model aggregation according to the following formula: ω^(l) = (1 − α^(l)) · ω^(l−1) + α^(l) · ω_new, where ω^(l) is the second global model after the l-th aggregation, ω^(l−1) is the second global model after the (l−1)-th aggregation, ω_new represents one of the second local models, and α^(l) represents the proportion of that second local model in the second global model.
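As an illustrative sketch of the aggregation rule above (function and variable names are hypothetical, not taken from the patent), the update is a simple weighted average of parameter vectors:

```python
def aggregate(w_prev, w_new, alpha):
    """Asynchronous aggregation: omega(l) = (1 - alpha) * omega(l-1) + alpha * omega_new."""
    return [(1 - alpha) * g + alpha * n for g, n in zip(w_prev, w_new)]

# One local model merged into the global model with mixing weight alpha = 0.5
print(aggregate([1.0, 2.0], [3.0, 4.0], 0.5))  # [2.0, 3.0]
```

A larger α^(l) gives the newly arrived local model more influence on the global model; α^(l) can shrink with staleness so that outdated local models perturb the global model less.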
Optionally, the receiving a second number of first local models uploaded by the power terminal device includes: generating a scheduling optimization strategy based on the scheduling optimization model; and based on the scheduling optimization strategy, allocating bandwidth for the action of uploading the model to the first number of electric power terminal devices, wherein when the first number of electric power terminal devices perform model training and uploading action according to the allocated bandwidth, the total resource loss of the first number of electric power terminal devices is the lowest.
Optionally, generating a scheduling optimization strategy based on the scheduling optimization model includes: acquiring parameter information of the first number of electric power terminal devices; calculating the time delay loss and the energy loss when the electric power terminal equipment executes model training and uploading actions according to the parameter information, the time delay model and the energy consumption model; and generating the scheduling optimization strategy based on the time delay loss, the energy loss and the scheduling optimization model.
Optionally, the delay model is represented by the following formulas:

T_i = T_i^cmp + T_i^com,  T_i^cmp = (c_i · N_i) / f_i,  T_i^com = δ / r_i,

where T_i represents the delay loss when the i-th power terminal device performs the model training and uploading actions, T_i^cmp represents the time taken by the i-th power terminal device to train the local model, c_i represents the number of CPU cycles required by the i-th power terminal device to train one data sample, N_i represents the total number of samples in the local training set of the i-th power terminal device, f_i represents the CPU frequency of the i-th power terminal device, T_i^com represents the time taken by the i-th power terminal device to upload the local model, δ represents the size of the local model, and r_i represents the average data transmission rate of the i-th power terminal device. The energy consumption model is represented by the following formulas:

E_i = E_i^cmp + E_i^com,  E_i^cmp = β · c_i · N_i · f_i²,  E_i^com = p_i · δ / r_i,

where E_i represents the energy loss when the i-th power terminal device performs the model training and uploading actions, E_i^cmp represents the energy loss of training the local model, β represents the effective capacitance coefficient of the computing chipset of the i-th power terminal device, E_i^com represents the energy loss of uploading the local model, and p_i represents the transmission power of the i-th power terminal device.
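A minimal numeric sketch of the delay and energy models above, with made-up device parameters (all values and names are illustrative assumptions):

```python
def delay(c_i, N_i, f_i, delta, r_i):
    """T_i = T_i^cmp + T_i^com: training time c_i*N_i/f_i plus upload time delta/r_i."""
    return c_i * N_i / f_i + delta / r_i

def energy(beta, c_i, N_i, f_i, p_i, delta, r_i):
    """E_i = E_i^cmp + E_i^com: computation beta*c_i*N_i*f_i^2 plus transmission p_i*delta/r_i."""
    return beta * c_i * N_i * f_i ** 2 + p_i * delta / r_i

# Hypothetical device: 1e4 CPU cycles per sample, 1000 samples, 1 GHz CPU,
# a 1 Mbit model uploaded at 1 Mbit/s with 0.5 W transmission power.
print(delay(1e4, 1000, 1e9, 1e6, 1e6))               # ≈ 1.01 seconds
print(energy(1e-28, 1e4, 1000, 1e9, 0.5, 1e6, 1e6))  # ≈ 0.501 joules
```

Note how the communication term dominates the delay here while the CPU frequency enters the energy quadratically, which is why joint scheduling of bandwidth and computation matters.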
Optionally, the scheduling optimization model includes: a deep Q network model, wherein the states, actions and rewards of the deep Q network model are determined according to constraints that minimize the overall resource consumption of the first number of power terminal devices.
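The patent specifies a deep Q network; as a simplified stand-in with the same state/action/reward structure (the discretisation, constants, and names below are all hypothetical), a tabular Q-learning update whose reward is the negative total resource loss might look like:

```python
import random

N_DEVICES = 3  # hypothetical number of schedulable power terminal devices
Q = {}         # Q[(state, action)] -> estimated long-term value

def reward(delay_loss, energy_loss):
    # The scheduler is rewarded for LOW total resource consumption.
    return -(delay_loss + energy_loss)

def choose_action(state, eps=0.1):
    # Epsilon-greedy: explore occasionally, otherwise pick the best-known device.
    if random.random() < eps:
        return random.randrange(N_DEVICES)
    return max(range(N_DEVICES), key=lambda a: Q.get((state, a), 0.0))

def update(state, action, r, next_state, gamma=0.9, lr=0.5):
    # Standard Q-learning target: r + gamma * max_a' Q(next_state, a').
    best_next = max(Q.get((next_state, a), 0.0) for a in range(N_DEVICES))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + lr * (r + gamma * best_next - old)

# One scheduling step: granting bandwidth to device 0 cost 1.0 s delay + 0.5 J energy.
update(3, 0, reward(1.0, 0.5), 2)
print(Q[(3, 0)])  # -0.75
```

A deep Q network replaces the dictionary with a neural network that maps states to action values, which scales to the continuous states (channel quality, queue lengths, device loads) a real grid deployment would face.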
According to another aspect of the embodiments of the present invention, there is also provided a power data processing apparatus, including: the issuing module is used for issuing a first global model to a first number of electric power terminal devices in sequence, wherein the first global model is used for processing electric power data; the receiving module is used for receiving a second number of first local models uploaded by the electric power terminal equipment, wherein the second number is smaller than the first number, and the first local models are obtained by the electric power terminal equipment by training the first global model by using local electric power data of the electric power terminal equipment; and the aggregation module is used for aggregating the first global model and the second number of first local models to obtain a second global model.
According to still another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium, where the non-volatile storage medium includes a stored program, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute any one of the above-mentioned power data processing methods.
According to still another aspect of the embodiments of the present invention, there is further provided a processor, where the processor is configured to execute a program, where the program executes the power data processing method described in any one of the above.
In the embodiment of the invention, a first global model is sequentially issued to a first number of power terminal devices, where the first global model is used for processing power data; a second number of first local models uploaded by the power terminal devices are received, where the second number is smaller than the first number and each first local model is obtained by a power terminal device training the first global model on its local power data; and the first global model and the second number of first local models are aggregated to obtain a second global model. This enables fast iteration of the global model of the power data processing center, achieving the technical effect of improving its model iteration efficiency and thereby solving the technical problem of low iteration efficiency of models for processing power system data.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 shows a hardware configuration block diagram of a computer terminal for implementing a power data processing method;
FIG. 2 is a schematic diagram of power system model training and data processing provided in accordance with the related art;
FIG. 3 is a flow chart illustrating a power data processing method according to an embodiment of the invention;
FIG. 4 is a schematic illustration of the advantages of asynchronous federated learning provided in accordance with an alternative embodiment of the present invention;
fig. 5 is a block diagram of a power data processing device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Federated learning: a distributed machine learning technique that helps multiple terminals jointly build a machine learning model while guaranteeing data privacy, security, and legal compliance.
Deep Q network: for a given environment state, the network executes an action according to a policy; the action changes the environment state and yields a reward value once it completes, and the network updates its policy according to the reward value.
In accordance with an embodiment of the present invention, there is provided an embodiment of a method of processing power data, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal for implementing the power data processing method. As shown in fig. 1, the computer terminal 10 may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10. As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the power data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the power data processing method of the application program. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with the user interface of the computer terminal 10.
Fig. 2 is a schematic diagram of power system model training and data processing according to the related art. As shown in fig. 2, the related art adopts a federated learning approach: the power grid data center (i.e., the data processing center of the power grid) transmits a global model to each power terminal device; each power terminal device trains the global model on its local data set to obtain a local model and uploads the local model to the power grid data center; and the power grid data center iteratively updates its global model by aggregating all uploaded local models at once. This process suffers from a barrel effect: the speed of each iterative update is limited by the slowest power terminal device to train and upload its model.
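The barrel effect can be illustrated with a toy calculation (the device times below are made up):

```python
# Per-round training-plus-upload times (seconds) for four hypothetical devices.
device_times = [1.2, 1.5, 2.0, 30.0]  # the last device is a straggler

# Synchronous federated learning: the round ends only when the slowest device finishes.
sync_round_time = max(device_times)

# Asynchronous federated learning: the global model is updated as each local model
# arrives, so the first update lands as soon as the fastest device is done.
first_async_update = min(device_times)

print(sync_round_time, first_async_update)  # 30.0 1.2
```

One straggler stretches the synchronous round to 30 s, while the asynchronous scheme already folds in its first local model after 1.2 s.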
Fig. 3 is a schematic flow chart of a power data processing method according to an embodiment of the present invention, and as shown in fig. 3, the method includes the following steps:
step S302, the first global model is sequentially issued to a first number of electric power terminal devices, wherein the first global model is used for processing electric power data.
In this step, the first global model may be a global model stored in a data processing center, the data processing center is connected to a first number of power terminal devices in a management area of the data processing center, and the first global model may be used to perform functions such as power load prediction, power consumption prediction, fault diagnosis of a power device, parameter prediction and deduction, and power grid system planning and evaluation.
Step S304, receiving a second number of first local models uploaded by the power terminal device, where the second number is smaller than the first number, and the first local models are obtained by the power terminal device training the first global model by using local power data of the power terminal device.
In this step, the power terminal devices may generate local power data during operation, and each power terminal device may train the received first global model using the local power data to obtain a first local model.
Step S306, the first global model and the second number of first local models are aggregated to obtain a second global model. It should be noted that the second number of first local models may be those received first by the data processing center; other power terminal devices may not yet have uploaded their local models because of slow training or poor network transmission. In this case, the first global model and the second number of first local models can be aggregated immediately through this step, without waiting for all local models to be uploaded, which improves the iteration efficiency of the model.
Through the above steps, in the embodiment of the invention, a first global model is sequentially issued to a first number of power terminal devices, where the first global model is used for processing power data; a second number of first local models uploaded by the power terminal devices are received, where the second number is smaller than the first number and each first local model is obtained by a power terminal device training the first global model on its local power data; and the first global model and the second number of first local models are aggregated to obtain a second global model. This enables fast iteration of the global model of the power data processing center, improving its model iteration efficiency and thereby solving the technical problem of low iteration efficiency of models for processing power system data.
As an optional embodiment, in the case that the second number is one, after the second global model is obtained by aggregation, a third number of second local models uploaded by the power terminal devices may also be received, and the second global model and the third number of second local models are aggregated according to a predetermined rule to obtain a third global model, where the predetermined rule includes: the second global model is aggregated with one second local model at a time. Each time the data processing center receives an uploaded local model, it aggregates that model with the global model, updating the global model by step-by-step iteration. This improves the efficiency and flexibility of model iteration; meanwhile, because only one model is aggregated at a time, the loss function can be reduced during model iteration and model accuracy improved.
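The step-by-step iteration described above can be sketched as a loop that folds each arriving local model into the global model immediately (the arrival order, parameter values, and mixing weights below are hypothetical):

```python
# Each entry is (local model parameters, mixing weight alpha) in arrival order.
arrivals = [([1.0, 1.0], 0.5), ([2.0, 0.0], 0.25)]

w_global = [0.0, 0.0]
for w_new, alpha in arrivals:
    # omega(l) = (1 - alpha) * omega(l-1) + alpha * omega_new, one model at a time
    w_global = [(1 - alpha) * g + alpha * n for g, n in zip(w_global, w_new)]

print(w_global)  # [0.875, 0.375]
```

The loop never blocks on slow devices: a late-arriving model is simply the next item folded in, rather than a prerequisite for the round to complete.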
Optionally, each power terminal device may first download the latest global model ω (or the parameters ω of the global model) and then start a new round of local training on its local data set. Denoting the i-th power terminal device as D_i, D_i trains its local model using its local data s_i and the global model parameters ω. The local model ω_i^(h) can be represented as

ω_i^(h) = ω_i^(h−1) − γ ∇F_i(ω_i^(h−1); s_i),

where h represents the number of local iterations, l represents the number of global model aggregations (epochs), and γ is the learning rate.
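A one-step sketch of the local update rule (the gradient values are hypothetical; in practice the gradient comes from the device's local data set s_i):

```python
def local_step(w, grad, gamma):
    """One local iteration: w(h) = w(h-1) - gamma * gradient at w(h-1)."""
    return [wi - gamma * gi for wi, gi in zip(w, grad)]

w = [1.0, -2.0]      # parameters downloaded from the data processing center
grad = [0.5, -0.5]   # hypothetical gradient computed on local data s_i
w = local_step(w, grad, gamma=0.1)
print(w)  # ≈ [0.95, -1.95]
```

Running h such steps on-device, starting from the downloaded ω, yields the local model that the device then uploads for aggregation.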
As an alternative embodiment, the predetermined rule further includes that the second global model performs model aggregation according to the following formula: omega(l)=(1-α(l)(l-1)(l)ωnewWherein, ω is(l)Is the second global model after the first aggregation, ω(l-1)Is the second global model after the first-1 polymerization, ωnewRepresenting one of the second local models, α(l)Representing the proportion of one of the second local models in the second global model.
In this process, the data processing center reduces the loss function F(ω^(K)) through model iteration, where K represents the total number of training rounds across all devices:

K = Σ_{i=1}^{N_D} K_i,

where N_D represents the number of power terminal devices and K_i represents the number of training rounds of device D_i.
As an alternative embodiment, the receiving of the second number of first local models uploaded by the power terminal device includes: generating a scheduling optimization strategy based on the scheduling optimization model; and based on a scheduling optimization strategy, allocating bandwidth for the action of uploading the model to the first number of electric power terminal devices, wherein when the first number of electric power terminal devices perform model training and uploading action according to the allocated bandwidth, the total resource loss of the first number of electric power terminal devices is the lowest.
In the interaction between the data processing center and the power terminal devices, the resources of the wireless network environment are limited; scheduling the devices can therefore improve overall efficiency and reduce the total resource loss of the first number of power terminal devices interacting with the data processing center.
As an alternative embodiment, the generating the scheduling optimization strategy based on the scheduling optimization model includes: acquiring parameter information of a first number of electric power terminal devices; calculating the time delay loss and the energy loss when the electric terminal equipment executes model training and uploading actions according to the parameter information, the time delay model and the energy consumption model; and generating a scheduling optimization strategy based on the delay loss, the energy loss and the scheduling optimization model.
By establishing the time delay model and the energy consumption model, the process of training and uploading the model of the electric power terminal equipment can be quantified.
In each iteration, each power terminal device first downloads the latest global model from the data processing center, and the time and bandwidth overhead of device downloading can be ignored since the uplink bandwidth is much smaller than the downlink bandwidth. After downloading the global model, each device trains the model locally, which brings about a time overhead. After the local training process is completed, each device needs to upload its own model to the central server of the data processing center. Therefore, the total time delay of each iteration process can be formed by the local training time and the time for uploading the local model, and the total time delay is obtained by calculating the two time delays. The establishment of the energy consumption model is similar to that of the time delay model, the total energy consumption of each iteration process is composed of the calculation energy consumption of local training and the communication energy consumption of uploading the local model, and the total energy consumption is obtained by calculating the energy consumption of the two parts.
As an alternative embodiment, the delay model is expressed by the following formulas:

T_i = T_i^cmp + T_i^com,  T_i^cmp = c_i·N_i / f_i,  T_i^com = δ / r_i

where T_i represents the delay loss when the i-th power terminal device performs model training and uploading actions, T_i^cmp represents the time taken by the i-th power terminal device to train its local model, c_i represents the number of CPU cycles required for the i-th power terminal device to train one data sample, N_i represents the total number of samples in the local training set of the i-th power terminal device, f_i represents the CPU frequency of the i-th power terminal device, T_i^com represents the time taken by the i-th power terminal device to upload its local model, δ represents the size of the local model, and r_i represents the average data transmission rate of the i-th power terminal device. The energy consumption model is expressed by the following formulas:
E_i = E_i^cmp + E_i^com,  E_i^cmp = β·c_i·N_i·f_i²,  E_i^com = p_i·δ / r_i

where E_i represents the energy loss when the i-th power terminal device performs model training and uploading actions, E_i^cmp represents the energy consumed by the i-th power terminal device to train its local model, β represents the effective capacitance coefficient of the computing chipset of the i-th power terminal device, E_i^com represents the energy consumed by the i-th power terminal device to upload its local model, and p_i represents the transmission power of the i-th power terminal device.
To calculate the delay loss, the local training time and the upload time are calculated separately and then summed. The local training time can be obtained as follows: since every training sample has the same size across devices, the time required for device D_i to complete local training is

T_i^cmp = c_i·N_i / f_i

where c_i denotes the number of CPU cycles required for device D_i to train one data sample, which can be obtained by cycle monitoring; N_i denotes the total number of samples in the local training set of device D_i; and f_i denotes the CPU frequency of device D_i.
To calculate the upload time, assume that the local models of all devices have the same size; the time for device D_i to upload its model can then be expressed as

T_i^com = δ / r_i

where δ is the size of the model and r_i is the average transmission rate of device D_i, given by the Shannon formula:

r_i = B·log₂(1 + h_i·p_i / N₀)

where B is the bandwidth, h_i is the channel gain of device D_i, p_i is the transmission power, and N₀ is the noise power.
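The delay model above can be sketched in code. The function and parameter names below are illustrative choices, not from the patent, and the numeric values at the bottom are assumptions for demonstration only.

```python
import math

def transmission_rate(B, h_i, p_i, N0):
    # Shannon formula: r_i = B * log2(1 + h_i * p_i / N0)
    return B * math.log2(1 + h_i * p_i / N0)

def iteration_delay(c_i, N_i, f_i, delta, r_i):
    # Total per-iteration delay T_i = T_cmp + T_com
    t_cmp = c_i * N_i / f_i   # local training time
    t_com = delta / r_i       # model upload time
    return t_cmp + t_com

# Illustrative values: 1 MHz bandwidth, unit channel gain, 0.5 W transmit power.
r = transmission_rate(B=1e6, h_i=1.0, p_i=0.5, N0=1e-7)
T = iteration_delay(c_i=20, N_i=1000, f_i=1e9, delta=1e6, r_i=r)
```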
The energy consumption is calculated in a similar way: the local training energy and the communication energy of uploading the local model are calculated separately and then summed. The local training energy consumption can be expressed as

E_i^cmp = β·c_i·N_i·f_i²

where β is the effective capacitance coefficient of the computing chipset of device D_i. The upload energy consumption can be expressed as

E_i^com = p_i·T_i^com = p_i·δ / r_i

where p_i is the transmission power of device D_i.
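The energy model admits the same kind of sketch. The quadratic computation-energy term β·c·N·f² follows the effective-capacitance convention named in the text; the function and parameter names are assumptions for illustration.

```python
def iteration_energy(beta, c_i, N_i, f_i, p_i, delta, r_i):
    # Total per-iteration energy E_i = E_cmp + E_com
    e_cmp = beta * c_i * N_i * f_i ** 2  # computation energy of local training
    e_com = p_i * delta / r_i            # communication energy of the upload
    return e_cmp + e_com
```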
As an alternative embodiment, generating the scheduling optimization strategy based on the delay loss, the energy loss and the scheduling optimization model may include the following steps:
Firstly, in view of the limited resources of the power devices, the optimization problem is modeled as:

min Σ_i [ (1 − λ)·T_i + λ·E_i ]

subject to:

F(ω_K) < μ

f_min < f_i < f_max

Σ_i B_i ≤ B

The optimization objective is to minimize the overall delay and energy consumption of all devices across the whole process, where λ is the balance weight between delay and energy consumption. This parameter accommodates federated learning scenarios with different energy and delay preferences; for example, when every device can be connected to a power supply, energy consumption is no longer an important consideration, and λ can be set small. The constraints are: first, the loss-function constraint, which requires the loss function to decrease to a given precision μ, guaranteeing the effect of federated learning; second, the CPU frequency constraint, which requires the CPU frequency of each device to stay within the specified range; and third, the total bandwidth constraint, which requires the sum of the bandwidths allocated to the devices not to exceed the total bandwidth limit.
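As a minimal sketch of evaluating this objective, assuming the weighted-sum form (1 − λ)·ΣT_i + λ·ΣE_i suggested by the surrounding text (the patent's exact formula is not reproduced here), the cost of a candidate schedule could be scored as:

```python
def weighted_cost(delays, energies, lam):
    # (1 - lam) * total delay + lam * total energy; a small lam de-emphasises
    # energy, matching the "devices on mains power" example in the text.
    return (1 - lam) * sum(delays) + lam * sum(energies)
```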
Secondly, the optimization problem is decomposed and the optimal local computation time is computed. Owing to the unpredictability of the network bandwidth environment, the nonlinear constraints, and the independence and randomness of each device, the overall optimization problem is difficult to solve directly; the original problem is therefore decomposed, and the problem of minimizing the delay and energy consumption of the local training process is solved first:

min_{f_i} (1 − λ)·T_i^cmp + λ·E_i^cmp,  f_min < f_i < f_max

By mathematical reasoning, the optimal f_i* is

f_i* = ((1 − λ) / (2λβ))^(1/3), clipped to the range [f_min, f_max]

and substituting the optimal f_i* yields the optimal local computation time

T_i^cmp* = c_i·N_i / f_i*
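A sketch of the decomposed subproblem, assuming it takes the form min_f (1 − λ)·c_i·N_i/f + λ·β·c_i·N_i·f² over f ∈ [f_min, f_max]; the closed form below follows from setting the derivative to zero and is a reconstruction under that assumption, not the patent's literal formula.

```python
def optimal_frequency(lam, beta, f_min, f_max):
    # Unconstrained minimiser f* = ((1 - lam) / (2 * lam * beta)) ** (1/3),
    # clipped to the feasible CPU-frequency range.
    f_star = ((1 - lam) / (2 * lam * beta)) ** (1.0 / 3.0)
    return min(max(f_star, f_min), f_max)

def optimal_local_time(c_i, N_i, lam, beta, f_min, f_max):
    # Optimal local computation time T_cmp* = c_i * N_i / f*
    return c_i * N_i / optimal_frequency(lam, beta, f_min, f_max)
```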
Thirdly, based on deep reinforcement learning, the devices are scheduled and the bandwidth allocation is adjusted for the local training and uploading processes of federated learning on the power terminal devices in a dynamic network environment, so as to reduce delay and energy consumption.
As an alternative, the scheduling optimization model may adopt a deep Q network model (Deep Q Network, DQN for short), wherein the states, actions, and rewards of the deep Q network model are determined according to the constraint of minimizing the overall resource consumption of the first number of power terminal devices.
Specifically, the DQN model is first modeled, and the basic elements such as state, action, reward, etc. are modeled as follows:
1) State: the state information comprises the following three kinds of information:

(a) The time of the last upload of each power terminal device, T^up(t) = {t_1^up, t_2^up, ..., t_D^up}. The last upload time of each device provides a basis for the selection action: a device that has just uploaded should preferably not participate in this round of bandwidth allocation, since it may still be training locally at this time.

(b) The number of times each power terminal device has uploaded its local model, N^up(t) = {n_1^up, n_2^up, ..., n_D^up}. Because bandwidth is limited, a device that uploads its model frequently occupies the upload opportunities of other devices, which is a factor to be considered.

(c) The local quality of each power terminal device, Q = {q_1, q_2, ..., q_D}. In the federated learning process, devices with higher local data set quality produce more accurate local models; for example, a device with more training samples than another should carry more weight.

Thus, the state can be represented as s(t) = {T^up(t), N^up(t), Q}.
2) Action: an action can be represented as a(t) = {a}, a ∈ {1, 2, ..., D}, where a denotes the power terminal device selected to be allocated bandwidth in time slot t.
3) Reward: when the DQN agent takes action a(t) in state s(t), it receives a reward. Given the optimization goals, the reward function aims to minimize the delay and energy consumption of all devices while allowing the DQN to satisfy the precision and bandwidth constraints. Specifically, in time slot t, bandwidth is allocated to one of the power terminal devices. For the model to be uploaded within time slot t, the selected power terminal device must have finished training its local model, i.e. it must have begun training the model no later than t − T_i^cmp. The device is therefore in one of two states: either it is still in local training and cannot upload its local model, in which case allocating the bandwidth amounts to wasting the opportunity, so a negative reward is given; or it is able to upload its model, in which case a positive reward is given. In summary, the reward is designed as follows:

r(t) = η·q_a if the selected device a can upload its model in slot t, and r(t) = R otherwise

Assuming all local models are the same size, the bandwidth required for uploading and the transmission power are the same for all devices, i.e. E_i^com and T_i^com are identical across devices. The parameter η is a positive constant and the parameter R is a negative constant. The reward function is designed so that the agent tends to allocate bandwidth to idle devices of good local quality q_i, while ensuring that the same device does not upload frequently and occupy the upload opportunities of other devices.
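The two-case reward can be sketched as follows; the eligibility test (enough time since the last upload to finish local training and transmit within the slot) is an assumed concrete form of the condition described above, and all names are illustrative.

```python
def reward(slot_t, last_upload_a, t_cmp_a, t_com, q_a, eta=1.0, R=-1.0):
    # Positive reward eta * q_a when the selected device a has had time to
    # finish local training since its last upload and can transmit in this
    # slot; otherwise the allocated bandwidth is wasted and the negative
    # constant R is returned.
    if slot_t - last_upload_a >= t_cmp_a + t_com:
        return eta * q_a
    return R
```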
Further, DQN model training is performed for the first number of power terminal devices, so that the scheduling optimization strategy of the power terminal devices achieves the goal of optimal delay and energy consumption.
Specifically, the training process of the scheduling optimization strategy based on DQN is as follows:
Step1: load the data set of the power terminal devices and the device attributes such as c_i, q_i, etc.;

Step2: initialize the experience replay pool D; initialize the Q network and its neural network parameters ω_Q; initialize the target Q network and its neural network parameters ω_Q* ← ω_Q;

Step3: loop over epochs epoch = 1, 2, ...:

Step3.1: initialize the state s(t);

Step3.2: loop over time slots t = 1, 2, ...:

Step3.2.1: choose the action a(t) with an ε-greedy policy: select a random action with probability ε, and select a(t) = argmax_a Q(s(t), a; ω_Q) with probability 1 − ε;

Step3.2.2: execute action a(t), receive the reward r(t) and reach the new state s(t+1); the device D_i selected by action a(t) is notified to upload its model;

Step3.2.3: store the sample (s(t), a(t), r(t), s(t+1)) in the experience replay pool D;

Step3.2.4: uniformly sample a minibatch of samples (s(x), a(x), r(x), s(x+1)) from the experience replay pool, compute the target Q value y_x = r(x) + γ·max_a Q(s(x+1), a; ω_Q*), and update the Q network parameters ω_Q to reduce the error (y_x − Q(s(x), a(x); ω_Q))²;

Step3.2.5: every C steps, update the parameters of the target Q network, i.e. ω_Q* = ω_Q;

Step3.3: end the time-slot loop;

Step4: end the epoch loop.
After the above training and learning steps, a scheduling optimization strategy can be generated; this strategy automatically allocates bandwidth so as to achieve optimal delay and energy consumption.
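The Step1–Step4 loop can be sketched structurally as below. To stay self-contained, the Q "network" is a lookup table rather than a neural network, and `env` is a hypothetical environment stub exposing `reset()`/`step()`; both are assumptions for illustration, not the patent's implementation.

```python
import random
from collections import deque

def train_dqn(env, n_devices, epochs=10, slots=20,
              eps=0.1, gamma=0.9, lr=0.1, sync_every=5, batch=4):
    q, q_target = {}, {}               # Q network and target Q network (tabular here)
    replay = deque(maxlen=1000)        # experience replay pool D
    step = 0
    for _ in range(epochs):            # Step3: epoch loop
        s = env.reset()                # Step3.1: initial state
        for _t in range(slots):        # Step3.2: time-slot loop
            if random.random() < eps:  # Step3.2.1: epsilon-greedy action choice
                a = random.randrange(n_devices)
            else:
                a = max(range(n_devices), key=lambda x: q.get((s, x), 0.0))
            s2, r = env.step(a)        # Step3.2.2: act, observe reward, next state
            replay.append((s, a, r, s2))              # Step3.2.3: store transition
            sample = random.sample(list(replay), min(batch, len(replay)))
            for (sx, ax, rx, sx2) in sample:          # Step3.2.4: minibatch TD update
                y = rx + gamma * max(q_target.get((sx2, x), 0.0)
                                     for x in range(n_devices))
                q[(sx, ax)] = q.get((sx, ax), 0.0) + lr * (y - q.get((sx, ax), 0.0))
            step += 1
            if step % sync_every == 0:                # Step3.2.5: sync target network
                q_target = dict(q)
            s = s2
    return q
```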
Fig. 4 is a schematic diagram illustrating the advantages of asynchronous federated learning according to an alternative embodiment of the present invention. As shown in fig. 4, with synchronous federated learning, the data processing center can perform a model iteration only after all power terminal devices have completed uploading their local models, which gives rise to a barrel (bottleneck) effect. With the asynchronous federated learning method adopted by the present invention, a power terminal device can choose to upload its local model at any time, which improves the convergence speed of the global model, makes the model uploading and downloading processes more flexible, and yields high utilization of system resources.
According to an embodiment of the present invention, there is also provided a power data processing apparatus for implementing the above power data processing method, and fig. 5 is a block diagram of a structure of the power data processing apparatus according to the embodiment of the present invention, as shown in fig. 5, the power data processing apparatus includes: the issuing module 52, the receiving module 54, and the aggregation module 56, which will be described below.
The issuing module 52 is configured to issue the first global model to the first number of power terminal devices in sequence, where the first global model is used to process power data;
a receiving module 54, connected to the issuing module 52, configured to receive a second number of first local models uploaded by the power terminal device, where the second number is smaller than the first number, and the first local models are obtained by the power terminal device training a first global model by using local power data of the power terminal device;
and an aggregation module 56, connected to the receiving module 54, configured to aggregate the first global model and the second number of first local models to obtain a second global model.
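The aggregation module's one-model-at-a-time update (the rule detailed in claim 3, blending each arriving local model into the global model with a proportion α) can be sketched as follows, representing models as flat lists of parameters purely for illustration:

```python
def aggregate(global_model, local_model, alpha):
    # Blend one uploaded local model into the global model with weight alpha:
    # new_global = (1 - alpha) * global + alpha * local, element-wise.
    return [(1 - alpha) * g + alpha * w
            for g, w in zip(global_model, local_model)]
```

Repeated calls, one per received local model, reflect the predetermined rule that the second global model aggregates with one second local model at a time.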
It should be noted that the issuing module 52, the receiving module 54 and the aggregating module 56 correspond to steps S302 to S306 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
An embodiment of the present invention may provide a computer device, and optionally, in this embodiment, the computer device may be located in at least one network device of a plurality of network devices of a computer network. The computer device includes a memory and a processor.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the power data processing method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the power data processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: the method comprises the steps that a first global model is sequentially issued to a first number of electric power terminal devices, wherein the first global model is used for processing electric power data; receiving a second number of first local models uploaded by the power terminal equipment, wherein the second number is smaller than the first number, and the first local models are obtained by the power terminal equipment by training a first global model by using local power data of the power terminal equipment; and aggregating the first global model and the second number of first local models to obtain a second global model.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a non-volatile storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present invention also provide a non-volatile storage medium. Alternatively, in this embodiment, the nonvolatile storage medium may be configured to store the program code executed by the power data processing method provided in embodiment 1.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the non-volatile storage medium is configured to store program code for performing the following steps: the method comprises the steps that a first global model is sequentially issued to a first number of electric power terminal devices, wherein the first global model is used for processing electric power data; receiving a second number of first local models uploaded by the power terminal equipment, wherein the second number is smaller than the first number, and the first local models are obtained by the power terminal equipment by training a first global model by using local power data of the power terminal equipment; and aggregating the first global model and the second number of first local models to obtain a second global model.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a non-volatile memory storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method of processing power data, comprising:
the method comprises the steps that a first global model is sequentially issued to a first number of electric power terminal devices, wherein the first global model is used for processing electric power data;
receiving a second number of first local models uploaded by the power terminal equipment, wherein the second number is smaller than the first number, and the first local models are obtained by the power terminal equipment by training the first global model by using local power data of the power terminal equipment;
and aggregating the first global model and the second number of first local models to obtain a second global model.
2. The method of claim 1, wherein in the case that the second number is one, after aggregating to obtain the second global model, further comprising:
receiving a third number of second local models uploaded by the power terminal equipment;
aggregating the second global model and the third number of second local models according to a predetermined rule to obtain a third global model, wherein the predetermined rule includes: the second global model is aggregated with one of the second local models at a time.
3. The method of claim 2, wherein the predetermined rules further include that the second global model performs model aggregation according to the following formula:
ω^(l) = (1 − α^(l))·ω^(l−1) + α^(l)·ω_new

where ω^(l) is the second global model after the l-th aggregation, ω^(l−1) is the second global model after the (l−1)-th aggregation, ω_new represents one of the second local models, and α^(l) represents the proportion of that second local model in the second global model.
4. The method of claim 1, wherein receiving the second number of first local models uploaded by the power terminal device comprises:
generating a scheduling optimization strategy based on the scheduling optimization model;
and based on the scheduling optimization strategy, allocating bandwidth for the action of uploading the model to the first number of electric power terminal devices, wherein when the first number of electric power terminal devices perform model training and uploading action according to the allocated bandwidth, the total resource loss of the first number of electric power terminal devices is the lowest.
5. The method of claim 4, wherein generating a scheduling optimization strategy based on the scheduling optimization model comprises:
acquiring parameter information of the first number of electric power terminal devices;
calculating the time delay loss and the energy loss when the electric power terminal equipment executes model training and uploading actions according to the parameter information, the time delay model and the energy consumption model;
and generating the scheduling optimization strategy based on the time delay loss, the energy loss and the scheduling optimization model.
6. The method of claim 5,
the delay model is expressed by the following formulas:

T_i = T_i^cmp + T_i^com,  T_i^cmp = c_i·N_i / f_i,  T_i^com = δ / r_i

wherein T_i represents the delay loss when the i-th power terminal device performs model training and uploading actions, T_i^cmp represents the time taken by the i-th power terminal device to train its local model, c_i represents the number of CPU cycles required for the i-th power terminal device to train one data sample, N_i represents the total number of samples in the local training set of the i-th power terminal device, f_i represents the CPU frequency of the i-th power terminal device, T_i^com represents the time taken by the i-th power terminal device to upload its local model, δ represents the size of the local model, and r_i represents the average data transmission rate of the i-th power terminal device;

the energy consumption model is expressed by the following formulas:

E_i = E_i^cmp + E_i^com,  E_i^cmp = β·c_i·N_i·f_i²,  E_i^com = p_i·δ / r_i

wherein E_i represents the energy loss when the i-th power terminal device performs model training and uploading actions, E_i^cmp represents the energy consumed by the i-th power terminal device to train its local model, β represents the effective capacitance coefficient of the computing chipset of the i-th power terminal device, E_i^com represents the energy consumed by the i-th power terminal device to upload its local model, and p_i represents the power of the i-th power terminal device.
7. The method of claim 5, wherein the scheduling optimization model comprises: a deep Q network model, wherein the states, actions and rewards of the deep Q network model are determined according to constraints that minimize the overall resource consumption of the first number of power terminal devices.
8. An electric power data processing apparatus, characterized by comprising:
the issuing module is used for issuing a first global model to a first number of electric power terminal devices in sequence, wherein the first global model is used for processing electric power data;
the receiving module is used for receiving a second number of first local models uploaded by the electric power terminal equipment, wherein the second number is smaller than the first number, and the first local models are obtained by the electric power terminal equipment by training the first global model by using local electric power data of the electric power terminal equipment;
and the aggregation module is used for aggregating the first global model and the second number of first local models to obtain a second global model.
9. A non-volatile storage medium, comprising a stored program, wherein a device in which the non-volatile storage medium is located is controlled to execute the power data processing method according to any one of claims 1 to 7 when the program is executed.
10. A processor, characterized in that the processor is configured to execute a program, wherein the program executes the power data processing method according to any one of claims 1 to 7.
CN202111398305.9A 2021-11-23 2021-11-23 Power data processing method and device, nonvolatile storage medium and processor Pending CN114202070A (en)


Publications (1)

Publication Number Publication Date
CN114202070A true CN114202070A (en) 2022-03-18

Family

ID=80648626



Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140080428A1 (en) * 2008-09-12 2014-03-20 Digimarc Corporation Methods and systems for content processing
CN108964042A (en) * 2018-07-24 2018-12-07 合肥工业大学 Regional power grid operating point method for optimizing scheduling based on depth Q network
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN110275758A (en) * 2019-05-09 2019-09-24 重庆邮电大学 A kind of virtual network function intelligence moving method
CN111524034A (en) * 2020-05-12 2020-08-11 华北电力大学 High-reliability low-time-delay low-energy-consumption power inspection system and inspection method
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112668128A (en) * 2020-12-21 2021-04-16 国网辽宁省电力有限公司物资分公司 Method and device for selecting terminal equipment nodes in federated learning system
CN113011602A (en) * 2021-03-03 2021-06-22 中国科学技术大学苏州高等研究院 Method and device for training federated model, electronic equipment and storage medium
CN113050553A (en) * 2021-02-18 2021-06-29 同济大学 Scheduling modeling method of semiconductor production line based on federal learning mechanism
CN113139341A (en) * 2021-04-23 2021-07-20 广东安恒电力科技有限公司 Electric quantity demand prediction method and system based on federal integrated learning
CN113162850A (en) * 2021-01-13 2021-07-23 中国科学院计算技术研究所 Artificial intelligence-based heterogeneous network multi-path scheduling method and system
CN113177645A (en) * 2021-06-29 2021-07-27 腾讯科技(深圳)有限公司 Federal learning method and device, computing equipment and storage medium
US20210334128A1 (en) * 2020-04-22 2021-10-28 Goldman Sachs & Co. LLC Asynchronous quantum information processing
CN115456194A (en) * 2022-08-25 2022-12-09 北京百度网讯科技有限公司 Model training control method, device and system based on asynchronous federal learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姜婧妍: "面向边缘智能的资源分配和任务调度的研究", 《中国优秀硕士学位论文全文数据库 信息科技》, 15 August 2020 (2020-08-15) *
廖钰盈: "面向异构边缘节点的融合联邦学习", 《中国优秀硕士学位论文全文数据库 信息科技》, 15 May 2021 (2021-05-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116414556A (en) * 2022-12-05 2023-07-11 上海交通大学 Heterogeneous embedded device power distribution system and method based on redundant computing power
CN116414556B (en) * 2022-12-05 2024-01-30 上海交通大学 Heterogeneous embedded device power distribution system and method based on redundant computing power

Similar Documents

Publication Publication Date Title
US11018979B2 (en) System and method for network slicing for service-oriented networks
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
WO2017166643A1 (en) Method and device for quantifying task resources
CN104731607B (en) Terminal Lifelong Learning processing method, device and system
CN113268341A (en) Distribution method, device, equipment and storage medium of power grid edge calculation task
CN112272102B (en) Method and device for unloading and scheduling edge network service
CN112306623A (en) Processing method and device for deep learning task and computer readable storage medium
JP2019087072A (en) Processor, inference device, learning device, processing system, processing method, and processing program
WO2022262646A1 (en) Resource configuration method and apparatus, and storage medium and computing system
CN114580658A (en) Block chain-based federal learning incentive method, device, equipment and medium
CN113660112A (en) Bandwidth allocation method, system, computer equipment and medium for federated learning
CN114202070A (en) Power data processing method and device, nonvolatile storage medium and processor
CN112084017A (en) Memory management method and device, electronic equipment and storage medium
CN111106960A (en) Mapping method and mapping device of virtual network and readable storage medium
CN107566480A (en) The user activity information acquisition method and device, storage medium of mobile terminal application
CN110874284B (en) Data processing method and device
CN117155791B (en) Model deployment method, system, equipment and medium based on cluster topology structure
US20180239318A1 (en) Method of controlling energy storage and apparatuses performing the same
CN113992520B (en) Virtual network resource deployment method and system
CN115759241A (en) Neural network segmentation method and system based on genetic algorithm
CN115374954A (en) Model training method based on federal learning, terminal and storage medium
US20220383202A1 (en) Evaluating a contribution of participants in federated learning
CN110837889A (en) Neural network training method and device, storage medium and electronic device
CN113850390A (en) Method, device, equipment and medium for sharing data in federal learning system
CN114528893A (en) Machine learning model training method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination