CN112381307A - Meteorological event prediction method and device and related equipment - Google Patents

Meteorological event prediction method and device and related equipment

Info

Publication number
CN112381307A
CN112381307A (application CN202011312818.9A)
Authority
CN
China
Prior art keywords
sample, terminal, order gradient, model, weather
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011312818.9A
Other languages
Chinese (zh)
Other versions
CN112381307B (en)
Inventor
王健宗
李泽远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011312818.9A priority Critical patent/CN112381307B/en
Publication of CN112381307A publication Critical patent/CN112381307A/en
Priority to PCT/CN2021/083026 priority patent/WO2021203980A1/en
Application granted granted Critical
Publication of CN112381307B publication Critical patent/CN112381307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application provides a meteorological event prediction method, which comprises the following steps: a first terminal calculates a first-order gradient set and a second-order gradient set of a loss function of a model to be trained according to each sample in a first sample set; the first terminal receives an aggregation first-order gradient set and an aggregation second-order gradient set which are sent by a second terminal and calculated according to a second sample set, where the second sample set is the set of samples, determined from the sample set of the second terminal, that are similar to the first sample set; the model is then trained according to the gradient values of the local samples and the aggregation gradient values of the similar samples, and the meteorological condition is predicted by the trained model. Because only the aggregation gradient values of the similar samples, rather than the samples themselves, are sent to the first terminal for training the model, the problem of data leakage is avoided; at the same time, using the similar samples of other terminals in training makes the trained model more accurate, the terminals can be trained synchronously in parallel, the calculation efficiency of the model is improved, and data and resources are reasonably utilized.

Description

Meteorological event prediction method and device and related equipment
Technical Field
The invention relates to the technical field of big data processing, in particular to a meteorological event prediction method, a meteorological event prediction device and related equipment.
Background
With the development of big data and artificial intelligence, applications such as deep learning on massive data and complex neural networks have gradually been put into practice, and predicting meteorological events with big data and artificial intelligence technology has become a hot topic, for example rainfall prediction, temperature prediction and wind speed prediction.
At present, weather forecasting methods mainly include traditional statistical methods, such as regression models and autoregressive moving average models, and artificial intelligence models, such as artificial neural networks, support vector machines and regression trees. However, existing research focuses on centralized training, i.e., the model is trained after all the data of each meteorological site has been uploaded to a central server. Because meteorological sites are widely distributed, large in number and monitored over long periods, the data volume is very large, and the meteorological data of different provinces involve confidentiality concerns. A model trained only in this centralized manner therefore cannot meet expectations: the training process is fragile, the operation efficiency is inevitably low, and the model is overly general and insufficient in performance.
Disclosure of Invention
The embodiment of the application provides a meteorological event prediction method, which can address the problem of data privacy protection among meteorological data as well as the problems of low operation efficiency, overly general models and insufficient performance.
In a first aspect, the present application provides a weather event prediction method, which is applied to a weather prediction system including a first terminal located at a first weather station and a second terminal located at a second weather station, the method including: the first terminal calculates a first order gradient set and a second order gradient set of a loss function of the model to be trained according to each sample in a first sample set, wherein one gradient value in the first order gradient set is calculated according to one sample in the first sample set, one gradient value in the second order gradient set is calculated according to one sample in the first sample set, and the first sample set is a set of samples collected by a first meteorological station; the first terminal receives an aggregation first-order gradient set and an aggregation second-order gradient set which are sent by a second terminal and calculated according to a second sample set, wherein the second sample set comprises samples similar to each sample in the first sample set; the first terminal trains the model to be trained according to the first-order gradient set, the aggregation first-order gradient set, the second-order gradient set, the aggregation second-order gradient set and the first sample set to obtain a trained model; and the first terminal predicts the sample to be predicted based on the trained model and determines the prediction result of the sample to be predicted.
In a second aspect, the present application provides a weather event prediction method for use in a weather prediction system including a first terminal located at a first weather station and a second terminal located at a second weather station, the method comprising: the second terminal sends a second hash table to the first terminal, where the second hash table comprises the identification corresponding to each sample in a third sample set and the hash values corresponding to each sample, and the third sample set is the set of samples collected by the second weather station; the second terminal receives a sample identification set sent by the first terminal, wherein each sample identification in the sample identification set indicates one sample in the third sample set; and the second terminal determines a second sample set in the third sample set according to the sample identification set, calculates an aggregation first-order gradient set and an aggregation second-order gradient set of the loss function of the model to be trained according to each sample in the second sample set, and sends the aggregation first-order gradient set and the aggregation second-order gradient set to the first terminal.
In a third aspect, the present application provides a meteorological event prediction apparatus, comprising: one or more functional modules for performing the method of the first aspect; or for performing a method as described in the second aspect.
In a fourth aspect, the present application provides a computer device comprising a processor and a memory, the memory being configured to store instructions, the processor being configured to execute the instructions, the processor when executing the instructions performing the method according to the first aspect; or for performing a method as described in the second aspect.
In a fifth aspect, the present application provides a computer readable storage medium storing a computer program for execution by a processor to perform the method of the first aspect; or for performing a method as described in the second aspect.
In the embodiment of the application, the first terminal of the first weather station trains the model using the gradient set calculated from the first sample set together with the aggregation gradient set calculated by the second terminal from the second sample set, where the second sample set consists of the samples of the second weather station similar to those of the first weather station. Moreover, the first terminal receives only the gradient values of the samples of the second terminal; the method involves no central server, and the terminals of the weather stations do not need to upload data to a central server, so the problem of data privacy disclosure is avoided. The method can be applied to the meteorological prediction of a plurality of weather stations simultaneously: the weather stations cooperate with one another and train their respective models synchronously in parallel, which effectively improves the calculation efficiency of the models.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a schematic overall flowchart of a meteorological event prediction method according to an embodiment of the present disclosure;
fig. 2 is a sample data structure of a data terminal provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of a model training process provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a data structure obtained after encrypting sample data of a data terminal provided in an embodiment of the present application;
fig. 5 is a schematic data structure diagram of a sample identifier set determined by a data terminal according to a hash table in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a weather event prediction device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application with reference to the drawings. The terminology used in the description of the embodiments is for describing particular embodiments of the present application only and is not intended to limit the present application. When multiple participants and model training are involved, the traditional method is to upload the data of the participants to a central server, train the model on the central server, and then send the trained model to the participants for the prediction of relevant events. Although this method can use the participants' data for training, so that the trained model is suitable for the data prediction of each participant, it suffers from low computational efficiency, an overly general model and privacy disclosure.
To solve the above problems, the present application provides a meteorological event prediction method that combines the characteristics of federated learning and uses the XGBoost model for training. The weather stations do not need to upload data to a central server to share data: each weather station trains its model locally on its local samples while also using the similar samples of the other weather stations for the training of the local model, where a similar sample is a sample collected by another weather station that resembles a locally collected sample. By training the local model with the received aggregation gradient values of the similar samples of other weather stations, the problem of data privacy disclosure is avoided; the weather stations train their respective models synchronously in parallel, which effectively improves the calculation efficiency and accuracy of the models.
First, a flow of a meteorological event prediction method according to an embodiment of the present application will be described in its entirety.
Fig. 1 is a schematic overall flow chart of a meteorological event prediction method, and the overall flow chart of the meteorological event prediction method comprises the following steps:
s101: and obtaining early data to train a meteorological prediction model.
The early data refers to historical meteorological data. For example, if a certain meteorological record indicates that the weather at a certain place is light rain under the conditions of a temperature of 29 °C, a humidity of 73%, a wind speed of 27 km/h, an air pressure of 1009 hPa and the like, then the temperature, humidity, wind speed and air pressure constitute the sample feature set of the meteorological event, light rain is the sample label of the meteorological event, and the sample feature set together with the sample label forms a meteorological event sample. The early known data are used to train the model, i.e., to search for the parameters of the model, finally obtaining a meteorological prediction model with known parameters.
In the embodiment of the application, the model used for model training is the XGBoost model. The process of model training is the process of building regression trees, i.e., regression trees are continuously added: a new function f(t) is learned to fit the residuals predicted by the previously trained t−1 trees.
S102: and predicting the meteorological event by using the trained model.
Finally, when the feature set (temperature, humidity, wind speed, air pressure and the like) of a meteorological sample is known but the meteorological event is unknown, the sample feature set of the meteorological event can be input into the trained model, and the meteorological result of the event can then be predicted. Specifically, when the weather condition of a meteorological event is predicted, the sample feature set of the event is input into the trained model; in each tree the sample features lead to one leaf node, and finally the sum of the weights of the leaf nodes reached in every tree is used as the predicted value of the meteorological event.
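The training and prediction steps above can be sketched as follows. This is a minimal gradient-boosting illustration in the spirit of XGBoost, not the patent's implementation: depth-1 trees (stumps), a single feature, squared-error residuals and the learning rate of 0.5 are all simplifying assumptions.

```python
# Minimal gradient-boosting sketch: each round fits a depth-1 "tree"
# (a decision stump) to the residuals of the ensemble built so far,
# and prediction is the sum of the leaf weights over all trees.

def fit_stump(xs, residuals):
    """Pick the split on a single feature that best reduces squared error."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        wl, wr = sum(left) / len(left), sum(right) / len(right)  # leaf weights
        err = sum((r - wl) ** 2 for r in left) + sum((r - wr) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, wl, wr)
    _, thr, wl, wr = best
    return (thr, wl, wr)

def predict(trees, x, lr=0.5):
    # sum of leaf weights across all trees, scaled by the learning rate
    return sum(lr * (wl if x <= thr else wr) for thr, wl, wr in trees)

def fit(xs, ys, n_rounds=20, lr=0.5):
    trees = []
    for _ in range(n_rounds):
        # residuals: what the already-built t-1 trees have not yet explained
        residuals = [y - predict(trees, x, lr) for x, y in zip(xs, ys)]
        trees.append(fit_stump(xs, residuals))
    return trees

# Toy data: a single feature (say, humidity) and a rain/no-rain label
xs = [10, 20, 30, 40, 70, 80, 90]
ys = [0, 0, 0, 0, 1, 1, 1]
trees = fit(xs, ys)
```

With this toy data the first stump already separates the two regimes, and later rounds shrink the remaining residual geometrically.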
In a specific embodiment of the present application, the sample feature set in the meteorological data may include other meteorological features besides temperature, humidity, wind speed, air pressure and the like; the sample feature set is not limited in the embodiment of the present application. The meteorological conditions may likewise be one or more of other conditions such as wind, cloud and snow, and the number of sample labels corresponding to a given meteorological condition is not limited.
The meteorological event prediction method provided in the embodiment of the application is applied to a meteorological prediction system comprising the data terminals of a plurality of weather stations; the weather stations train synchronously in parallel, their model training processes are identical, and joint training is performed without sharing data. When each weather station trains its local XGBoost model, it uses not only its local sample data but also the data of the other weather stations similar to its local sample data. The method achieves joint training without disclosing the sample data of any weather station, which solves the problem of data privacy disclosure among the weather station data; training the models with local samples and the similar samples of other weather stations makes the models more accurate and makes reasonable use of data and resources; and since all weather stations train their models synchronously, the calculation efficiency of the models is effectively improved.
The model training method provided by the embodiment of the application is described below by taking a single weather station as an example.
Since a large amount of sample data is required in the model training process, the structure of the sample data is described first.
The embodiment of the application relates to model training among a plurality of meteorological stations, each of which holds its own meteorological data. Let P_i denote the i-th weather station, i ∈ {1, 2, 3, …, M}, where M is the number of weather stations, and let x_q^(i) denote the sample data of sample q of the i-th weather station P_i, q ∈ {1, 2, 3, …, N_i}, where N_i is the number of samples of the i-th weather station P_i. The sample data x_q^(i) includes a sample feature set X_q^(i) and a sample label y_q^(i). The sample feature set X_q^(i) = (x_{q,1}^(i), x_{q,2}^(i), …, x_{q,T}^(i)) denotes the feature set (temperature, humidity, wind speed, air pressure and other meteorological data) corresponding to sample q of the i-th weather station P_i, where T is the number of sample features. The sample label y_q^(i) indicates the meteorological condition (no rain, light rain, moderate rain, heavy rain, rainstorm) of sample q of the i-th weather station P_i, with y_q^(i) ∈ {0, 1, 2, 3, 4}, where 0 indicates no rain, 1 indicates light rain, 2 indicates moderate rain, 3 indicates heavy rain, and 4 indicates rainstorm. The sample set I_i of the i-th weather station P_i can then be expressed as I_i = {x_q^(i)}, q = 1, …, N_i, and can also be expressed as I_i = {(X_q^(i), y_q^(i))}, q = 1, …, N_i. In addition, each sample data item corresponds to a sample identification (ID); for example, sample x_1^(i) is identified as 1, sample x_2^(i) is identified as 2, and so on.
In the specific embodiment of the application, the naming scheme of the sample identification corresponding to the sample data is not limited; the identification of each sample may be determined by each weather station itself or determined uniformly by all the weather stations participating in model training.
Fig. 2 shows a sample data structure of a data terminal provided in the embodiment of the present application, taking the first weather station P_1 as an example. The sample data table I_1 of the first weather station P_1 shown in Fig. 2 represents the first sample set of the first weather station P_1, I_1 = {(X_q^(1), y_q^(1))}, q = 1, …, N_1: sample data x_1^(1) has sample identification 1, sample data x_2^(1) has sample identification 2, sample data x_3^(1) has sample identification 3, …, and sample data x_{N_1}^(1) has sample identification N_1. The sample feature set X_q^(1) of sample x_q^(1) includes the sample features x_{q,1}^(1), x_{q,2}^(1), …, x_{q,T}^(1), and the sample label corresponding to the sample is y_q^(1). In the table shown in Fig. 2, the row corresponding to sample identification 1 is the sample data of sample 1 of the first weather station P_1, in which sample feature x_{1,1}^(1) has the value 12, sample feature x_{1,2}^(1) has the value 17, sample feature x_{1,3}^(1) has the value 10, …, sample feature x_{1,T}^(1) has the value 54, and the sample label y_1^(1) is 0; the row corresponding to sample identification 2 is the sample data of sample 2 of weather station P_1, and so on.
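The per-station sample structure described above can be represented in memory as sketched below. The field names are illustrative assumptions, and the second sample's values are borrowed from the example in S101; neither appears in the patent in this form.

```python
# Illustrative sketch of one station's sample set I_i: each sample has
# an ID, a feature vector (temperature, humidity, wind speed, air
# pressure, ...) and a label in {0, 1, 2, 3, 4}.

LABELS = {0: "no rain", 1: "light rain", 2: "moderate rain",
          3: "heavy rain", 4: "rainstorm"}

def make_sample(sample_id, features, label):
    assert label in LABELS
    return {"id": sample_id, "features": features, "label": label}

# Station P_1 with N_1 = 2 samples and T = 4 features each:
# the first row mirrors Fig. 2, the second the example of S101.
station_p1 = [
    make_sample(1, (12.0, 17.0, 10.0, 54.0), 0),
    make_sample(2, (29.0, 73.0, 27.0, 1009.0), 1),
]
```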
Taking the first weather station P_1 training its model as an example, the following introduces the model training process of the first terminal of the first weather station P_1 and the second terminal of the second weather station P_2. Fig. 3 shows a schematic flow chart of a model training process provided in an embodiment of the present application. Since the training process of each weather station's model is the same, only the training process of the first weather station P_1 is shown in Fig. 3. It should be understood that when M weather stations perform model training simultaneously, the M−1 weather stations from the second weather station P_2 to the M-th weather station P_M each also perform model training locally, and their training process is the same as that of the first weather station P_1.
S201: and the first terminal converts each sample in the first sample set into a hash value to obtain a first hash table corresponding to the first sample set.
The first sample set I_1 is the set of samples collected by the first weather station P_1 and includes N_1 samples. Each sample comprises a sample feature set and a sample label, where the sample feature set includes temperature, humidity, wind speed and air pressure, and the sample label indicates the meteorological condition. Before model training, the data needs to be encrypted: for each sample data x_q^(1) of the first weather station P_1, L hash values (δ_1(x_q^(1)), δ_2(x_q^(1)), …, δ_L(x_q^(1))) are generated according to L hash functions, where δ_{a,b}(v) is a given hash function, a is a d-dimensional random vector, v is the d-dimensional sample data, and b is a random number in [0, 1] set by each weather station itself; {δ_k}, k = 1, 2, …, L, denotes the L hash functions corresponding to different random vectors a and random numbers b. Each sample data item is thus mapped by the hash functions into character strings of fixed length. Fig. 4 shows a schematic diagram of the data structure obtained after the sample set of a data terminal provided in the embodiment of the present application is encrypted.
As shown in Fig. 4, after hash-function processing, the sample data x_1^(1) of the first weather station P_1 yields the L hash values (δ_1(x_1^(1)), …, δ_L(x_1^(1))), the sample data x_2^(1) yields the L hash values (δ_1(x_2^(1)), …, δ_L(x_2^(1))), and so on. Since the first weather station P_1 has N_1 samples in total, weather station P_1 obtains an N_1 × L first hash table.
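Step S201 can be sketched as follows. The patent names the hash family δ_{a,b}(v) with a d-dimensional random vector a and b ∈ [0, 1] but does not spell out the formula, so the random-projection form ⌊(a·v + b)/w⌋ used below (a common locality-sensitive hash) is an assumption, as are all parameter values.

```python
import random

def make_hash_functions(d, L, w=1.0, seed=0):
    """Build L hash functions delta_{a,b}: a is a d-dimensional random
    vector, b a random offset in [0, 1] (per the patent's description).
    The floor((a.v + b)/w) form is an assumption; the patent does not
    give the exact formula."""
    rng = random.Random(seed)
    funcs = [([rng.gauss(0, 1) for _ in range(d)], rng.uniform(0, 1))
             for _ in range(L)]

    def apply_all(v):
        # the L-value signature of one sample
        return tuple(
            int((sum(ai * vi for ai, vi in zip(a, v)) + b) // w)
            for a, b in funcs
        )
    return apply_all

# Build an N x L hash table for a station's samples (toy values):
# samples 1 and 2 are near-duplicates, sample 3 is very different.
hash_all = make_hash_functions(d=4, L=8)
samples = {1: (12.0, 17.0, 10.0, 54.0),
           2: (12.5, 17.2, 10.1, 54.3),
           3: (29.0, 73.0, 27.0, 1009.0)}
hash_table = {sid: hash_all(v) for sid, v in samples.items()}
```

Because the hash is locality-sensitive, the signatures of samples 1 and 2 land much closer together than those of samples 1 and 3, which is what the later matching step relies on.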
S202: and the second terminal converts each sample in the third sample set into a hash value to obtain a second hash table corresponding to the third sample set, and sends the second hash table to the first terminal.
The third sample set I_2 is the set of samples collected by the second weather station P_2. Each sample comprises a sample feature set and a sample label, where the sample feature set includes temperature, humidity, wind speed and air pressure, and the sample label indicates the meteorological condition; the second hash table includes the sample identification corresponding to each sample in the third sample set and the hash values corresponding to each sample. Similarly to the first terminal, for the third sample set I_2 of the second weather station P_2, the second terminal of the second weather station P_2 generates an N_2 × L second hash table according to the hash functions and sends the second hash table to the first weather station P_1.
It should be understood that when M terminals are involved in model training, the terminal of each weather station participating in model training performs the operation of the second terminal, generates the hash table corresponding to its samples, and sends the hash table to the first terminal; that is, each weather station P_i generates an N_i × L hash table and sends it to the first terminal, where the hash functions used by all the weather stations are the same.
S203: and the first terminal receives a second hash table sent by the second terminal, obtains a sample identification set according to the first hash table and the second hash table, and sends the sample identification set to the second terminal.
The second hash table comprises the sample identification corresponding to each sample in the third sample set and the hash values corresponding to each sample. According to the first hash table and the second hash table, the first terminal determines, for each sample in the first sample set, the sample identification of the most similar sample in the third sample set, thereby obtaining the sample identification set.
Specifically, a certain hash value in the first hash table corresponds to a certain sample in the first sample set; the first terminal searches the second hash table for the hash value closest to it and determines the sample identification corresponding to that closest hash value, the sample corresponding to the closest hash value being the sample most similar to that sample of the first sample set. When a sample identification has been determined for every sample represented in the first hash table, the set of all these identifications is the sample identification set.
When M terminals are involved in the model training, the sample identification set includes, for each weather station participating in model training, the identifications of its samples most similar to the samples of the first weather station P_1; each such sample identification corresponds to one sample of the first sample set of the first weather station P_1. According to the M−1 hash tables received from the other weather stations, the first weather station P_1 compares the hash tables of the other weather stations with the first hash table and determines, among the samples collected by each of the other weather stations, the samples most similar to its own; that is, it determines N_1 sample identifications from each weather station, where the sample data corresponding to each sample identification is the most similar to the sample data of one sample of the first weather station P_1. The sample identification set is then sent to the terminals of the other M−1 weather stations participating in the model training.
Since the first weather station P_1 has N_1 samples, for the sample data x_q^(1) of each sample q the most similar sample can be found at every other weather station P_i (i = 2, 3, 4, …, M); its sample identification is denoted ID_q^(i). When i = 1, the most similar sample is the sample of the first weather station P_1 itself. The most similar samples are obtained by comparing the L hash values of individual samples in the hash tables: since the hash values are character strings of fixed length, the most similar sample is found by comparing the magnitudes of the character strings, i.e., the two samples whose compared character strings are closest in magnitude are the most similar.
Fig. 5 schematically illustrates the data structure of the sample identification set determined by the first weather station P_1 according to the hash tables. In the table shown in Fig. 5, each row corresponds to one sample of the first weather station P_1 and lists the identifications of the most similar samples at the other weather stations, and each column corresponds to one weather station and lists the identifications of its samples most similar to the samples of the first weather station P_1. Taking the second weather station P_2 as an example, the column for P_2 in Fig. 5 contains the identifications of the samples of the second weather station P_2 most similar to the samples of the first weather station P_1; for example, the sample of the second weather station P_2 most similar to the sample data x_1^(1) of the first weather station P_1 is the sample with sample identification 45 at the second weather station P_2.
It should be noted that, since the similar samples of the first weather station P_1 are sought and the first weather station P_1 has N_1 samples, N_1 sample identifications are found at each of the other weather stations, in one-to-one correspondence with the samples of the first weather station P_1; that is, the first weather station P_1 obtains an N_1 × M sample identification set S_1. For example, if the first weather station P_1 has 100 samples and the second weather station P_2 has 150 samples, then for each of the 100 samples of the first weather station P_1 the identification of one most similar sample is determined from the second weather station P_2, and the 100 sample identifications so determined may be all different or partially the same. It can be seen that when seeking the similar samples of the i-th weather station P_i, the i-th weather station P_i obtains a sample identification set S_i ∈ N_i × M and broadcasts the sample identification set.
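The matching of step S203, where the first terminal finds, for each local sample, the sample at every remote station whose L hash values are closest, can be sketched as below. Using an L1 distance over the hash signatures as the notion of "closest" is an assumption, since the patent only speaks of comparing the magnitudes of fixed-length strings.

```python
def closest_sample(local_sig, remote_table):
    """Return the sample ID in remote_table whose L-value hash
    signature is closest to local_sig (L1 distance, an assumption)."""
    def dist(sig):
        return sum(abs(a - b) for a, b in zip(local_sig, sig))
    return min(remote_table, key=lambda sid: dist(remote_table[sid]))

def build_id_set(local_table, remote_tables):
    """For each local sample, the ID of its most similar sample at each
    remote station: an N_1 x (M-1) sample identification table."""
    return {
        lid: {station: closest_sample(sig, table)
              for station, table in remote_tables.items()}
        for lid, sig in local_table.items()
    }

# Tiny example with already-computed signatures (L = 3)
local = {1: (4, 7, 2), 2: (90, 12, 33)}
remote = {"P2": {44: (88, 13, 30), 45: (5, 7, 1)}}
ids = build_id_set(local, remote)
# local sample 1 matches remote sample 45, local sample 2 matches 44
```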
S204: and the first terminal calculates a first order gradient set and a second order gradient set of the loss function of the model to be trained according to each sample in the first sample set.
Wherein each gradient value in the first-order gradient set is calculated from one sample in the first sample set, and each gradient value in the second-order gradient set is calculated from one sample in the first sample set. The first-order gradient g_1q and the second-order gradient h_1q of a certain sample q of the first weather station P1 are calculated using the formulas

g_1q = l'(y_1q, ŷ_1q) and h_1q = l''(y_1q, ŷ_1q)

where y_1q denotes the sample label of sample q of the first weather station P1, and ŷ_1q denotes the predicted value of sample q, which is a given initial value when building the first tree since the model has not yet been trained at that point. The function l() denotes the loss function of the model to be trained, l'() denotes the first derivative of the loss function, and l''() denotes its second derivative. The sample q is any sample of the first sample set of the first weather station P1; that is, the first weather station P1 calculates a first-order gradient and a second-order gradient for each of its samples.
It will be appreciated that, when M terminals participate in the model training, the terminal of each weather station participating in the model training performs the above operation of the first weather station P1: calculating the first-order gradient set and the second-order gradient set of the loss function using its own sample set. The choice of the loss function is not limited by the present application; for example, the loss function may be logloss.
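Taking logloss as the example loss named above, the gradient sets of step S204 can be sketched as follows. This is a minimal illustration; the sigmoid link, the raw-score parameterization, and the initial prediction of 0 are assumptions, not part of the claimed method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logloss_grads(y, z):
    """Gradients of logloss with respect to the raw model output z,
    with p = sigmoid(z): g = l'(y, z) = p - y, h = l''(y, z) = p * (1 - p).
    These are the standard forms used by gradient-boosted trees."""
    p = sigmoid(z)
    return p - y, p * (1.0 - p)

# one gradient pair per sample of the first sample set
labels = [1, 0, 1, 1, 0]
init_pred = 0.0   # given initial value: no tree has been trained yet
grads = [logloss_grads(y, init_pred) for y in labels]
first_order = [g for g, _ in grads]    # the first-order gradient set
second_order = [h for _, h in grads]   # the second-order gradient set
```

With a zero initial score, p = 0.5 for every sample, so each first-order gradient is ±0.5 and each second-order gradient is 0.25.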
S205: the second terminal receives the sample identification set sent by the first terminal, determines a second sample set in a third sample set according to the sample identification set, calculates an aggregate first-order gradient set and an aggregate second-order gradient set of the loss function of the model to be trained according to each sample in the second sample set, and sends the aggregate first-order gradient set and the aggregate second-order gradient set to the first terminal.
Wherein the second sample set includes the samples determined in the third sample set to be most similar to each of the samples in the first sample set. The second terminal of the second weather station P2 receives the sample identification set of the first weather station P1; each sample identification in the identification set indicates one sample in the third sample set, and the second terminal finds the second sample set from the third sample set according to the sample identification set. The aggregate first-order gradient set and the aggregate second-order gradient set of the loss function over the second sample set are likewise calculated using the formulas

g_2q = l'(y_2q, ŷ_2q) and h_2q = l''(y_2q, ŷ_2q)

and sent to the first weather station P1, where y_2q denotes the sample label of sample q of the second weather station P2, and ŷ_2q denotes the predicted value of sample q, which is a given initial value when building the first tree since no model has been trained at that point; the function l() denotes the loss function of the model to be trained, l'() denotes the first derivative of the loss function, and l''() denotes the second derivative of the loss function. Since the first weather station P1 has N1 samples in total, the second terminal calculates N1 aggregate first-order gradients and N1 aggregate second-order gradients in total. It should be understood that the N1 aggregate first-order gradients may be all different or partially the same; for the same reason, the N1 aggregate second-order gradients may be all different or partially the same.
It should be understood that when M terminals are involved in model training, the terminal of each weather station involved in model training performs the operations of the second terminal, i.e., finding a similar sample set according to the received sample identification set, calculating an aggregate first-order gradient set and an aggregate second-order gradient set of the similar sample set, and sending the aggregate first-order gradient set and the aggregate second-order gradient set to the first weather station.
For example, if the weather stations participating in the model training further include a third weather station P3, a fourth weather station P4, ..., and an M-th weather station PM, then the terminal of the third weather station finds, according to the sample identification set sent by the first weather station, the samples in the third weather station similar to those of the first weather station, calculates the gradient values of these similar samples and sends them to the first weather station; the terminal of the fourth weather station likewise finds the samples in the fourth weather station similar to those of the first weather station according to the sample identification set sent by the first weather station, calculates the gradient values of the similar samples and sends them to the first weather station; and so on. Thus, in addition to the gradient values of the first sample set, the first weather station obtains the gradient values of the samples similar to the first sample set in all the other weather stations participating in the model training.
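The operation each responding terminal performs in S205 — looking up the identified similar samples in its own data and returning one gradient pair per identification — can be sketched as follows. The data layout, the function names, and the squared-error loss used in the demonstration are illustrative assumptions.

```python
def second_terminal_response(id_set, third_sample_set, grad_fn):
    """Given the broadcast sample-identification set, look up each identified
    sample in this station's own data (the third sample set) and return one
    (g, h) pair per identification: the aggregate first-/second-order
    gradient sets to send back to the first terminal."""
    out = []
    for sid in id_set:
        y, z = third_sample_set[sid]   # (label, current prediction)
        out.append(grad_fn(y, z))
    return out

def sq_grads(y, z):
    # squared-error loss l = (y - z)^2 / 2: l' = z - y, l'' = 1
    return z - y, 1.0

# station P2's own samples, keyed by sample identification
third = {45: (0.7, 0.0), 12: (0.3, 0.0), 8: (1.0, 0.0)}
reply = second_terminal_response([45, 8, 45], third, sq_grads)
# one aggregate gradient pair per received identification; repeated
# identifications are allowed, so entries may coincide
```

Only the gradient pairs in `reply` leave the station; the raw samples in `third` never do, which is the data-protection point of the scheme.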
S206: and the first terminal receives the aggregation first-order gradient set and the aggregation second-order gradient set which are obtained by calculation according to the second sample set and sent by the second terminal, and trains the model to be trained according to the first-order gradient set, the aggregation first-order gradient set, the second-order gradient set, the aggregation second-order gradient set and the first sample set to obtain the trained model.
(1) And the first terminal updates the first-order gradient set of the first sample set according to the aggregate first-order gradient set and the first-order gradient set to obtain a first-order sample gradient set, and updates the second-order gradient set of the first sample set according to the aggregate second-order gradient set and the second-order gradient set to obtain a second-order sample gradient set.
When the weather stations participating in the model training include only the first weather station P1 and the second weather station P2, then for a certain sample in the first sample set, the first terminal takes the sum of the first-order gradient of the loss function of that sample and the aggregate first-order gradient corresponding to that sample at the second weather station P2 as the first-order sample gradient of that sample, and takes the sum of the second-order gradient of the loss function of that sample and the aggregate second-order gradient corresponding to the most similar sample at the second weather station P2 as the second-order sample gradient of that sample. The first terminal thereby obtains the first-order sample gradient set and the second-order sample gradient set of the first sample set of the first weather station P1.
It will be appreciated that, when M terminals participate in the model training, the first terminal of the first weather station P1 receives the aggregate first-order gradient sets and the aggregate second-order gradient sets sent by the terminals of the other weather stations. For any sample q of the first weather station P1, the first weather station P1 obtains the aggregate first-order gradients g_2q, g_3q, ..., g_Mq and the aggregate second-order gradients h_2q, h_3q, ..., h_Mq. The first terminal of the first weather station P1 then updates the first-order sample gradient G_1q of sample q according to

G_1q = g_1q + g_2q + ... + g_Mq

and updates the second-order sample gradient H_1q of sample q according to

H_1q = h_1q + h_2q + ... + h_Mq.

That is, the first weather station P1 updates the first-order gradient and the second-order gradient of each of its samples in preparation for training the model.
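The gradient update of step (1) — summing each sample's own gradients with the aggregates received from every other station — can be sketched as follows. The numeric values are made up for illustration.

```python
def update_sample_gradients(own_g, own_h, aggregates):
    """Sum each sample's own (g, h) with the aggregate gradients received
    for that sample from every other station:
    G_q = g_q + sum over stations of g_iq, and likewise for H_q."""
    G = list(own_g)
    H = list(own_h)
    for agg_g, agg_h in aggregates:          # one (g-list, h-list) per other station
        G = [a + b for a, b in zip(G, agg_g)]
        H = [a + b for a, b in zip(H, agg_h)]
    return G, H

g1 = [-0.5, 0.5]                   # P1's own first-order gradients
h1 = [0.25, 0.25]                  # P1's own second-order gradients
from_p2 = ([-0.4, 0.3], [0.24, 0.21])   # aggregates received from P2
from_p3 = ([-0.2, 0.1], [0.16, 0.09])   # aggregates received from P3
G, H = update_sample_gradients(g1, h1, [from_p2, from_p3])
# G ≈ [-1.1, 0.9], H ≈ [0.65, 0.55]
```

The resulting `G` and `H` are the first-order and second-order sample gradient sets used when building the trees.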
(2) The first terminal trains the XGBOOST model using the first sample set together with the first-order sample gradient set and the second-order sample gradient set, thereby obtaining the prediction model for the first weather station P1.

In the process of training the weather prediction model of the first weather station P1, the first-order sample gradient set and the second-order sample gradient set of the first sample set are used as the first-order gradient set and the second-order gradient set during training, and the first gradient tree of the model is trained.
The process of training the model is a process of continuously constructing the regression tree. Specifically, when constructing the first tree, splitting must be performed at the root node: the first sample set is divided at that node into the two sets of the left child node and the right child node, G_L, G_R, H_L and H_R of the two sets are calculated using the sample gradient values of the samples, and the gain is then calculated using the formula

Gain = (1/2) [ G_L² / (H_L + λ) + G_R² / (H_R + λ) − (G_L + G_R)² / (H_L + H_R + λ) ] − γ

with the split point of maximum gain value Gain taken as the criterion for judging the optimal split point.

Wherein G_L denotes the sum of the first-order sample gradients of the set of sample points in the left child node after the split, G_R denotes the sum of the first-order sample gradients of the set of sample points in the right child node after the split, H_L denotes the sum of the second-order sample gradients of the set of sample points in the left child node after the split, H_R denotes the sum of the second-order sample gradients of the set of sample points in the right child node after the split, λ is the regularization coefficient, and γ is the complexity penalty for adding a leaf node.
Specifically, the candidate split points are determined using the sample feature set of the sample data; for each candidate split point, a partition is determined and the first sample set is divided into a left child node set and a right child node set, and the gains under the different split points are calculated in turn.
For example, for a certain sample feature, if the sample data of the weather station P1 contains the data values {12, 15, 20, 30, 35} for that feature, the gain Gain is calculated with 12, 15, 20, 30 and 35 as the split point in turn. The next sample feature is then traversed in the same way and its gains are calculated, and so on.
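The enumeration of candidate split points and the gain calculation can be sketched as follows, assuming the standard XGBoost-style gain with regularization λ (`lam`) and leaf penalty γ (`gamma`); the function names and the example values are illustrative.

```python
def split_gain(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    """XGBoost-style split gain (lam = regularization, gamma = leaf penalty)."""
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR) - score(GL + GR, HL + HR)) - gamma

def best_split(values, G, H, lam=1.0):
    """Try every observed feature value as the split point (samples with
    value < s go left) and return the (point, gain) pair of maximum gain."""
    best = None
    for s in sorted(set(values)):
        left = [i for i, v in enumerate(values) if v < s]
        right = [i for i, v in enumerate(values) if v >= s]
        if not left or not right:
            continue   # a split must leave samples on both sides
        gain = split_gain(sum(G[i] for i in left), sum(H[i] for i in left),
                          sum(G[i] for i in right), sum(H[i] for i in right),
                          lam)
        if best is None or gain > best[1]:
            best = (s, gain)
    return best

values = [12, 15, 20, 30, 35]          # feature values at station P1, as in the text
G = [-1.1, 0.9, -0.7, 1.2, 1.0]        # first-order sample gradients (illustrative)
H = [0.65, 0.55, 0.6, 0.7, 0.7]        # second-order sample gradients (illustrative)
point, gain = best_split(values, G, H)
```

Repeating `best_split` over every feature, and keeping the overall maximum, yields the optimal split for the node.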
The split point with the maximum gain Gain is taken as the root node, yielding the sample sets of the split left and right child nodes. Whether to continue splitting is then judged according to the depth of the tree; if only one sample remains in a split child node, that node does not need to be split again and becomes a leaf node, and the weight of the leaf node is calculated according to the formula

w_i = − G_i / (H_i + λ)

where G_i is the sum of the first-order sample gradient statistics of all samples falling in leaf i, H_i is the sum of the second-order sample gradient statistics of all samples falling in leaf i, and λ is the regularization coefficient.
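The leaf-weight formula can be sketched directly; the numeric values are illustrative.

```python
def leaf_weight(G_leaf, H_leaf, lam=1.0):
    """Optimal leaf weight w = -G / (H + lam), where G and H are the sums of
    the first- and second-order sample gradients of all samples falling in
    the leaf, and lam is the regularization coefficient."""
    return -G_leaf / (H_leaf + lam)

w = leaf_weight(G_leaf=-1.8, H_leaf=1.25)   # w ≈ 0.8
```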
If the tree depth is not reached, the same splitting operation is continued on the left child node and the right child node, i.e. each child node is treated as a root node, and the process is repeated.
If the depth of the tree is reached, the nodes of the tree can no longer be split; the weights w_i of the leaf nodes are calculated, and the first gradient tree is thereby trained.
In the process of constructing the t-th (t > 1) tree, the procedure is exactly the same as for the previous t − 1 trees, but the input parameters of the tree are no longer the initial input parameters G_1q and H_1q used for the first tree. Since the t-th tree is fitted on the basis of the first t − 1 trees, the first-order gradient g_i^(t) and the second-order gradient h_i^(t) require the predicted value ŷ_i^(t−1) of the model formed by the first t − 1 trees on the i-th training sample. The gain Gain is then calculated at the split points as before, and finally the optimal split points and the optimal weights required for constructing the gradient tree of the current round are determined.
The XGBOOST model training is completed when all sample features in the sample feature set are used in the construction of the model.
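The round-by-round fitting described above can be sketched as follows: before fitting tree t, the gradients are re-evaluated at the prediction of the model formed by the first t − 1 trees. This is a minimal illustration using a squared-error loss and stub trees; all names are assumptions.

```python
def boosted_prediction(trees, x, base=0.0):
    """Raw prediction of the model formed by the trees built so far:
    the base value plus the leaf weight each tree assigns to sample x."""
    return base + sum(tree(x) for tree in trees)

def round_gradients(samples, trees, grad_fn, base=0.0):
    """Gradients for the next round, evaluated at the current prediction."""
    return [grad_fn(y, boosted_prediction(trees, x, base))
            for x, y in samples]

def sq_grads(y, z):
    # squared-error loss l = (y - z)^2 / 2: l' = z - y, l'' = 1
    return z - y, 1.0

# one stub tree standing in for the first trained gradient tree
tree1 = lambda x: 0.5 if x[0] < 20 else -0.2
samples = [((12,), 1.0), ((30,), 0.0)]
g_h = round_gradients(samples, [tree1], sq_grads)
# for the first sample the prediction after one tree is 0.5, so g = -0.5
```

Each subsequent tree is then fitted to these refreshed (g, h) pairs, exactly as the first tree was fitted to the initial ones.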
It should be understood that, when M terminals participate in the model training, the terminals of the weather stations participating in the model training all perform the above operation of the first terminal of the first weather station P1, and likewise all perform the above operation of the second terminal of the second weather station P2. It can be seen that each weather station not only uses the sample data of the other weather stations to train the model, but also obtains a prediction model for itself.
S207: and the first terminal predicts the sample to be predicted based on the trained model and determines the prediction result of the sample to be predicted.
After the first terminal obtains the weather event prediction model through training, it can use a sample to be predicted to predict the weather condition of a certain event. The sample feature set of the sample to be predicted is substituted into the trained regression trees; each sample feature finally falls on a leaf node of each regression tree, the weight values of the leaf nodes obtained from all the trees are added to obtain the weather prediction value of the event, and the sample label value closest to this result is determined; the weather condition (no rain, light rain, moderate rain, heavy rain or torrential rain) corresponding to that sample label is the prediction result of the sample to be predicted.
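The prediction step — summing the leaf weights across trees and snapping to the nearest label value — can be sketched as follows. The label encoding 0–4 for the five conditions and the stub trees are assumptions for illustration.

```python
def predict_condition(trees, x, label_values, conditions, base=0.0):
    """Sum the leaf weights every trained tree assigns to the sample, then
    map the result to the weather condition whose label value is closest."""
    score = base + sum(tree(x) for tree in trees)
    nearest = min(range(len(label_values)),
                  key=lambda i: abs(label_values[i] - score))
    return conditions[nearest]

# assumed encoding of the five conditions as label values 0..4
conditions = ["no rain", "light rain", "moderate rain", "heavy rain", "torrential rain"]
labels = [0, 1, 2, 3, 4]
# stub trees standing in for the trained regression trees
trees = [lambda x: 1.2 if x["humidity"] > 0.8 else 0.1,
         lambda x: 0.9 if x["pressure"] < 1000 else -0.1]
result = predict_condition(trees, {"humidity": 0.9, "pressure": 995},
                           labels, conditions)
# score = 1.2 + 0.9 = 2.1, nearest label value 2 -> "moderate rain"
```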
It can be seen from the above method that, using the idea of federated learning, samples similar to the samples of the model training party are sought in the sample data of the model training participants to enlarge the training sample set, so that a more accurate model is constructed. At the same time, after the similar samples are found, the sample data is not sent directly to the model training party; instead, the gradient values of the loss function of the similar samples are sent, thereby avoiding the problem of data leakage.
Fig. 6 is a schematic diagram of a weather event prediction device capable of performing the operations of the first terminal and the second terminal according to an embodiment of the present application, where the weather event prediction device 100 includes a receiving unit 101, a processing unit 102, and a transmitting unit 103. Wherein, when the weather event prediction apparatus 100 performs the operation of the first terminal:
a receiving unit 101, configured to receive an aggregate first-order gradient set and an aggregate second-order gradient set sent by a second terminal and calculated according to a second sample set, where the second sample set includes samples similar to each sample in a first sample set; and receiving a second hash table sent by a second terminal, wherein the second hash table comprises a sample identifier corresponding to each sample in a third sample set and a hash value corresponding to each sample.
A processing unit 102, configured to calculate, according to each sample in a first sample set, a first order gradient set and a second order gradient set of a loss function of a model to be trained, where one gradient value in the first order gradient set is calculated according to one sample in the first sample set, and one gradient value in the second order gradient set is calculated according to one sample in the first sample set, and the first sample set is a set of samples acquired by a first weather station; training a model to be trained according to the first-order gradient set, the aggregation first-order gradient set, the second-order gradient set, the aggregation second-order gradient set and the first sample set to obtain a trained model; converting each sample in the first sample set into a hash value to obtain a first hash table corresponding to the first sample set; and determining a sample identifier corresponding to the sample most similar to each sample of the first sample set in a third sample set according to the first hash table and the second hash table to obtain a sample identifier set.
A sending unit 103, configured to send the sample identifier set to the second terminal, so that the second terminal determines the second sample set according to the sample identifiers in the sample identifier set.
When the weather event prediction apparatus 100 performs the operation of the second terminal:
a receiving unit 101, configured to receive a sample identifier set sent by a first terminal, where each sample identifier in the sample identifier set indicates one sample in a third sample set.
The processing unit 102 is configured to convert each sample in the third sample set into a hash value, so as to obtain a second hash table corresponding to the third sample set; and determining a second sample set in the third sample set according to the identification set, calculating an aggregation first-order gradient set and an aggregation second-order gradient set of the loss function of the model to be trained according to each sample in the second sample set, and sending the aggregation first-order gradient set and the aggregation second-order gradient set to the first terminal.
The sending unit 103 is configured to send a second hash table to the first terminal, where the second hash table includes a sample identifier corresponding to each sample in a third sample set and a hash value corresponding to each sample, and the third sample set is a sample acquired by the second weather station.
Specifically, the weather event prediction device 100 may refer to the relevant operations of the first terminal in the above method embodiments to implement prediction of weather events, and will not be described in detail herein.
Fig. 7 is a schematic structural diagram of a computing device provided in an embodiment of the present application, where the computing device 200 includes: a processor 210, a communication interface 220, and a memory 230, wherein the processor 210, the communication interface 220, and the memory 230 are connected to each other via a bus 240, and the processor 210 is configured to execute instructions stored in the memory 230. The memory 230 stores program code, and the processor 210 may call the program code stored in the memory 230 to perform the following operations:
the meteorological event prediction device calculates a first-order gradient set and a second-order gradient set of a loss function of a model to be trained according to each sample in a first sample set, wherein one gradient value in the first-order gradient set is calculated according to one sample in the first sample set, one gradient value in the second-order gradient set is calculated according to one sample in the first sample set, and the first sample set is a set of samples collected by a first meteorological station; receiving an aggregation first-order gradient set and an aggregation second-order gradient set which are sent by a second terminal and calculated according to a second sample set, wherein the second sample set comprises samples similar to each sample in the first sample set; training the model to be trained according to the first-order gradient set, the aggregation first-order gradient set, the second-order gradient set, the aggregation second-order gradient set and the first sample set to obtain a trained model; and predicting the sample to be predicted based on the trained model, and determining the prediction result of the sample to be predicted.
In this embodiment of the present disclosure, the processor 210 may have a variety of specific implementation forms, for example, the processor 210 may be a combination of any one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Tensor Processing Unit (TPU), a neural Network Processing Unit (NPU), and the like, and the processor 210 may also be a single-core processor or a multi-core processor. The processor 210 may be a combination of a CPU (GPU, TPU, or NPU) and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof. The processor 210 may also be implemented by a logic device with built-in processing logic, such as an FPGA or a Digital Signal Processor (DSP).
The communication interface 220 may be a wired interface or a wireless interface, and is used for communicating with other modules or devices, the wired interface may be an ethernet interface, a Controller Area Network (CAN) interface or a Local Interconnect Network (LIN) interface, and the wireless interface may be a cellular network interface or a wireless lan interface.
The memory 230 may be a non-volatile memory, such as a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Memory 230 may also be volatile memory, which may be Random Access Memory (RAM), that acts as external cache memory.
The memory 230 may also be used to store instructions and data, so that the processor 210 can invoke the instructions stored in the memory 230 to implement the operations performed by the processing unit 102 described above or the operations performed by the weather event prediction device in the method embodiments described above. Moreover, the computing device 200 may contain more or fewer components than shown in FIG. 7, or have a different arrangement of components.
The bus 240 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this does not mean that there is only one bus or one type of bus.
Optionally, the computing device 200 may further include an input/output interface 250, and the input/output interface 250 is connected with an input/output device for receiving input information and outputting an operation result.
It should be understood that the computing device 200 of the embodiment of the present application may correspond to the data processing apparatus 100 in the above embodiment, and may perform the operations performed by the weather event prediction apparatus in the above method embodiment, which are not described herein again.
The embodiments of the present application further provide a non-transitory computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are run on a processor, the method steps in the foregoing method embodiments may be implemented, and specific implementation of the processor of the computer storage medium in executing the method steps may refer to specific operations in the foregoing method embodiments, and details are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded or executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium, or a semiconductor medium, which may be a solid state disk.
The foregoing is only illustrative of the present application. Those skilled in the art can conceive of changes or substitutions based on the specific embodiments provided in the present application, and all such changes or substitutions are intended to be included within the scope of the present application.

Claims (10)

1. A weather event prediction method applied to a weather prediction system including a first terminal located at a first weather station and a second terminal located at a second weather station, the method comprising:
the first terminal calculates a first order gradient set and a second order gradient set of a loss function of the model to be trained according to each sample in a first sample set, wherein one gradient value in the first order gradient set is calculated according to one sample in the first sample set, one gradient value in the second order gradient set is calculated according to one sample in the first sample set, and the first sample set is a set of samples collected by a first meteorological station;
the first terminal receives an aggregation first-order gradient set and an aggregation second-order gradient set which are sent by a second terminal and calculated according to a second sample set, wherein the second sample set comprises samples similar to each sample in the first sample set;
the first terminal trains the model to be trained according to the first-order gradient set, the aggregation first-order gradient set, the second-order gradient set, the aggregation second-order gradient set and the first sample set to obtain a trained model;
and the first terminal predicts the sample to be predicted based on the trained model and determines the prediction result of the sample to be predicted.
2. The method of claim 1, wherein each sample comprises a sample feature set including temperature, humidity, wind speed and air pressure, and a sample label indicative of a meteorological condition.
3. The method of claim 1 or 2, wherein the second set of samples comprises samples determined to be most similar to each sample in the first set of samples in a third set of samples, the third set of samples being samples acquired by the second weather station.
4. The method of claim 3, wherein before the first terminal receives the aggregate first-order gradient set and the aggregate second-order gradient set calculated according to the second sample set sent by the second terminal, further comprising:
the first terminal converts each sample in the first sample set into a hash value to obtain a first hash table corresponding to the first sample set;
the first terminal receives a second hash table sent by the second terminal, wherein the second hash table comprises an identifier corresponding to each sample in the third sample set and a hash value corresponding to each sample;
the first terminal determines a sample identifier corresponding to a sample most similar to each sample of the first sample set in the third sample set according to the first hash table and the second hash table to obtain a sample identifier set;
and the first terminal sends the sample identification set to the second terminal so that the second terminal determines the second sample set according to the sample identification in the sample identification set.
5. A weather event prediction method applied to a weather prediction system including a first terminal located at a first weather station and a second terminal located at a second weather station, the method comprising:
the second terminal sends a second hash table to the first terminal, the second hash table comprises a sample identifier corresponding to each sample in the third sample set and a hash value corresponding to each sample, and the third sample set is a sample collected by the second weather station;
the second terminal receives a sample identification set sent by the first terminal, wherein each sample identification in the identification set indicates one sample in the third sample set;
and the second terminal determines a second sample set in the third sample set according to the sample identification set, calculates an aggregation first-order gradient set and an aggregation second-order gradient set of the loss function of the model to be trained according to each sample in the second sample set, and sends the aggregation first-order gradient set and the aggregation second-order gradient set to the first terminal.
6. The method of claim 5, wherein before the second terminal sends the second hash table to the first terminal, further comprising: and the second terminal converts each sample in the third sample set into a hash value to obtain a second hash table corresponding to the third sample set.
7. The method of claim 5 or 6, wherein each sample comprises a sample feature set including temperature, humidity, wind speed and air pressure, and a sample label indicative of a meteorological condition.
8. A weather event prediction device, the device comprising: one or more functional modules for performing the method of any one of claims 1 to 4; or for performing the method of any one of claims 5 to 7.
9. A computer device, comprising a processor and a memory, the memory for storing instructions, the processor for executing the instructions, the processor when executing the instructions performing the method of any of claims 1 to 4; or to perform a method according to any of claims 5 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1 to 4; or to perform a method according to any of claims 5 to 7.
CN202011312818.9A 2020-11-20 2020-11-20 Meteorological event prediction method and device and related equipment Active CN112381307B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011312818.9A CN112381307B (en) 2020-11-20 2020-11-20 Meteorological event prediction method and device and related equipment
PCT/CN2021/083026 WO2021203980A1 (en) 2020-11-20 2021-03-25 Meteorological event prediction method and apparatus, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011312818.9A CN112381307B (en) 2020-11-20 2020-11-20 Meteorological event prediction method and device and related equipment

Publications (2)

Publication Number Publication Date
CN112381307A true CN112381307A (en) 2021-02-19
CN112381307B CN112381307B (en) 2023-12-22

Family

ID=74584503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011312818.9A Active CN112381307B (en) 2020-11-20 2020-11-20 Meteorological event prediction method and device and related equipment

Country Status (2)

Country Link
CN (1) CN112381307B (en)
WO (1) WO2021203980A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114239862A * 2021-12-23 2022-03-25 电子科技大学 Federated learning method for protecting user data privacy against Byzantine attacks
CN114091624B (en) * 2022-01-18 2022-04-26 蓝象智联(杭州)科技有限公司 Federal gradient lifting decision tree model training method without third party
CN114626458B * 2022-03-15 2022-10-21 中科三清科技有限公司 Method and device for identifying the rear part of a high-pressure system, storage medium and terminal
CN115794981B * 2022-12-14 2023-09-26 广西电网有限责任公司 Method and system for statistical analysis of meteorological data using a model

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109472283A * 2018-09-13 2019-03-15 中国科学院计算机网络信息中心 Hazardous weather event prediction method and apparatus based on a multiple incremental regression tree model
CN109783682A * 2019-01-19 2019-05-21 北京工业大学 Deep non-relaxed hashing image retrieval method based on point-pair similarity
WO2020029590A1 (en) * 2018-08-10 2020-02-13 深圳前海微众银行股份有限公司 Sample prediction method and device based on federated training, and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10345485B2 (en) * 2015-10-07 2019-07-09 Forensic Weather Consultants, Llc Forensic weather system
CN111144576A (en) * 2019-12-13 2020-05-12 支付宝(杭州)信息技术有限公司 Model training method and device and electronic equipment
CN111695697B (en) * 2020-06-12 2023-09-08 深圳前海微众银行股份有限公司 Multiparty joint decision tree construction method, equipment and readable storage medium
CN112381307B (en) * 2020-11-20 2023-12-22 平安科技(深圳)有限公司 Meteorological event prediction method and device and related equipment

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2021203980A1 (en) * 2020-11-20 2021-10-14 平安科技(深圳)有限公司 Meteorological event prediction method and apparatus, and related device
CN113722739A (en) * 2021-09-06 2021-11-30 京东科技控股股份有限公司 Gradient lifting tree model generation method and device, electronic equipment and storage medium
CN113722739B (en) * 2021-09-06 2024-04-09 京东科技控股股份有限公司 Gradient lifting tree model generation method and device, electronic equipment and storage medium
CN113762805A (en) * 2021-09-23 2021-12-07 国网湖南省电力有限公司 Mountain forest fire early warning method applied to power transmission line

Also Published As

Publication number Publication date
WO2021203980A1 (en) 2021-10-14
CN112381307B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN112381307A (en) Meteorological event prediction method and device and related equipment
CN110837602B (en) User recommendation method based on representation learning and multi-mode convolutional neural network
CN111461410B (en) Air quality prediction method and device based on transfer learning
CN110636445B (en) WIFI-based indoor positioning method, device, equipment and medium
CN116032663A (en) Privacy data processing system, method, equipment and medium based on edge equipment
CN114462577A (en) Federated learning system, method, computer equipment and storage medium
CN108446712B (en) ODN network intelligent planning method, device and system
CN114584406B (en) Industrial big data privacy protection system and method for federated learning
Liu et al. A SDN-based intelligent prediction approach to power traffic identification and monitoring for smart network access
CN111460277B (en) Personalized recommendation method based on mobile social network tree-shaped transmission path
CN117829307A (en) Federal learning method and system for data heterogeneity
CN113541986B (en) Fault prediction method and device for 5G slice and computing equipment
CN112231481A (en) Website classification method and device, computer equipment and storage medium
CN115002031B (en) Federal learning network flow classification model training method, model and classification method based on unbalanced data distribution
CN116244484A (en) Federal cross-modal retrieval method and system for unbalanced data
CN114492849B (en) Model updating method and device based on federal learning
CN115913992A (en) Anonymous network traffic classification method based on small sample machine learning
CN115099875A (en) Data classification method based on decision tree model and related equipment
WO2019114481A1 (en) Cluster type recognition method, apparatus, electronic apparatus, and storage medium
Kalam et al. 5g traffic forecasting using federated learning
Wu [Retracted] Virtual Simulation Management of Data Traffic Optimization of Big Data Cloud Platform considering Multipoint Mapping Algorithm
CN115361032B (en) Antenna unit for 5G communication
CN116401071B (en) Resource allocation method and system for edge calculation
CN116991337A (en) Cloud storage method and device for educational resources of remote educational system
CN112118278B (en) Computing node access method, device, electronic equipment and machine-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant