CN106022521A - Hadoop framework-based short-term load prediction method for distributed BP neural network - Google Patents

Hadoop framework-based short-term load prediction method for distributed BP neural network

Info

Publication number
CN106022521A
CN106022521A
Authority
CN
China
Prior art keywords
distributed
neural network
threshold value
layer
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610334920.6A
Other languages
Chinese (zh)
Other versions
CN106022521B (en
Inventor
刘天琪
苏学能
焦慧明
何川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201610334920.6A priority Critical patent/CN106022521B/en
Publication of CN106022521A publication Critical patent/CN106022521A/en
Application granted granted Critical
Publication of CN106022521B publication Critical patent/CN106022521B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a Hadoop framework-based short-term load forecasting method for a distributed BP (back-propagation) neural network. The method comprises the steps of: obtaining an initial load data set; splitting the load data set into small data sets and storing them on the data nodes of a distributed file system; initializing the BP neural network parameters and uploading the parameter set to the distributed file system; training the BP neural network on the current load sample to obtain the corrections of the network's weights and thresholds under the current data set; summing the weight and threshold parameters of all layers, and between layers, according to the key of each key-value pair; checking whether the current iteration has reached the convergence precision or the maximum iteration count and, if so, building the distributed BP neural network model, otherwise correcting the network's weight and threshold parameters; and feeding in the forecast-day data to obtain the forecast day's load power data. The method speeds up load forecasting while meeting the required forecasting accuracy.

Description

Short-term load forecasting method for a distributed BP neural network based on the Hadoop framework
Technical field
The present invention relates to the technical field of short-term load forecasting where power systems meet big data, and in particular to a short-term load forecasting method for a distributed BP neural network based on the Hadoop framework.
Background technology
Power load forecasting is of great significance for ensuring reliable and economical power system planning and operation. With the steady progress of modern technology and the deepening of the smart grid, load forecasting theory and technique have developed greatly. Over the years, load forecasting methods and theories have continued to emerge: techniques such as time-series methods, fuzzy theory, regression analysis, support vector regression, Bayesian methods, and neural networks have provided good technical support for power load forecasting.
Existing algorithms still have limitations. Time-series methods demand high accuracy of the historical data and are insensitive to weather conditions in short-term forecasting, so they struggle with the loss of accuracy caused by meteorological factors. Regression analysis describes the quantitative relation among observed variables from the viewpoint of statistical averages, but is strongly constrained by the scale of the load data. Support vector regression generalizes well, but the search for the penalty coefficient c, the ε of the loss function, and the γ of the kernel function can make training extremely time-consuming, especially on large training sets.
With the emergence of massive smart-grid data, existing forecasting algorithms cannot meet the requirements on forecasting speed and accuracy, so a new method suited to massive-data analysis must be found. The BP neural network has strong nonlinear-mapping, self-learning, and fault-tolerance abilities, and on small load data sets it offers high forecasting accuracy and fast training. However, because the algorithm runs one round of training for every load input-output sequence in order to compute the corrections of each layer's weights and thresholds, the computation becomes enormous when the data volume is large: serial training on a single machine may take several hours. Short-term load forecasting on the basis of massive data therefore remains a problem demanding a prompt solution.
Summary of the invention
The technical problem to be solved by the present invention is to provide a short-term load forecasting method for a distributed BP neural network based on the Hadoop framework. Built on a Hadoop cluster platform, the method exploits the powerful nonlinear mapping ability of the BP neural network, speeds up load forecasting, and meets the required forecasting accuracy.
To solve the above technical problem, the technical solution adopted by the present invention is:
A short-term load forecasting method for a distributed BP neural network based on the Hadoop framework, comprising the following steps:
Step 1: obtain the initial load data set;
Step 2: following the MapReduce model of the Hadoop framework, split the load data set into small data sets and store them on the data nodes of the distributed file system; each small data set is represented by key-value pairs <key, value>, where the key is the offset of the line's first character relative to the start of the text and the value parses into the current weights and thresholds of each layer of the BP neural network;
Step 3: initialize the BP neural network parameters, comprising the numbers of input, hidden, and output layers, the weights between the input and hidden layers, the thresholds of the hidden-layer neurons, the weights between the hidden and output layers, and the thresholds of the output-layer neurons, and upload the parameter set to the distributed file system;
Step 4: in the Map phase, read the parameters (weights and thresholds) from the distributed file system and restore the BP neural network at the start of each subtask; using the data distributed to the subtask, perform the forward propagation of the input signal and the back-propagation of the error signal to obtain the corrections of the BP network's weights and thresholds under the current data set, and pass them on in key-value form as the input of the Reduce phase;
Step 5: in the Reduce phase, after the BP neural network has been trained on all data sets, use the key of each key-value pair <key, value>, which identifies the weight or threshold of an input-, hidden-, or output-layer neuron, to accumulate the influence of all load samples' training on each neuron's weights and thresholds, and write the result to the distributed file system;
Step 6: check whether the current iteration has reached the convergence precision or the maximum iteration count; if so, build the distributed BP neural network model from the layer counts of the BP network (input, hidden, and output) and the weight and threshold parameters in the distributed file system; if not, correct the network's weight and threshold parameters;
Step 7: feed the forecast-day data into the distributed BP neural network model for prediction, obtaining the forecast day's load power data.
According to the above scheme, in step 4 the corrections of the BP network's weights and thresholds under the current data set are computed as:

$$\Delta w'_{ki}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ki}(\tau+1) + \rho\,\Delta w_{ki}(\tau)$$
$$\Delta'\alpha_{k}(\tau+1) = (1-\rho)\,\eta\,\Delta \alpha_{k}(\tau+1) + \rho\,\Delta \alpha_{k}(\tau)$$
$$\Delta' w_{ij}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ij}(\tau+1) + \rho\,\Delta w_{ij}(\tau)$$
$$\Delta'\theta_{i}(\tau+1) = (1-\rho)\,\eta\,\Delta \theta_{i}(\tau+1) + \rho\,\Delta \theta_{i}(\tau)$$

where $\Delta w'_{ki}$ is the final weight correction from output-layer node $k$ to hidden-layer node $i$; $\Delta'\alpha_k$ is the final threshold correction of output-layer node $k$; $\Delta' w_{ij}$ is the final weight correction from hidden-layer node $i$ to input-layer node $j$; $\Delta'\theta_i$ is the final threshold correction of hidden-layer node $i$; $\rho$ is the momentum factor; and $\tau$ is the iteration count.
According to the above scheme, the method further improves the distributed BP neural network model by introducing a momentum factor and by averaging the results of repeated runs.
Compared with the prior art, the beneficial effect of the invention is that it uses a Hadoop cluster platform that handles massive data effectively, exploits the powerful nonlinear mapping ability of the BP neural network, speeds up load forecasting, and meets the required forecasting accuracy.
Brief description of the drawings
Fig. 1 is a schematic diagram of a typical three-layer BP neural network.
Fig. 2 is a schematic diagram of the distributed BP neural network forecasting model of the present invention.
Fig. 3 is the topology of the laboratory Hadoop cluster platform used in the present invention.
Fig. 4 compares the MapReduce-BP load forecasting results of the present invention with those of a traditional BP network.
Detailed description of the invention
The present invention is further described below with reference to the drawings and specific embodiments. The invention has three features. First, the forward propagation of the input signal and the back-propagation of the error signal of the traditional BP neural network are fully worked out. Second, the training process of the BP neural network is combined with the MapReduce framework, and a distributed BP neural network model based on MapReduce (hereafter the MapReduce-BP model) is studied and implemented in the Java language. Third, a momentum factor is introduced and results are averaged over repeated runs, which alleviates the BP network's tendency to fall into local convergence and improves its resistance to oscillation. The details are as follows:
1. Forecasting principle of the traditional BP neural network
1) Basic BP neural network model
In 1986, the scientists led by Rumelhart and McClelland proposed the BP neural network, a multi-layer feed-forward network that can learn and store a large number of input-output mappings without the mathematical equation of the mapping being known in advance; it consists of an input layer, hidden layers, and an output layer. Fig. 1 shows the structure of a typical three-layer BP network: adjacent layers are fully connected, nodes within a layer are not connected to each other, and there may be one or more hidden layers. In Fig. 1, $x_j$ is the input of input-layer node $j$; $w_{ij}$ is the weight between hidden-layer node $i$ and input-layer node $j$; $\theta_i$ is the threshold of hidden-layer node $i$; $\varphi$ is the activation function of the hidden layer; $w_{ki}$ is the weight between output-layer node $k$ and hidden-layer node $i$; $\alpha_k$ is the threshold of output-layer node $k$; $\psi$ is the activation function of the output layer; and $o_k$ is the output of node $k$.
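As a concrete illustration of the network just described, the following sketch collects the parameters of Fig. 1 under their symbol names. The use of NumPy, the example layer sizes, and the uniform random initialization are all assumptions for illustration; the patent prescribes none of them.

```python
import numpy as np

def init_bp_network(n_in, n_hidden, n_out, seed=0):
    """Parameter set of the three-layer network of Fig. 1:
    w_ij (hidden <- input weights), theta_i (hidden thresholds),
    w_ki (output <- hidden weights), alpha_k (output thresholds)."""
    rng = np.random.default_rng(seed)
    return {
        "w_ij": rng.uniform(-0.5, 0.5, size=(n_hidden, n_in)),
        "theta_i": rng.uniform(-0.5, 0.5, size=n_hidden),
        "w_ki": rng.uniform(-0.5, 0.5, size=(n_out, n_hidden)),
        "alpha_k": rng.uniform(-0.5, 0.5, size=n_out),
    }

# Example sizes: 8 sample attributes in, 12 hidden nodes, 1 load value out.
net = init_bp_network(n_in=8, n_hidden=12, n_out=1)
```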
2) Signal propagation and error correction in the BP neural network
The basic BP algorithm consists of the forward propagation of the signal and the back-propagation of the error: the actual output is computed in the input-to-output direction, while the corrections of each layer's weights and thresholds proceed from the output back toward the input. With the parameters shown in Fig. 1, the network's output signal and each layer's weights and thresholds are computed and adjusted as follows.
(1) Forward propagation of the input signal
From the structure of the BP network in Fig. 1, the net input $net_i$ and output $o_i$ of hidden-layer node $i$, and the net input $net_k$ and output $o_k$ of output-layer node $k$, are respectively:
$$net_i = \sum_{j=1}^{M} w_{ij} x_j + \theta_i \qquad (1)$$
$$o_i = \varphi(net_i) = \varphi\Big(\sum_{j=1}^{M} w_{ij} x_j + \theta_i\Big) \qquad (2)$$
$$net_k = \sum_{i=1}^{q} w_{ki}\,\varphi\Big(\sum_{j=1}^{M} w_{ij} x_j + \theta_i\Big) + \alpha_k \qquad (3)$$
$$o_k = \psi(net_k) = \psi\Big[\sum_{i=1}^{q} w_{ki}\,\varphi\Big(\sum_{j=1}^{M} w_{ij} x_j + \theta_i\Big) + \alpha_k\Big] \qquad (4)$$
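Eqs. (1)-(4) can be sketched as follows, with the sums expressed as matrix-vector products. The logistic sigmoid is assumed for both φ and ψ, since the patent leaves the activation functions unspecified.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_ij, theta_i, w_ki, alpha_k):
    """Forward propagation per eqs. (1)-(4), sigmoid assumed for phi/psi."""
    net_i = w_ij @ x + theta_i      # eq. (1): hidden-layer net input
    o_i = sigmoid(net_i)            # eq. (2): hidden-layer output
    net_k = w_ki @ o_i + alpha_k    # eq. (3): output-layer net input
    o_k = sigmoid(net_k)            # eq. (4): network output
    return net_i, o_i, net_k, o_k

# With all-zero weights and thresholds every activation is sigmoid(0) = 0.5.
x = np.zeros(3)
net_i, o_i, net_k, o_k = forward(x, np.zeros((2, 3)), np.zeros(2),
                                 np.zeros((1, 2)), np.zeros(1))
```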
(2) Back-propagation of the error signal
Starting from the output layer, the output error of each layer's neurons is computed in turn, and the weights and thresholds of each layer are then adjusted by error gradient descent so that the final output of the network approaches the expected value. Following gradient descent, the hidden-to-output weight correction $\Delta w_{ki}$, the output-layer threshold correction $\Delta\alpha_k$, the input-to-hidden weight correction $\Delta w_{ij}$, and the hidden-layer threshold correction $\Delta\theta_i$ are given by eqs. (5)-(8), where $\eta$ is the learning rate and $P$ is the total number of training samples.
$$\Delta w_{ki} = \eta \sum_{p=1}^{P} \sum_{k=1}^{L} (T_k^p - o_k^p)\,\psi'(net_k)\,o_i \qquad (5)$$
$$\Delta \alpha_{k} = \eta \sum_{p=1}^{P} \sum_{k=1}^{L} (T_k^p - o_k^p)\,\psi'(net_k) \qquad (6)$$
$$\Delta w_{ij} = \eta \sum_{p=1}^{P} \sum_{k=1}^{L} (T_k^p - o_k^p)\,\psi'(net_k)\,w_{ki}\,\varphi'(net_i)\,x_j \qquad (7)$$
$$\Delta \theta_{i} = \eta \sum_{p=1}^{P} \sum_{k=1}^{L} (T_k^p - o_k^p)\,\psi'(net_k)\,w_{ki}\,\varphi'(net_i) \qquad (8)$$
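A single-sample (P = 1) sketch of eqs. (5)-(8), again assuming sigmoid activations and folding the sums over output nodes into vector operations; variable names follow the symbols of Fig. 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(net):
    s = sigmoid(net)
    return s * (1.0 - s)

def corrections(x, T, net_i, o_i, net_k, o_k, w_ki, eta):
    """Per-sample weight/threshold corrections per eqs. (5)-(8)."""
    delta_k = (T - o_k) * dsigmoid(net_k)           # (T_k - o_k) * psi'(net_k)
    d_wki = eta * np.outer(delta_k, o_i)            # eq. (5)
    d_alpha = eta * delta_k                         # eq. (6)
    delta_i = (w_ki.T @ delta_k) * dsigmoid(net_i)  # error propagated to hidden layer
    d_wij = eta * np.outer(delta_i, x)              # eq. (7)
    d_theta = eta * delta_i                         # eq. (8)
    return d_wki, d_alpha, d_wij, d_theta

# With zero hidden->output weights, the hidden-layer corrections vanish.
d_wki, d_alpha, d_wij, d_theta = corrections(
    x=np.zeros(2), T=np.array([1.0]),
    net_i=np.zeros(3), o_i=sigmoid(np.zeros(3)),
    net_k=np.zeros(1), o_k=np.array([0.5]),
    w_ki=np.zeros((1, 3)), eta=1.0)
```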
2. Implementation of the distributed BP neural network based on the MapReduce framework
The updates of the BP network's weights and thresholds do not depend directly on the order in which samples are fed in. Training can therefore divide the data across the different machines of a cluster, with each machine training on its own portion, which closely resembles the principle of the MapReduce programming model. Combining the MapReduce framework with this data parallelism of the BP network, a distributed forecasting model of the BP neural network is built on MapReduce; it consists of three parts: the Map phase, the Reduce phase, and the driver.
1) The Map phase
The text data set uploaded to the distributed file system HDFS, which stores each layer's weights and thresholds of the BP network, is split into several data subsets, each represented by key-value pairs <key, value>. The key is the offset of the line's first character relative to the start of the text; the value parses into the current weights and thresholds of each layer of the BP network. For the current load sample, this phase carries out the forward propagation of the input signal and the back-propagation of the error signal to obtain the corrections of the weights and thresholds. These corrections form the value of the Map phase's output key-value pair, whose key is an alias for the corresponding layer weight or threshold. The key points of the Map phase that generates the weight and threshold corrections are as follows:
Input: the current load data sample and the BP network's layer weights and thresholds.
Output: the corrections of the weights and thresholds for the current sample.
Method: via forward signal propagation and error back-propagation, compute the corrections of the weights and thresholds, specifically:
(1) The setup function
From the text of weights and thresholds stored in the HDFS file system, parse out, by storage layout and order, the current BP network's layer weights ($w_{ki}$ and $w_{ij}$) and thresholds ($\alpha_k$ and $\theta_i$).
(2) Correction of the weights and thresholds
After the momentum factor is introduced, the update formulas for each layer's weights and thresholds become eqs. (9)-(12), where $\Delta w'_{ki}$ is the final weight correction from output-layer node $k$ to hidden-layer node $i$; $\Delta'\alpha_k$ is the final threshold correction of output-layer node $k$; $\Delta' w_{ij}$ is the final weight correction from hidden-layer node $i$ to input-layer node $j$; $\Delta'\theta_i$ is the final threshold correction of hidden-layer node $i$; $\rho$ is the momentum factor; and $\tau$ is the iteration count. The updated adjustment formulas are:
$$\Delta w'_{ki}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ki}(\tau+1) + \rho\,\Delta w_{ki}(\tau) \qquad (9)$$
$$\Delta'\alpha_{k}(\tau+1) = (1-\rho)\,\eta\,\Delta \alpha_{k}(\tau+1) + \rho\,\Delta \alpha_{k}(\tau) \qquad (10)$$
$$\Delta' w_{ij}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ij}(\tau+1) + \rho\,\Delta w_{ij}(\tau) \qquad (11)$$
$$\Delta'\theta_{i}(\tau+1) = (1-\rho)\,\eta\,\Delta \theta_{i}(\tau+1) + \rho\,\Delta \theta_{i}(\tau) \qquad (12)$$
Eqs. (1)-(4) and eqs. (9)-(12) respectively complete the forward propagation of the load input signal and the back-propagation of the error signal, yielding the corrections of each layer's weights and thresholds of the BP network under the current load sequence.
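The momentum-smoothed update of eqs. (9)-(12) has the same scalar form for every weight and threshold, so a single helper covers all four cases. This is a literal transcription of the formula, not the patent's Java code.

```python
def momentum_update(d_new, d_prev, eta, rho):
    """Eqs. (9)-(12): blend the fresh correction Delta(tau+1), scaled by the
    learning rate eta, with the previous correction Delta(tau), weighted by
    the momentum factor rho."""
    return (1.0 - rho) * eta * d_new + rho * d_prev
```

With `rho = 0` the update reduces to plain gradient descent; as `rho` grows, the previous step contributes more, which is what damps oscillation.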
(3) The map function
The map function combines parts (1) and (2): after computing the weight and threshold corrections, it emits them through the context as key-value pairs <key, value>, concretely of the form <IntWritable, Text>.
2) the Reduce stage
According to the key value in key-value pair, complete BP neutral net entirety weights, threshold value renewal statistics of variables in conjunction with value. Note 2 points: one, now value form is Iterable<Text>set, set dimension is the total of load sequence training sample Number, therefore needs progressively traversal to obtain correction character string Text of current sample before correction;Two, this stage has been nerve net Network weights, the summation of threshold value correction, summation is the cumulative of numerical value, therefore needs would indicate that the character of the correction of weights, threshold value Illustration and text juxtaposed setting originally resolves to numeric form.Summation degneracy, threshold value correction Reduce stage key takeaway as follows:
Input: what the Map stage exported characterizes single load sequence samples to repairing after BP neural network weight, adjusting thresholds Positive quantity key-value pair<IntWritable, Text>.
Output: BP neutral net entirety weights, the renewal amount of threshold value.
Method: according to BP neutral net each layer weights, the another name of threshold value, is also the key of the input key-value pair in this stage, point Lei Jia not add up overall weights, the renewal amount of threshold value.And the vital point in this stage is in reduce function, successively complete defeated Enter the value of key-value pair, i.e. Iterable<Text>traversal, obtain the weights of current sample, threshold value correction character string and After Text text is converted into numerical morphological, according to key value complete corresponding weight value, threshold value correction cumulative.
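The summing logic of the Reduce phase can be imitated in a few lines. This toy stand-in uses plain tuples instead of Hadoop's <IntWritable, Text> stream, and the alias names and values are illustrative; it shows only the per-key accumulation.

```python
from collections import defaultdict

def reduce_corrections(mapper_outputs):
    """Sum the per-sample corrections per alias (the key), mimicking the
    accumulation the patent's reduce function performs after Text parsing."""
    totals = defaultdict(float)
    for alias, correction in mapper_outputs:
        totals[alias] += correction
    return dict(totals)

# Two samples contributed to "w_ki", one to "theta_i" (illustrative aliases).
pairs = [("w_ki", 0.10), ("theta_i", -0.02), ("w_ki", 0.05)]
totals = reduce_corrections(pairs)
```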
3) The driver
The driver can be regarded as the configuration file of the whole program; it mainly completes the settings for running the Job. In the present invention the driver completes three settings. First, generate the initial BP network's weight and threshold text and upload it to the HDFS distributed file system. Second, create the MapReduce job and set the Mapper class's map function and the data types of its output key-value pair <key, value>, as well as the Reducer class's reduce function and the data types of its output key-value pair <key, value>. Third, since BP training is an iterative process, the corresponding job is an iterative MapReduce computation, so the stopping criteria must be set, i.e., the maximum iteration count and the training error tolerance.
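The driver's iterative control loop, with the two stopping criteria named above, might be skeletonized as follows. Here `step_fn` is a placeholder standing in for one full Map+Reduce training pass, not a Hadoop API call.

```python
def run_training(step_fn, max_iters, tol):
    """Iterate step_fn (one Map+Reduce pass returning the training error)
    until the error tolerance is met or the iteration cap is reached."""
    err = float("inf")
    for it in range(1, max_iters + 1):
        err = step_fn(it)
        if err <= tol:
            return it, err, True   # converged within the tolerance
    return max_iters, err, False   # stopped at the iteration cap

# Example: an error sequence 1/it reaches tol = 0.1 at iteration 10.
result = run_training(lambda it: 1.0 / it, max_iters=100, tol=0.1)
```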
The technical solution and technical effect of the present invention are further described below through a concrete example.
1. Example system and data processing
The data used in the present invention are load and weather data collected from an actual power grid; the sampling interval of every device is 1 h, and the weather information comprises dry-bulb temperature and dew-point temperature. Although the laboratory data volume does not reach big-data scale, this experimental data can verify the correctness of the method and thereby provide a new approach to load forecasting in a big-data environment. The training interval covers the electricity-consumption data from January 1 to March 31, 2014, and the forecast day is April 1, 2014, hour by hour, as shown in Table 1. Study of the load data shows that it exhibits continuity, periodicity, and correlation. Based on these features and the findings of many publications, the sample attributes are determined to be: the load at the same hour two weeks before, one week before, two days before, and one day before the forecast day; the dry-bulb and dew-point temperatures at the same hour on the previous day; the dry-bulb and dew-point temperatures at the same hour on the forecast day; and the actual load at the same hour on the forecast day. A sample is shown in Table 2.
Table 1: Actual load data on April 1, 2014
Hour Load/MW Hour Load/MW Hour Load/MW
1 11483 9 15870 17 14508
2 10924 10 15965 18 14332
3 10711 11 15978 19 14219
4 10728 12 15823 20 14702
5 11027 13 15556 21 15265
6 12128 14 15388 22 14557
7 14043 15 15060 23 13416
8 15413 16 14761 24 12135
Table 2: A training data sample
Attribute Value
Load at the same hour two weeks before 11600 MW
Load at the same hour one week before 11857 MW
Load at the same hour two days before 11462 MW
Load at the same hour on the previous day 11203 MW
Dry-bulb temperature at the same hour on the previous day 46℃
Dew-point temperature at the same hour on the previous day 43℃
Dry-bulb temperature at the same hour on the forecast day 41℃
Dew-point temperature at the same hour on the forecast day 18℃
Actual load at the same hour on the forecast day 11483 MW
It should be noted that the BP neural network is sensitive to values between 0 and 1. Therefore, before the original load sequence is fed into the distributed BP neural network model, the data must first be normalized; after training, the output is denormalized to obtain the actual load forecast.
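A common choice for this normalize/denormalize step is min-max scaling to [0, 1]; the patent does not specify its scheme, so the following is an assumption.

```python
import numpy as np

def minmax_normalize(x):
    """Scale an array to [0, 1]; return the scale bounds for inversion."""
    lo, hi = float(x.min()), float(x.max())
    return (x - lo) / (hi - lo), lo, hi

def minmax_denormalize(y, lo, hi):
    """Invert minmax_normalize to recover actual load values."""
    return y * (hi - lo) + lo

# A few hourly load values from Table 1 (MW).
loads = np.array([11483.0, 10711.0, 15978.0, 12135.0])
scaled, lo, hi = minmax_normalize(loads)
restored = minmax_denormalize(scaled, lo, hi)
```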
2. Experimental results and analysis
This experiment compares the MapReduce-BP neural network with the traditional BP neural network algorithm to confirm two points: first, the correction of the BP network's weights and thresholds does not depend on the training order of the load input sequences, i.e., BP training can be converted into a data-parallel process; second, the idea of building a BP load forecasting model on the MapReduce framework is correct. Fig. 4 compares the MapReduce-BP load forecasting results of the present invention with the traditional BP results. The forecast curve of the parallel MapReduce-BP network closely matches the actual load curve of the forecast day, and its accuracy is close to that of the traditional BP prediction. The mean relative error and root-mean-square error of the MapReduce-BP forecast are 3.95% and 1.97%, respectively, against 3.92% and 1.93% for the traditional BP network. This confirms the correctness of the proposed MapReduce-BP distributed load forecasting model.
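For reference, plausible definitions of the two error metrics reported above; the patent does not spell out its formulas, so these are assumptions (mean absolute relative error, and the root mean square of the relative errors), and the demo values are illustrative.

```python
import numpy as np

def mean_relative_error(actual, predicted):
    """Mean of |predicted - actual| / actual over all forecast hours."""
    return float(np.mean(np.abs(predicted - actual) / actual))

def rms_relative_error(actual, predicted):
    """Root mean square of the relative errors."""
    return float(np.sqrt(np.mean(((predicted - actual) / actual) ** 2)))

actual = np.array([100.0, 200.0])
predicted = np.array([110.0, 190.0])
mre = mean_relative_error(actual, predicted)   # relative errors 0.10 and 0.05
rmse = rms_relative_error(actual, predicted)
```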

Claims (3)

1. A short-term load forecasting method for a distributed BP neural network based on the Hadoop framework, characterized by comprising the following steps:
Step 1: obtain the initial load data set;
Step 2: following the MapReduce model of the Hadoop framework, split the load data set into small data sets and store them on the data nodes of the distributed file system; each small data set is represented by key-value pairs <key, value>, where the key is the offset of the line's first character relative to the start of the text and the value parses into the current weights and thresholds of each layer of the BP neural network;
Step 3: initialize the BP neural network parameters, comprising the numbers of input, hidden, and output layers, the weights between the input and hidden layers, the thresholds of the hidden-layer neurons, the weights between the hidden and output layers, and the thresholds of the output-layer neurons, and upload the parameter set to the distributed file system;
Step 4: in the Map phase, read the parameters (weights and thresholds) from the distributed file system and restore the BP neural network at the start of each subtask; using the data distributed to the subtask, perform the forward propagation of the input signal and the back-propagation of the error signal to obtain the corrections of the BP network's weights and thresholds under the current data set, and pass them on in key-value form as the input of the Reduce phase;
Step 5: in the Reduce phase, after the BP neural network has been trained on all data sets, use the key of each key-value pair <key, value>, which identifies the weight or threshold of an input-, hidden-, or output-layer neuron, to accumulate the influence of all load samples' training on each neuron's weights and thresholds, and write the result to the distributed file system;
Step 6: check whether the current iteration has reached the convergence precision or the maximum iteration count; if so, build the distributed BP neural network model from the layer counts of the BP network (input, hidden, and output) and the weight and threshold parameters in the distributed file system; if not, correct the network's weight and threshold parameters;
Step 7: feed the forecast-day data into the distributed BP neural network model for prediction, obtaining the forecast day's load power data.
2. The short-term load forecasting method for a distributed BP neural network based on the Hadoop framework as claimed in claim 1, characterized in that in step 4 the corrections of the BP network's weights and thresholds under the current data set are computed as:

$$\Delta w'_{ki}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ki}(\tau+1) + \rho\,\Delta w_{ki}(\tau)$$
$$\Delta'\alpha_{k}(\tau+1) = (1-\rho)\,\eta\,\Delta \alpha_{k}(\tau+1) + \rho\,\Delta \alpha_{k}(\tau)$$
$$\Delta' w_{ij}(\tau+1) = (1-\rho)\,\eta\,\Delta w_{ij}(\tau+1) + \rho\,\Delta w_{ij}(\tau)$$
$$\Delta'\theta_{i}(\tau+1) = (1-\rho)\,\eta\,\Delta \theta_{i}(\tau+1) + \rho\,\Delta \theta_{i}(\tau)$$

where $\Delta w'_{ki}$ is the final weight correction from output-layer node $k$ to hidden-layer node $i$; $\Delta'\alpha_k$ is the final threshold correction of output-layer node $k$; $\Delta' w_{ij}$ is the final weight correction from hidden-layer node $i$ to input-layer node $j$; $\Delta'\theta_i$ is the final threshold correction of hidden-layer node $i$; $\rho$ is the momentum factor; and $\tau$ is the iteration count.
3. The short-term load forecasting method for a distributed BP neural network based on the Hadoop framework as claimed in claim 1 or 2, characterized by further comprising improving the distributed BP neural network model by introducing a momentum factor and averaging the results of repeated runs.
CN201610334920.6A 2016-05-19 2016-05-19 Short-term load prediction method of distributed BP neural network based on Hadoop architecture Expired - Fee Related CN106022521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610334920.6A CN106022521B (en) 2016-05-19 2016-05-19 Short-term load prediction method of distributed BP neural network based on Hadoop architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610334920.6A CN106022521B (en) 2016-05-19 2016-05-19 Short-term load prediction method of distributed BP neural network based on Hadoop architecture

Publications (2)

Publication Number Publication Date
CN106022521A true CN106022521A (en) 2016-10-12
CN106022521B CN106022521B (en) 2020-05-19

Family

ID=57096033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610334920.6A Expired - Fee Related CN106022521B (en) 2016-05-19 2016-05-19 Short-term load prediction method of distributed BP neural network based on Hadoop architecture

Country Status (1)

Country Link
CN (1) CN106022521B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001216287A (en) * 2000-01-31 2001-08-10 Toshiba Corp Short term demand prediction device
CN103544528A (en) * 2013-11-15 2014-01-29 南京大学 BP neural-network classification method based on Hadoop
CN103559205A (en) * 2013-10-09 2014-02-05 山东省计算中心 Parallel feature selection method based on MapReduce
CN103793438A (en) * 2012-11-05 2014-05-14 山东省计算中心 MapReduce based parallel clustering method
CN105184424A (en) * 2015-10-19 2015-12-23 国网山东省电力公司菏泽供电公司 Mapreduced short period load prediction method of multinucleated function learning SVM realizing multi-source heterogeneous data fusion
CN105427184A (en) * 2015-11-06 2016-03-23 南京信息工程大学 Hadoop-based electricity consumption feedback implementation method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Suxiang et al.: "Short-term power load forecasting under massive data", Proceedings of the CSEE *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845705A (en) * 2017-01-19 2017-06-13 国网山东省电力公司青岛供电公司 The Echo State Networks load forecasting model of subway power supply load prediction system
CN107229693A (en) * 2017-05-22 2017-10-03 哈工大大数据产业有限公司 The method and system of big data system configuration parameter tuning based on deep learning
CN107229693B (en) * 2017-05-22 2018-05-01 哈工大大数据产业有限公司 The method and system of big data system configuration parameter tuning based on deep learning
CN109118365A (en) * 2017-06-26 2019-01-01 平安科技(深圳)有限公司 Income calculation method, apparatus and computer readable storage medium
CN108182490A (en) * 2017-12-27 2018-06-19 南京工程学院 A kind of short-term load forecasting method under big data environment
CN108090025A (en) * 2018-01-19 2018-05-29 福州瑞芯微电子股份有限公司 Dynamic multichannel neural network SOC chip and its channel resource distribution method
CN108090025B (en) * 2018-01-19 2021-08-27 瑞芯微电子股份有限公司 Dynamic multichannel neural network SOC chip and channel resource allocation method thereof
CN108211268A (en) * 2018-01-25 2018-06-29 武汉中体智美科技有限公司 Exercise load monitoring and sports fatigue method for early warning and system based on training data
CN108211268B (en) * 2018-01-25 2019-12-10 武汉中体智美科技有限公司 exercise load monitoring and exercise fatigue early warning method and system based on exercise training data
CN108734355A (en) * 2018-05-24 2018-11-02 国网福建省电力有限公司 A kind of short-term electric load method of parallel prediction and system applied to power quality harnessed synthetically scene
CN109543814A (en) * 2018-08-31 2019-03-29 南京理工大学 A kind of each equipment fault prediction technique of subway signal system
CN109614384A (en) * 2018-12-04 2019-04-12 上海电力学院 Power-system short-term load forecasting method under Hadoop frame
US11811527B2 (en) * 2019-01-07 2023-11-07 Nokia Technologies Oy Detecting control information communicated in frame using a neural network
US20220094464A1 (en) * 2019-01-07 2022-03-24 Nokia Technologies Oy Detecting control information communicated in frame using a neural network
CN110288127A (en) * 2019-05-31 2019-09-27 武汉烽火富华电气有限责任公司 A kind of energy big data processing method
CN111753997A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Distributed training method, system, device and storage medium
CN112365074A (en) * 2020-11-18 2021-02-12 贵州电网有限责任公司 Artificial intelligence decision-making method based on power grid regulation and control data
CN112365074B (en) * 2020-11-18 2023-08-18 贵州电网有限责任公司 Artificial intelligence decision-making method based on power grid regulation and control data
CN113515896A (en) * 2021-08-06 2021-10-19 红云红河烟草(集团)有限责任公司 Data missing value filling method for real-time cigarette acquisition
CN113515896B (en) * 2021-08-06 2022-08-09 红云红河烟草(集团)有限责任公司 Data missing value filling method for real-time cigarette acquisition
CN115330016A (en) * 2022-06-29 2022-11-11 王禹 Regional water consumption data analysis method based on cloud platform
CN115099526A (en) * 2022-07-28 2022-09-23 广东电网有限责任公司江门供电局 Mass distributed renewable energy power prediction method and related device

Also Published As

Publication number Publication date
CN106022521B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN106022521A (en) Hadoop framework-based short-term load prediction method for distributed BP neural network
Yao et al. A novel photovoltaic power forecasting model based on echo state network
CN111127246A (en) Intelligent prediction method for transmission line engineering cost
CN104536412B (en) Photoetching procedure dynamic scheduling method based on index forecasting and solution similarity analysis
CN104635772B (en) Method for adaptively and dynamically scheduling manufacturing systems
CN103226741B (en) Public supply mains tube explosion prediction method
CN103745273B (en) Semiconductor fabrication process multi-performance prediction method
CN106022954A (en) Multiple BP neural network load prediction method based on grey correlation degree
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN105184416A (en) Fluctuation wind speed prediction method based on particle swarm optimization back propagation neural network
CN106651023A (en) Grey correlation analysis-based improved fireworks algorithm mid-long term load prediction method
CN113283547B (en) Optimal power flow calculation method based on multi-task deep learning
CN104134103B (en) Utilize the method for the BP neural network model prediction hot oil pipeline energy consumption of amendment
CN105447510A (en) Fluctuating wind velocity prediction method based on artificial bee colony optimized least square support vector machine (LSSVM)
JPWO2020075771A1 (en) Planning equipment, planning methods, and planning programs
Ghoshchi et al. Machine learning theory in building energy modeling and optimization: a bibliometric analysis
CN105023056A (en) Power grid optimal carbon energy composite flow obtaining method based on swarm intelligence reinforcement learning
CN104732067A (en) Industrial process modeling forecasting method oriented at flow object
Zeng et al. Forecasting the total energy consumption in China using a new-structure grey system model
Zhao et al. A GA-ANN model for air quality predicting
Olu-Ajayi et al. Building energy consumption prediction using deep learning
Aslan Archimedes optimization algorithm based approaches for solving energy demand estimation problem: a case study of Turkey
CN106408118A (en) GRNN (generalized regression neural network) combination model-based urban daily water supply prediction method
CN103472721B (en) The pesticide waste liquid incinerator furnace temperature optimization system of self-adaptation machine learning and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200519

CF01 Termination of patent right due to non-payment of annual fee