CN108182490A - Short-term load forecasting method in a big data environment - Google Patents

Short-term load forecasting method in a big data environment

Info

Publication number
CN108182490A
Authority
CN
China
Prior art keywords
neural network
weights
threshold value
particle
value
Prior art date
Legal status
Pending
Application number
CN201711442212.5A
Other languages
Chinese (zh)
Inventor
李先允
朱骄
朱一骄
王书征
唐昕杰
王建宇
Current Assignee
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201711442212.5A
Publication of CN108182490A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Water Supply & Treatment (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Operations Research (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Entrepreneurship & Innovation (AREA)

Abstract

The invention discloses a short-term load forecasting method for a big data environment. Mass data are stored and processed in a distributed manner using the Hadoop framework, which increases load forecasting speed. An improved particle swarm optimization algorithm optimizes the traditional BP neural network, which improves load forecasting accuracy.

Description

Short-term load forecasting method in a big data environment
Technical field
The present invention relates to short-term load forecasting technology, and more particularly to a short-term load forecasting method for a big data environment.
Background technology
The level of power system load forecasting has become one of the marks by which the modernization of power system management is measured. Short-term load forecasting is an important component of load forecasting. As the reform of the electricity market continues, the accuracy of power-system short-term load forecasting directly affects the economic benefits of power grids and power plants.
Existing forecasting methods still have limitations. The learning process of the traditional BP neural network converges slowly, is easily trapped in local minima, and has poor robustness. Meanwhile, with the rapid development of the smart grid, links such as generation, transmission and dispatching produce massive amounts of data, which places higher demands on forecasting speed and accuracy in a big data environment. With mass data, the traditional BP neural network forecasting method must search neighbors for each test point, so the computational load is very large and the running time on a single machine is very long.
Summary of the invention
Object of the invention: The object of the present invention is to provide a short-term load forecasting method for a big data environment that can effectively solve the problems of forecasting accuracy and computation speed under big data conditions.
Technical solution: The short-term load forecasting method of the present invention for a big data environment includes the following steps:
S1: Obtain a historical load data set;
S2: Using the MapReduce data processing system based on the Hadoop framework, split the load data set into small data sets and store them on the data nodes of the distributed file system;
S3: Build a BP neural network and initialize its parameters;
S4: Optimize the initial parameters of the BP neural network using the particle swarm optimization algorithm, and upload the resulting weights and thresholds to the distributed file system;
S5: In the Map stage, read the parameters (weights and thresholds) from the distributed file system and restore the BP neural network at the start of each subtask; on the data assigned to the subtask, perform forward propagation of the input signal and back-propagation of the error signal of the BP neural network, obtain the corrections to the weights and thresholds of the BP neural network under the current data set, and pass them as key-value pairs to the Reduce stage;
S6: In the Reduce stage, after the BP neural network has been trained on all data sets, use the key values of the key-value pairs <key, value> corresponding to the input-layer, hidden-layer and output-layer neurons to accumulate the influence of all load data samples on the weights and thresholds of each neuron, and write the result to the distributed file system (a simplified sketch of this Map/Reduce exchange is given after step S8);
S7: Judge whether the current iteration has reached the convergence accuracy or the preset number of iterations; if so, establish the distributed BP neural network model from the layer structure (input layer, hidden layer and output layer) of the BP neural network and the weight and threshold parameters in the distributed file system; if not, correct the weight and threshold parameters of the BP neural network;
S8: Feed the forecast-day data into the distributed BP neural network model to obtain the load power data of the forecast day.
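The Map/Reduce exchange in steps S5 and S6 can be pictured with the following minimal Python sketch. It is an illustrative simulation, not the patent's actual Hadoop job: the helper backprop_deltas and the key scheme are assumed placeholders.

    from collections import defaultdict

    def map_task(weights, thresholds, data_split, backprop_deltas):
        """S5: restore the network from the shared parameters and emit per-parameter corrections."""
        pairs = []
        for x, y in data_split:
            # backprop_deltas is assumed to run one forward/backward pass and return
            # {key: correction} dicts, where each key names one weight or threshold.
            dw, db = backprop_deltas(weights, thresholds, x, y)
            for key, delta in list(dw.items()) + list(db.items()):
                pairs.append((key, delta))
        return pairs

    def reduce_task(all_pairs):
        """S6: sum the corrections for each weight/threshold key across all data splits."""
        totals = defaultdict(float)
        for key, delta in all_pairs:
            totals[key] += delta
        return dict(totals)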
Further, in step S4, the specific process of optimizing the initial parameters of the BP neural network with the particle swarm optimization algorithm is: the set of initial weights and thresholds of the BP neural network is mapped to a particle swarm, i.e. the position components of a particle are the connection weights and thresholds between all nodes of the BP neural network; each iteration yields the best weights and thresholds found by the swarm, and the globally optimal weights and thresholds obtained at the end are assigned to the BP neural network. The velocity and position update equations of the particle swarm are:
v_id(t+1) = ω(t)·v_id(t) + c1·r1·(p_id - x_id(t)) + c2·r2·(p_gd - x_id(t))
x_id(t+1) = x_id(t) + v_id(t+1)    (1)
In formula (1), ω(t) is the inertia weight factor of the t-th iteration; c1 and c2 are the learning factors (acceleration constants); r1 and r2 are uniform random numbers in the range [0, 1]; v_id is the velocity of the i-th particle in dimension d, x_id is the position of the i-th particle in dimension d, p_id is the best position experienced by the i-th particle, and p_gd is the best position experienced by the entire swarm.
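For illustration, formula (1) can be applied to the whole swarm at once. The NumPy sketch below is an assumed vectorized form; the default learning factors c1 = c2 = 2 are chosen only for demonstration and are not specified in the text.

    import numpy as np

    def pso_step(v, x, p_best, g_best, w_t, c1=2.0, c2=2.0):
        """One application of formula (1).
        v, x, p_best: arrays of shape (n_particles, n_dims); g_best: shape (n_dims,)."""
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        v_new = w_t * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x_new = x + v_new
        return v_new, x_new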
Further, in step S5, the process of obtaining the corrections to the weights and thresholds of the BP neural network under the current data set is:
Let the error criterion function be:
E(w) = (1/2)·Σ_{i=1..P} ||Y_i - Y_i'||² = (1/2)·Σ_{i=1..P} e_i(w)²    (2)
In formula (2), Y_i is the desired network output vector of the i-th sample; Y_i' is the actual network output vector of the i-th sample; P is the number of samples; w is the vector formed by the network weights and thresholds; e_i(w) is the error of the i-th sample.
Let w_k denote the vector formed by the weights and thresholds of the k-th iteration; the new vector of weights and thresholds is w_(k+1) = w_k + Δw, where the increment Δw is calculated as:
Δw = [J^T(w)·J(w) + λ·I]^(-1)·J^T(w)·e(w)    (3)
In formula (3), I is the identity matrix; λ is the user-defined learning rate; e(w) is the error vector; J(w) is the Jacobian matrix, i.e.:
J(w) = [∂e_i(w)/∂w_u], with rows i = 1, …, P and columns u = 1, …, n    (4)
In formula (4), w_u (1 ≤ u ≤ n) is the u-th component of the vector formed by the weights and thresholds of the n-th iteration.
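A compact NumPy sketch of the correction in formula (3) follows. It assumes the Jacobian J and the error vector e(w) have already been computed (for example by back-propagation), and it follows the sign convention of the error definition e_i(w) = Y_i - Y_i' used above.

    import numpy as np

    def lm_update(w, J, e, lam):
        """Levenberg-Marquardt step: delta_w = (J^T J + lam*I)^(-1) J^T e, then w + delta_w.
        J: (P, n) Jacobian, e: (P,) error vector, w: (n,) weights and thresholds."""
        n = J.shape[1]
        A = J.T @ J + lam * np.eye(n)
        delta_w = np.linalg.solve(A, J.T @ e)
        return w + delta_w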
Further, ω(t) is obtained by the following formula:
ω(t) = μ·ω(t-1)·(1 - ω(t-1))    (5)
In formula (5), μ is the chaos control parameter.
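As a small illustration of formula (5), the logistic-map sequence below generates the inertia weights; the initial value 0.35 and μ = 4 are demonstration assumptions, not values fixed by the text.

    def chaotic_inertia_weights(n_iters, mu=4.0, w0=0.35):
        """Inertia weight sequence from formula (5): w(t) = mu * w(t-1) * (1 - w(t-1))."""
        weights = [w0]
        for _ in range(n_iters - 1):
            w_prev = weights[-1]
            weights.append(mu * w_prev * (1.0 - w_prev))
        return weights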
Further, step S4 specifically includes the following steps (an illustrative end-to-end sketch is given after these steps):
S4.1: Set the particle swarm parameters, including the population size, maximum number of iterations, fitness error limit, inertia weight and learning factors;
S4.2: Initialize the velocities and positions of all particles;
S4.3: Calculate the fitness value of each particle, using the mean squared error as the fitness function, and perform the judgements of steps S4.4 and S4.5;
S4.4: If the current fitness value of a particle is better than its historical best, replace the historical best position with the current position;
S4.5: If a particle's historical best is better than the global best, replace the global best with that particle's historical best;
S4.6: Update the velocity and position of each particle;
S4.7: Check whether the velocity and position of each particle exceed the set ranges; if they do, use the boundary values as the particle's velocity and position;
S4.8: Increase the iteration count by 1 and check whether the termination condition is reached; if so, stop iterating and output the weights and thresholds; otherwise, go to step S4.3;
S4.9: Build the BP neural network model with the obtained weights and thresholds.
Further, in step S4.8, the termination condition is: the maximum number of iterations is reached or the minimum error requirement is met.
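Steps S4.1 to S4.9 can be summarized by the illustrative sketch below, where fitness stands for the mean squared error of the BP network evaluated at a candidate weight/threshold vector; the swarm size, bounds and tolerance are assumed demonstration values.

    import numpy as np

    def pso_optimize(fitness, n_dims, n_particles=30, max_iters=100, eps=1e-4,
                     c1=2.0, c2=2.0, x_bound=1.0, v_bound=0.5, mu=4.0, w0=0.35):
        """Search for the weight/threshold vector that minimizes fitness (steps S4.1-S4.9)."""
        x = np.random.uniform(-x_bound, x_bound, (n_particles, n_dims))   # S4.2: positions
        v = np.random.uniform(-v_bound, v_bound, (n_particles, n_dims))   # S4.2: velocities
        p_best = x.copy()
        p_val = np.array([fitness(p) for p in x])
        g_best = p_best[p_val.argmin()].copy()
        w_t = w0
        for _ in range(max_iters):
            for i in range(n_particles):
                val = fitness(x[i])                                       # S4.3: fitness
                if val < p_val[i]:                                        # S4.4: personal best
                    p_val[i], p_best[i] = val, x[i].copy()
            g_best = p_best[p_val.argmin()].copy()                        # S4.5: global best
            w_t = mu * w_t * (1.0 - w_t)                                  # formula (5)
            r1 = np.random.rand(*x.shape)
            r2 = np.random.rand(*x.shape)
            v = w_t * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x) # S4.6, formula (1)
            v = np.clip(v, -v_bound, v_bound)                             # S4.7: velocity bound
            x = np.clip(x + v, -x_bound, x_bound)                         # S4.7: position bound
            if p_val.min() < eps:                                         # S4.8: termination
                break
        return g_best                                                     # S4.9: best weights/thresholds

The returned vector would then be reshaped into the connection weights and thresholds of the BP neural network.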
Advantageous effects: The invention discloses a short-term load forecasting method for a big data environment. The Hadoop framework is used to store and process mass data in a distributed manner, which increases load forecasting speed; an improved particle swarm optimization algorithm is used to optimize the traditional BP neural network, which improves load forecasting accuracy.
Description of the drawings
Fig. 1 is the structure chart of BP neural network in the specific embodiment of the invention;
Fig. 2 is the work flow diagram of MapReduce in the specific embodiment of the invention;
Fig. 3 is the flow chart of S4 in the specific embodiment of the invention.
Specific embodiment
The technical solution of the present invention is further described below with reference to the embodiments and the accompanying drawings.
This embodiment discloses a short-term load forecasting method for a big data environment, which includes the following steps:
S1: Obtain a historical load data set.
S2: Using the MapReduce data processing system based on the Hadoop framework, split the load data set into small data sets and store them on the data nodes of the distributed file system.
S3: Build the BP neural network shown in Figure 1 and initialize its parameters.
S4: Optimize the initial parameters of the BP neural network using the particle swarm optimization algorithm to obtain the weights and thresholds.
S5: The workflow of Hadoop is shown in Fig. 2. The weights and thresholds from S4 are first stored in the distributed file system, and the data are split according to the number of working nodes.
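The data split in this step can be illustrated by a small helper; in practice the split is performed by the Hadoop framework according to the number of working nodes, so the even split below is only an assumption for illustration.

    import numpy as np

    def split_by_workers(samples, n_workers):
        """Split the load data set into one roughly equal block per working node."""
        return np.array_split(np.asarray(samples), n_workers)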
S6: In the Map stage, read the parameters (weights and thresholds) from the distributed file system and restore the BP neural network at the start of each subtask; on the data assigned to the subtask, perform forward propagation of the input signal and back-propagation of the error signal of the BP neural network, obtain the corrections to the weights and thresholds of the BP neural network under the current data set, and pass them as key-value pairs to the Reduce stage.
S7: In the Reduce stage, after the BP neural network has been trained on all data sets, use the key values of the key-value pairs <key, value> corresponding to the input-layer, hidden-layer and output-layer neurons to accumulate the influence of all load data samples on the weights and thresholds of each neuron, and write the result to the distributed file system.
S8: Judge whether the current iteration has reached the convergence accuracy or the preset number of iterations; if so, establish the distributed BP neural network model from the layer structure (input layer, hidden layer and output layer) of the BP neural network and the weight and threshold parameters in the distributed file system; if not, correct the weight and threshold parameters of the BP neural network.
S9: Feed the forecast-day data into the distributed BP neural network model to obtain the load power data of the forecast day.
Fig. 1 is the structure diagram of a typical three-layer BP neural network. Suppose the number of input-layer neurons is M, the number of hidden-layer neurons is I, and the number of output-layer neurons is J. The m-th input-layer neuron is denoted a_m, the i-th hidden-layer neuron is denoted k_i, and the j-th output-layer neuron is denoted y_j. The connection weight from a_m to k_i is w_mi, and the connection weight from k_i to y_j is w_ij.
(1) Forward propagation of the input signal
According to the structure of the BP neural network in Fig. 1, the output of the input layer is equal to the input signal of the network:
v_m^M(n) = a_m(n)
The input of the i-th hidden-layer neuron is equal to the weighted sum of the v_m^M(n):
u_i^I(n) = Σ_{m=1..M} w_mi·v_m^M(n)
Assuming f(·) is the hidden-layer activation function, the output of the i-th hidden-layer neuron is:
v_i^I(n) = f(u_i^I(n))
The input of the j-th output-layer neuron is equal to the weighted sum of the v_i^I(n):
u_j^J(n) = Σ_{i=1..I} w_ij·v_i^I(n)
Assuming g(·) is the output-layer activation function, the output of the j-th output-layer neuron is:
v_j^J(n) = g(u_j^J(n))
The error of the j-th output-layer neuron is:
e_j(n) = d_j(n) - v_j^J(n)
where d_j(n) is the desired output. The overall error of the network is:
E(n) = (1/2)·Σ_{j=1..J} e_j²(n)
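The forward pass above can be sketched as follows, assuming a sigmoid hidden activation f(·) and a linear output g(·), and omitting the thresholds for brevity; both choices are illustrative assumptions rather than values fixed by the text.

    import numpy as np

    def forward_pass(a, w_mi, w_ij):
        """Forward propagation of one sample a (length M) through the three-layer network.
        w_mi: (M, I) input-to-hidden weights; w_ij: (I, J) hidden-to-output weights."""
        f = lambda u: 1.0 / (1.0 + np.exp(-u))   # assumed hidden-layer activation f(.)
        u_I = a @ w_mi                            # input of the hidden-layer neurons
        v_I = f(u_I)                              # output of the hidden layer
        u_J = v_I @ w_ij                          # input of the output-layer neurons
        v_J = u_J                                 # assumed linear output g(u) = u
        return v_J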
(2) Back-propagation of the error signal
The output errors of the neurons in each layer are calculated layer by layer, starting from the output layer; then, according to these errors, the Levenberg-Marquardt (LM) algorithm is used to adjust the weights and thresholds of each layer, so that after the adjustment the final output of the network mapping approaches the desired values.
In step S5, the process by which the LM method obtains the corrections to the weights and thresholds of the BP neural network under the current data set is:
Let the error criterion function be:
E(w) = (1/2)·Σ_{i=1..P} ||Y_i - Y_i'||² = (1/2)·Σ_{i=1..P} e_i(w)²    (2)
In formula (2), Y_i is the desired network output vector of the i-th sample; Y_i' is the actual network output vector of the i-th sample; P is the number of samples; w is the vector formed by the network weights and thresholds; e_i(w) is the error of the i-th sample.
Let w_k denote the vector formed by the weights and thresholds of the k-th iteration; the new vector of weights and thresholds is w_(k+1) = w_k + Δw, where the increment Δw is calculated as:
Δw = [J^T(w)·J(w) + λ·I]^(-1)·J^T(w)·e(w)    (3)
In formula (3), I is the identity matrix; λ is the user-defined learning rate; e(w) is the error vector; J(w) is the Jacobian matrix, i.e.:
J(w) = [∂e_i(w)/∂w_u], with rows i = 1, …, P and columns u = 1, …, n    (4)
In formula (4), w_u (1 ≤ u ≤ n) is the u-th component of the vector formed by the weights and thresholds of the n-th iteration.
In step S4, the specific process of optimizing the initial parameters of the BP neural network with the particle swarm optimization algorithm is: the set of initial weights and thresholds of the BP neural network is mapped to a particle swarm, i.e. the position components of a particle are the connection weights and thresholds between all nodes of the BP neural network; each iteration yields the best weights and thresholds found by the swarm, and the globally optimal weights and thresholds obtained at the end are assigned to the BP neural network. The velocity and position update equations of the particle swarm are:
v_id(t+1) = ω(t)·v_id(t) + c1·r1·(p_id - x_id(t)) + c2·r2·(p_gd - x_id(t))
x_id(t+1) = x_id(t) + v_id(t+1)    (1)
In formula (1), ω(t) is the inertia weight factor of the t-th iteration; c1 and c2 are the learning factors (acceleration constants); r1 and r2 are uniform random numbers in the range [0, 1]; v_id is the velocity of the i-th particle in dimension d, x_id is the position of the i-th particle in dimension d, p_id is the best position experienced by the i-th particle, and p_gd is the best position experienced by the entire swarm.
ω(t) is obtained by the following formula:
ω(t) = μ·ω(t-1)·(1 - ω(t-1))    (5)
In formula (5), μ is the chaos control parameter.
As shown in Fig. 3, step S4 specifically includes the following steps:
S4.1: Set the particle swarm parameters, including the population size, maximum number of iterations, fitness error limit, inertia weight and learning factors;
S4.2: Initialize the velocities and positions of all particles;
S4.3: Calculate the fitness value of each particle, using the mean squared error as the fitness function, and perform the judgements of steps S4.4 and S4.5;
S4.4: If the current fitness value of a particle is better than its historical best, replace the historical best position with the current position;
S4.5: If a particle's historical best is better than the global best, replace the global best with that particle's historical best;
S4.6: Update the velocity and position of each particle;
S4.7: Check whether the velocity and position of each particle exceed the set ranges; if they do, use the boundary values as the particle's velocity and position;
S4.8: Increase the iteration count by 1 and check whether the termination condition is reached; if so, stop iterating and output the weights and thresholds; otherwise, go to step S4.3;
S4.9: Build the BP neural network model with the obtained weights and thresholds.
In step S4.8, the termination condition is: the maximum number of iterations is reached or the minimum error requirement is met.

Claims (6)

1. A short-term load forecasting method for a big data environment, characterized in that it includes the following steps:
S1: Obtain a historical load data set;
S2: Using the MapReduce data processing system based on the Hadoop framework, split the load data set into small data sets and store them on the data nodes of the distributed file system;
S3: Build a BP neural network and initialize its parameters;
S4: Optimize the initial parameters of the BP neural network using the particle swarm optimization algorithm, and upload the resulting weights and thresholds to the distributed file system;
S5: In the Map stage, read the parameters (weights and thresholds) from the distributed file system and restore the BP neural network at the start of each subtask; on the data assigned to the subtask, perform forward propagation of the input signal and back-propagation of the error signal of the BP neural network, obtain the corrections to the weights and thresholds of the BP neural network under the current data set, and pass them as key-value pairs to the Reduce stage;
S6: In the Reduce stage, after the BP neural network has been trained on all data sets, use the key values of the key-value pairs <key, value> corresponding to the input-layer, hidden-layer and output-layer neurons to accumulate the influence of all load data samples on the weights and thresholds of each neuron, and write the result to the distributed file system;
S7: Judge whether the current iteration has reached the convergence accuracy or the preset number of iterations; if so, establish the distributed BP neural network model from the layer structure (input layer, hidden layer and output layer) of the BP neural network and the weight and threshold parameters in the distributed file system; if not, correct the weight and threshold parameters of the BP neural network;
S8: Feed the forecast-day data into the distributed BP neural network model to obtain the load power data of the forecast day.
2. The short-term load forecasting method for a big data environment according to claim 1, characterized in that, in step S4, the specific process of optimizing the initial parameters of the BP neural network with the particle swarm optimization algorithm is: the set of initial weights and thresholds of the BP neural network is mapped to a particle swarm, i.e. the position components of a particle are the connection weights and thresholds between all nodes of the BP neural network; each iteration yields the best weights and thresholds found by the swarm, and the globally optimal weights and thresholds obtained at the end are assigned to the BP neural network; the velocity and position update equations of the particle swarm are:
v_id(t+1) = ω(t)·v_id(t) + c1·r1·(p_id - x_id(t)) + c2·r2·(p_gd - x_id(t))
x_id(t+1) = x_id(t) + v_id(t+1)    (1)
In formula (1), ω(t) is the inertia weight factor of the t-th iteration; c1 and c2 are the learning factors (acceleration constants); r1 and r2 are uniform random numbers in the range [0, 1]; v_id is the velocity of the i-th particle in dimension d, x_id is the position of the i-th particle in dimension d, p_id is the best position experienced by the i-th particle, and p_gd is the best position experienced by the entire swarm.
3. The short-term load forecasting method for a big data environment according to claim 1, characterized in that, in step S5, the process of obtaining the corrections to the weights and thresholds of the BP neural network under the current data set is:
Let the error criterion function be:
E(w) = (1/2)·Σ_{i=1..P} ||Y_i - Y_i'||² = (1/2)·Σ_{i=1..P} e_i(w)²    (2)
In formula (2), Y_i is the desired network output vector of the i-th sample; Y_i' is the actual network output vector of the i-th sample; P is the number of samples; w is the vector formed by the network weights and thresholds; e_i(w) is the error of the i-th sample.
Let w_k denote the vector formed by the weights and thresholds of the k-th iteration; the new vector of weights and thresholds is w_(k+1) = w_k + Δw, where the increment Δw is calculated as:
Δw = [J^T(w)·J(w) + μ·I]^(-1)·J^T(w)·e(w)    (3)
In formula (3), I is the identity matrix; μ is the user-defined learning rate; e(w) is the error vector; J(w) is the Jacobian matrix, i.e.:
J(w) = [∂e_i(w)/∂w_u], with rows i = 1, …, P and columns u = 1, …, n    (4)
In formula (4), w_u (1 ≤ u ≤ n) is the u-th component of the vector formed by the weights and thresholds of the n-th iteration.
4. The short-term load forecasting method for a big data environment according to claim 2, characterized in that ω(t) is obtained by the following formula:
ω(t) = μ·ω(t-1)·(1 - ω(t-1))    (5)
In formula (5), μ is the chaos control parameter.
5. The short-term load forecasting method for a big data environment according to claim 1, characterized in that step S4 specifically includes the following steps:
S4.1: Set the particle swarm parameters, including the population size, maximum number of iterations, fitness error limit, inertia weight and learning factors;
S4.2: Initialize the velocities and positions of all particles;
S4.3: Calculate the fitness value of each particle, using the mean squared error as the fitness function, and perform the judgements of steps S4.4 and S4.5;
S4.4: If the current fitness value of a particle is better than its historical best, replace the historical best position with the current position;
S4.5: If a particle's historical best is better than the global best, replace the global best with that particle's historical best;
S4.6: Update the velocity and position of each particle;
S4.7: Check whether the velocity and position of each particle exceed the set ranges; if they do, use the boundary values as the particle's velocity and position;
S4.8: Increase the iteration count by 1 and check whether the termination condition is reached; if so, stop iterating and output the weights and thresholds; otherwise, go to step S4.3;
S4.9: Build the BP neural network model with the obtained weights and thresholds.
6. The short-term load forecasting method for a big data environment according to claim 5, characterized in that, in step S4.8, the termination condition is: the maximum number of iterations is reached or the minimum error requirement is met.
CN201711442212.5A 2017-12-27 2017-12-27 A kind of short-term load forecasting method under big data environment Pending CN108182490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711442212.5A CN108182490A (en) 2017-12-27 2017-12-27 A kind of short-term load forecasting method under big data environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711442212.5A CN108182490A (en) 2017-12-27 2017-12-27 A kind of short-term load forecasting method under big data environment

Publications (1)

Publication Number Publication Date
CN108182490A true CN108182490A (en) 2018-06-19

Family

ID=62547567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711442212.5A Pending CN108182490A (en) 2017-12-27 2017-12-27 A kind of short-term load forecasting method under big data environment

Country Status (1)

Country Link
CN (1) CN108182490A (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164742A (en) * 2013-04-02 2013-06-19 南京邮电大学 Server performance prediction method based on particle swarm optimization nerve network
CN103729695A (en) * 2014-01-06 2014-04-16 国家电网公司 Short-term power load forecasting method based on particle swarm and BP neural network
CN104361393A (en) * 2014-09-06 2015-02-18 华北电力大学 Method for using improved neural network model based on particle swarm optimization for data prediction
CN104715282A (en) * 2015-02-13 2015-06-17 浙江工业大学 Data prediction method based on improved PSO-BP neural network
CN106022521A (en) * 2016-05-19 2016-10-12 四川大学 Hadoop framework-based short-term load prediction method for distributed BP neural network
CN106229964A (en) * 2016-07-22 2016-12-14 南京工程学院 A kind of based on the electrical power distribution network fault location method improving binary particle swarm algorithm
CN106372756A (en) * 2016-09-07 2017-02-01 南京工程学院 Thermal power plant load optimization distribution method based on breeding particle swarm optimization
CN106777449A (en) * 2016-10-26 2017-05-31 南京工程学院 Distribution Network Reconfiguration based on binary particle swarm algorithm
CN106779177A (en) * 2016-11-28 2017-05-31 国网冀北电力有限公司唐山供电公司 Multiresolution wavelet neutral net electricity demand forecasting method based on particle group optimizing
CN107316099A (en) * 2017-05-22 2017-11-03 沈阳理工大学 Ammunition Storage Reliability Forecasting Methodology based on particle group optimizing BP neural network
CN107301475A (en) * 2017-06-21 2017-10-27 南京信息工程大学 Load forecast optimization method based on continuous power analysis of spectrum

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110068759A (en) * 2019-05-22 2019-07-30 四川华雁信息产业股份有限公司 A kind of fault type preparation method and device
CN110068759B (en) * 2019-05-22 2021-11-09 华雁智能科技(集团)股份有限公司 Fault type obtaining method and device
CN110262897A (en) * 2019-06-13 2019-09-20 东北大学 A kind of Hadoop calculating task primary distribution method based on load estimation
WO2020248226A1 (en) * 2019-06-13 2020-12-17 东北大学 Initial hadoop computation task allocation method based on load prediction
CN110262897B (en) * 2019-06-13 2023-01-31 东北大学 Hadoop calculation task initial allocation method based on load prediction
CN111353582A (en) * 2020-02-19 2020-06-30 四川大学 Particle swarm algorithm-based distributed deep learning parameter updating method
CN111695667A (en) * 2020-05-27 2020-09-22 江苏信息职业技术学院 MapReduce-based distributed particle swarm clustering algorithm
CN112231489A (en) * 2020-10-19 2021-01-15 中国科学技术大学 Knowledge learning and transferring method and system for epidemic prevention robot
CN112231489B (en) * 2020-10-19 2021-11-02 中国科学技术大学 Knowledge learning and transferring method and system for epidemic prevention robot
CN113065693A (en) * 2021-03-22 2021-07-02 哈尔滨工程大学 Traffic flow prediction method based on radial basis function neural network

Similar Documents

Publication Publication Date Title
CN108182490A (en) A kind of short-term load forecasting method under big data environment
CN109102126B (en) Theoretical line loss rate prediction model based on deep migration learning
Li et al. Prediction for tourism flow based on LSTM neural network
CN108694467B (en) Method and system for predicting line loss rate of power distribution network
CN106022521B (en) Short-term load prediction method of distributed BP neural network based on Hadoop architecture
Gill et al. Training back propagation neural networks with genetic algorithm for weather forecasting
CN104217258B (en) A kind of electric load sigma-t Forecasting Methodology
CN107704875A (en) Based on the building load Forecasting Methodology and device for improving IHCMAC neutral nets
CN103105246A (en) Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm
CN104636985A (en) Method for predicting radio disturbance of electric transmission line by using improved BP (back propagation) neural network
CN109445935A (en) A kind of high-performance big data analysis system self-adaption configuration method under cloud computing environment
CN104376389A (en) Master-slave type micro-grid power load prediction system and master-slave type micro-grid power load prediction method based on load balancing
CN111723839B (en) Method for predicting line loss rate of transformer area based on edge calculation
CN106296044B (en) Power system risk scheduling method and system
CN109614580A (en) Antidetonation bulk testing model update method based on online Xgboost algorithm
CN108171319A (en) The construction method of the adaptive depth convolution model of network connection
CN112633577A (en) Short-term household electrical load prediction method, system, storage medium and equipment
CN108805346A (en) A kind of hot continuous rolling force forecasting method based on more hidden layer extreme learning machines
CN109214565A (en) A kind of subregion system loading prediction technique suitable for the scheduling of bulk power grid subregion
CN116340006A (en) Computing power resource idle prediction method based on deep learning and storage medium
CN114493052A (en) Multi-model fusion self-adaptive new energy power prediction method and system
CN110163419A (en) A kind of method of middle and small river river basin flood forecast
CN109190749A (en) A kind of prediction technique and device for the intelligent electric meter service life
Showkati et al. Short term load forecasting using echo state networks
CN103763123A (en) Method and device for evaluating health condition of network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180619