CN113065693A - Traffic flow prediction method based on radial basis function neural network - Google Patents

Traffic flow prediction method based on radial basis function neural network

Info

Publication number
CN113065693A
CN113065693A (application CN202110301075.3A; granted as CN113065693B)
Authority
CN
China
Prior art keywords
particle
neural network
particles
rbf neural
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110301075.3A
Other languages
Chinese (zh)
Other versions
CN113065693B (en
Inventor
李思照
蔚昊
孙建国
武俊鹏
夏松竹
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202110301075.3A priority Critical patent/CN113065693B/en
Publication of CN113065693A publication Critical patent/CN113065693A/en
Application granted granted Critical
Publication of CN113065693B publication Critical patent/CN113065693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods


Abstract

The invention belongs to the technical field of artificial intelligence and distributed learning, and particularly relates to a traffic flow prediction method based on a radial basis function neural network. Aiming at the lack of universality in RBF parameter setting, the RBF network and the APSO algorithm are fused: the network centers, center radii and connection weights of the RBF network are mapped to particle positions, and the parameters are optimized through the particles' search process. The invention introduces a health-based PSO algorithm, classifies particles into excellent, normal and poor states by judging each particle's health, applies a specific global search strategy optimization to particles with poor health, and applies a specific local strategy optimization to particles with excellent health. Finally, on the basis of a Spark parallel platform, the particles are updated through the master node and the sub-nodes, and an RBF neural network model for traffic flow prediction is output.

Description

Traffic flow prediction method based on radial basis function neural network
Technical Field
The invention belongs to the technical field of artificial intelligence and distributed learning, and particularly relates to a traffic flow prediction method based on a radial basis function neural network.
Background
With the development of artificial intelligence technology, a deep learning model must be continuously updated through iterative derivation during training in order to improve its own capability; this process requires a large amount of intensive computation and lengthens the training period. Training deep learning on a single node increasingly fails to meet requirements: when massive training samples are input, a serially implemented deep learning model struggles to reach the target error precision, so training often takes weeks, months, or even longer. Compared with the serial training mode of a single node, a parallelized distributed architecture has good scalability and flexibility and can integrate the resources of individual nodes. Meanwhile, swarm intelligence algorithms are increasingly used to train deep neural networks with huge parameter counts and are being fused with more models.
Although the radial basis function (RBF) neural network performs well and is applied in many fields, in practical applications the network structure is still, in most cases, tuned and set empirically; a universal method for setting the network size is lacking. As a result, the RBF network must spend time optimizing its parameters in some practical parallelization scenarios. Existing parallelization improvements for the RBF network fuse it with the particle swarm optimization (PSO) algorithm, but the standard PSO algorithm's search strength is weak and it easily falls into local optima, so model performance is not high.
Disclosure of Invention
The invention aims to provide a traffic flow prediction method based on a radial basis function neural network.
The purpose of the invention is realized by the following technical scheme. The method comprises the following steps:

Step 1: obtain historical traffic flow data collected by observation points and construct a training set {X_1, X_2, ..., X_T}. Each training-set sample is X_k = {x_k, y_k}, k ∈ {1, 2, ..., T}, where T is the number of samples in the training set; x_k and y_k are traffic flows over consecutive time periods: x_k is the traffic flow over the preceding t_1 time periods, and y_k is the traffic flow over the following t_2 time periods.

Step 2: the master node initializes and generates a random particle swarm and randomly disperses the particles into S sub-nodes. The training set {X_1, X_2, ..., X_T} is input into each sub-node for iteration, and each sub-node outputs an optimal RBF neural network.

Step 3: the master node integrates the optimal RBF neural networks output by the sub-nodes to obtain the final RBF neural network model. For a sample X_k = {x_k, y_k}, the prediction output ŷ_k obtained by inputting x_k into the final RBF neural network model is the combination of the sub-node outputs weighted by ω_s, where ω_s is the adaptive inertia weight of the optimal RBF neural network output by the s-th sub-node, and ŷ_k^(s) is the prediction output obtained by inputting x_k of sample X_k = {x_k, y_k} into the optimal RBF neural network output by the s-th sub-node.

Step 4: input the training set {X_1, X_2, ..., X_T} into the final RBF neural network model for training to obtain the trained traffic flow prediction model.

Step 5: input the traffic flow of the preceding t_1 time periods to be predicted into the trained traffic flow prediction model to obtain the predicted traffic flow for the following t_2 time periods.
The present invention may further comprise:
the specific steps of outputting an optimal RBF neural network by each branch node in the step 2 are as follows:
step 2.1: setting the maximum iteration times M; initializing the iteration time t as 1; for I in the s-th sub-nodesInitializing a parameter K of each particle i in the randomly generated particle swarmi(1) Position, position
Figure BDA0002986302530000022
Velocity vi(1) And degree of health Hi(1)=rand[-0.1,0.1];s∈{1,2,...,S},i∈{1,2,...,Is},
Figure BDA0002986302530000023
From the s-th subnode IsRandomly selecting one particle from the particles as an initial global optimal particle gbest (1), and taking the position of the initial global optimal particle gbest as an initial global optimal position Pgbest(1);
Step 2.2: generating an RBF neural network for each particle i; the RBF neural network corresponding to the particle i comprises Ki(t) hidden layer nodes, jthiThe nodes of the hidden layer are centered
Figure BDA0002986302530000024
J thiThe width of each hidden layer node is
Figure BDA0002986302530000025
J thiThe connection weight from the node of the hidden layer to the kth output node is
Figure BDA0002986302530000026
ji∈{1,2,...,Ki(t)};
Step 2.3: will train set { X1,X2,...,XTInputting the result into the RBF neural network corresponding to each particle for calculation to obtain the training information in the RBF neural network corresponding to each particleCollecting the prediction output of each sample;
Figure BDA0002986302530000027
Figure BDA0002986302530000028
wherein the content of the first and second substances,
Figure BDA0002986302530000029
represents a sample Xk={xk,ykIn xkInputting the prediction output obtained from the RBF neural network corresponding to the particle i; phi () is the radial basis function;
step 2.4: calculating a fitness value f for each particle ii(t);
fi(t)=Ei(t)+αKi
Figure BDA0002986302530000031
Wherein, alpha is a balance factor, and alpha is more than 0; ei(t) is the root mean square error of the RBF neural network corresponding to the particle i;
step 2.5: calculating an adaptive inertial weight ω for each particle ii(t);
ωi(t)=γ(t)(Ai(t)+c)
Figure BDA0002986302530000032
Figure BDA0002986302530000033
Figure BDA0002986302530000034
Wherein, L and c are preset constants, L is more than or equal to 2, and c is more than or equal to 0; s (t) represents the particle diversity, min (f)i(t)) and max (f)i(t)) respectively representing a minimum fitness value and a maximum fitness value in a current t-th iteration; f. ofgbest(t)(t) a fitness value of the globally optimal particle gbest (t) in the current tth iteration;
step 2.6: according to the health degree H of each particle ii(t) historical individual optimal position for particles
Figure BDA0002986302530000035
And the current position Pi(t) updating;
if H isi(t)≤k1×IsThe health degree of the particle i is poor; if k is1×Is<Hi(t)≤k2×IsIf the health degree of the particle i is in a normal state; hi(t)>k2×IsThe health of the particle i is excellent; k is a radical of1And k2Is a parameter within a value range (0,1), and k2>k1
(1) In the probability of Limit/NsTreating the particles with poor health degree, and updating the position P of the particlesi(t) and Individual optimal positions
Figure BDA0002986302530000036
Figure BDA0002986302530000037
Figure BDA0002986302530000038
Wherein the initial position P of each particlei(1) As the initial individual optimal position of the particle
Figure BDA0002986302530000039
Figure BDA0002986302530000041
NsThe total number of particles with poor health degree in the current t-th iteration is obtained; d is dimension, and D is 3; beta is a value of [0,2]A constant of (d);
treating the rest part of the particles with poor health degree by adopting the following formula, and updating the position P of the particlesi(t) and Individual optimal positions
Figure BDA0002986302530000042
Figure BDA0002986302530000043
Pi(t+1)=N+rand4(1,D)×(M-N)
(2) For particles with general health degree, updating the speed v of the particlesi(t) and position Pi(t);
Figure BDA0002986302530000044
Pi(t+1)=Pi(t)+vi(t+1)
Wherein, c1、c2Is a learning factor;
(3) for particles with excellent health degree, only the position P of the particle is updatedi(t);
Figure BDA0002986302530000045
Figure BDA0002986302530000046
Wherein, the parameters mu and eta obey the standard normal distribution, and Levy (beta) represents that Levy () distribution operation is carried out on the parameter beta;
step 2.7: updating the health H of each particle ii(t);
Figure BDA0002986302530000047
Figure BDA0002986302530000048
Figure BDA0002986302530000049
Step 2.8: updating the number K of hidden layer nodes in the RBF neural network corresponding to each particle ii
Figure BDA0002986302530000051
Wherein, Kgbest(t)(t) the number of hidden layer nodes in the RBF neural network corresponding to the global optimal particles gbest (t) in the current t-th iteration is represented;
step 2.9: according to the fitness value f of each particle i calculated in the current t-th iterationi(t) taking the corresponding fitness value fi(t) the largest particle is taken as the global optimal particle gbest (t +1) in the next iteration;
step 2.10: judging whether the maximum iteration number M is reached; if t is less than M, making t equal to t +1, and returning to the step 2.3; otherwise, outputting the RBF neural network corresponding to the global optimal particle in the s-th sub-node.
The invention has the beneficial effects that:
Aiming at the lack of universality in RBF parameter setting, the RBF network and the APSO algorithm are fused: the network centers, center radii and connection weights of the RBF network are mapped to particle positions, and the parameters are optimized through the particles' search process. The invention introduces a health-based PSO algorithm combined with APSO-RBF, classifies particles into excellent, normal and poor states by judging each particle's health, applies a specific global search strategy optimization to particles with poor health, and applies a specific local strategy optimization to particles with excellent health. Finally, on the basis of a Spark parallel platform, the particles are updated through the master node and the sub-nodes, and an RBF neural network model usable for prediction is output.
Drawings
FIG. 1 is a flow chart of the APSO algorithm for adding inertial weights.
FIG. 2 is a flow chart of the algorithm for APSO-RBF.
FIG. 3 is a parallelization flow diagram of the present invention.
Fig. 4 is a partial data table of the PeMSD4 data set in an embodiment of the present invention.
FIG. 5 is a table of calculated error values in accordance with an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention relates to the fields of artificial intelligence and distributed learning, and designs a parallelized neural network model, the HHAPSO-RBF neural network model, based on the Spark platform by combining a radial basis function neural network (RBFNN), a particle swarm optimization algorithm with adaptively changing weights (APSO), and a health-based particle swarm optimization algorithm (HPSO). Combining the RBF neural network with the improved health-based particle swarm optimization (HPSO), the invention provides a traffic flow prediction method based on a radial basis function neural network.
A traffic flow prediction method based on a radial basis function neural network comprises the following steps:

Step 1: obtain historical traffic flow data collected by observation points and construct a training set {X_1, X_2, ..., X_T}. Each training-set sample is X_k = {x_k, y_k}, k ∈ {1, 2, ..., T}, where T is the number of samples in the training set; x_k and y_k are traffic flows over consecutive time periods: x_k is the traffic flow over the preceding t_1 time periods, and y_k is the traffic flow over the following t_2 time periods.
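Step 1's sliding-window construction can be sketched as follows (a minimal illustration; the function name `build_training_set` and the toy readings are not from the patent):

```python
import numpy as np

def build_training_set(flow, t1, t2):
    """Slice a 1-D series of historical traffic flow readings into
    samples X_k = {x_k, y_k}: x_k covers t1 consecutive periods and
    y_k covers the t2 periods that immediately follow."""
    samples = []
    for start in range(len(flow) - t1 - t2 + 1):
        x_k = np.asarray(flow[start:start + t1], dtype=float)
        y_k = np.asarray(flow[start + t1:start + t1 + t2], dtype=float)
        samples.append((x_k, y_k))
    return samples

# Example: 10 readings, predict 2 periods from the previous 4.
flow = [12, 15, 11, 14, 18, 21, 19, 17, 16, 20]
train = build_training_set(flow, t1=4, t2=2)
```

Each element of `train` is one (x_k, y_k) pair; with 10 readings, t1 = 4 and t2 = 2 there are 10 − 4 − 2 + 1 = 5 samples.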
Step 2: the master node initializes and generates a random particle swarm and randomly disperses the particles into S sub-nodes. The training set {X_1, X_2, ..., X_T} is input into each sub-node for iteration, and each sub-node outputs an optimal RBF neural network. The specific iteration steps within each sub-node are as follows:

Step 2.1: set the maximum number of iterations M and initialize the iteration counter t = 1. For each particle i in the randomly generated swarm of I_s particles in the s-th sub-node, initialize its parameter K_i(1), position P_i(1), velocity v_i(1), and health H_i(1) = rand[-0.1, 0.1]; s ∈ {1, 2, ..., S}, i ∈ {1, 2, ..., I_s}. From the I_s particles of the s-th sub-node, randomly select one particle as the initial global optimal particle gbest(1), and take its position as the initial global optimal position P_gbest(1).

Step 2.2: generate an RBF neural network for each particle i. The RBF neural network corresponding to particle i contains K_i(t) hidden-layer nodes; the j_i-th hidden-layer node has center c_{j_i}(t), width b_{j_i}(t), and connection weight w_{j_i,k}(t) from the j_i-th hidden-layer node to the k-th output node; j_i ∈ {1, 2, ..., K_i(t)}.
Step 2.3: will train set { X1,X2,...,XTIs inputted to the RB corresponding to each particleCalculating in the F neural network to obtain the predicted output of each sample in the training set in the RBF neural network corresponding to each particle;
Figure BDA0002986302530000066
Figure BDA0002986302530000067
wherein the content of the first and second substances,
Figure BDA0002986302530000068
represents a sample Xk={xk,ykIn xkInputting the prediction output obtained from the RBF neural network corresponding to the particle i; phi () is the radial basis function;
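The forward pass of one particle's RBF network in step 2.3 can be sketched as below, assuming a Gaussian basis φ(r) = exp(−r²); the patent specifies only φ() as "the radial basis function", so the Gaussian choice and all names here are assumptions:

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """Forward pass of one particle's RBF network: a Gaussian basis
    phi(r) = exp(-r^2) is applied to the distance between input x and
    each hidden-node center, scaled by the node width, then combined
    through the hidden-to-output weights."""
    x = np.asarray(x, dtype=float)
    # r_j = ||x - c_j|| / b_j for each of the K hidden nodes
    r = np.linalg.norm(x - centers, axis=1) / widths
    phi = np.exp(-r ** 2)          # hidden-layer activations, shape (K,)
    return phi @ weights           # (K,) @ (K, n_out) -> (n_out,)

# Tiny example: K = 2 hidden nodes, 3-dimensional input, 1 output.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
widths = np.array([1.0, 1.0])
weights = np.array([[0.5], [0.5]])
out = rbf_predict([0.0, 0.0, 0.0], centers, widths, weights)
```

The first node's center coincides with the input (activation 1), while the second lies at distance √3 (activation exp(−3)).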
Step 2.4: calculate the fitness value f_i(t) of each particle i:

f_i(t) = E_i(t) + αK_i

E_i(t) = sqrt( (1/T) Σ_{k=1}^{T} (ŷ_k^i − y_k)² )

where α is a balance factor, α > 0, and E_i(t) is the root mean square error of the RBF neural network corresponding to particle i.
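The fitness of step 2.4 combines the prediction error with a network-size penalty; a minimal sketch (the value α = 0.01 is an arbitrary illustration, not from the patent):

```python
import numpy as np

def fitness(preds, targets, K, alpha=0.01):
    """Particle fitness f_i(t) = E_i(t) + alpha * K_i: the root mean
    square error of the particle's network over the training set plus
    a penalty proportional to the hidden-node count K_i (alpha > 0)."""
    preds = np.asarray(preds, dtype=float)
    targets = np.asarray(targets, dtype=float)
    rmse = np.sqrt(np.mean((preds - targets) ** 2))
    return rmse + alpha * K

# Smaller error and fewer hidden nodes both lower the fitness value.
f = fitness(preds=[1.0, 2.0, 3.0], targets=[1.0, 2.0, 5.0], K=4, alpha=0.01)
```

Because lower is better, the particle with the smallest fitness is the swarm's best.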
Step 2.5: calculate the adaptive inertia weight ω_i(t) of each particle i:

ω_i(t) = γ(t)(A_i(t) + c)

where γ(t) is a nonlinear regression function adjusted by the particle diversity S(t), and A_i(t) is the variation ratio between particle i and the global optimal particle; L and c are preset constants with L ≥ 2 and c ≥ 0; S(t) denotes the particle diversity, min(f_i(t)) and max(f_i(t)) respectively denote the minimum and maximum fitness values in the current t-th iteration, and f_gbest(t)(t) is the fitness value of the global optimal particle gbest(t) in the current t-th iteration. [The defining formulas for S(t), γ(t) and A_i(t) appear in the original only as equation images.]
Step 2.6: according to the health H_i(t) of each particle i, update the particle's historical individual optimal position P_i^pbest(t) and current position P_i(t).

If H_i(t) ≤ k_1 × I_s, the health of particle i is poor; if k_1 × I_s < H_i(t) ≤ k_2 × I_s, the health of particle i is normal; if H_i(t) > k_2 × I_s, the health of particle i is excellent. k_1 and k_2 are parameters in the range (0, 1), with k_2 > k_1.
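The three-way health classification can be sketched as follows (the thresholds k1 = 0.3 and k2 = 0.7 are illustrative choices within the required range (0, 1), not values from the patent):

```python
def classify_health(H, I_s, k1=0.3, k2=0.7):
    """Bucket a particle by its health H relative to the swarm size
    I_s: 'poor' if H <= k1*I_s, 'normal' if k1*I_s < H <= k2*I_s,
    'excellent' if H > k2*I_s.  Requires 0 < k1 < k2 < 1."""
    if H <= k1 * I_s:
        return "poor"
    if H <= k2 * I_s:
        return "normal"
    return "excellent"
```

The label then selects which of the three update strategies below is applied to the particle.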
(1) With probability Limit/N_s, process the particles with poor health and update the particle position P_i(t) and individual optimal position P_i^pbest(t), where the initial position P_i(1) of each particle serves as its initial individual optimal position P_i^pbest(1). N_s is the total number of particles with poor health in the current t-th iteration; D is the dimension, D = 3; β is a constant with value in [0, 2]. [The position-update formulas appear in the original only as equation images.]

The remaining particles with poor health are processed with the formula

P_i(t+1) = N + rand_4(1, D) × (M − N)

and their positions P_i(t) and individual optimal positions P_i^pbest(t) are updated accordingly.
(2) For particles with normal health, update the particle velocity v_i(t) and position P_i(t):

v_i(t+1) = ω_i(t) v_i(t) + c_1 rand_1 (P_i^pbest(t) − P_i(t)) + c_2 rand_2 (P_gbest(t) − P_i(t))

P_i(t+1) = P_i(t) + v_i(t+1)

where c_1 and c_2 are learning factors and rand_1, rand_2 are random numbers in [0, 1].
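The update for normal-health particles follows the standard PSO move, with ω_i(t) as this particle's adaptive inertia weight; a sketch (the injectable `rng` parameter is added only so the example is deterministic, and rand_1, rand_2 are assumed uniform in [0, 1]):

```python
import random

def pso_update(position, velocity, pbest, gbest, omega,
               c1=2.0, c2=2.0, rng=random):
    """Standard PSO move for a normal-health particle:
    v <- omega*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = rng.random(), rng.random()
        v_next = omega * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(v_next)
        new_x.append(x + v_next)
    return new_x, new_v
```

The inertia term keeps the particle's momentum, while the two attraction terms pull it toward its personal best and the global best.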
(3) For particles with excellent health, only update the particle position P_i(t):

P_i(t+1) = P_i(t) + Levy(β)

where the parameters μ and η obey the standard normal distribution, and Levy(β) denotes performing a Levy() distribution operation on the parameter β.
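For excellent-health particles the patent specifies a Lévy-flight position update driven by standard-normal parameters μ and η; the sketch below uses Mantegna's algorithm, which is one common construction and an assumption here (the step scale 0.01 is also illustrative):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy(beta) step via Mantegna's algorithm (an assumed
    construction; the patent only states that mu and eta are standard
    normal): step = mu / |eta|^(1/beta), with mu drawn at scale sigma."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    mu = rng.gauss(0, sigma)
    eta = rng.gauss(0, 1)
    return mu / abs(eta) ** (1 / beta)

def levy_move(position, beta=1.5, scale=0.01, rng=random):
    """Excellent-health particles move only by a Levy-flight
    perturbation of their current position (no velocity update)."""
    return [x + scale * levy_step(beta, rng) for x in position]
```

Lévy flights mix many small local moves with occasional long jumps, which suits the fine-grained local search wanted for already-good particles.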
Step 2.7: update the health H_i(t) of each particle i. H_i(t) is adjusted dynamically according to the change in the particle's fitness in this iteration. [The update formulas appear in the original only as equation images.]
Step 2.8: updating the number K of hidden layer nodes in the RBF neural network corresponding to each particle ii
Figure BDA0002986302530000089
Wherein, Kgbest(t)(t) the number of hidden layer nodes in the RBF neural network corresponding to the global optimal particles gbest (t) in the current t-th iteration is represented;
Step 2.9: according to the fitness value f_i(t) of each particle i calculated in the current t-th iteration, take the particle with the best (smallest) fitness value f_i(t) as the global optimal particle gbest(t+1) for the next iteration.

Step 2.10: judge whether the maximum number of iterations M has been reached. If t < M, let t = t + 1 and return to step 2.3; otherwise, output the RBF neural network corresponding to the global optimal particle in the s-th sub-node.
Step 3: the master node integrates the optimal RBF neural networks output by the sub-nodes to obtain the final RBF neural network model. For a sample X_k = {x_k, y_k}, the prediction output ŷ_k obtained by inputting x_k into the final RBF neural network model is the combination of the sub-node outputs weighted by ω_s, where ω_s is the adaptive inertia weight of the optimal RBF neural network output by the s-th sub-node, and ŷ_k^(s) is the prediction output obtained by inputting x_k of sample X_k = {x_k, y_k} into the optimal RBF neural network output by the s-th sub-node.

Step 4: input the training set {X_1, X_2, ..., X_T} into the final RBF neural network model for training to obtain the trained traffic flow prediction model.

Step 5: input the traffic flow of the preceding t_1 time periods to be predicted into the trained traffic flow prediction model to obtain the predicted traffic flow for the following t_2 time periods.
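The master-node integration of step 3 can be sketched as a weighted combination of the sub-node outputs; normalizing by the weight sum is an assumption here, since the patent gives the combination only as an equation image:

```python
def combine_predictions(sub_preds, omegas):
    """Master-node integration: the final prediction is the
    omega_s-weighted combination of the S sub-node network outputs,
    normalized so the weights sum to 1 (an assumed normalization)."""
    total = sum(omegas)
    n_out = len(sub_preds[0])
    return [
        sum(w * pred[d] for w, pred in zip(omegas, sub_preds)) / total
        for d in range(n_out)
    ]

# Two sub-nodes predicting 2 future periods; the second has 3x weight.
y = combine_predictions([[10.0, 12.0], [14.0, 16.0]], omegas=[1.0, 3.0])
```

Sub-nodes whose networks earned larger adaptive inertia weights thus contribute more to the final forecast.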
Example 1:
The invention first addresses the lack of a universal method for setting RBF parameters by fusing the RBF network with the APSO algorithm: the network centers, center radii and connection weights of the RBF network are mapped to particle positions, and the parameters are optimized through the particles' search process (the APSO algorithm is obtained from the standard PSO by nonlinearly and adaptively adjusting an inertia variable to regulate the running state). Then, on this basis, to improve the performance of subsequent parallelization, the search capability and efficiency of the particles must be further strengthened, so a health-based PSO algorithm is introduced and combined with APSO-RBF. Finally, on the basis of the Spark parallel platform, an RBF neural network model for prediction is obtained by updating the particles through the master node and the sub-nodes.
1) The PSO algorithm is further optimized and improved by adding a nonlinear automatic adjustment strategy for the inertia variable based on particle diversity. The particle diversity and particle fitness of the PSO algorithm are improved, while a nonlinear regression function is introduced (adjusting the inertia weight to balance search strength) so that the inertia weight ω(t) varies according to the nonlinear regression function. The nonlinear regression function can in turn be adjusted through the diversity of the particle swarm, so that the particles overcome the tendency to fall into local optima.

The improved particle diversity of the PSO algorithm is given by formula (1), and the improved particle fitness by formula (2) [both appear in the original only as equation images], where S(t) denotes the particle diversity, f_min(a(t)) and f_max(a(t)) are respectively the minimum and maximum fitness values of the population in the t-th cycle, and f(a_i(t)) is the fitness value of the i-th particle.

A nonlinear regression function γ(t) is introduced in formula (3) [equation image in the original]; L is a preset constant (generally L ≥ 2).

To obtain a suitable velocity, the variation ratio A_i(t) between the i-th particle and the optimal particle of the population is obtained by formula (4) [equation image in the original], where f_gbest(t)(t) is the fitness value of the global optimal particle gbest(t) in the current t-th iteration.

Finally, the inertia weight formula of the adaptive strategy is obtained:

ω(t) = γ(t)(A_i(t) + c)    (5)

where ω(t) is the dynamic inertia weight of the i-th particle in the t-th iteration, and c is a predefined constant that improves the global search capability of the particles, generally c ≥ 0.
2) On the basis of an improved APSO algorithm, the characteristics of the RBF neural network are further improved and optimized. The RBF neural network is combined with an improved APSO algorithm, the number of hidden neurons is adjusted according to the number of the optimal particles, and an APSO-RBF optimized neural network capable of automatically updating the size of the network is provided.
In the optimized RBF neural network, the fitness value of each particle represents the accuracy of the network and is designed by considering both the error criterion and the network size:

f(a_i(t)) = E_i(t) + αK_i(t)    (6)

where f(a_i(t)) is the fitness value of the i-th particle, K_i is the number of hidden neurons, α is a balance factor greater than 0, and E_i(t) is the root mean square error of the neural network, obtained by formula (7):

E_i(t) = sqrt( (1/T) Σ_{t=1}^{T} (y(t) − yd(t))² )    (7)

where T is the number of data pairs, and y(t) and yd(t) are respectively the network output and the desired output at time t.
Based on the improved algorithm in 1), the optimal particle is obtained, and the sizes of the other particles are updated by changing the network size according to formula (8) [equation image in the original], where K_gbest(t)(t) denotes the number of hidden-layer nodes in the RBF neural network corresponding to the global optimal particle gbest(t) in the current t-th iteration.

Finally, over the course of the iterations, the particles gradually approach and obtain the optimum.
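The node-count update of formula (8) survives only as an equation image; one plausible reading, moving each particle's hidden-node count stepwise toward the global best particle's count, can be sketched as follows (the clamping bounds K_min and K_max are illustrative additions):

```python
def update_node_count(K_i, K_gbest, K_min=2, K_max=50):
    """One plausible node-count update (the patent's formula (8) is
    only an equation image): move this particle's hidden-node count
    K_i one step toward the global best's K_gbest, clamped to
    [K_min, K_max] so every network keeps a workable size."""
    if K_i < K_gbest:
        K_i += 1
    elif K_i > K_gbest:
        K_i -= 1
    return max(K_min, min(K_max, K_i))
```

Gradual steps (rather than jumping straight to K_gbest) keep each particle's network structure close to the one its centers and weights were tuned for.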
3) In the health-based particle swarm optimization algorithm, the particles are classified into excellent, normal and poor states according to the judgment of each particle's health.
If H isi(t)≤k1×IsThe health degree of the particle i is poor; if k is1×Is<Hi(t)≤k2×IsIf the health degree of the particle i is in a normal state; hi(t)>k2×IsThe health of the particle i is excellent; k is a radical of1And k2Is a parameter within a value range (0,1), and k2>k1
In the initial state, health values are randomly assigned to the particles according to an initially set threshold, and are then continuously and dynamically adjusted over the particles' subsequent rounds of iteration. The initial health is calculated by formula (9):

H_i(1) = rand[-0.1, 0.1]    (9)

The change in the dynamically adjusted particle health is given by formulas (10) and (11) [equation images in the original].
under the condition of convergence, most particles evolve towards excellent states, and the health degree is basically stable at the later stage of iteration. Meanwhile, in order to solve the problem that the original algorithm is not strong in searching capability when solving the high-latitude problem and lacks of particle population diversity so that the situation that the particles are easy to fall into a local optimal solution cannot be overcome, different searching strategies are adjusted on the particles with the health degrees in three different states.
For particles with general health degree, the iteration of the particles is continued, and the flow of a standard particle swarm algorithm is followed.
Particles with poor health undergo a specific global search strategy optimization. To maintain particle diversity and prevent the particles from falling into local optima, a number Limit of the poor-health particles and the remaining poor-health particles are processed separately; the regulating variable Limit is calculated by formula (12) [equation image in the original], where D is the dimension, β is a constant with value in [0, 2], t is the current iteration round, and M is the maximum number of iterations.
(1) With probability Limit/N_s, process the particles with poor health and update the particle position P_i(t) and individual optimal position P_i^pbest(t) by formulas (13) and (14) [equation images in the original], where the initial position P_i(1) of each particle serves as its initial individual optimal position P_i^pbest(1). N_s is the total number of particles with poor health in the current t-th iteration; D is the dimension, D = 3; β is a constant with value in [0, 2].

The remaining particles with poor health are processed with formulas (15) [equation image in the original] and (16), and their positions P_i(t) and individual optimal positions P_i^pbest(t) are updated:

P_i(t+1) = N + rand_4(1, D) × (M − N)    (16)
(2) For particles with normal health, update the particle velocity v_i(t) and position P_i(t):

v_i(t+1) = ω_i(t) v_i(t) + c_1 rand_1 (P_i^pbest(t) − P_i(t)) + c_2 rand_2 (P_gbest(t) − P_i(t))    (17)

P_i(t+1) = P_i(t) + v_i(t+1)    (18)

where c_1 and c_2 are learning factors and rand_1, rand_2 are random numbers in [0, 1].
(3) For particles with an excellent health degree, only the position P_i(t) is updated:

[equations rendered as images in the original]

where the parameters μ and η obey the standard normal distribution, and Levy(β) denotes applying the Lévy distribution operation to the parameter β.
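The three health-state strategies above can be sketched as follows. This is an illustrative Python sketch, not the patented formulas: the exact update equations are rendered as images in the original, so the poor-state re-randomisation, the search bounds `lo`/`hi`, and the Mantegna construction of the Lévy step (from the normal variates μ and η the text mentions) are all assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy_step(beta, size):
    # Mantegna-style Levy flight step built from two normal variates mu, eta,
    # matching the text's "mu and eta obey the standard normal distribution".
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma, size)
    eta = rng.normal(0.0, 1.0, size)
    return mu / np.abs(eta) ** (1 / beta)

def update_particle(pos, vel, pbest, gbest, state, w=0.7, c1=2.0, c2=2.0,
                    lo=0.0, hi=1.0, beta=1.5):
    """Dispatch one particle's update by health state ('poor'/'general'/'excellent')."""
    if state == "poor":
        # assumed: re-randomise within the search bounds to restore diversity
        pos = lo + rng.random(pos.shape) * (hi - lo)
    elif state == "general":
        # standard PSO velocity + position update, as the text states
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
    else:
        # excellent: position-only move along a Levy-flight step (assumed form)
        pos = pos + levy_step(beta, pos.shape) * (pos - gbest)
    return pos, vel
```

Only the "general" branch follows an equation given explicitly in the text (the standard PSO update); the other two branches show the shape of the dispatch, not the patent's formulas.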
4) On an HDFS (Hadoop distributed file system) platform, a new parallelized RBF prediction model, the HHAPSO-RBF neural network prediction model, is proposed by dispersing the particles across the processors of the system, having each processor run its own search strategy independently, and combining the improved and optimized RBF neural network with the HHAPSO algorithm. The model partitions the training-set data and then trains in parallel; in this way, each processor in the processor group trains on its partitioned local data set S_i and interacts with the other training machines only after the fit on its own sub-training set is acceptable, so the cost is relatively low.
The algorithm runs on the HDFS platform, with all data stored in file form. Parallelization of the model is achieved by first dispersing the particles into the processors of the system; the particles then run their own search strategies independently in each processor. In the model, only the master node needs to update the global optimal particle. The main task of the process is to combine the running flow of the HHAPSO-RBFNN algorithm with the parallel Map-Reduce flow. The particles in each node's processor are updated independently according to the HHAPSO-RBFNN algorithm; in each iteration, the master node collects the historical optimum from every node and broadcasts it back to every sub-node, so that each sub-node can use it as the global optimum for the next iteration. When the iterations finish, the optimal neural network structure is obtained, and each node then trains on its own data.
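The master/sub-node synchronisation described above (collect each node's best particle, pick the global best, broadcast it back) can be sketched outside Hadoop as one plain Python round; the function name and data layout are illustrative, not the patent's Map-Reduce code, and fitness is minimised here as an assumption.

```python
import numpy as np

def sync_round(node_positions, fitness):
    """One Map-Reduce style synchronisation: each sub-node (map) reports its
    local best particle; the master node (reduce) picks the global best and
    broadcasts it back to every sub-node for the next iteration."""
    local_bests = []
    for positions in node_positions:                  # "map" on each sub-node
        values = np.array([fitness(p) for p in positions])
        i = int(np.argmin(values))                    # node's best particle
        local_bests.append((values[i], positions[i]))
    best_value, best_pos = min(local_bests, key=lambda t: t[0])   # "reduce"
    broadcast = [best_pos for _ in node_positions]    # master -> all sub-nodes
    return best_value, broadcast
```

Each sub-node would feed the broadcast position back into its own HHAPSO update as the global optimum before the next round.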
5) For the prediction experiments, the public data set PeMSD4 was used, in which the monitoring system generates a data record every 5 minutes, as shown in fig. 4.
In the data set, the three columns Lane1, Lane2 and Lane3 are the numbers of vehicles observed at the three observation points at the corresponding time point, and the Flow column is the total number of vehicles observed across the three observation points at that time node. To avoid interference, the data are split into two samples, working days and rest days, which are trained and predicted separately. For each sample, the data of the first three weeks serve as training samples and the data of the fourth week as test samples; the traffic flow of the previous hour, taken at 15-minute intervals, is used to predict the total traffic flow in the next 15-minute interval.
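The sample construction described above (previous hour at 15-minute intervals predicting the next 15-minute total) can be sketched as follows; the aggregation of three 5-minute readings into one 15-minute total and the helper name are assumptions, since the original does not give the preprocessing code.

```python
import numpy as np

def make_samples(flow_5min, agg=3, lookback=4):
    """Aggregate 5-minute counts into 15-minute totals (agg=3 readings each),
    then pair the previous hour (lookback=4 totals) with the next total."""
    flow = np.asarray(flow_5min, dtype=float)
    n = len(flow) // agg
    totals = flow[: n * agg].reshape(n, agg).sum(axis=1)   # 15-minute totals
    X = np.array([totals[i:i + lookback] for i in range(n - lookback)])
    y = totals[lookback:]                                   # next interval
    return X, y
```

Applied to the Flow column, `X` rows are the four past 15-minute totals and `y` the total to predict.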
For convenience of calculation, the original data are normalized and scaled into [0, 1]. The normalization formula is as follows:

y = (x − min(x)) / (max(x) − min(x)) × (y_max − y_min) + y_min

where y_max = 1, y_min = 0, and max(x), min(x) are the sample maximum and minimum values, respectively.
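The normalization above is standard min-max scaling; a minimal sketch, assuming y_max = 1 and y_min = 0 as in the text:

```python
import numpy as np

def min_max_scale(x, y_min=0.0, y_max=1.0):
    """Scale the samples into [y_min, y_max]; the text uses y_max = 1, y_min = 0."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (y_max - y_min) + y_min
```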
As shown in FIG. 5, in terms of convergence rate, HHAPSO-RBFNN iterates 132 times on working days and 323 times on rest days; the smaller number of iterations indicates faster convergence. In terms of prediction accuracy, the traffic flow prediction results are processed and compared with the actual data, characterized by the mean absolute percentage error (MAPE), the mean absolute deviation (MAD) and the mean squared deviation (MSD). It can be seen that the model predicts well.
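The three error measures used above can be computed as follows; their standard definitions are assumed, since the original gives only the abbreviations.

```python
import numpy as np

def mape(actual, predicted):
    # mean absolute percentage error, in percent
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((a - p) / a)) * 100.0)

def mad(actual, predicted):
    # mean absolute deviation
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(a - p)))

def msd(actual, predicted):
    # mean squared deviation
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean((a - p) ** 2))
```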
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (2)

1. A traffic flow prediction method based on a radial basis function neural network is characterized by comprising the following steps:
step 1: obtaining historical traffic data collected at observation points, and constructing a training set {X_1, X_2, ..., X_T}; each training sample is X_k = {x_k, y_k}, k ∈ {1, 2, ..., T}, where T is the number of samples in the training set; x_k and y_k cover a continuous period of time: x_k is the traffic flow in the preceding t_1 time period, and y_k is the traffic flow in the following t_2 time period;
step 2: the master node initializes and generates a random particle swarm, and randomly disperses the particles into S sub-nodes; the training set {X_1, X_2, ..., X_T} is input into each sub-node for iteration, and each sub-node outputs an optimal RBF neural network;
step 3: the master node integrates the optimal RBF neural networks output by the sub-nodes to obtain the final RBF neural network model;
for a sample X_k = {x_k, y_k}, the prediction output obtained by inputting x_k into the final RBF neural network model is:

[equation rendered as an image in the original]

where ω_s is the adaptive inertia weight of the optimal RBF neural network output by the s-th sub-node, and the per-node prediction symbol (rendered as an image in the original) is the prediction output obtained by inputting x_k from sample X_k = {x_k, y_k} into the optimal RBF neural network output by the s-th sub-node;
step 4: inputting the training set {X_1, X_2, ..., X_T} into the final RBF neural network model for training to obtain the trained traffic flow prediction model;

step 5: inputting the traffic flow of the preceding t_1 time period to be predicted into the trained traffic flow prediction model to obtain the predicted traffic flow of the following t_2 time period.
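Step 3's integration of the per-node networks can be sketched as follows. The exact combination formula is rendered as an image in the original; a weighted average of the sub-node outputs, normalised by the sum of the weights ω_s, is assumed here, and the function name is illustrative.

```python
import numpy as np

def ensemble_predict(sub_models, weights, x):
    """Combine the S per-sub-node networks into one prediction output;
    a weight-normalised average is assumed, since the exact combination
    formula is an image in the original."""
    w = np.asarray(weights, dtype=float)
    preds = np.array([model(x) for model in sub_models])
    return float(np.dot(w, preds) / w.sum())
```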
2. The traffic flow prediction method based on the radial basis function neural network according to claim 1, wherein the specific steps by which each sub-node outputs an optimal RBF neural network in step 2 are as follows:
step 2.1: setting the maximum number of iterations M; initializing the iteration count t = 1; for the I_s particles in the s-th sub-node, randomly initializing for each particle i in the population the parameter K_i(1), the position P_i(1), the velocity v_i(1) and the health degree H_i(1) = rand[−0.1, 0.1]; s ∈ {1, 2, ..., S}, i ∈ {1, 2, ..., I_s}; one particle is randomly selected from the I_s particles of the s-th sub-node as the initial global optimal particle gbest(1), and its position is taken as the initial global optimal position P_gbest(1);
step 2.2: generating an RBF neural network for each particle i; the RBF neural network corresponding to particle i comprises K_i(t) hidden layer nodes, and the j_i-th hidden layer node has a center, a width, and a connection weight from the j_i-th hidden layer node to the k-th output node (the corresponding symbols are rendered as images in the original);
step 2.3: inputting the training set {X_1, X_2, ..., X_T} into the RBF neural network corresponding to each particle for calculation, obtaining the prediction output of each training-set sample under each particle's RBF neural network:

[equations rendered as images in the original]

where the prediction output symbol (rendered as an image in the original) denotes the output obtained by inputting x_k from sample X_k = {x_k, y_k} into the RBF neural network corresponding to particle i, and φ() is the radial basis function;
step 2.4: calculating the fitness value f_i(t) of each particle i:

f_i(t) = E_i(t) + αK_i

[equation for E_i(t) rendered as an image in the original]

where α is a balance factor with α > 0, and E_i(t) is the root mean square error of the RBF neural network corresponding to particle i;
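The fitness f_i(t) = E_i(t) + αK_i above (root mean square error plus a penalty on the hidden-node count) can be computed as in the following sketch; the value α = 0.1 is illustrative, since the claim only requires α > 0.

```python
import numpy as np

def fitness(y_true, y_pred, k_hidden, alpha=0.1):
    """f_i(t) = E_i(t) + alpha * K_i: the RMSE of the particle's RBF network
    plus a complexity penalty on its hidden-node count (alpha > 0)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return rmse + alpha * k_hidden
```

The penalty term pushes the swarm towards networks that fit well with fewer hidden nodes.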
step 2.5: calculating the adaptive inertia weight ω_i(t) of each particle i:

ω_i(t) = γ(t)(A_i(t) + c)

[equations rendered as images in the original]

where L and c are preset constants, L ≥ 2 and c ≥ 0; S(t) denotes the particle diversity; min(f_i(t)) and max(f_i(t)) denote the minimum and maximum fitness values in the current t-th iteration, respectively; and f_gbest(t)(t) is the fitness value of the global optimal particle gbest(t) in the current t-th iteration;
step 2.6: updating the historical individual optimal position (symbol rendered as an image in the original) and the current position P_i(t) of each particle i according to its health degree H_i(t);

if H_i(t) ≤ k_1 × I_s, the health degree of particle i is poor; if k_1 × I_s < H_i(t) ≤ k_2 × I_s, the health degree of particle i is general; if H_i(t) > k_2 × I_s, the health degree of particle i is excellent; k_1 and k_2 are parameters in the range (0, 1), with k_2 > k_1;
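The three-way threshold test above can be written directly; the values k1 = 0.3 and k2 = 0.7 are illustrative assumptions, since the claim only requires 0 < k1 < k2 < 1.

```python
def health_state(h, n_particles, k1=0.3, k2=0.7):
    """Map a particle's health degree H_i(t) to its state, using the claim's
    thresholds k1 * I_s and k2 * I_s (k1, k2 values assumed here)."""
    if h <= k1 * n_particles:
        return "poor"
    if h <= k2 * n_particles:
        return "general"
    return "excellent"
```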
(1) with probability Limit/N_s, the poor-health particles are processed, updating the particle position P_i(t) and the individual optimal position:

[equations rendered as images in the original]

where the initial position P_i(1) of each particle serves as its initial individual optimal position; N_s is the total number of poor-health particles in the current t-th iteration; D is the dimension, with D = 3; β is a constant in [0, 2];
the remaining poor-health particles are processed with the following formulas, updating the particle position P_i(t) and the individual optimal position:

[two equations rendered as images in the original]

P_i(t+1) = N + rand4(1, D) × (M − N)
(2) for particles with a general health degree, the velocity v_i(t) and the position P_i(t) are updated:

[velocity-update equation rendered as an image in the original]

P_i(t+1) = P_i(t) + v_i(t+1)

where c_1 and c_2 are learning factors;
(3) for particles with an excellent health degree, only the position P_i(t) is updated:

[equations rendered as images in the original]

where the parameters μ and η obey the standard normal distribution, and Levy(β) denotes applying the Lévy distribution operation to the parameter β;
step 2.7: updating the health degree H_i(t) of each particle i:

[equations rendered as images in the original]
step 2.8: updating the number K_i of hidden layer nodes in the RBF neural network corresponding to each particle i:

[equation rendered as an image in the original]

where K_gbest(t)(t) denotes the number of hidden layer nodes in the RBF neural network corresponding to the global optimal particle gbest(t) in the current t-th iteration;
step 2.9: according to the fitness value f_i(t) of each particle i calculated in the current t-th iteration, taking the particle with the largest fitness value f_i(t) as the global optimal particle gbest(t+1) for the next iteration;
step 2.10: judging whether the maximum number of iterations M has been reached; if t < M, setting t = t + 1 and returning to step 2.3; otherwise, outputting the RBF neural network corresponding to the global optimal particle in the s-th sub-node.
CN202110301075.3A 2021-03-22 2021-03-22 Traffic flow prediction method based on radial basis function neural network Active CN113065693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110301075.3A CN113065693B (en) 2021-03-22 2021-03-22 Traffic flow prediction method based on radial basis function neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110301075.3A CN113065693B (en) 2021-03-22 2021-03-22 Traffic flow prediction method based on radial basis function neural network

Publications (2)

Publication Number Publication Date
CN113065693A true CN113065693A (en) 2021-07-02
CN113065693B CN113065693B (en) 2022-07-15

Family

ID=76563110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110301075.3A Active CN113065693B (en) 2021-03-22 2021-03-22 Traffic flow prediction method based on radial basis function neural network

Country Status (1)

Country Link
CN (1) CN113065693B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164742A (en) * 2013-04-02 2013-06-19 南京邮电大学 Server performance prediction method based on particle swarm optimization nerve network
CN104361393A (en) * 2014-09-06 2015-02-18 华北电力大学 Method for using improved neural network model based on particle swarm optimization for data prediction
CN106295153A (en) * 2016-08-03 2017-01-04 南京航空航天大学 A kind of Fault Diagnosis of Aircraft Engine Gas Path method based on twin support vector machine
CN106447027A (en) * 2016-10-13 2017-02-22 河海大学 Vector Gaussian learning particle swarm optimization method
US20180164272A1 (en) * 2014-11-02 2018-06-14 Beijing University Of Technology Measuring Phosphorus in Wastewater Using a Self-Organizing RBF Neural Network
CN108182490A (en) * 2017-12-27 2018-06-19 南京工程学院 A kind of short-term load forecasting method under big data environment
CN110068774A (en) * 2019-05-06 2019-07-30 清华四川能源互联网研究院 Estimation method, device and the storage medium of lithium battery health status
CN110532665A (en) * 2019-08-26 2019-12-03 哈尔滨工程大学 A kind of mobile object dynamic trajectory prediction technique under scheduled airline task
CN110708318A (en) * 2019-10-10 2020-01-17 国网湖北省电力有限公司电力科学研究院 Network abnormal flow prediction method based on improved radial basis function neural network algorithm
CN111260118A (en) * 2020-01-10 2020-06-09 天津理工大学 Vehicle networking traffic flow prediction method based on quantum particle swarm optimization strategy


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIZONG ZHANG: "A hybrid forecasting framework based on support vector regression with a modified genetic algorithm and a random forest for traffic flow prediction", 《TSINGHUA SCIENCE AND TECHNOLOGY》 *
SHUCAI SONG等: "The value of short-time traffic flow prediction in the PSO-RBFNN study", 《PROCEEDINGS OF THE 2012 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND INFORMATION PROCESSING》 *
SUN JIANGUO等: "Adaptive model of rotor/turbo-shaft engine", 《JOURNAL OF BEIJING UNIVERSITY OF AERONAUTICS AND ASTRONAUTICS》 *
李思照: "片上多核***软件特性及***可靠性分析研究", 《中国优秀博硕士学位论文全文数据库(博士)信息科技辑》 *
王崇荣等: "基于KPCA遗传算法的预报模型及其应用", 《华北理工大学学报(自然科学版)》 *
王超: "基于数据驱动的平原水库健康诊断预测研究", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *

Also Published As

Publication number Publication date
CN113065693B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
Liang et al. A deep reinforcement learning network for traffic light cycle control
CN108133258B (en) Hybrid global optimization method
CN106022521B (en) Short-term load prediction method of distributed BP neural network based on Hadoop architecture
CN112700060B (en) Station terminal load prediction method and prediction device
CN106650920A (en) Prediction model based on optimized extreme learning machine (ELM)
CN112784362A (en) Hybrid optimization method and system for unmanned aerial vehicle-assisted edge calculation
CN105427241B (en) Distortion correction method for large-view-field display equipment
CN106529732A (en) Carbon emission efficiency prediction method based on neural network and random frontier analysis
CN112685841B (en) Finite element modeling and correcting method and system for structure with connection relation
CN110472840A (en) A kind of agricultural water conservancy dispatching method and system based on nerual network technique
CN115525038A (en) Equipment fault diagnosis method based on federal hierarchical optimization learning
CN112163671A (en) New energy scene generation method and system
CN113722980A (en) Ocean wave height prediction method, system, computer equipment, storage medium and terminal
Guoqiang et al. Study of RBF neural network based on PSO algorithm in nonlinear system identification
CN116050540A (en) Self-adaptive federal edge learning method based on joint bi-dimensional user scheduling
Li et al. Dual-stage hybrid learning particle swarm optimization algorithm for global optimization problems
Ricalde et al. Evolving adaptive traffic signal controllers for a real scenario using genetic programming with an epigenetic mechanism
CN109408896B (en) Multi-element intelligent real-time monitoring method for anaerobic sewage treatment gas production
CN110197250A (en) A kind of power battery on-line parameter identification method of multifactor impact
Chen et al. A Spark-based Ant Lion algorithm for parameters optimization of random forest in credit classification
CN105334730B (en) The IGA optimization T S of heating furnace oxygen content obscure ARX modeling methods
CN117200213A (en) Power distribution system voltage control method based on self-organizing map neural network deep reinforcement learning
Martinez-Soto et al. Fuzzy logic controllers optimization using genetic algorithms and particle swarm optimization
Ismail Adaptation of PID controller using AI technique for speed control of isolated steam turbine
CN113065693B (en) Traffic flow prediction method based on radial basis function neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant