CN114611823B - Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park

Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park

Info

Publication number
CN114611823B
Authority
CN
China
Prior art keywords
value
scheduling
function
energy
determining
Prior art date
Legal status
Active
Application number
CN202210289380.XA
Other languages
Chinese (zh)
Other versions
CN114611823A (en)
Inventor
张海滨
Current Assignee
Terminus Technology Group Co Ltd
Original Assignee
Terminus Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Terminus Technology Group Co Ltd filed Critical Terminus Technology Group Co Ltd
Priority to CN202210289380.XA priority Critical patent/CN114611823B/en
Publication of CN114611823A publication Critical patent/CN114611823A/en
Application granted granted Critical
Publication of CN114611823B publication Critical patent/CN114611823B/en


Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N20/00 Machine learning
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06Q50/06 Energy or water supply
    • G06F2111/04 Constraint-based CAD


Abstract

The invention provides an optimized dispatching method and system for a typical park with electricity-cold-heat-gas multi-energy demands, and belongs to the technical field of intelligent dispatching. The method comprises the following steps: acquiring performance parameters and constraint conditions of each device in the electricity, cold, heat and gas multi-energy system of the park; determining an objective function for optimized scheduling, wherein the objective function comprises the cost of electricity and gas and the carbon emissions; establishing an optimized dispatching reinforcement learning model, and determining a state space and a reward function, wherein the state space is determined according to the performance parameters of each device; and performing optimized scheduling of each device in the multi-energy system by using the optimized dispatching reinforcement learning model and based on the constraint conditions. The invention applies reinforcement learning with an objective function that accounts for both cost and carbon emissions to schedule and optimize the multi-energy system in real time, so that scheduling can follow real-time changes in energy demand and the economy and environmental friendliness of the multi-energy system are improved.

Description

Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park
Technical Field
The invention relates to the field of intelligent scheduling, in particular to an optimized scheduling method and system for a typical park with electricity-cold-heat-gas multi-energy requirements.
Background
In a traditional energy system, cooling, heating, electricity and gas are usually designed, operated and controlled independently, and the different energy-supply and energy-consumption subsystems cannot be coordinated, matched and optimized as a whole, so the overall utilization rate of energy is low. A multi-energy complementary comprehensive energy system is an integrated energy production, supply and marketing system formed by organically coordinating and optimizing the production, transmission, conversion, storage and consumption of cooling, heating, electricity and gas during planning, construction and operation. On the one hand, it realizes cascade utilization of energy and improves the comprehensive utilization level of energy; on the other hand, it exploits the coupling mechanisms of the individual energy subsystems in time and space to achieve comprehensive management and coordinated complementarity of the various energy sources. At present, research on multi-energy complementary comprehensive energy systems at home and abroad is mostly concentrated on the macroscopic level, such as system planning, functional architecture and technical form. Some scholars have drawn on the control theory of micro-grids and the scheduling theory of large grids to study the optimized operation of comprehensive energy systems, but such work mainly couples only two kinds of energy and uses a single optimization period, so the optimization method remains consistent with traditional approaches and does not fully reflect the characteristics of multiple energy flows. Meanwhile, research on real-time coordinated control of multiple energy flows is rare, and the influence of load prediction errors on day-ahead scheduling cannot be handled, which reduces the economy of the multi-energy system and increases carbon emissions so that environmental requirements cannot be met.
Disclosure of Invention
Therefore, the technical problem to be solved by the embodiments of the present invention is to overcome the defect that a multi-energy system in the prior art cannot be accurately regulated and controlled in real time, which results in poor economy and environmental performance, and to provide an optimal scheduling method and system for a typical electricity-cold-heat-gas multi-energy demand park.
Therefore, the invention provides an optimal scheduling method for a typical park with electricity-cold-heat-gas multi-energy demand, which comprises the following steps:
acquiring performance parameters and constraint conditions of each device in an electricity, cold, heat and gas multi-energy system in a park;
determining an objective function for optimizing scheduling, wherein the objective function comprises cost of electricity and gas and carbon emission;
establishing an optimized scheduling reinforcement learning model, and determining a state space and a reward function, wherein the state space is determined according to the performance parameters of each device;
and performing optimized scheduling on each device in the multi-energy system by using the optimized scheduling reinforcement learning model and based on the constraint condition.
Optionally, the optimized scheduling of each device in the multi-energy system by using the optimized dispatching reinforcement learning model and based on the constraint conditions includes:
determining a plurality of selectable action values according to the current state, energy supply demand and environmental information of each device by using a first deep learning neural network model;
calculating the probability corresponding to each optional action value by utilizing a second deep learning neural network model;
and selecting the selectable action value corresponding to the maximum probability value as the current action value and executing.
Optionally, the first deep learning neural network model includes a radial basis function neural network, and the radial basis function neural network is established as follows:
establishing an input layer, wherein the input layer is used for inputting the current state, energy supply demand and environmental information of each device;
establishing a Gaussian radial basis function layer;
establishing a radial basis function weight connection layer;
and establishing a weight matrix of the output layer to perform matrix product operation with the output of the radial basis function weight connection layer.
Optionally, the first deep learning neural network model includes a radial basis function neural network, and a neuron excitation function of the radial basis function neural network is:
δ_l(x) = exp( −‖x − c_l‖² / (2·d_l²) )
wherein δ_l(x) is the excitation function of the l-th neuron node in the hidden layer, x is the input vector, c_l is the center of the excitation function of the l-th neuron node of the hidden layer, and d_l is the width of the center of the excitation function of the l-th neuron node of the hidden layer.
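As an illustration only, the following minimal NumPy sketch assembles such a radial basis function network, assuming the Gaussian excitation above, a per-node weight connection layer, and a linear output weight matrix; the class name and array shapes are illustrative choices, not taken from the patent.

```python
import numpy as np

class RBFNetwork:
    """Sketch of the described network: input layer -> Gaussian RBF layer ->
    RBF weight connection layer -> output weight matrix."""

    def __init__(self, centers, widths, hidden_weights, output_weights):
        self.centers = centers                 # (L_hidden, n_inputs) centers c_l
        self.widths = widths                   # (L_hidden,) center widths d_l
        self.hidden_weights = hidden_weights   # (L_hidden,) RBF weight connection layer
        self.output_weights = output_weights   # (n_outputs, L_hidden) output weight matrix

    def forward(self, x):
        # Gaussian excitation of each hidden node: exp(-||x - c_l||^2 / (2 d_l^2))
        dist2 = np.sum((self.centers - x) ** 2, axis=1)
        phi = np.exp(-dist2 / (2.0 * self.widths ** 2))
        # weight connection layer, then matrix product with the output weight matrix
        h = self.hidden_weights * phi
        return self.output_weights @ h
```

In use, x would be the concatenation of the current device states, the energy supply demand and the environmental information named above.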
Optionally, the optimally scheduling each device in the multi-energy system by using the optimal scheduling reinforcement learning model and based on the constraint condition includes:
determining a current action value according to the following formula:
[equations shown as images in the original document]
wherein a_ij is the action value of the j-th adjustable parameter of the i-th device, s_ijmax is the maximum state value in the state space, s_ij is the current state value, and s_ijmin is the minimum state value in the state space.
Optionally, the optimally scheduling each device in the multi-energy system by using the optimal scheduling reinforcement learning model and based on the constraint condition includes:
determining an initial action value;
calculating a reward function value and a Q value based on the initial action value;
judging whether the reward function value and the Q value meet preset conditions or not;
if so, determining the initial action value as the action value;
otherwise, adjusting the initial action value by using a preset algorithm to obtain a new action value, calculating a reward function value and a Q value based on the new action value, and judging whether the preset condition is met;
and if so, determining the new action value as the action value, otherwise, continuing to execute the previous step until the reward function value and the Q value corresponding to the latest action value meet the preset condition.
Optionally, the establishing an optimized scheduling reinforcement learning model includes:
initializing network parameters of the optimized dispatching reinforcement learning model;
training the optimized scheduling reinforcement learning model after network parameters are initialized by using a pre-obtained training sample, and determining a loss function value of the optimized scheduling reinforcement learning model according to an obtained Q value;
adjusting the network parameters according to the following formula:
[equation shown as an image in the original document]
wherein w_m(t+1) is the adjusted network parameter, w_m(t) is the current network parameter, and σ(t) is the loss function value.
The invention also provides an optimized dispatching system for the electricity-cold-heat-gas multi-energy demand typical park, which comprises the following components:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods described above.
The technical scheme of the embodiment of the invention has the following advantages:
according to the optimal scheduling method and system for the electricity-cold-heat-gas multi-energy demand typical park, provided by the embodiment of the invention, the multi-energy system is scheduled and optimized in real time by considering the cost and the objective function of carbon emission through reinforcement learning, so that the scheduling can meet the real-time change of the energy demand, and the economy and the environmental protection of the multi-energy system are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a specific example of an optimal scheduling method for an electricity-cooling-heating-gas multi-energy demand typical park according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a specific example of selecting a current action value in embodiment 1 of the present invention;
fig. 3 is a schematic block diagram of a specific example of an optimal scheduling system for an electricity-cooling-heating-gas multi-energy demand typical park according to embodiment 2 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In describing the present invention, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," when used in this specification, are intended to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the invention and for simplicity in description, and do not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be construed as limiting the invention. The terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The terms "mounted," "connected," and "coupled" are to be construed broadly and may, for example, be fixedly coupled, detachably coupled, or integrally coupled; can be mechanically or electrically connected; the two elements can be directly connected, indirectly connected through an intermediate medium, or communicated with each other inside; either a wireless or a wired connection. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment provides an optimized dispatching method for a typical park with electricity-cold-heat-gas multi-energy demand, as shown in fig. 1, comprising the following steps:
s1: acquiring performance parameters and constraint conditions of each device in an electricity, cold, heat and gas multi-energy system in a park;
s2: determining an objective function for optimizing scheduling, wherein the objective function comprises cost of electricity and gas and carbon emission;
s3: establishing an optimized dispatching reinforcement learning model, and determining a state space and a reward function, wherein the state space is determined according to the performance parameters of each device;
s4: and performing optimized scheduling on each device in the multi-energy system by using the optimized scheduling reinforcement learning model and based on the constraint conditions.
In the embodiment of the invention, the multi-energy system realizes coordinated multi-energy supply of cooling, heating and power by integrating the energy supply resources in the park, and real-time scheduling and optimization of the multi-energy system are carried out through reinforcement learning with an objective function that accounts for cost and carbon emissions, so that scheduling can follow real-time changes in energy demand and the economy and environmental friendliness of the multi-energy system are improved.
The scheduling problem of the multi-energy park is a multi-variable, multi-constraint optimization problem with energy coupling over time. The objective function may be a normalized weighted sum of the cost and the carbon emissions. The constraint conditions comprise energy balance constraints, including the electric balance constraint, the cooling and heating balance constraint and the gas balance constraint, as well as the energy conversion constraints of the equipment.
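For illustration, the sketch below evaluates such an objective as a normalized weighted sum of purchase cost and carbon emissions and checks a simple per-carrier energy balance constraint; all prices, emission factors, normalization references and weights are assumed example values, not figures from the patent.

```python
def objective(elec_buy_kwh, gas_buy_m3,
              elec_price=0.8, gas_price=3.0,    # assumed unit prices
              elec_co2=0.58, gas_co2=2.1,       # assumed emission factors (kg CO2 per unit)
              cost_ref=1.0e4, co2_ref=1.0e4,    # assumed normalization references
              w_cost=0.6, w_co2=0.4):
    """Normalized weighted sum of electricity/gas purchase cost and carbon emissions."""
    cost = elec_buy_kwh * elec_price + gas_buy_m3 * gas_price
    co2 = elec_buy_kwh * elec_co2 + gas_buy_m3 * gas_co2
    return w_cost * cost / cost_ref + w_co2 * co2 / co2_ref


def satisfies_balance(supply, demand, tol=1e-3):
    """Energy balance constraint for one carrier (electricity, cooling, heat or gas)."""
    return abs(supply - demand) <= tol
```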
The devices in the multi-energy system include: a combined heat and power micro gas turbine, an electric boiler, a gas boiler, a storage battery, heat storage equipment, refrigeration equipment and the like.
Optionally, as shown in fig. 2, the optimized scheduling of each device in the multi-energy system by using the optimized dispatching reinforcement learning model and based on the constraint conditions, that is, step S4, includes:
s41: determining a plurality of selectable action values according to the current state, energy supply demand and environmental information of each device by using a first deep learning neural network model;
the energy supply demand may include heating power demand, cooling power demand, etc., and the environmental information may include ambient temperature, humidity, etc.;
s42: calculating the probability corresponding to each optional action value by utilizing a second deep learning neural network model;
s43: and selecting the selectable action value corresponding to the maximum probability value as the current action value and executing.
In the embodiment of the invention, the selectable actions under the current state, the energy supply requirement and the environment are determined through the first deep learning neural network model, the probability corresponding to each selectable action value is calculated by utilizing the second deep learning neural network model, and the selectable action value corresponding to the maximum probability value is selected as the current action value, so that the real-time optimization performance of scheduling can be improved.
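A compact sketch of this two-model selection step is shown below, assuming the first model returns a set of candidate action vectors for the concatenated observation and the second model returns one probability per candidate; both callables are stand-ins for the neural networks described in the text.

```python
import numpy as np

def select_action(first_model, second_model, device_states, energy_demand, env_info):
    """Pick the candidate action with the highest probability from the second model."""
    # the first deep learning model proposes selectable action values
    observation = np.concatenate([device_states, energy_demand, env_info])
    candidate_actions = first_model(observation)          # shape: (n_candidates, n_action_dims)
    # the second deep learning model scores each candidate action
    probabilities = second_model(observation, candidate_actions)  # shape: (n_candidates,)
    best = int(np.argmax(probabilities))
    return candidate_actions[best]
```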
Optionally, the first deep learning neural network model includes a radial basis function neural network, and a neuron excitation function of the radial basis function neural network is:
δ_l(x) = exp( −‖x − c_l‖² / (2·d_l²) )
wherein δ_l(x) is the excitation function of the l-th neuron node in the hidden layer, x is the input vector, c_l is the center of the excitation function of the l-th neuron node of the hidden layer, and d_l is the width of the center of the excitation function of the l-th neuron node of the hidden layer.
Optionally, the number of neuron nodes in the hidden layer may be calculated according to the following formula:
[equation shown as an image in the original document]
wherein L_1 and L_2 are the number of input-layer neurons and the number of output-layer neurons, respectively; L_1 is determined according to the number of input parameters, and L_2 is determined according to the maximum number of selectable action values.
Further optionally, the radial basis function neural network may be established by:
establishing an input layer, wherein the input layer is used for inputting the current state, energy supply demand and environmental information of each device;
establishing a Gaussian radial basis function layer, which can be specifically established according to the neuron excitation function of the radial basis neural network;
establishing a radial basis function weight connection layer;
establishing an output layer, including establishing a weight matrix of the output layer to perform a matrix product operation with the output of the radial basis function weight connection layer.
Alternatively, the center and center width of the excitation function may be determined using a K-means clustering method. Specifically, a predetermined number of training samples are selected from a plurality of training samples to serve as initial clustering centers; determining Euclidean spatial distances of a plurality of the training samples to each initial clustering center; assigning a plurality of the training samples to a cluster set to which each of the initial cluster centers belongs based on the Euclidean distance; calculating the average value of training samples contained in each cluster set, and taking the average value as a new cluster center; and if the difference value between the new clustering center and the initial clustering center is less than or equal to a preset threshold value, determining the new clustering center as the center of the radial basis excitation function. Then, the distance between each cluster center and its nearest neighbor cluster center is calculated, and the center width is calculated from the average value of the distances.
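The clustering step could be realized as in the following sketch, which mirrors the procedure above (iterate assignment and center updates until the centers move less than a threshold, then derive a common width from the average nearest-neighbour distance between centers); the stopping tolerance and iteration cap are assumptions.

```python
import numpy as np

def kmeans_rbf_params(samples, n_centers, tol=1e-4, max_iter=100, rng=None):
    """K-means clustering to pick RBF centers, plus a common width from center spacing."""
    rng = np.random.default_rng(rng)
    # select a predetermined number of training samples as initial cluster centers
    centers = samples[rng.choice(len(samples), n_centers, replace=False)]
    for _ in range(max_iter):
        # assign each training sample to its nearest cluster center (Euclidean distance)
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # new center = mean of the samples in each cluster (keep old center if a cluster is empty)
        new_centers = np.array([samples[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(n_centers)])
        if np.linalg.norm(new_centers - centers) <= tol:
            centers = new_centers
            break
        centers = new_centers
    # center width from the average distance between each center and its nearest neighbour
    cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(cc, np.inf)
    widths = np.full(n_centers, cc.min(axis=1).mean())
    return centers, widths
```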
Optionally, the second deep learning neural network model includes a plurality of BP neural networks, and the plurality of BP neural networks are combined according to the following formula:
[equation shown as an image in the original document]
wherein f_m is the m-th BP neural network, w_m is the weight of the m-th BP neural network, m = 1, 2, …, M, M is the number of BP neural networks, and sign is a sign function.
Further optionally, the weight w_m of the m-th BP neural network is calculated as follows:
the weight w_m is determined using the AdaBoost algorithm.
Specifically, the weight w_m may be calculated according to the following formula:
[equation shown as an image in the original document]
wherein δ_t is the error rate of the selectable action value with the highest probability determined by the m-th BP neural network, and k is the number of selectable action values.
The weight w_m can also be determined according to the minimum loss function value at the end of training the BP neural network.
Specifically, each BP neural network includes an input layer, three hidden layers and an output layer. During forward propagation, the input signal (comprising at least each selectable action value, the current state of each device, the energy supply demand and the environmental information) passes through the hidden layers to the output nodes, and an output signal is generated through nonlinear transformation. If the actual output does not match the expected output, a back-propagation pass of the error is carried out: the output error is propagated back toward the input layer, layer by layer through the hidden layers, and is distributed to all neurons of each layer, so that the error signal obtained at each layer serves as the basis for adjusting the weight of each neuron. By adjusting the connection strengths between the input nodes and the hidden nodes, the connection strengths between the hidden nodes and the output nodes, and the thresholds, the error decreases along the gradient direction; through repeated learning and training, the weights and thresholds corresponding to the minimum error are determined and training is stopped.
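One possible realization of the weighted combination of BP networks is sketched below. The SAMME-style weight computed from the error rate δ_t and the number of selectable action values k is an assumption consistent with the symbols named in the text, not a formula quoted from the patent, and the sign-of-weighted-sum combination is likewise a common choice assumed here.

```python
import numpy as np

def bp_weight(error_rate, n_actions):
    """AdaBoost/SAMME-style weight from a network's error rate (assumed form)."""
    error_rate = np.clip(error_rate, 1e-6, 1 - 1e-6)
    return np.log((1.0 - error_rate) / error_rate) + np.log(n_actions - 1)

def combine_bp_networks(networks, weights, x):
    """Weighted vote of the M BP networks; sign() mirrors the sign function in the text."""
    votes = sum(w * net(x) for net, w in zip(networks, weights))
    return np.sign(votes)
```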
Optionally, the optimally scheduling each device in the multi-energy system by using the optimal scheduling reinforcement learning model and based on the constraint condition includes:
determining the action value according to the following formula:
[equations shown as images in the original document]
wherein a_ij is the action value of the j-th adjustable parameter of the i-th device, s_ijmax is the maximum state value in the state space, s_ij is the current state value, s_ijmin is the minimum state value in the state space, and rand is a random function.
In other optional specific embodiments, the action value may also be determined by:
determining an initial action value;
calculating a reward function value and a Q value based on the initial action value;
judging whether the reward function value and the Q value meet preset conditions or not; the preset condition may be that the differences between the reward function value and the Q value and the currently optimal reward function value and Q value remain small;
if so, determining the initial action value as the action value;
otherwise, adjusting the initial action value by using a preset algorithm to obtain a new action value, calculating a reward function value and a Q value based on the new action value, and judging whether the preset condition is met;
and if so, determining the new action value as the action value, otherwise, continuing to execute the previous step until the reward function value and the Q value corresponding to the latest action value meet the preset condition.
Wherein, the preset algorithm may be: a_ij′ = a_ij + rand[−0.5, 0.5]·Δa_ij, where a_ij and a_ij′ are the action values of the j-th adjustable parameter of the i-th device before and after the adjustment, rand is a random function, and Δa_ij is the action-value adjustment step of the j-th adjustable parameter of the i-th device.
In the embodiment of the invention, the optimal action value at the current moment is selected in an iterative optimization mode, so that each scheduling optimization is the optimal scheduling under the current condition, the action frequency of each device can be further reduced, and the influence on the service life of the device caused by frequent actions is avoided.
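The iterative adjustment can be sketched as follows; evaluate() and meets_condition() stand in for the reward/Q computation and the preset condition, and the perturbation follows the a_ij′ = a_ij + rand[−0.5, 0.5]·Δa_ij rule quoted above.

```python
import random

def search_action(initial_action, step_sizes, evaluate, meets_condition, max_iters=100):
    """Iteratively perturb the action until its reward and Q value satisfy the preset condition.

    evaluate(action) -> (reward_value, q_value); meets_condition(reward_value, q_value) -> bool.
    """
    action = list(initial_action)
    reward, q = evaluate(action)
    for _ in range(max_iters):
        if meets_condition(reward, q):
            return action
        # a_ij' = a_ij + rand[-0.5, 0.5] * delta_a_ij
        action = [a + random.uniform(-0.5, 0.5) * step
                  for a, step in zip(action, step_sizes)]
        reward, q = evaluate(action)
    return action
```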
In other alternative embodiments, the current action value may also be obtained by a Lévy search method.
Optionally, the establishing an optimized scheduling reinforcement learning model includes:
initializing network parameters of the optimized dispatching reinforcement learning model;
training the optimized scheduling reinforcement learning model after network parameters are initialized by using a pre-obtained training sample, and determining a loss function value of the optimized scheduling reinforcement learning model according to an obtained Q value;
adjusting the network parameters according to the following formula:
[equation shown as an image in the original document]
wherein w_m(t+1) is the adjusted network parameter, w_m(t) is the current network parameter, and σ(t) is the loss function value.
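A hedged sketch of this training loop is given below: the loss is taken as a squared temporal-difference error on the obtained Q values and the parameters are adjusted with a plain gradient step, both of which are common choices assumed here because the patent's update formula is given only as an image; the q_model interface (predict, gradients, params) is hypothetical.

```python
import numpy as np

def train_scheduling_model(q_model, samples, gamma=0.95, lr=1e-3, epochs=50):
    """Sketch: compute a Q-value-based loss on pre-obtained samples, then adjust parameters.

    q_model is assumed to expose predict(state) -> Q values per action,
    gradients(state, action, error) -> dict of parameter gradients, and a dict-like params.
    samples are (state, action, reward, next_state) tuples.
    """
    losses = []
    for _ in range(epochs):
        epoch_loss = 0.0
        for state, action, reward, next_state in samples:
            q_values = q_model.predict(state)
            # temporal-difference target built from the reward function value (assumed form)
            target = reward + gamma * np.max(q_model.predict(next_state))
            error = q_values[action] - target
            epoch_loss += 0.5 * error ** 2   # loss function value determined from the obtained Q value
            # adjust the network parameters; a plain gradient step stands in for the
            # update formula that the original gives only as an image
            for name, grad in q_model.gradients(state, action, error).items():
                q_model.params[name] -= lr * grad
        losses.append(epoch_loss / max(len(samples), 1))
    return q_model, losses
```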
In other optional specific embodiments, the optimal scheduling reinforcement learning model may be established and the network parameters of the optimal scheduling reinforcement learning model may be determined by the following method:
clustering the training samples according to preset parameters, and determining the number of neurons of the hidden layer of the model according to a clustering result;
determining parameters to be trained according to the determined number of the neurons of the model hidden layer;
evaluating the adaptive value of each parameter to obtain an initial population;
selecting individuals from the initial population as potential field centers according to attraction potential field center data;
and calculating the selected probability in each potential field according to the adaptive value, and updating the individual positions in the population.
When the positions of the individuals in the population are updated, different update calculation modes are used depending on whether the average particle distance of the population is greater than or less than a preset threshold value.
The embodiment of the invention can thus avoid the problem that the network parameters of the optimal scheduling reinforcement learning model become trapped in a local optimum during optimization.
Example 2
The present embodiment provides an optimized dispatching system 30 for electricity-cooling-heating-gas multi-energy demand typical park, as shown in fig. 3, including:
one or more processors 301;
a storage device 302 for storing one or more programs;
the one or more programs, when executed by the one or more processors 301, cause the one or more processors 301 to implement any of the methods described above in embodiment 1.
In the embodiment of the invention, the multi-energy system realizes coordinated multi-energy supply of cooling, heating and power by integrating the energy supply resources in the park, and real-time scheduling and optimization of the multi-energy system are carried out through reinforcement learning with an objective function that accounts for cost and carbon emissions, so that scheduling can follow real-time changes in energy demand and the economy and environmental friendliness of the multi-energy system are improved.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (2)

1. An optimized dispatching method for a typical park with electricity-cold-heat-gas multi-energy demand comprises the following steps:
acquiring performance parameters and constraint conditions of each device in an electricity, cold, heat and gas multi-energy system in a park;
determining an objective function for optimizing scheduling, wherein the objective function comprises cost of electricity and gas and carbon emission;
establishing an optimized scheduling reinforcement learning model, and determining a state space and a reward function, wherein the state space is determined according to the performance parameters of each device;
optimally scheduling each of the devices in the multi-energy system using the optimal scheduling reinforcement learning model and based on the constraints,
the optimally scheduling each device in the multi-energy system by using the optimally scheduling reinforcement learning model and based on the constraint condition comprises: determining an initial action value; calculating a reward function value and a Q value based on the initial action value; judging whether the reward function value and the Q value meet preset conditions or not; if so, determining the initial action value as the action value; otherwise, adjusting the initial action value by using a preset algorithm to obtain a new action value, calculating a reward function value and a Q value based on the new action value, and judging whether the preset condition is met; if so, determining the new action value as the action value, otherwise, continuing to execute the previous step until the reward function value and the Q value corresponding to the latest action value meet the preset condition;
the establishing of the optimized dispatching reinforcement learning model comprises the following steps:
initializing network parameters of the optimized dispatching reinforcement learning model;
training the optimized scheduling reinforcement learning model after network parameters are initialized by using a pre-obtained training sample, and determining a loss function value of the optimized scheduling reinforcement learning model according to an obtained Q value;
adjusting the network parameters according to the following formula:
[equation shown as an image in the original document]
wherein w_m(t+1) is the adjusted network parameter, w_m(t) is the current network parameter, and σ(t) is the loss function value;
the optimizing and scheduling the equipment in the multi-energy system by utilizing the optimizing and scheduling reinforcement learning model and based on the constraint condition further comprises the following steps:
determining a plurality of selectable action values according to the current state, energy supply demand and environmental information of each device by using a first deep learning neural network model;
calculating the probability corresponding to each optional action value by utilizing a second deep learning neural network model;
selecting the selectable action value corresponding to the maximum probability value as a current action value and executing;
the first deep learning neural network model comprises a radial basis function neural network, and the radial basis function neural network is established by the following process:
establishing an input layer, wherein the input layer is used for inputting the current state, energy supply demand and environmental information of each device;
establishing a Gaussian radial basis function layer;
establishing a radial basis function weight connection layer;
establishing a weight matrix of an output layer to perform matrix product operation with the output of the radial basis function weight connection layer;
the neuron excitation function of the radial basis function neural network is as follows:
δ_l(x) = exp( −‖x − c_l‖² / (2·d_l²) )
wherein δ_l(x) is the excitation function of the l-th neuron node in the hidden layer, x is the input vector, c_l is the center of the excitation function of the l-th neuron node of the hidden layer, and d_l is the center width of the excitation function of the l-th neuron node of the hidden layer;
the number of the neuron nodes in the hidden layer can be calculated according to the following formula:
[equation shown as an image in the original document]
wherein L_1 is the number of neurons in the input layer, L_2 is the number of neurons in the output layer, L_1 is determined according to the number of input parameters, and L_2 is determined according to the maximum number of selectable action values;
the second deep learning neural network model comprises a plurality of BP neural networks, which are combined according to the following formula:
[equation shown as an image in the original document]
wherein f_m is the m-th BP neural network, w_m is the weight of the m-th BP neural network, m = 1, 2, …, M, M is the number of BP neural networks, and sign is a sign function;
the weight w_m of the m-th BP neural network is calculated by the following method:
[equation shown as an image in the original document]
wherein δ_t is the error rate of the selectable action value with the highest probability determined by the m-th BP neural network, and k is the number of selectable action values.
2. An optimal dispatch system for a typical electric-cold-hot-gas demand park, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of claim 1.
CN202210289380.XA 2022-03-23 2022-03-23 Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park Active CN114611823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210289380.XA CN114611823B (en) 2022-03-23 2022-03-23 Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park


Publications (2)

Publication Number Publication Date
CN114611823A (en) 2022-06-10
CN114611823B (en) 2022-11-08

Family

ID=81864579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210289380.XA Active CN114611823B (en) 2022-03-23 2022-03-23 Optimized dispatching method and system for electricity-cold-heat-gas multi-energy-demand typical park

Country Status (1)

Country Link
CN (1) CN114611823B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118034066A (en) * 2024-04-11 2024-05-14 国网江苏省电力有限公司常州供电分公司 Coordinated operation control method, equipment and storage medium for energy system of multi-energy coupling cabin


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990785A (en) * 2019-11-27 2020-04-10 江苏方天电力技术有限公司 Multi-objective-based optimal scheduling method for intelligent park comprehensive energy system
CN112288592B (en) * 2020-10-20 2022-10-25 东南大学 Gas-thermal coupling system SCUC optimization scheduling method, device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242286A (en) * 2018-08-27 2019-01-18 华北电力大学 A kind of Demand Side Response Potential model method based on radial base neural net
CN110046751A (en) * 2019-03-27 2019-07-23 上海建坤信息技术有限责任公司 Multi-energy system dispatching method based on the prediction of radial base energy consumption and real-time energy efficiency
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN113723749A (en) * 2021-07-20 2021-11-30 中国电力科学研究院有限公司 Multi-park comprehensive energy system coordinated scheduling method and device
CN114091879A (en) * 2021-11-15 2022-02-25 浙江华云电力工程设计咨询有限公司 Multi-park energy scheduling method and system based on deep reinforcement learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guozhou Zhang, Weihao Hu, Di Cao, Zhenyuan Zhang, Qi Huang, et al. "A multi-agent deep reinforcement learning approach enabled distributed energy management schedule for the coordinate control of multi-energy hub with gas, electricity, and freshwater." Energy Conversion and Management, vol. 255, 1 March 2022, full text. *

Also Published As

Publication number Publication date
CN114611823A (en) 2022-06-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant