CN117722699B - Control method for operation of combustion equipment and electronic equipment - Google Patents


Info

Publication number
CN117722699B
Authority
CN
China
Prior art keywords
data, neural network, ANN, input, training
Prior art date
Legal status
Active
Application number
CN202311689644.1A
Other languages
Chinese (zh)
Other versions
CN117722699A
Inventor
戚远航
刘效洲
朱光羽
容毅浜
刘杰成
Current Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN202311689644.1A priority Critical patent/CN117722699B/en
Publication of CN117722699A publication Critical patent/CN117722699A/en
Application granted granted Critical
Publication of CN117722699B publication Critical patent/CN117722699B/en

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a control method for the operation of combustion equipment, and an electronic device. The control method comprises the following steps: constructing a self-encoder network; taking the normalized original input data X of the combustion equipment as the input of the self-encoder network to obtain the dimension-reduced data X′; constructing an LSTM input neural network; constructing an LSTM output neural network; acquiring the prediction data X̂ and the prediction data Ŷ using the LSTM input and LSTM output neural networks; splicing the prediction data X̂ with the data X′ to obtain first spliced data, and splicing the prediction data Ŷ with the steam production data Y to obtain second spliced data; performing forward prediction training and reverse prediction training on the first and second spliced data with a deep ANN Forward neural network; optimizing the deep ANN Forward neural network to obtain the optimal input parameters; and inputting the optimal input parameters into the deep ANN Forward neural network to obtain the corresponding steam production data of the combustion equipment.

Description

Control method for operation of combustion equipment and electronic equipment
Technical Field
The invention relates to the technical field of combustion equipment control, in particular to a control method for operation of combustion equipment and electronic equipment.
Background
Currently, power plants employ many types of combustion equipment, and steam production is one of the important indexes for measuring a power plant's combustion efficiency. Driven by concerns such as sustainable development, environmental protection and waste recycling, the industry is constantly striving to raise the steam production of combustion equipment and achieve the best combustion effect: the fuel should be fully utilized, for example by burning it completely, thereby increasing the electricity generated per unit of fuel and improving generating efficiency.
The combustion equipment currently used in domestic power plants mostly has a large number of input parameters that control the operation of its individual components, such as the intake air quantity and the fuel supply quantity. These operating parameter settings directly influence the combustion effect of the fuel, so parameter tuning of the combustion equipment is very important. At present, parameter tuning of combustion equipment is complex, its effect is not obvious, and the optimal combustion operating parameters are difficult to find.
Disclosure of Invention
An object of the present invention is to provide a new solution for a control method of operation of a combustion apparatus, which at least solves the problem in the prior art that it is difficult to find an optimal combustion operation parameter.
The control method for the operation of the combustion equipment comprises the following steps:
S1, constructing a self-encoder network;
S2, taking the normalized original input data X of the combustion equipment as the input of the self-encoder network, and obtaining the dimension-reduced data X′;
S3, taking the data X′ as input data of the neural network, constructing an LSTM input neural network;
S4, taking the steam production data Y of the combustion equipment as input data of the neural network, constructing an LSTM output neural network;
S5, acquiring the prediction data X̂ and the prediction data Ŷ by using the LSTM input neural network and the LSTM output neural network;
S6, splicing the prediction data X̂ and the data X′ to obtain first spliced data, and splicing the prediction data Ŷ and the steam production data Y to obtain second spliced data;
S7, performing forward prediction training and reverse prediction training on the first spliced data and the second spliced data by adopting a deep ANN Forward neural network;
S8, optimizing the deep ANN Forward neural network to obtain the optimal input parameters;
S9, inputting the optimal input parameters into the deep ANN Forward neural network to obtain the corresponding steam production data of the combustion equipment.
Optionally, the encoding formula from the input layer to the hidden layer of the self-encoder network is:
h = σ(W₁X + b₁) (1)
The decoding formula from the hidden layer to the output layer of the self-encoder network is:
X̂ = σ(W₂h + b₂) (2)
The optimization objective function of the self-encoder network is the mean square error MSE, calculated as:
MSE = (1/n) Σᵢ₌₁ⁿ ‖X̂ᵢ − Xᵢ‖² (3)
where σ is the activation function, W₁, b₁ and W₂, b₂ are the weights and biases of the encoder and decoder layers, and n is the number of training samples.
Optionally, step S2 includes:
S21, setting the dimension of the original input data X after normalization as a, wherein the training fitting error of the self-encoder network is MSE;
S22, setting the number of self-coding hidden layer neurons as b, and enabling an initial value b=a-1;
S23, setting the number of self-coding hidden layer neurons as b, taking the normalized original input data X as input training data of the self-coder network, and outputting the training error MSE;
s24, setting the number of self-coding hidden layer neurons as b=b-1, if b is 0, executing a step S25, otherwise executing a step S23;
s25, selecting the number b of self-coding hidden layer neurons corresponding to one training with the minimum training error MSE output in the step S23 as the dimension m of the self-coding dimension-reduced data.
Optionally, in step S25, when a plurality of identical training errors MSE are included, the smallest b is selected as the dimension m of the self-encoded reduced data.
Optionally, step S5 includes:
S41, acquiring the prediction data X̂ for 500 future time points by using the LSTM input neural network trained in step S3;
S42, acquiring the prediction data Ŷ for 500 future time points by using the LSTM output neural network trained in step S4.
Optionally, in step S6, the prediction data X̂ and the data X′ are spliced in time order, and the prediction data Ŷ and the steam production data Y are spliced in time order.
Optionally, step S7 includes:
S71, taking the first spliced data as training input data of the deep ANN Forward neural network, taking the second spliced data as training output data of the deep ANN Forward neural network, and training the deep ANN Forward neural network;
And S72, taking the second spliced data as training input data of the deep ANN Backward neural network, taking the first spliced data as training output data of the deep ANN Backward neural network, and training the deep ANN Backward neural network.
Optionally, step S8 includes:
S81, initializing the bat number num of the bat algorithm, the bat speed v and the frequency f; the position of the k-th bat at time t is recorded as x_k^t, and the historical optimal position of all bats at time t is x*^t.
The bat update formula of the bat algorithm is:
f_k = f_min + (f_max − f_min)·β
v_k^{t+1} = v_k^t + (x_k^t − x*^t)·f_k
x_k^{t+1} = x_k^t + v_k^{t+1} (4)
where β is a random value in [0,1], x*^t is the current optimal individual position of the bats, and f_k is the sound-wave frequency of bat k, taking values in the interval [f_min, f_max], where f_min takes the value 0 and f_max takes the value 100;
S82, setting the fitness function of the bat algorithm as:
Fitness(x_k^t) = Ỹ (5)
where Ỹ is the final steam yield;
S83, performing a single iteration on the deep ANN Forward neural network parameters with the bat algorithm;
S84, repeating step S83 and iterating the deep ANN Forward neural network parameters a number of times to obtain the optimal bat position x*.
Optionally, step S83 includes:
S831, taking one bat position x_k^t of the bat algorithm as an input parameter of the deep ANN Forward neural network, obtaining the output value Y_Forward of the deep ANN Forward neural network;
S832, taking the Y_Forward obtained in step S831 as an input parameter of the deep ANN Backward neural network, obtaining the output value X_Backward;
S833, calculating the combined input value X̄ of the deep ANN Forward neural network, with the calculation formula:
X̄ = (x_k^t + X_Backward) / 2 (6)
S834, taking X̄ as an input value of the deep ANN Forward neural network, obtaining the combined output value Ȳ;
S835, calculating the final output Ỹ of the deep ANN Forward neural network, with the calculation formula:
Ỹ = (Y_Forward + Ȳ) / 2 (7)
S836, the fitness output of a single iteration of the bat algorithm optimizing the deep ANN Forward neural network parameters is Ỹ.
Another object of the present invention is to provide an electronic apparatus capable of executing the control method of the operation of the above-described combustion apparatus.
According to the control method for the operation of combustion equipment of the embodiment of the invention, the dimension of the input data is reduced by constructing a self-encoder network; the dimension-reduced data are used as input data to train one neural network, and the output data are used as input data to train another neural network; the two trained neural networks then produce prediction data, which are spliced and used for deep training; finally, with the trained model and the optimized input parameters, the optimal steam production parameters of the combustion equipment can be obtained. The method is simple to operate, has a good tuning effect, and can effectively optimize the input parameters of the combustion equipment, thereby obtaining the optimal steam production per unit of fuel.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of a method of controlling operation of a combustion apparatus according to an embodiment of the present invention;
Fig. 2 is a schematic structural view of an electronic device according to an embodiment of the present invention.
Reference numerals:
a processor 201;
a memory 202; an operating system 2021; an application 2022;
A network interface 203;
an input device 204;
A hard disk 205;
A display device 206.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
A method of controlling the operation of a combustion apparatus according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the control method of the operation of the combustion apparatus according to the embodiment of the present invention includes the steps of:
S1, constructing a self-encoder network.
S2, taking the normalized original input data X of the combustion equipment as the input of the self-encoder network, and obtaining the dimension-reduced data X′.
S3, taking the data X′ as input data of the neural network, the LSTM input neural network is constructed.
S4, taking the steam production data Y of the combustion equipment as input data of the neural network, the LSTM output neural network is constructed.
S5, acquiring the prediction data X̂ and the prediction data Ŷ by using the LSTM input neural network and the LSTM output neural network.
S6, splicing the prediction data X̂ and the data X′ to obtain the first spliced data, and splicing the prediction data Ŷ and the steam production data Y to obtain the second spliced data.
And S7, performing forward prediction training and reverse prediction training on the first spliced data and the second spliced data by adopting a deep ANN Forward neural network.
And S8, optimizing the deep ANN Forward neural network to obtain the optimal input parameters.
And S9, inputting the optimal input parameters into the deep ANN Forward neural network to obtain the corresponding steam production data of the combustion equipment.
In other words, the control method for the operation of combustion equipment according to the embodiment of the invention can be used to optimize the input parameters of the combustion equipment and to obtain its optimal steam production per unit of fuel from the optimized input parameters. Specifically, a self-encoder network is first constructed; it takes the normalized original data X of the combustion equipment as input, reduces it to m dimensions, and outputs the dimension-reduced data X′. The LSTM input neural network is then trained using the dimension-reduced data X′ as its input data, and the LSTM output neural network is trained using the steam production data Y as its input data.
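As a rough sketch of the building block shared by the LSTM input and LSTM output networks, a single LSTM cell step can be written in plain numpy. The weights here are random placeholders and the layer sizes are illustrative assumptions, not values from the patent; the real networks are trained on the reduced inputs X′ and the steam data Y.

```python
import numpy as np

rng = np.random.default_rng(5)
m, hidden = 3, 8                            # input dim (reduced), hidden units

Wx = rng.normal(scale=0.1, size=(4 * hidden, m))
Wh = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = Wx @ x + Wh @ h + b                 # all four gate pre-activations at once
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)          # cell-state update
    h_new = o * np.tanh(c_new)              # hidden state / output
    return h_new, c_new

h = np.zeros(hidden)
c = np.zeros(hidden)
for x in rng.normal(size=(10, m)):          # run over a short input sequence
    h, c = lstm_step(x, h, c)
print(h.shape)
```

A trained network of such cells, unrolled over the historical sequence, is what produces the 500-step predictions X̂ and Ŷ used later.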
Next, output data are predicted with the trained LSTM input and LSTM output neural networks; the predicted data are spliced with the dimension-reduced data and the steam production data to obtain the first and second spliced data, and the spliced data are used as the input and output data to train the deep ANN Forward neural network. Finally, the deep ANN Forward neural network is optimized to obtain the optimal input parameters, and these parameters are fed into the deep ANN Forward neural network to obtain the optimal steam yield parameters of the combustion equipment.
According to the control method for the operation of combustion equipment of the embodiment of the invention, the input data are dimension-reduced by constructing a self-encoder network; the dimension-reduced data are used as input data to train one neural network, and the output data are used as input data to train another neural network; the two trained neural networks then produce prediction data, which are spliced and used for deep training; finally, with the trained model and the optimized input parameters, the optimal steam production parameters of the combustion equipment can be obtained. The method is simple to operate, has a good tuning effect, and can effectively optimize the input parameters of the combustion equipment, thereby obtaining the optimal steam production per unit of fuel.
According to one embodiment of the present invention, the encoding formula from the input layer to the hidden layer of the self-encoder network is:
h = σ(W₁X + b₁)
The decoding formula from the hidden layer to the output layer of the self-encoder network is:
X̂ = σ(W₂h + b₂)
The optimization objective function of the self-encoder network is the mean square error MSE, calculated as:
MSE = (1/n) Σᵢ₌₁ⁿ ‖X̂ᵢ − Xᵢ‖²
That is, the input-to-hidden mapping, the hidden-to-output mapping and the optimization objective function of the constructed self-encoder network each have a predetermined formula, so the constructed self-encoder network can be used for the subsequent dimension-reduction processing of the input parameters.
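The encoding, decoding and mean-square-error objective described above can be sketched as a minimal one-hidden-layer self-encoder in numpy. The layer sizes, the sigmoid activation and the untrained random weights are illustrative assumptions; the patent does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a, m = 7, 3                          # input dimension a, hidden (reduced) dimension
W1 = rng.normal(scale=0.1, size=(m, a)); b1 = np.zeros(m)
W2 = rng.normal(scale=0.1, size=(a, m)); b2 = np.zeros(a)

def encode(x):                       # input layer -> hidden layer
    return sigmoid(W1 @ x + b1)

def decode(h):                       # hidden layer -> output layer
    return sigmoid(W2 @ h + b2)

def mse(X):                          # mean squared reconstruction error
    X_hat = np.array([decode(encode(x)) for x in X])
    return float(np.mean((X - X_hat) ** 2))

X = rng.uniform(size=(100, a))       # stand-in for the normalized input data
print(round(mse(X), 4))
```

Training would adjust W₁, b₁, W₂, b₂ to minimize this MSE; the hidden activation encode(x) is then the dimension-reduced data X′.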
In some embodiments of the invention, step S2 comprises:
S21, setting the dimension of the original input data X after normalization as a, and setting the training fitting error of the self-encoder network as MSE.
S22, the number of self-encoded hidden layer neurons is set to b, and the initial value b=a-1.
S23, setting the number of self-coding hidden layer neurons as b, taking the normalized original input data X as input training data of the self-coder network, and outputting the training error MSE.
S24, setting the number of self-encoded hidden layer neurons as b=b-1, if b is 0, executing step S25, otherwise executing step S23.
S25, selecting the number b of self-coding hidden layer neurons corresponding to one training with the minimum training error MSE output in the step S23 as the dimension m of the self-coding dimension-reduced data.
Optionally, in step S25, when a plurality of identical training errors MSE are included, the smallest b is selected as the dimension m of the self-encoded reduced data.
In other words, the control method of the operation of the combustion apparatus according to the embodiment of the present invention adopts the following steps to determine the data dimension m after the dimension reduction of the self-code, i.e., the hidden layer neuron number.
First, let the dimension of the normalized original input data X be a, and let the training fitting error of the self-encoder be MSE. The number of self-encoding hidden layer neurons is set to b, with initial value b = a − 1. The network is then trained with b hidden neurons, taking the normalized original input data X as training data, and the training error MSE is recorded. Next, b is decremented (b = b − 1); if b is 0, the procedure moves on to step S25, otherwise step S23 is repeated. Finally, the number b of self-encoding hidden layer neurons corresponding to the training with the smallest recorded error MSE in step S23 is selected as the dimension m of the self-encoded reduced data. If several trainings share the same minimal MSE, the smallest such b is chosen.
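The hidden-size search above can be sketched as follows. For brevity a linear self-encoder is "trained" in closed form via truncated SVD, which is only a stand-in for the iteratively trained self-encoder network; the input dimension and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def autoencoder_mse(X, b):
    """Best reconstruction MSE of a linear self-encoder with b hidden units."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (U[:, :b] * s[:b]) @ Vt[:b]          # rank-b reconstruction
    return float(np.mean((Xc - X_hat) ** 2))

a = 6                                            # dimension of normalized input X
X = rng.normal(size=(200, a))

results = {}
b = a - 1                                        # S22: start from b = a - 1
while b > 0:                                     # S23/S24: train, record, decrement
    results[b] = autoencoder_mse(X, b)
    b -= 1

best_mse = min(results.values())
# S25: among ties on the minimal MSE, choose the smallest b
m = min(bb for bb, e in results.items() if e == best_mse)
print(m, round(best_mse, 4))
```

For generic full-rank data the reconstruction error decreases as b grows, so the loop typically selects b = a − 1; on real plant data with redundant channels, a much smaller b can reach the minimal error.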
According to one embodiment of the invention, step S5 comprises:
S41, acquiring the prediction data X̂ for 500 future time points by using the LSTM input neural network trained in step S3.
S42, acquiring the prediction data Ŷ for 500 future time points by using the LSTM output neural network trained in step S4.
Therefore, the predicted data of the future 500 time points can be obtained through the trained LSTM input neural network and the trained LSTM output neural network respectively, and the specific selection of the time points can be reasonably adjusted according to actual use requirements.
In some embodiments of the invention, in step S6, the prediction data X̂ and the data X′ are spliced in time order, and the prediction data Ŷ and the steam production data Y are spliced in time order.
In other words, the data X′ and X̂ are spliced in time order to obtain the first spliced data Z₁, and the data Y and Ŷ are spliced in time order to obtain the second spliced data Z₂. This data-splicing augmentation increases the amount of training data available to the neural network and improves its learning and generalization ability.
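The splicing step amounts to concatenation along the time axis. The 500-step prediction horizon follows the text; the historical length of 1000 steps and the array contents are placeholder assumptions.

```python
import numpy as np

m = 3                                     # reduced input dimension
X_r = np.ones((1000, m))                  # dimension-reduced historical inputs X'
X_hat = np.zeros((500, m))                # LSTM input predictions, 500 future steps
Y = np.ones((1000, 1))                    # historical steam-production data
Y_hat = np.zeros((500, 1))                # LSTM output predictions

Z1 = np.concatenate([X_r, X_hat], axis=0)   # first spliced data
Z2 = np.concatenate([Y, Y_hat], axis=0)     # second spliced data
print(Z1.shape, Z2.shape)
```

Because both splices append the same 500 predicted steps, Z₁ and Z₂ stay aligned row by row, which is what the forward/backward training below relies on.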
Optionally, step S7 includes:
S71, taking the first spliced data as training input data of the deep ANN Forward neural network, taking the second spliced data as training output data of the deep ANN Forward neural network, and training the deep ANN Forward neural network;
And S72, taking the second spliced data as training input data of the deep ANN Backward neural network, taking the first spliced data as training output data of the deep ANN Backward neural network, and training the deep ANN Backward neural network.
That is, the deep ANN Forward neural network scheme involves both forward prediction training and reverse prediction training. In forward prediction training, the first spliced data Z₁ are used as the training input data and the second spliced data Z₂ as the training output data of the deep ANN Forward neural network. In reverse prediction training, Z₂ are used as the training input data and Z₁ as the training output data of the deep ANN Backward neural network. The ANN Backward neural network predicts an input value from an output value; the prediction it produces can be used to correct the input value of the ANN Forward neural network, thereby improving the final prediction accuracy.
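The forward/backward pairing can be sketched with linear least-squares models standing in for the deep ANN Forward and ANN Backward networks. The linear models and the toy data are assumptions; only the direction of the (input, output) pairing follows the text.

```python
import numpy as np

rng = np.random.default_rng(2)
Z1 = rng.normal(size=(1500, 3))            # first spliced data (inputs)
W_true = rng.normal(size=(3, 1))
Z2 = Z1 @ W_true                           # second spliced data (outputs)

# S71: the forward model learns the mapping Z1 -> Z2
W_fwd, *_ = np.linalg.lstsq(Z1, Z2, rcond=None)
# S72: the backward model learns the reverse mapping Z2 -> Z1
W_bwd, *_ = np.linalg.lstsq(Z2, Z1, rcond=None)

x = Z1[:1]
y_fwd = x @ W_fwd                          # forward prediction of the output
x_back = y_fwd @ W_bwd                     # backward estimate of the input
print(y_fwd.shape, x_back.shape)
```

In the patent's scheme the backward estimate x_back is later averaged with the candidate input to form the corrected input of the forward network.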
In step S8, the deep ANN Forward neural network is optimized using a bat algorithm, according to one embodiment of the present invention.
Specifically, step S8 includes:
S81, initializing the bat number num of the bat algorithm, the bat speed v and the frequency f; the position of the k-th bat at time t is recorded as x_k^t, and the historical optimal position of all bats at time t is x*^t.
The bat update formula of the bat algorithm is:
f_k = f_min + (f_max − f_min)·β
v_k^{t+1} = v_k^t + (x_k^t − x*^t)·f_k
x_k^{t+1} = x_k^t + v_k^{t+1} (4)
where β is a random value in [0,1], x*^t is the current optimal individual position of the bats, and f_k is the sound-wave frequency of bat k, taking values in the interval [f_min, f_max], where f_min takes the value 0 and f_max takes the value 100;
S82, setting the fitness function of the bat algorithm as:
Fitness(x_k^t) = Ỹ (5)
where Ỹ is the final steam yield;
S83, performing a single iteration on the deep ANN Forward neural network parameters with the bat algorithm;
S84, repeating step S83 and iterating the deep ANN Forward neural network parameters a number of times to obtain the optimal bat position x*.
Wherein, step S83 includes:
S831, taking one bat position x_k^t of the bat algorithm as an input parameter of the deep ANN Forward neural network, obtaining the output value Y_Forward of the deep ANN Forward neural network;
S832, taking the Y_Forward obtained in step S831 as an input parameter of the deep ANN Backward neural network, obtaining the output value X_Backward;
S833, calculating the combined input value X̄ of the deep ANN Forward neural network, with the calculation formula:
X̄ = (x_k^t + X_Backward) / 2 (6)
S834, taking X̄ as an input value of the deep ANN Forward neural network, obtaining the combined output value Ȳ;
S835, calculating the final output Ỹ of the deep ANN Forward neural network, with the calculation formula:
Ỹ = (Y_Forward + Ȳ) / 2 (7)
S836, the fitness output of a single iteration of the bat algorithm optimizing the deep ANN Forward neural network parameters is Ỹ.
In other words, the algorithm flow for optimizing the deep ANN Forward neural network with the bat algorithm is as follows. First, the bat number num, the bat speed v and the frequency f are initialized; the position of the k-th bat at time t is recorded as x_k^t, and the historical optimal position of all bats at time t is x*^t.
The bat update formula of the bat algorithm is:
f_k = f_min + (f_max − f_min)·β
v_k^{t+1} = v_k^t + (x_k^t − x*^t)·f_k
x_k^{t+1} = x_k^t + v_k^{t+1}
where β is a random value in [0,1], x*^t is the current optimal individual position of the bats, and f_k is the sound-wave frequency of bat k, taking values in the interval [f_min, f_max], where f_min takes the value 0 and f_max takes the value 100. The encoding dimension of each bat equals the dimension of the input parameter x_k^t, for example 7 dimensions; the encoding range of each dimension of a bat position is [0,1], and the bat speed is encoded in [−1,1].
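The position/velocity update above can be sketched in numpy. The update follows the standard bat-algorithm form; the clipping of positions to [0,1] and speeds to [−1,1] follows the encoding ranges given in the text, and the 7-dimensional encoding is the example value mentioned there.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, f_min, f_max = 7, 0.0, 100.0

def bat_update(x, v, x_best):
    beta = rng.uniform(0.0, 1.0)               # beta ~ U[0, 1]
    f_k = f_min + (f_max - f_min) * beta       # sound-wave frequency of bat k
    v_new = v + (x - x_best) * f_k             # velocity update toward/around best
    x_new = np.clip(x + v_new, 0.0, 1.0)       # position update, coded in [0, 1]
    return x_new, np.clip(v_new, -1.0, 1.0)    # speed coded in [-1, 1]

x = rng.uniform(size=dim)                      # one bat position x_k^t
v = rng.uniform(-1.0, 1.0, size=dim)
x_best = rng.uniform(size=dim)                 # historical optimum x*^t
x2, v2 = bat_update(x, v, x_best)
print(x2.shape, v2.shape)
```

Each updated position is then scored by the fitness function below (the predicted final steam yield), and the best position found so far becomes the new x*^t.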
Then, the fitness function of the bat algorithm is set as:
Fitness(x_k^t) = Ỹ
where Ỹ is the final steam yield of the waste incinerator.
Specifically, the flow of a single iteration of the bat algorithm optimizing the deep ANN Forward neural network parameters is as follows:
First, one bat position x_k^t of the bat algorithm is taken as an input parameter of the deep ANN Forward neural network, and the output value Y_Forward of the deep ANN Forward neural network is obtained.
Second, the Y_Forward obtained in the first step is taken as an input parameter of the deep ANN Backward neural network to obtain the output value X_Backward.
Third, the combined input value X̄ of the deep ANN Forward neural network is calculated as:
X̄ = (x_k^t + X_Backward) / 2
Fourth, X̄ is taken as an input value of the deep ANN Forward neural network to obtain the combined output value Ȳ.
Fifth, the final output Ỹ of the deep ANN Forward neural network is calculated as:
Ỹ = (Y_Forward + Ȳ) / 2
Sixth, the fitness output of this single iteration of the bat algorithm optimizing the deep ANN Forward neural network parameters is Ỹ.
The maximum iteration number of the bat algorithm for optimizing the deep ANN Forward neural network is set to 100, after which the optimal bat position x* is obtained. The optimal bat position is the optimal input parameter of the combustion equipment.
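A compact end-to-end sketch of the iteration loop in S83/S84: the deep ANN Forward fitness is replaced by a simple concave toy function so the loop is self-contained. The 100-iteration budget and the [0,1] / [−1,1] encodings follow the text; the population size and the toy fitness are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
num, dim, f_min, f_max, iters = 20, 7, 0.0, 100.0, 100
target = np.full(dim, 0.5)

def fitness(x):                          # stand-in for the final steam yield
    return float(-np.sum((x - target) ** 2))

x = rng.uniform(size=(num, dim))         # bat positions, coded in [0, 1]
v = rng.uniform(-1.0, 1.0, size=(num, dim))
best = max(x, key=fitness).copy()        # historical optimum x*
init_fit = fitness(best)

for _ in range(iters):                   # 100-iteration budget
    beta = rng.uniform(size=(num, 1))
    f = f_min + (f_max - f_min) * beta   # per-bat sound-wave frequency
    v = np.clip(v + (x - best) * f, -1.0, 1.0)
    x = np.clip(x + v, 0.0, 1.0)
    for xi in x:                         # keep the best position ever seen
        if fitness(xi) > fitness(best):
            best = xi.copy()

print(round(fitness(best), 3))
```

Because the historical best is only ever replaced by a strictly better position, the reported fitness is monotonically non-decreasing over iterations; in the patent's setting the best position at termination is the optimal input-parameter vector of the combustion equipment.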
And finally, inputting the optimal input parameters obtained in the step S8 into the ANN Forward neural network, and outputting the steam production corresponding to the optimal input parameters of the combustion equipment.
Therefore, by adopting the control method for the operation of the combustion equipment, the unit fuel steam production of the combustion equipment can be effectively predicted, and the relative error between the data predicted by adopting the control method and the true value is smaller, so that the use requirement can be met.
The invention also provides an electronic device, comprising: a processor 201 and a memory 202, wherein computer program instructions are stored in the memory 202, wherein the computer program instructions, when run by the processor 201, cause the processor 201 to perform the steps of the control method of the operation of the combustion apparatus in the above-described embodiments.
Further, as shown in fig. 2, the electronic device further comprises a network interface 203, an input device 204, a hard disk 205, and a display device 206.
The interfaces and devices described above may be interconnected by a bus architecture. The bus architecture may include any number of interconnected buses and bridges, linking together one or more central processing units (CPUs), represented by the processor 201, and one or more memories, represented by the memory 202. The bus architecture may also connect various other circuits, such as peripheral devices, voltage regulators and power management circuits. It is understood that the bus architecture enables communication between these components. Besides the data bus, it includes a power bus, a control bus and a status signal bus, all of which are well known in the art and therefore are not described in detail herein.
The network interface 203 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 205.
Input device 204 may receive various instructions entered by an operator and send to processor 201 for execution. The input device 204 may include a keyboard or pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, among others).
A display device 206 may display results obtained by the execution of instructions by the processor 201.
The memory 202 is used for storing programs and data necessary for the operation of the operating system 2021, and data such as intermediate results in the calculation process of the processor 201.
It will be appreciated that the memory 202 in embodiments of the invention can be volatile memory or nonvolatile memory, or can include both. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory, among others. Volatile memory can be random access memory (RAM), which acts as an external cache. The memory 202 of the apparatus and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 202 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof: an operating system 2021 and application programs 2022.
The operating system 2021 contains various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs 2022 include various application programs 2022, such as a Browser (Browser), for implementing various application services. The program implementing the method of the embodiment of the present invention may be contained in the application program 2022.
When calling and executing the application programs 2022 and the data stored in the memory 202, specifically programs or instructions stored in the application programs 2022, the processor 201 performs the steps of the control method of the operation of the combustion apparatus according to the above embodiment.
The method disclosed in the above embodiment of the present invention may be applied to the processor 201 or implemented by the processor 201. The processor 201 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 201 or by instructions in the form of software. The processor 201 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention.
A general-purpose processor may be a microprocessor, or the processor 201 may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly as execution by a hardware decoding processor, or as execution by a combination of hardware and software modules in a decoding processor. The software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 202; the processor 201 reads the information in the memory 202 and, in combination with its hardware, performs the steps of the method described above.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions of the application, or a combination thereof.
For a software implementation, the techniques herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions herein. The software codes may be stored in the memory 202 and executed by the processor 201. The memory 202 may be implemented within the processor 201 or external to the processor 201.
Specifically, the processor 201 is further configured to read the computer program and perform the steps of the control method of the operation of the combustion apparatus described above.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by the processor 201, causes the processor 201 to execute the steps of the control method of the operation of the combustion apparatus of the above-described embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (7)

1. A method of controlling the operation of a combustion apparatus, comprising the steps of:
S1, constructing a self-encoder network;
S2, taking the normalized original input data X of the combustion equipment as the input of the self-encoder network to obtain the dimension-reduced data X̃;
S3, using the data X̃ as the input data of the neural network to construct an LSTM input neural network;
S4, using the steam generation amount data Y of the combustion equipment as the input data of the neural network to construct an LSTM output neural network;
S5, acquiring the predicted data X̃′ with the LSTM input neural network and the predicted data Y′ with the LSTM output neural network;
S6, splicing the predicted data X̃′ with the data X̃ to obtain first spliced data, and splicing the predicted data Y′ with the steam generation amount data Y to obtain second spliced data;
S7, performing forward prediction training and reverse prediction training on the first spliced data and the second spliced data with a deep ANN Forward neural network;
S8, optimizing the deep ANN Forward neural network to obtain optimal input parameters;
S9, inputting the optimal input parameters into the deep ANN Forward neural network to obtain the corresponding steam generation data of the combustion equipment;
wherein step S7 includes:
S71, taking the first spliced data as the training input data of the deep ANN Forward neural network and the second spliced data as its training output data, and training the deep ANN Forward neural network;
S72, taking the second spliced data as the training input data of a deep ANN Backward neural network and the first spliced data as its training output data, and training the deep ANN Backward neural network;
step S8 optimizes the deep ANN Forward neural network using a bat algorithm, and step S8 includes:
S81, initializing the number of bats num of the bat algorithm, the bat velocity v, and the frequency f, recording the position of the k-th bat at time t as x_k^t and the historical optimal position of all bats at time t as x_*^t;
the bat update formula of the bat algorithm is as follows:

f_k = f_min + (f_max − f_min)·β
v_k^{t+1} = v_k^t + (x_k^t − x_*^t)·f_k
x_k^{t+1} = x_k^t + v_k^{t+1}    (4)

wherein β is a random value within [0,1], x_*^t is the current optimal individual position of the bats, and f_k is the sound wave frequency of bat k of the bat algorithm, taking values in the interval [f_min, f_max], where f_min takes the value 0 and f_max takes the value 100;
S82, setting the fitness function of the bat algorithm as follows:

fitness(x_k^t) = Ŷ    (5)

wherein Ŷ is the final steam yield;
S83, performing a single iteration on the deep ANN Forward neural network parameters using the bat algorithm;
S84, repeating step S83 to iterate the deep ANN Forward neural network parameters a plurality of times and obtain the optimal bat position x_*;
Step S83 includes:
S831, taking one bat position x_k^t of the bat algorithm as an input parameter of the deep ANN Forward neural network to obtain the output value Y_Forward of the deep ANN Forward neural network;
S832, taking the Y_Forward obtained in step S831 as an input parameter of the deep ANN Backward neural network to obtain the output value X_Backward;
S833, calculating the integrated input value X_in of the deep ANN Forward neural network, the calculation formula being as follows:

X_in = (x_k^t + X_Backward) / 2    (6)

S834, taking X_in as the input value of the deep ANN Forward neural network to obtain the combined output value Y_combined;
S835, calculating the final output Ŷ of the deep ANN Forward neural network, the calculation formula being as follows:

Ŷ = (Y_Forward + Y_combined) / 2    (7)

S836, taking Ŷ as the fitness output of the single iteration in which the bat algorithm optimizes the deep ANN Forward neural network parameters.
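The optimization loop of steps S8 and S83 can be sketched as follows. This is an illustrative sketch, not the patented implementation: the two trained ANNs are stood in for by small random-weight networks, and the population size, search range, and the averaging forms used for the integrated values in formulas (6) and (7) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out, rng):
    """Random fixed-weight MLP standing in for a trained deep ANN."""
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
    return lambda x: np.tanh(x @ W1) @ W2

ann_forward = make_mlp(4, 8, 1, rng)   # spliced inputs -> steam yield
ann_backward = make_mlp(1, 8, 4, rng)  # steam yield -> inputs

def fitness(x):
    """Steps S831-S836; the averaging in (6)/(7) is an assumption."""
    y_fwd = ann_forward(x)              # S831: forward pass
    x_bwd = ann_backward(y_fwd)         # S832: backward reconstruction
    x_in = (x + x_bwd) / 2.0            # S833: integrated input, eq (6)
    y_comb = ann_forward(x_in)          # S834: combined output
    y_hat = (y_fwd + y_comb) / 2.0      # S835: final output, eq (7)
    return float(y_hat[0])              # S836: fitness = final steam yield

# Bat algorithm, steps S81-S84, with the update rule of eq (4)
num, dim, f_min, f_max = 20, 4, 0.0, 100.0
x = rng.uniform(-1.0, 1.0, size=(num, dim))   # bat positions
v = np.zeros((num, dim))                      # bat velocities
fit = np.array([fitness(xi) for xi in x])
best = x[np.argmax(fit)].copy()               # x_*: historical optimum
best_fit = float(fit.max())
init_best_fit = best_fit
for _ in range(50):                           # S83/S84: repeated iterations
    beta = rng.uniform(size=(num, 1))
    f = f_min + (f_max - f_min) * beta        # per-bat frequency
    v = v + (x - best) * f                    # velocity update, eq (4)
    x = np.clip(x + v, -1.0, 1.0)             # position update, kept in range
    fit = np.array([fitness(xi) for xi in x])
    if fit.max() > best_fit:                  # track historical optimum
        best_fit = float(fit.max())
        best = x[np.argmax(fit)].copy()
```

The sketch maximizes the fitness (final steam yield), so `best` plays the role of the optimal input parameters of step S9.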
2. The method of claim 1, wherein the output encoding formula from the input layer to the hidden layer of the self-encoder network is:

H = σ(W₁·X + b₁)    (1)

the decoding formula from the hidden layer to the output layer of the self-encoder network is:

X̂ = σ(W₂·H + b₂)    (2)

and the optimization objective function of the self-encoder network is the mean square error MSE, the calculation formula being:

MSE = (1/n)·Σ_{i=1}^{n} (X_i − X̂_i)²    (3).
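A minimal numerical sketch of the encode/decode/MSE computations of claim 2, assuming a standard single-hidden-layer self-encoder with sigmoid activations; the dimensions, weights, and data below are illustrative stand-ins, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, a, b = 64, 6, 3                      # samples, input dim a, hidden neurons b
X = rng.uniform(size=(n, a))            # normalized original input data

W1 = rng.normal(scale=0.1, size=(a, b)); b1 = np.zeros(b)
W2 = rng.normal(scale=0.1, size=(b, a)); b2 = np.zeros(a)

H = sigmoid(X @ W1 + b1)                # formula (1): input layer -> hidden layer
X_hat = sigmoid(H @ W2 + b2)            # formula (2): hidden layer -> output layer
mse = float(np.mean((X - X_hat) ** 2))  # formula (3): reconstruction error
```

Training would minimize `mse` over W₁, b₁, W₂, b₂; the hidden activations `H` are the dimension-reduced data used in step S3.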
3. the method according to claim 1, wherein step S2 comprises:
S21, setting the dimension of the normalized original input data X as a, and the training fitting error of the self-encoder network as MSE;
S22, setting the number of self-encoding hidden-layer neurons as b, with the initial value b = a − 1;
S23, with the number of self-encoding hidden-layer neurons set to b, taking the normalized original input data X as the training input data of the self-encoder network and recording the output training error MSE;
S24, setting the number of self-encoding hidden-layer neurons to b = b − 1; if b is 0, executing step S25, otherwise executing step S23;
S25, selecting the number of self-encoding hidden-layer neurons b corresponding to the training with the smallest training error MSE output in step S23 as the dimension m of the self-encoded dimension-reduced data.
4. The method according to claim 3, characterized in that in step S25, when a plurality of trainings yield the same smallest training error MSE, the smallest corresponding b is selected as the dimension m of the self-encoded dimension-reduced data.
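The hidden-neuron search of claims 3 and 4 can be sketched as follows; `train_and_eval` is a hypothetical stand-in for "train the self-encoder with b hidden neurons and return the training MSE", and the MSE table is toy data.

```python
def choose_hidden_dim(train_and_eval, a):
    """Steps S21-S25: train with b = a-1, a-2, ..., 1 hidden neurons and
    keep the b with the smallest training MSE; ties go to the smallest b
    as required by claim 4."""
    results = [(train_and_eval(b), b) for b in range(a - 1, 0, -1)]
    best_mse = min(m for m, _ in results)
    # among all b achieving the smallest MSE, select the smallest b
    return min(b for m, b in results if m == best_mse)

# toy stand-in: pretend these MSEs came from actual training runs
mse_table = {1: 0.30, 2: 0.12, 3: 0.05, 4: 0.05}
m = choose_hidden_dim(mse_table.__getitem__, 5)  # tie at b=3 and b=4
```

With the toy table, b = 3 and b = 4 tie at MSE 0.05, so the claim-4 tie-break selects the smaller dimension, m = 3.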
5. The method according to claim 1, wherein step S5 comprises:
S41, acquiring the predicted data X̃′ of 500 future time points using the LSTM input neural network trained in step S3;
S42, acquiring the predicted data Y′ of 500 future time points using the LSTM output neural network trained in step S4.
6. The method according to claim 1, wherein in step S6, the predicted data X̃′ and the dimension-reduced data X̃ are spliced in time sequence, and the predicted data Y′ and the steam generation amount data Y are spliced in time sequence.
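The time-sequence splicing of step S6 and claim 6 amounts to concatenating the historical data and the LSTM predictions along the time axis, history first. A minimal sketch with illustrative shapes (5 historical steps, 3 predicted steps, 2 features; all names and values are stand-ins):

```python
import numpy as np

x_hist = np.arange(10.0).reshape(5, 2)        # historical dimension-reduced data
x_pred = 100.0 + np.arange(6.0).reshape(3, 2) # LSTM-predicted future steps

# splice along the time axis (rows), history followed by prediction
first_spliced = np.concatenate([x_hist, x_pred], axis=0)
```

The second spliced data is built the same way from the steam generation history and its LSTM prediction.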
7. An electronic device, characterized in that it is capable of executing the control method of the operation of a combustion device according to any one of claims 1-6.
CN202311689644.1A 2023-12-11 2023-12-11 Control method for operation of combustion equipment and electronic equipment Active CN117722699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311689644.1A CN117722699B (en) 2023-12-11 2023-12-11 Control method for operation of combustion equipment and electronic equipment


Publications (2)

Publication Number Publication Date
CN117722699A CN117722699A (en) 2024-03-19
CN117722699B true CN117722699B (en) 2024-05-28

Family

ID=90202798


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837223A (en) * 2018-08-15 2020-02-25 大唐南京发电厂 Combustion optimization control method and system for gas turbine
CN110986084A (en) * 2019-12-25 2020-04-10 华润电力技术研究院有限公司 Air distribution control method and system of pulverized coal fired boiler and related equipment
CN115422995A (en) * 2022-08-03 2022-12-02 沈阳化工大学 Intrusion detection method for improving social network and neural network
CN116741315A (en) * 2023-05-26 2023-09-12 浙江中和建筑设计有限公司 Method for predicting strength of geopolymer concrete
DE202023104093U1 (en) * 2023-07-21 2023-10-25 Anuradha Laishram Optimal path planning for a navigation system for mobile robots

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11774944B2 (en) * 2016-05-09 2023-10-03 Strong Force Iot Portfolio 2016, Llc Methods and systems for the industrial internet of things




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant