CN117095248A - Furnace flame oxygen concentration monitoring method based on generating convolutional neural network - Google Patents
- Publication number
- CN117095248A CN117095248A CN202310860102.XA CN202310860102A CN117095248A CN 117095248 A CN117095248 A CN 117095248A CN 202310860102 A CN202310860102 A CN 202310860102A CN 117095248 A CN117095248 A CN 117095248A
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- flame
- oxygen concentration
- layers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The application relates to the technical field of image-recognition monitoring of process working conditions, in particular to a method for monitoring the flame oxygen concentration of a smelting furnace based on a generative convolutional neural network, comprising the following steps: acquiring flame image data; preprocessing the flame image data by compressing the flame images into a data set and dividing the compressed data set into a training set and a test set; building a VA-WGAN data enhancement model to amplify the minority-class data; building a convolutional neural network regression model comprising convolutional layers, pooling layers and fully connected layers; and performing predictive evaluation of the oxygen concentration from the flame images. The application solves the technical problem that the prior art can only process simple raw multivariate data and cannot extract deeper information, realizes effective extraction of image features and generation of high-quality images, and improves the flame oxygen concentration prediction performance of the model.
Description
Technical Field
The application relates to the technical field of image-recognition monitoring of process working conditions, in particular to a method for monitoring the flame oxygen concentration of a smelting furnace based on a generative convolutional neural network.
Background
Because of its large capacity, high temperature and fast combustion speed, the furnace is an indispensable device in industrial combustion processes; it burns fuel with air to convert chemical energy into heat. During combustion, however, the air content affects both the energy consumption of the furnace and the exhaust gas emissions. If the air content is too low, problems such as increased pollutant emissions and low combustion efficiency arise; if it is too high, the refractory material in the furnace can overheat and the burner can suffer blow-by. Oxygen analyzers are commonly used in industry to determine oxygen content, but because oxygen and fuel are unevenly distributed at local positions, differences in installation position and equipment maintenance time can cause measurement failures when an analyzer measures the oxygen content in a pipeline. Moreover, oxygen analyzers are expensive to purchase and maintain, and are therefore unsuitable for use in a combustion process.
With the advancement of digital image analysis technology, flame monitoring technology has further progressed. In particular, a charge-coupled device (CCD) is simple in structure and low in cost, and can capture flame characteristics in a non-contact manner so as to monitor the flame state.
However, in the process of implementing the technical scheme of the embodiment of the application, the inventor discovers that the above technology has at least the following technical problems:
due to the extreme environment in which industrial furnaces operate, it is difficult to measure the quality variable of the flame (the oxygen content) accurately and effectively; the flame information obtained by the CCD method is limited, and the measurement results lack accuracy and reliability. Compared with traditional methods, data-driven methods are a popular approach to flame monitoring tasks: the operating mechanism of the production process need not be considered, and a model representing the process working condition can be built simply by analyzing and extracting the information contained in the process data. Despite these advantages, however, such approaches can only handle simple raw multivariate data and cannot extract deeper information.
Disclosure of Invention
By providing a method for monitoring the flame oxygen concentration of a smelting furnace based on a generative convolutional neural network, the embodiment of the application solves the technical problem that traditional methods in the prior art can only process simple raw multivariate data and cannot extract deeper information; it realizes effective image feature extraction and high-quality image generation, and makes model training more stable, thereby improving the flame oxygen concentration prediction performance of the model.
The embodiment of the application provides a method for monitoring the flame oxygen concentration of a smelting furnace based on a generative convolutional neural network, comprising the following steps:
s1, acquiring flame image data;
step S2, preprocessing flame image data: compressing the flame image to form a data set, and dividing the compressed data set into a training set and a testing set;
step S3, building a VA-WGAN data enhancement model: the VA-WGAN data enhancement model comprises an encoder, a decoder and a discriminator, and is built to amplify the minority-class data in the input data set; the encoder and decoder reconstruct encoded samples to generate new flame images, while the discriminator guarantees the quality of the generated images; the minority-class samples are augmented until their number balances that of the majority-class samples, yielding a balanced data set;
step S4, building a convolutional neural network regression model: the model comprises convolutional layers, pooling layers and fully connected layers; it is trained on the balanced data set, and after training on the flame image data set the activation value obtained from the fully connected layer is the model's oxygen concentration prediction based on the extracted image features; the test set is used to evaluate the prediction performance of the model;
and S5, performing predictive evaluation on the oxygen concentration of the flame image.
In step S1, a charge-coupled device (CCD) is used to capture flame images in the furnace together with their corresponding oxygen content labels.
In step S2, the flame images are compressed into a data set of three-channel images, and the data set is divided into a training set and a test set in a 3:1 ratio.
In step S3, the input to the VA-WGAN data enhancement model is a flame image of size w × h × c, where w and h are the width and height of the image and c is the number of channels; the original flame image is an RGB color picture, so c = 3. Assume the input data set consists of a minority-class samples and (b − a) majority-class samples. The activation function of the hidden layers of the encoder and the discriminator is Leaky ReLU, and that of the decoder hidden layers is ReLU; the encoder and discriminator use layer normalization (LN), while the decoder uses batch normalization (BN);
the loss function of the VA-WGAN is calculated as follows:

$L_{VAE} = \mathrm{KL}\big(q(z|x)\,\|\,p(z)\big) - \mathbb{E}_{q(z|x)}\big[\log p(x|z)\big]$

$L_{WGAN} = \mathbb{E}_{\tilde{x}\sim P_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x\sim P_r}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}}D(\hat{x})\|_2 - 1)^2\big]$

$L_{VA\text{-}WGAN} = L_{VAE} + \gamma L_{WGAN}$

where $z$ is the latent vector, $x$ the original data, $q(z|x)$ the probability distribution of $z$ given $x$, and $p(z)$ the true distribution of $z$; $\mathrm{KL}(\cdot\|\cdot)$ denotes the Kullback-Leibler (KL) divergence, $q(z|x) = \mathcal{N}(\mu, \sigma^2)$, where $\mu$ and $\sigma^2$ are the mean and variance of the corresponding Gaussian distribution, and $\mathbb{E}$ denotes expectation; $x$ is a real sample, $\tilde{x}$ a generated sample, $D(\cdot)$ the discriminator output, $P_r$ the distribution of real samples, $P_g$ the distribution of generated samples, $\tilde{x} = \mathrm{Dec}(z)$ a sample generated by the decoder from random noise, $\hat{x}$ a linear interpolation of $x$ and $\tilde{x}$, $\lambda$ the gradient penalty factor, and $\|\nabla_{\hat{x}}D(\hat{x})\|_2$ the 2-norm of the discriminator gradient; $L_{VAE}$ is the loss function of the VAE, $L_{WGAN}$ the loss function of the WGAN, $\gamma$ a weighting factor balancing the VAE and WGAN losses, and $L_{VA\text{-}WGAN}$ the loss function of the VA-WGAN.
In step S3, the encoder consists of 3 convolutional layers: the first contains 32 feature maps, the second 64 and the third 128;
the decoder comprises 3 deconvolution layers, and the discriminator comprises 2 convolutional layers and 1 fully connected layer;
the convolution kernels of all convolution and deconvolution layers in the VA-WGAN data enhancement model have size 5 and stride 2.
In step S4, the convolutional neural network regression model comprises 2 convolutional layers, 2 pooling layers and 2 fully connected layers; the kernel size of all convolutional and pooling layers in the model is 5, and the strides of the convolutional and pooling layers are 1 and 2, respectively.
The activation functions of the convolutional and fully connected layers of the regression model are ReLU. The model is trained on the balanced data set; after training on the flame image data set, the activation value obtained from the fully connected layer is the model's oxygen concentration prediction based on the extracted image features.
In step S5, the root mean square error (RMSE) is used to evaluate the prediction performance, calculated as follows:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2}$

where $n$ is the number of test samples, $y_i$ is the actual value of a test sample, and $\hat{y}_i$ is its predicted value.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
owing to the acquisition and processing of flame images, the VA-WGAN data enhancement model, which combines the advantages of an autoencoder and a generative adversarial network, paired with the convolutional neural network regression model, effectively solves the technical problem that traditional methods in the prior art can only process simple raw multivariate data and cannot extract deeper information; it realizes effective extraction of image features and generation of high-quality images, makes model training more stable, and improves the flame oxygen concentration prediction performance of the model.
Drawings
FIG. 1 is a flow chart of a method for monitoring the oxygen concentration of a furnace flame based on a generative convolutional neural network according to an embodiment of the present application;
FIG. 2 is an oxygen concentration curve of a training set sample according to an embodiment of the present application;
FIG. 3 is a graph of flame images at different oxygen concentrations in accordance with an embodiment of the present application;
fig. 4 is a schematic diagram of a VA-WGAN data enhancement model according to an embodiment of the present application.
Detailed Description
By providing a method for monitoring the flame oxygen concentration of a smelting furnace based on a generative convolutional neural network, the embodiment of the application solves the technical problem that traditional methods in the prior art can only process simple raw multivariate data and cannot extract deeper information; it realizes effective image feature extraction and high-quality image generation, and makes model training more stable, thereby improving the flame oxygen concentration prediction performance of the model.
The technical scheme in the embodiment of the application aims to solve the above technical problems; the overall idea is as follows:
the embodiment of the application provides a method for monitoring the flame oxygen concentration of a smelting furnace based on a generated convolutional neural network, which comprises the following steps of:
s1, acquiring flame image data;
step S2, preprocessing flame image data: compressing the flame image to form a data set, and dividing the compressed data set into a training set and a testing set;
step S3, building a VA-WGAN data enhancement model: the VA-WGAN data enhancement model comprises an encoder, a decoder and a discriminator, and is built to amplify the minority-class data in the input data set; the encoder and decoder reconstruct encoded samples to generate new flame images, while the discriminator guarantees the quality of the generated images; the minority-class samples are augmented until their number balances that of the majority-class samples, yielding a balanced data set;
step S4, building a convolutional neural network regression model: the model comprises convolutional layers, pooling layers and fully connected layers; it is trained on the balanced data set, and after training on the flame image data set the activation value obtained from the fully connected layer is the model's oxygen concentration prediction based on the extracted image features; the test set is used to evaluate the prediction performance of the model;
and S5, performing predictive evaluation on the oxygen concentration of the flame image.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
Example 1
In the first embodiment of the present application, in step S1, a charge-coupled device (CCD) is used to capture flame images in the furnace and their corresponding oxygen content labels; the experimental device of this embodiment is a melting furnace. The oxygen concentration during combustion is difficult to measure directly, whereas flame images captured online directly reflect the current oxygen concentration, so flame images are used to realize online monitoring of the flame combustion process.
The furnace combustion system is heated with industrial heavy oil as fuel; the burner model is North America 5514-6, with a heat output of up to 451,000 kilocalories per hour. In the experiment, a CCD was mounted to capture images of the flames in the furnace; the CCD camera has a resolution of 658 × 492 pixels at 24 bits per pixel and captures flame images at 73 frames per second. During operation, both the flame images and the oxygen content were recorded, giving 8512 flame images and their corresponding oxygen content labels.
In step S2, the flame images are compressed into a data set of three-channel images, and the data set is divided into a training set and a test set in a 3:1 ratio. Since each furnace flame image is a color image of 658 × 492 pixels, a single image contains 971,208 values; to prevent memory overflow and reduce the computational load, each flame image is compressed into a 100 × 100 three-channel image. The data set is split 3:1 into a training set of 6144 images and a test set of 2368 images.
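The preprocessing in step S2 can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the helper names, the nearest-neighbour downsampling, and the toy 8-frame data set are assumptions made for the example (the patent only specifies compression to 100 × 100 three-channel images and a 3:1 split).

```python
import numpy as np

def compress_image(img, out_h=100, out_w=100):
    """Nearest-neighbour downsample of an H x W x 3 image to out_h x out_w x 3."""
    h, w, _ = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols[None, :], :]

def split_dataset(images, labels, train_ratio=0.75, seed=0):
    """Shuffle and split into a 3:1 training/test partition."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    cut = int(len(images) * train_ratio)
    return (images[idx[:cut]], labels[idx[:cut]],
            images[idx[cut:]], labels[idx[cut:]])

# Toy stand-in for the 8512 CCD frames (492 x 658 x 3 each).
data = np.zeros((8, 492, 658, 3), dtype=np.uint8)
labels = np.arange(8, dtype=float)
small = np.stack([compress_image(f) for f in data])
Xtr, ytr, Xte, yte = split_dataset(small, labels)
print(small.shape, Xtr.shape[0], Xte.shape[0])   # (8, 100, 100, 3) 6 2
```

On the real data set the same split of 8512 compressed frames gives the 6144/2368 partition described above.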
Example two
In the second embodiment of the present application, in step S3, the input to the VA-WGAN data enhancement model is a flame image of size w × h × c, where w and h are the width and height of the image and c is the number of channels; the original flame image is an RGB color picture, so c = 3. Assume the input data set consists of a minority-class samples and (b − a) majority-class samples. The overall network architecture of the VA-WGAN data enhancement model comprises an encoder, a decoder and a discriminator, and each component uses convolutional layers to extract features from the data space;
in step S3, the encoder consists of 3 convolutional layers: the first contains 32 feature maps, the second 64 and the third 128; the decoder comprises 3 deconvolution layers, and the discriminator comprises 2 convolutional layers and 1 fully connected layer; the convolution kernels of all convolution and deconvolution layers in the VA-WGAN data enhancement model have size 5 and stride 2.
The activation function of the hidden layers of the encoder and the discriminator is Leaky ReLU, and that of the decoder hidden layers is ReLU; the encoder and discriminator use layer normalization (LN), while the decoder uses batch normalization (BN);
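Given the kernel size 5 and stride 2 stated above, the spatial size of each encoder feature map follows from the standard convolution output formula. A sketch assuming a 100 × 100 input (as produced in step S2) and padding 2; the patent does not state the padding, so the exact sizes are illustrative:

```python
def conv_out(size, kernel=5, stride=2, pad=2):
    """Spatial output size of a convolution layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Encoder: three kernel-5, stride-2 convolutions with 32/64/128 feature maps.
size = 100
sizes = []
for channels in (32, 64, 128):
    size = conv_out(size)
    sizes.append((channels, size))
print(sizes)   # [(32, 50), (64, 25), (128, 13)]
```

Each stride-2 layer roughly halves the spatial resolution while the number of feature maps doubles, which is the usual pattern for convolutional encoders.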
the loss function of the VA-WGAN is calculated as follows:

$L_{VAE} = \mathrm{KL}\big(q(z|x)\,\|\,p(z)\big) - \mathbb{E}_{q(z|x)}\big[\log p(x|z)\big]$

$L_{WGAN} = \mathbb{E}_{\tilde{x}\sim P_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x\sim P_r}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}}D(\hat{x})\|_2 - 1)^2\big]$

$L_{VA\text{-}WGAN} = L_{VAE} + \gamma L_{WGAN}$

where $z$ is the latent vector, $x$ the original data, $q(z|x)$ the probability distribution of $z$ given $x$, and $p(z)$ the true distribution of $z$; $\mathrm{KL}(\cdot\|\cdot)$ denotes the Kullback-Leibler (KL) divergence, $q(z|x) = \mathcal{N}(\mu, \sigma^2)$, where $\mu$ and $\sigma^2$ are the mean and variance of the corresponding Gaussian distribution, and $\mathbb{E}$ denotes expectation; $x$ is a real sample, $\tilde{x}$ a generated sample, $D(\cdot)$ the discriminator output, $P_r$ the distribution of real samples, $P_g$ the distribution of generated samples, $\tilde{x} = \mathrm{Dec}(z)$ a sample generated by the decoder from random noise, $\hat{x}$ a linear interpolation of $x$ and $\tilde{x}$, $\lambda$ the gradient penalty factor, and $\|\nabla_{\hat{x}}D(\hat{x})\|_2$ the 2-norm of the discriminator gradient; $L_{VAE}$ is the loss function of the VAE, $L_{WGAN}$ the loss function of the WGAN, $\gamma$ a weighting factor balancing the VAE and WGAN losses, and $L_{VA\text{-}WGAN}$ the loss function of the VA-WGAN.
Example III
In the third embodiment of the present application, in step S4, the convolutional neural network regression model comprises 2 convolutional layers, 2 pooling layers and 2 fully connected layers; the kernel size of all convolutional and pooling layers in the model is 5, and the strides of the convolutional and pooling layers are 1 and 2, respectively.
The activation functions of the convolutional and fully connected layers of the regression model are ReLU. The model is trained on the balanced data set; after training on the flame image data set, the activation value obtained from the fully connected layer is the model's oxygen concentration prediction based on the extracted image features, and the test set is used to evaluate the prediction performance of the model.
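The fully connected part of the regression model can be sketched as follows. This is a toy stand-in, not the patent's trained network: the feature dimension, layer widths and random weights are assumptions, and the last layer is taken as a linear output since the model regresses a single oxygen concentration value per image.

```python
import numpy as np

def relu(x):
    """ReLU activation used in the convolutional and fully connected layers."""
    return np.maximum(x, 0.0)

def regression_head(features, W1, b1, W2, b2):
    """Two fully connected layers; the activation value of the last layer is
    the predicted oxygen concentration (one scalar per sample)."""
    hidden = relu(features @ W1 + b1)
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))        # stand-in for flattened conv features
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
pred = regression_head(feats, W1, b1, W2, b2)
print(pred.shape)                      # (4, 1): one prediction per image
```

In the full model the `features` input would be the flattened output of the convolution and pooling stack rather than random numbers.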
Example IV
In the fourth embodiment of the present application, the WGCM model, based on the generative convolutional neural network, consists of the VA-WGAN data enhancement model and the convolutional neural network regression model. When the experimental furnace operates, the CCD records flame images in real time and inputs them to the WGCM model, which determines the oxygen concentration of the flame and indicates whether air should be supplied or exhausted. The model can therefore predict the oxygen concentration while ensuring efficient and environmentally friendly operation of the equipment.
In step S5, to evaluate the effectiveness of the method, an index is needed to quantitatively assess the quality of the results; the root mean square error (RMSE) is used to evaluate the prediction performance, calculated as follows:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2}$

where $n$ is the number of test samples, $y_i$ is the actual value of a test sample, and $\hat{y}_i$ is its predicted value. The smaller the RMSE, the better the prediction performance of the model. Plotting the oxygen concentration of the training set samples as a curve reveals 11 phases, so the data set can be divided into 11 oxygen concentration intervals. Table 1 compares different CNN regression models on some of the minority-class intervals and shows that the method has strong prediction performance.
| Minority-class interval | 1 | 3 | 10 | 11 |
|---|---|---|---|---|
| CNN | 0.1175 | 0.1165 | 0.1252 | 0.1208 |
| WGCM | 0.1153 | 0.1085 | 0.1111 | 0.1103 |
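The RMSE values reported in Table 1 can be computed directly from test-set predictions. A minimal sketch; the sample values below are made up for illustration and are not the patent's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over n test samples."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical oxygen concentration values for a few test frames.
actual    = [2.0, 2.5, 3.0, 3.5]
predicted = [2.1, 2.4, 3.2, 3.4]
print(round(rmse(actual, predicted), 4))   # 0.1323
```

Evaluating this per oxygen concentration interval, as in Table 1, simply means calling `rmse` on the subset of test samples falling in each interval.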
Using the rich information contained in the image data captured during equipment operation, the embodiment of the application provides a furnace flame oxygen concentration monitoring method based on a generative convolutional neural network. The generative model combines the advantages of an autoencoder and a generative adversarial network, can effectively extract image features and generate high-quality images, and introduces the Wasserstein distance and a gradient penalty term, making model training more stable and improving the flame oxygen concentration prediction performance of the model.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit and scope of the application. Thus, it is intended that the present application also cover such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (8)
1. A method for monitoring the flame oxygen concentration of a smelting furnace based on a generated convolutional neural network, characterized by comprising the following steps:
step S1, acquiring flame image data;
step S2, preprocessing flame image data: compressing the flame image to form a data set, and dividing the compressed data set into a training set and a testing set;
step S3, building a VA-WGAN data enhancement model: the VA-WGAN data enhancement model comprises an encoder, a decoder and a discriminator, and is built to amplify the minority-class data in the data set input into it; the encoder and the decoder generate new flame images by reconstructing the encoded samples, the discriminator guarantees the quality of the generated flame images, and the minority-class samples are augmented until their number is balanced with that of the majority-class samples, so that a balanced data set is obtained;
step S4, building a convolutional neural network regression model: the convolutional neural network regression model comprises convolutional layers, pooling layers and fully connected layers and is trained with the balanced data set; after training on the flame image data set, the activation value obtained from the fully connected layer is the oxygen concentration predicted by the convolutional neural network regression model from the extracted image features, and the test set is used to evaluate the prediction performance of the convolutional neural network regression model;
step S5, performing predictive evaluation of the oxygen concentration on the flame image.
2. The method for monitoring the oxygen concentration of a furnace flame based on a generated convolutional neural network according to claim 1, wherein in the step S1, the images of the flame in the furnace and their corresponding oxygen content labels are captured by a charge-coupled device (CCD).
3. The method for monitoring the oxygen concentration of a furnace flame based on a generated convolutional neural network according to claim 1, wherein in the step S2, the flame images are compressed into a three-channel image data set, and the data set is divided in a 3:1 ratio into a training set and a test set.
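The 3:1 division of claim 3 can be sketched as follows. This is an illustrative sketch, not part of the claimed method; the array shapes (100 images compressed to an assumed 64 × 64 × 3) are hypothetical:

```python
import numpy as np

def split_dataset(images, labels, train_ratio=0.75, seed=0):
    """Shuffle the data set and divide it 3:1 into a training set and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    cut = int(len(images) * train_ratio)  # 3:1 ratio -> 75% training
    tr, te = idx[:cut], idx[cut:]
    return images[tr], labels[tr], images[te], labels[te]

# 100 compressed three-channel flame images (hypothetical 64x64x3 shape)
# with one oxygen-concentration label per image
X = np.zeros((100, 64, 64, 3), dtype=np.float32)
y = np.zeros(100, dtype=np.float32)
Xtr, ytr, Xte, yte = split_dataset(X, y)
print(len(Xtr), len(Xte))  # 75 25
```

Shuffling before the split avoids any ordering bias from the acquisition sequence.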
4. The method for monitoring the flame oxygen concentration of a furnace based on a generated convolutional neural network according to claim 1, wherein in the step S3, the input of the VA-WGAN data enhancement model is a flame image of size w × h × c, where w and h are respectively the width and the height of the image and c is the number of channels of the image; the original flame image is an RGB color picture, so c = 3; the input data set is composed of a minority-class samples and (b − a) majority-class samples; the activation function of the hidden layers of the encoder and the discriminator is the Leaky ReLU, and the activation function of the hidden layers of the decoder is the ReLU; the encoder and the discriminator use the LN (layer normalization) method, and the decoder uses the BN (batch normalization) method;
the loss function of the VA-WGAN is calculated as follows:
;
in the middle ofAs potential vector, ++>For the original data +.>,/>Is->Probability distribution of->Is->Is true of distribution of->Representative calculation->And->A kind of electronic deviceKL(Kullback-Leibler, KL) The degree of dispersion is determined by the degree of dispersion,,/>and->Is->Mean and variance of corresponding gaussian distribution, +.>Representing a computational expectation; />For a real sample, ++>For the generated sample, ++>Representing the output of the arbiter, +.>For the distribution of real samples, +.>For generating the distribution of samples, < >>Samples generated by the decoder for random noise, < >>Is->Generated sample distribution by encoder and decoder, < > and method for encoding and decoding the same>Is->And->Linear interpolation of>Represents penalty factors,/->Representing the gradient 2 norm of the arbiter; />For the loss function of VAE->For the loss function of WGAN +.>For weighting factors, for balancing losses between VAE and WGAN,/>Is the loss function of VA-WGAN.
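The two ingredients of the loss above can be illustrated numerically. The sketch below, under stated assumptions (a diagonal Gaussian q(z|x) against a standard-normal prior, and a toy linear critic D(x) = w·x whose gradient with respect to x is simply w), computes the closed-form KL term of L_VAE and the gradient penalty term of L_WGAN; it is not the patented implementation:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ):
    0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return 0.5 * float(np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0))

def gradient_penalty(w, x_real, x_fake, lam=10.0, seed=0):
    """WGAN-GP term lambda * (||grad_x D(x_hat)||_2 - 1)^2 for the toy
    linear critic D(x) = w . x; its gradient w.r.t. x is the constant w,
    so the interpolation point x_hat does not change the norm here."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform()                          # interpolation coefficient
    x_hat = eps * x_real + (1.0 - eps) * x_fake  # linear interpolation of samples
    grad_norm = float(np.linalg.norm(w))         # gradient of w.x w.r.t. x is w
    return lam * (grad_norm - 1.0) ** 2

mu, sigma = np.zeros(4), np.ones(4)
print(kl_to_standard_normal(mu, sigma))  # 0.0 at the prior
```

The KL term vanishes exactly when the encoder posterior matches the prior, and the penalty vanishes when the critic's gradient norm is 1, which is what both terms are driving toward during training.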
5. The method for monitoring the flame oxygen concentration of a furnace based on a generated convolutional neural network according to claim 1, wherein in the step S3, the encoder is composed of 3 convolutional layers, the first convolutional layer of the encoder comprises 32 characteristic layers, the second convolutional layer of the encoder comprises 64 characteristic layers, and the third convolutional layer of the encoder comprises 128 characteristic layers;
the decoder comprises 3 deconvolution layers, and the discriminator comprises 2 convolution layers and 1 full connection layer;
the convolution kernel size of all convolution layers and deconvolution layers in the VA-WGAN data enhancement model is 5, with a step size of 2.
6. The method for monitoring the flame oxygen concentration of the furnace based on the generated convolutional neural network according to claim 1, wherein in the step S4, the convolutional neural network regression model comprises 2 convolutional layers, 2 pooling layers and 2 fully connected layers; the kernel sizes of all the convolutional layers and pooling layers in the convolutional neural network regression model are 5, and the step sizes of the convolutional layers and the pooling layers are 1 and 2 respectively.
7. The method for monitoring the flame oxygen concentration of the furnace based on the generated convolutional neural network according to claim 6, wherein the activation functions of the convolutional layers and the fully connected layers of the convolutional neural network regression model are the ReLU; the convolutional neural network regression model is trained with the balanced data set, and after training on the flame image data set, the activation value obtained from the fully connected layer is the oxygen concentration predicted by the convolutional neural network regression model from the extracted image features.
8. The method for monitoring the flame oxygen concentration of a furnace based on a generated convolutional neural network according to claim 1, wherein in the step S5, the prediction performance is evaluated using the root mean square error (RMSE), calculated as follows:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

where n is the number of test samples, y_i is the actual value of the i-th test sample, and ŷ_i is the predicted value of the i-th test sample.
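The root mean square error of claim 8 can be sketched in a few lines; the oxygen-concentration values below are made up for illustration:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over the n test samples, as in claim 8."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical oxygen-concentration values (in %) for three test samples
print(rmse([3.0, 4.0, 5.0], [2.0, 4.0, 6.0]))  # ≈ 0.8165
```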
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310860102.XA CN117095248A (en) | 2023-07-13 | 2023-07-13 | Furnace flame oxygen concentration monitoring method based on generating convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117095248A true CN117095248A (en) | 2023-11-21 |
Family
ID=88770616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310860102.XA Pending CN117095248A (en) | 2023-07-13 | 2023-07-13 | Furnace flame oxygen concentration monitoring method based on generating convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117095248A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117663397A (en) * | 2024-01-30 | 2024-03-08 | 深圳市永宏光热能科技有限公司 | Air mixing control method and system for air conditioner hot air burner |
CN117663397B (en) * | 2024-01-30 | 2024-04-09 | 深圳市永宏光热能科技有限公司 | Air mixing control method and system for air conditioner hot air burner |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035233B (en) | Visual attention network system and workpiece surface defect detection method | |
CN117095248A (en) | Furnace flame oxygen concentration monitoring method based on generating convolutional neural network | |
CN112802016A (en) | Real-time cloth defect detection method and system based on deep learning | |
CN110020658B (en) | Salient object detection method based on multitask deep learning | |
CN109977834B (en) | Method and device for segmenting human hand and interactive object from depth image | |
CN112270658A (en) | Elevator steel wire rope detection method based on machine vision | |
CN114677362B (en) | Surface defect detection method based on improved YOLOv5 | |
Jiang et al. | Attention M-net for automatic pixel-level micro-crack detection of photovoltaic module cells in electroluminescence images | |
CN113780423A (en) | Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model | |
CN115631186A (en) | Industrial element surface defect detection method based on double-branch neural network | |
CN115272826A (en) | Image identification method, device and system based on convolutional neural network | |
CN108764026B (en) | Video behavior detection method based on time sequence detection unit pre-screening | |
Ma et al. | A hierarchical attention detector for bearing surface defect detection | |
CN116664941A (en) | Visual detection method for surface defects of bearing ring | |
CN113780136B (en) | VOCs gas leakage detection method, system and equipment based on space-time texture recognition | |
CN115100451B (en) | Data expansion method for monitoring oil leakage of hydraulic pump | |
Qin et al. | EDDNet: An efficient and accurate defect detection network for the industrial edge environment | |
CN115330743A (en) | Method for detecting defects based on double lights and corresponding system | |
CN115601357A (en) | Stamping part surface defect detection method based on small sample | |
CN112927222B (en) | Method for realizing multi-type photovoltaic array hot spot detection based on hybrid improved Faster R-CNN | |
CN109376749B (en) | Power transmission and transformation equipment infrared image temperature wide range identification method based on deep learning | |
CN113901947A (en) | Intelligent identification method for tire surface flaws under small sample | |
Zheng et al. | MD-YOLO: Surface Defect Detector for Industrial Complex Environments | |
CN111488785A (en) | Rotary kiln working condition detection method based on in-kiln image | |
Cui et al. | An efficient targeted design for real-time defect detection of surface defects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||