CN114444790A - Method for predicting the time sequence of various measuring points on a gas turbine based on steady-state feature graph construction

Info

Publication number
CN114444790A
Authority
CN
China
Prior art keywords
steady
state
convolution
time
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210062290.7A
Other languages
Chinese (zh)
Inventor
谢宗霞
陈岩哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210062290.7A priority Critical patent/CN114444790A/en
Publication of CN114444790A publication Critical patent/CN114444790A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Biology (AREA)
  • Operations Research (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for predicting the time series of multiple measuring points on a gas turbine based on steady-state feature graph construction. Steady-state features of an input sample are extracted end to end and used to build an association-relation network; the graph construction is guided by a steady-state loss, and a spatio-temporal neural-network sequence prediction model is established on the basis of the steady-state feature graph. The model extracts the steady-state characteristics of the system under a fixed operating condition and performs sequence prediction with spatio-temporal convolution, realizing adaptive steady-state feature learning; the trained model finally yields a multi-sensor time-series prediction curve for the gas turbine. In time-series prediction, the instantaneous characteristics of the sequence are fed into the graph-construction module, so the association network is built dynamically from the input sequence. Compared with the prior art, the method achieves a better prediction effect; it can extract the steady-state characteristics of the system under a fixed operating condition, thereby improving time-series prediction and allowing anomalies in the association network to be analysed to check the operating condition of the system.

Description

Method for predicting the time sequence of various measuring points on a gas turbine based on steady-state feature graph construction
Technical Field
The invention belongs to the field of neural-network model design and application, and in particular relates to a spatio-temporal convolution network model based on a steady-state feature graph and to a method for predicting the time sequence of various measuring points on a gas turbine using this model.
Background
The multivariate time-series prediction problem has long been a focus of statistical and deep-learning research. Prediction uncertainty can be divided into model uncertainty and data uncertainty. The main idea of the classical ARIMA approach is to difference a non-stationary sequence until it becomes stationary and then fit the differenced sequence with an ARMA model; it is mainly applicable to linear models in univariate, homoscedastic settings. ARIMA requires the time series to be univariate and homoscedastic and to follow a linear regression, but these premises often do not hold: in real life many time series are heteroscedastic, multivariate and nonlinear. Even so, traditional statistical time-series prediction methods have so far remained important reference points. Graph convolutional neural networks represent the pairwise relations between nodes through an adjacency matrix, and graph convolution propagates the information of a node to its neighbouring nodes. Although some graph-construction methods escape the limitations of traditional graph-construction approaches, they do not consider a steady system, in which the relationships among some nodes remain stable at all times.
Disclosure of Invention
Aiming at the above shortcomings of the prior art, and in order to further improve the prediction of time series, a new algorithm is needed that adds the steady-state information implicit in the data to the construction of the relation network. A prediction model that takes these factors into account can extract the steady-state characteristics of the system under a fixed operating condition, which improves time-series prediction and allows anomalies in the association network to be analysed to check the operating condition of the system.
In order to solve this technical problem, the invention provides a method for predicting the time sequence of various measuring points on a gas turbine based on steady-state feature graph construction. Steady-state features of an input sample are extracted end to end and an association-relation network is constructed; the graph construction is guided by a steady-state loss, and a spatio-temporal neural-network sequence prediction model is established on the basis of the steady-state feature graph. The model extracts the steady-state characteristics of the system under a fixed operating condition and performs sequence prediction with spatio-temporal convolution, thereby realizing adaptive steady-state feature learning and finally producing a multi-sensor time-series prediction curve for the gas turbine.
Furthermore, the spatio-temporal neural-network sequence prediction model based on the steady-state feature graph comprises a steady-state network construction module, a spatio-temporal convolution feature fusion module and a sequence prediction output feedback module. The steady-state network construction module uses pooling, convolution and linear transformation operations to extract the steady-state feature information in a sample and build a steady-state association network; it processes every input sequence of the system and supplies the result as input to the subsequent spatio-temporal convolution feature fusion module. For the steady-state network constructed from the samples of each iteration, the graph variance is computed and used as an additional loss function to guide back-propagation and update the parameters of the steady-state network construction module. The spatio-temporal convolution feature fusion module uses temporal convolution and graph convolution to extract the temporal and spatial features of a single sample in each iteration; during operation it receives the original sample as input and aggregates the sample features output by the steady-state network construction module. The sequence prediction output feedback module realizes the iterative optimization of the network and the output of the prediction sequence: it receives the output of the spatio-temporal convolution feature fusion module, compares the predicted sequence with the real time series, combines the steady-state loss of the steady-state network construction module with the sequence prediction loss, optimizes the weights of all levels by back-propagation, and feeds the result back to the inputs of every stage of the modeling process.
The method comprises the following steps:
Step 1, data preprocessing: the measurement data of the gas turbine multi-sensor are assembled, at equal time intervals and in chronological order, into multivariate time-series data.
Step 2, constructing the steady-state association relation matrix: the preprocessed data are first passed through the steady-state network construction module to extract steady-state features and build the steady-state association network; two feature extractors based on one-dimensional convolution extract steady-state features along the time sequence and construct two steady-state feature vectors from the node features; the two steady-state feature vectors are expanded, through matrix multiplication and the linear transformation of a multilayer perceptron, into a steady-state association relation matrix in R^(N×N); for the steady-state network constructed from the samples of each iteration, the graph variance is computed and used as an additional loss function to guide back-propagation and update the parameters of the steady-state network construction module;
Step 3, spatio-temporal sequence convolution processing: multi-layer spatio-temporal convolution is applied to the time sequence using the spatio-temporal convolution descriptors and the steady-state association relation matrix obtained in step 2, so as to perform feature extraction and feature fusion, where the feature fusion comprises graph-convolution feature fusion and temporal-convolution feature fusion; the result of each spatio-temporal convolution is recorded, integrated and output; wherein: in the graph-convolution feature fusion, the graph convolution module receives a residual link of the original input features together with the features of the previous round of spatio-temporal convolution, performs a high-dimensional graph convolution using the inter-node relations represented by the steady-state association network, and aggregates the information between nodes; in the temporal-convolution feature fusion, a dilated causal temporal convolution aggregates information along the time direction of a single node, and the output of each round is passed to the output module and to the next spatio-temporal convolution layer;
Step 4, back-propagation of the spatio-temporal convolution network: the loss function is computed and the weights of all levels are optimized by back-propagation; the feedback result is transmitted to the inputs of every stage of the modeling process, realizing adaptive learning of the whole model and network and yielding a trained model;
Step 5, obtaining the multi-sensor time-series prediction curve of the gas turbine: the time series at moments 1 to t is taken as the input of the model, and the output of the model is the gas turbine multi-sensor time-series prediction curve corresponding to moment t+1.
The details of the steps of the method of the present invention are described in the detailed description of the preferred embodiments.
Compared with the prior art, the invention has the beneficial effects that:
In time-series prediction, the transient characteristics and the steady-state loss of the sequence are introduced into the graph-construction module, and the association network is built dynamically from the input sequence. In addition, spatio-temporal convolution is used to extract temporal and spatial features. On this basis, a spatio-temporal convolution time-series prediction model based on steady-state feature graph construction is proposed; compared with the prior art it achieves a better prediction effect and yields a modeling result with better global and generalization performance.
Drawings
FIG. 1 is a schematic structural diagram of the spatio-temporal neural-network sequence prediction model established on the basis of the steady-state feature graph in the present invention;
FIG. 2 is a flow chart of the method for predicting the time sequence of multiple measuring points using the prediction model shown in FIG. 1;
FIG. 3-1 shows the measured-point curve and the predicted curve at the gas turbine casing vibration measuring point;
FIG. 3-2 shows the measured-point curve and the predicted curve at the gas turbine high-pressure rotational-speed measuring point;
FIG. 3-3 shows the measured-point curve and the predicted curve at the gas turbine fuel-pressure measuring point;
FIG. 3-4 shows the measured-point curve and the predicted curve at the gas turbine fuel-tank temperature measuring point.
Detailed Description
The invention will be further described with reference to the following drawings and specific examples, which are not intended to limit the invention in any way.
The design concept of the method for predicting the time sequence of various measuring points on a gas turbine based on steady-state feature graph construction is as follows: steady-state features of an input sample are extracted end to end and used to build an association-relation network; the graph construction is guided by a steady-state loss, and a spatio-temporal neural-network sequence prediction model is established on the basis of the steady-state feature graph. The model extracts the steady-state characteristics of the system under a fixed operating condition and performs sequence prediction with spatio-temporal convolution, realizing adaptive steady-state feature learning; the prediction model finally yields the multi-sensor time-series prediction curve of the gas turbine. In time-series prediction, the transient characteristics of the sequence are introduced into the graph-construction module, and the association network is built dynamically from the input sequence. Compared with the prior art, the method achieves a better prediction effect; it can extract the steady-state characteristics of the system under a fixed operating condition, thereby improving time-series prediction and allowing anomalies in the association network to be analysed to check the operating condition of the system.
As shown in FIG. 1, the spatio-temporal neural-network sequence prediction model based on the steady-state feature graph in the present invention comprises: a steady-state network construction module, a spatio-temporal convolution feature fusion module and a sequence prediction output feedback module.
The steady-state network construction module uses pooling, convolution and linear transformation operations to extract the steady-state feature information in a sample and build a steady-state association network; it processes every input sequence of the system and supplies the result as input to the subsequent spatio-temporal convolution feature fusion module. For the steady-state network constructed from each batch of samples, the graph variance is computed and used as an additional loss function to guide back-propagation and update the parameters of the steady-state network construction module.
The spatio-temporal convolution feature fusion module uses temporal convolution and graph convolution to extract the temporal and spatial features of a single sample in each iteration; during operation it receives the original sample as input and aggregates the sample features output by the steady-state network construction module.
The sequence prediction output feedback module realizes the iterative optimization of the network and the output of the prediction sequence: it receives the output of the spatio-temporal convolution feature fusion module, compares the predicted sequence with the real time series, combines the steady-state loss of the steady-state network construction module with the sequence prediction loss, optimizes the weights of all levels by back-propagation, and feeds the result back to the inputs of every stage of the modeling process.
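To make the division of labour among the three modules concrete, the following minimal PyTorch sketch shows one way they could be wired together. The class and argument names (SteadyStateSTModel, graph_builder, st_blocks, output_head) are illustrative assumptions rather than the patent's reference implementation; the sub-modules themselves are sketched in the later steps.

import torch
import torch.nn as nn

class SteadyStateSTModel(nn.Module):
    # Illustrative wiring only: steady-state graph construction -> stacked
    # spatio-temporal convolution blocks -> sequence prediction output head.
    def __init__(self, graph_builder, st_blocks, output_head):
        super().__init__()
        self.graph_builder = graph_builder          # steady-state network construction module
        self.st_blocks = nn.ModuleList(st_blocks)   # spatio-temporal convolution feature fusion
        self.output_head = output_head              # sequence prediction output module

    def forward(self, x):
        # x: (batch, nodes, length) window of multi-sensor measurements
        adj, loss_s = self.graph_builder(x)         # steady-state relation matrix + steady-state loss
        h, skip_sum = x, 0
        for block in self.st_blocks:
            h, skip = block(h, adj, x_in=x)         # graph convolution + dilated temporal convolution
            skip_sum = skip_sum + skip              # record and integrate each round's output
        return self.output_head(skip_sum), loss_s   # prediction and the steady-state loss term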
As shown in FIG. 2, the modeling process of the spatio-temporal neural-network sequence prediction model based on the steady-state feature graph comprises the following steps:
step 1, data preprocessing:
the measurement data of the gas turbine multi-sensor (for example, comprising gas turbine shell vibration, turbine rotating speed, fuel pressure, fuel tank temperature and the like) are formed into a plurality of time sequence data according to the same interval time. For the recorded time series data, the data of all sensors from 0 to t are taken as a sample. By analogy, sampling results in 1 to T +1, 2 to T +2, … to TMaxTo T + TMaxThe data in the time period is used as a training sample of the model. After determining the sample, carrying out normalization operation on the numerical value by adopting a maximum and minimum normalization method;
x'_i = (x_i − Min(x_i)) / (Max(x_i) − Min(x_i))    (1)
In formula (1), x_i and x'_i respectively denote the original vector and the normalized vector at the i-th node, Min(x_i) denotes the vector formed by the minimum sample values of the node, Max(x_i) denotes the vector formed by the maximum sample values of the node, and N denotes the number of nodes;
for example, the following steps are carried out: for a gas turbine data set sampled 1 time in 5 seconds, 1 new sample x for 1 minute was obtained from a 12 sequence of 5 second samples. Here, step size 12 is used, consisting of xi,xi+1,...,xi+11]To obtain [ x ]i+12,xi+13,...,xi+23]All time series in the 1,2,3, … dataset were normalized between 0 and 1, with min-max normalization using a normalization function for each node. For the normalized structured data, firstly, randomly scrambling the data, and then dividing the data into three parts: 70% are used as training sets to train the prediction models, 20% are used as verification sets to select the optimal model parameters, and the other 10% are used as test sets to evaluate the prediction performance.
Step 2, constructing a steady-state incidence relation matrix:
The preprocessed data are first passed through the steady-state network construction module to extract steady-state features and build the steady-state association network. A feature extractor based on mean pooling and two one-dimensional convolutions is selected to extract steady-state features along the time sequence, and two steady-state feature vectors are constructed from the node features. The two steady-state feature vectors are expanded, through matrix multiplication and the linear transformation of a multilayer perceptron, into a steady-state association relation matrix in R^(N×N). For the steady-state network constructed from the samples of each iteration, the graph variance is computed and used as part of the final loss function for back-propagation to update the network parameters. Each iteration comprises the following sub-steps:
step 201) steady state feature extraction: the steady-state feature extraction is that in a steady-state network construction module, 2 different steady-state feature vectors are extracted from an input sequence through mean pooling smoothing and convolution operations. Assume that there is a set of N input sequences X ═ X with a node length l1,x1...,xn]∈RN*lThe input sequence was smoothed by first pooling the mean values of length 1 x, and then two one-dimensional convolution kernels Conv of length 1 x (l/x) were used1And Conv2Respectively extracting time sequence characteristics on each node to obtain steady-state characteristic vector representation Gs of the two nodes1=tanh(Conv1(pool (X))) and Gs2=tanh(Conv2 (pool(X)))。
Step 202) steady-state association network construction: the steady-state features are mapped into a steady-state association network relation representation using an Einstein-summation operation, and the resulting steady-state association network is used for graph-convolution feature fusion. Descriptor 1 and descriptor 2 are chosen to represent, respectively, the original steady-state association network and the Laplacian-matrix representation that uses a Laplacian transform to simplify the computation.
Descriptor 1: a mapping function π maps the two steady-state feature vectors to the steady-state association relation matrix A_gs ∈ R^(N×N), where a matrix multiplication (Einstein summation) is used so that one association network is constructed per sample, that is:
A_gs = π(Gs_1, Gs_2)    (2)
In formula (2), A_gs is the original steady-state association network, descriptor 1.
Descriptor 2: directly accumulating the association relations by an addition rule may cause gradient explosion or vanishing during network training, so the graph-convolution operation is simplified using Laplacian normalization. The identity matrix I and the degree matrix D are used for the auxiliary calculation, that is:
D = diag(Σ_j (A_gs + I)_ij)    (3)
In formula (3), D is the degree (measurement) matrix of the steady-state association network A_gs.
The degree matrix is then used to apply the Laplacian normalization to the original matrix, giving the Laplacian matrix L, that is:
L = D^(-1/2) (A_gs + I) D^(-1/2)    (4)(5)
In formulas (4) and (5), L is the Laplacian-matrix descriptor 2.
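The following sketch covers descriptors 1 and 2 together: an Einstein-summation product of Gs_1 and Gs_2 with a linear transform yields A_gs, and the degree matrix then gives the normalized matrix of formulas (3)-(5). The tanh/ReLU squashing and the symmetric D^(-1/2)(A+I)D^(-1/2) form are assumptions consistent with common graph-convolution practice, not necessarily the exact formulas of the original filing.

import torch
import torch.nn as nn

class SteadyGraphBuilder(nn.Module):
    # Descriptor 1 (sketch): an Einstein-summation product of the two steady-state
    # feature vectors, passed through linear layers, gives one N x N relation
    # matrix A_gs per sample.
    def __init__(self, feat_dim=16, hidden=16):
        super().__init__()
        self.lin1 = nn.Linear(feat_dim, hidden)
        self.lin2 = nn.Linear(feat_dim, hidden)

    def forward(self, gs1, gs2):
        # gs1, gs2: (batch, nodes, feat_dim) from the steady-state feature extractor
        h1 = torch.tanh(self.lin1(gs1))
        h2 = torch.tanh(self.lin2(gs2))
        a_gs = torch.einsum('bif,bjf->bij', h1, h2)   # one association network per sample
        return torch.relu(torch.tanh(a_gs))           # A_gs, descriptor 1


def normalized_adjacency(a_gs):
    # Descriptor 2 (sketch): add self-loops via the identity matrix, build the
    # degree matrix D, and apply the symmetric normalization assumed by (3)-(5).
    n = a_gs.size(-1)
    a_hat = a_gs + torch.eye(n, device=a_gs.device)
    deg = a_hat.sum(dim=-1).clamp(min=1e-8)
    d_inv_sqrt = torch.diag_embed(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt            # Laplacian-normalized matrix L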
Step 203) calculating the steady-state graph-construction loss: for the steady-state graph to have stable characteristics over the whole data set, a graph-stability index must be computed to guide the graph-construction module. The inter-sample variance of each iteration is used as the steady-state graph-construction loss and is combined with the sequence prediction loss of the output module as the final loss function, which is back-propagated to update the network parameters.
During model training, the standard deviation of the generated steady-state association networks is computed for each batch of samples and used as the steady-state loss Loss_s, that is:
Loss_s = ((1/B) Σ_{b=1..B} (A_gs^(b) − Ā_gs)²)^(1/2)    (6)
In formula (6), B is the number of samples in the batch, Ā_gs is the mean of the steady-state graphs of the batch, and the standard deviation of the graphs is counted as the graph-construction loss. The graph-construction loss and the sequence prediction loss are combined into the final loss function L, that is:
L = (1 − β) Loss_p + β Loss_s    (7)
In formula (7), Loss_p is the mean-absolute-error (MAE) index of the sequence prediction, and the hyper-parameter β controls the proportion of the two loss terms.
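A sketch of the two loss terms as described here; the exact reduction used for the batch standard deviation in formula (6) is an assumption, and β = 0.3 is only an illustrative value.

import torch

def steady_state_loss(a_gs_batch):
    # Sketch of formula (6): deviation of each sample's steady-state matrix from
    # the batch mean, reduced to a single standard-deviation-like scalar.
    mean_graph = a_gs_batch.mean(dim=0, keepdim=True)          # batch-mean steady-state graph
    return ((a_gs_batch - mean_graph) ** 2).mean().sqrt()

def combined_loss(pred, target, a_gs_batch, beta=0.3):
    # Formula (7): L = (1 - beta) * Loss_p + beta * Loss_s, with Loss_p the MAE.
    loss_p = (pred - target).abs().mean()
    return (1 - beta) * loss_p + beta * steady_state_loss(a_gs_batch)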
Step 3, spatio-temporal sequence convolution processing:
Multi-layer spatio-temporal convolution is applied to the time sequence using the spatio-temporal convolution descriptors and the steady-state association relation matrix obtained in step 2, so as to perform feature extraction and feature fusion; the feature fusion comprises graph-convolution feature fusion and temporal-convolution feature fusion, and the result of each spatio-temporal convolution is recorded, integrated and output. Specifically, the iterative process comprises the following sub-steps:
step 301) graph convolution feature fusion: the graph convolution characteristic fusion is that a graph convolution module receives a residual error link of an original input characteristic and a characteristic of a previous round of space-time convolution, high-dimensional graph convolution operation is carried out by using a relation between nodes represented by a steady-state associated network, and information between the nodes is aggregated; feature fusion is performed by performing a multi-layer graph convolution operation using the laplacian matrix descriptor 2 obtained by equation (5) and the original input sequence X, as follows:
Figure BDA0003478686110000057
Figure BDA0003478686110000058
in formulae (8) and (9), HiConvolved for ith pictureHidden layer feature, H0=HinThe initial input of the graph convolution module is represented by a hyper-parameter beta for controlling the characteristics of the initial input; the hidden layer transforms the matrix W through the corresponding lineariFusing to obtain graph convolution output Hout
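A sketch of the propagation rule as reconstructed in formulas (8)-(9): each layer mixes the initial input (weighted by β) with aggregation over the normalized steady-state matrix, and the per-layer linear maps W_i are summed into the output. The depth and the β value used here are assumptions.

import torch
import torch.nn as nn

class GraphConvFusion(nn.Module):
    # Sketch of formulas (8)-(9): each propagation step mixes the initial input
    # H_in (weighted by beta) with aggregation over the normalized matrix L,
    # and the hidden states are fused through per-layer linear maps W_i.
    def __init__(self, feat_dim, depth=2, beta=0.05):
        super().__init__()
        self.beta = beta
        self.linears = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(depth + 1)])

    def forward(self, h_in, laplacian):
        # h_in: (batch, nodes, feat_dim); laplacian: (batch, nodes, nodes)
        h = h_in
        out = self.linears[0](h)                                              # W_0 applied to H_0 = H_in
        for lin in self.linears[1:]:
            h = self.beta * h_in + (1 - self.beta) * torch.bmm(laplacian, h)  # formula (8)
            out = out + lin(h)                                                # accumulate H_i W_i, formula (9)
        return out                                                            # graph-convolution output H_out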
Step 302) temporal-convolution feature fusion: a dilated causal temporal convolution aggregates information along the time direction of a single node, and the output of each round is passed to the output module and to the next spatio-temporal convolution layer.
The graph-convolution output obtained in step 301) is used as the temporal-convolution input, and a temporal-convolution operation is applied to the features. The temporal convolution module comprises dilated convolution and multi-size convolution-kernel operations. For an n-layer spatio-temporal convolution, the dilation coefficient of the temporal convolution module in layer i is d_i = d · d_(i−1), where d is the dilated-convolution parameter set for the model.
Multi-size convolution kernels extract temporal features of different periods, and a truncation operation aligns and integrates the outputs, defined as follows:
Z_f = tanh(Θ_(1×2) ⋆ z)    (10)
Z_out = Z_f ⊙ sigmoid(Θ_(1×6) ⋆ z)    (11)
In formulas (10) and (11), z = H_out is the graph-convolution output received by the temporal convolution; temporal convolutions of the two sizes 1×2 and 1×6 are used, the two activation functions tanh and sigmoid act as a gating mechanism that controls the proportion of feature output, ⋆ denotes the dilated causal convolution, ⊙ denotes the element-wise product, and Z_out is the final output of the temporal convolution.
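The gated dilated temporal convolution could be sketched as follows; assigning the 1×2 kernel to the tanh branch and the 1×6 kernel to the sigmoid branch, and aligning the two outputs by truncation, are assumptions consistent with the description above.

import torch
import torch.nn as nn

class GatedTemporalConv(nn.Module):
    # Sketch of formulas (10)-(11): dilated convolutions with kernel sizes 1x2 and
    # 1x6, aligned by truncation and combined through a tanh/sigmoid gate.
    def __init__(self, channels, dilation=1):
        super().__init__()
        self.filt = nn.Conv2d(channels, channels, kernel_size=(1, 2), dilation=(1, dilation))
        self.gate = nn.Conv2d(channels, channels, kernel_size=(1, 6), dilation=(1, dilation))

    def forward(self, z):
        # z: (batch, channels, nodes, length), e.g. the graph-convolution output H_out
        f = torch.tanh(self.filt(z))              # filter branch, formula (10)
        g = torch.sigmoid(self.gate(z))           # gate branch
        t = min(f.size(-1), g.size(-1))           # truncate to align the two kernel sizes
        return f[..., -t:] * g[..., -t:]          # Z_out, formula (11)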
Step 4, back-propagation of the spatio-temporal convolution network:
The computation framework automatically derives the back-propagation of the whole network through the automatic-differentiation mechanism described in the official PyTorch documentation (https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html). The sequence prediction loss and the steady-state graph-construction loss are computed, and the feedback result is propagated to the inputs of every stage of the modeling process, realizing back-propagation through both the steady-state graph-construction module and the spatio-temporal convolution module. The back-propagation of the whole adaptive network is realized with the automatic-differentiation tools in PyTorch, yielding the prediction model. The optimization method may be stochastic gradient descent, momentum optimization, Adam, etc., and loss functions such as the logarithmic loss, quadratic loss, absolute-value loss or a custom loss are all applicable; in this example the Adam optimizer is used and the loss function is the mean absolute error.
In step 2 of the method, the steady-state features obtained in the steady-state network construction module from the input time sequence by mean pooling and linear transformation are expanded into the steady-state association network by matrix multiplication. In the model, the linear-transformation weights of the steady-state network construction module are updated by back-propagation through the sequence prediction loss and the steady-state graph-construction loss, thereby realizing adaptive steady-state feature learning.
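A minimal training step under these choices might look as follows, assuming a model that returns both the prediction and the steady-state graph loss (as in the sketches above); the Adam optimizer and MAE prediction loss follow the text, while β = 0.3 is illustrative.

import torch

def train_step(model, optimizer, x, y, beta=0.3):
    # One optimization step: forward pass, combined loss of formula (7), backward pass.
    # Assumes model returns (prediction, steady-state graph loss) as in the sketches above.
    optimizer.zero_grad()
    pred, loss_s = model(x)
    loss_p = (pred - y).abs().mean()              # mean absolute error, Loss_p
    loss = (1 - beta) * loss_p + beta * loss_s    # L = (1 - beta) * Loss_p + beta * Loss_s
    loss.backward()                               # autograd derives the full back-propagation
    optimizer.step()                              # e.g. optimizer = torch.optim.Adam(model.parameters())
    return loss.item()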
A multi-sensor time-series prediction curve of the gas turbine is obtained with the prediction model: the time series at moments 1 to t is taken as the input of the model, and the output of the model is the gas turbine multi-sensor time-series prediction curve corresponding to moment t+1.
The trained prediction model receives data input from the gas turbine sensors; after data preprocessing, steady-state feature extraction and steady-state association network construction, and temporal-convolution and graph-convolution feature fusion, the features are finally integrated by the sequence prediction output module of the model, which gives the subsequent predicted sequence. Through the predicted sequence, operators can better monitor changes in the operating condition of the gas turbine. After training is completed, the model receives real-time input and produces the prediction curves. The curves marked with round dots represent the actual measuring-point curves of casing vibration, high-pressure rotational speed, fuel pressure and fuel-tank temperature in the gas turbine, and the curves marked with crosses represent the corresponding prediction curves obtained with the prediction model designed in the invention, shown respectively in FIGS. 3-1, 3-2, 3-3 and 3-4. Because the prediction model designed in the invention adds the steady-state information implicit in the data to the construction of the relation network, the prediction curves give the physical quantities that each part of the gas turbine will measure at future moments, which helps operators monitor the operating state and trend of the gas turbine.
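As a usage illustration, once such a model is trained it can be fed the latest normalized window to obtain the next multi-sensor forecast; model and data here refer to the hypothetical objects introduced in the earlier sketches.

import torch

model.eval()                                                 # hypothetical trained model from the sketches above
with torch.no_grad():
    window = torch.tensor(data[-12:], dtype=torch.float32)   # last 12 normalized samples (moments t-11..t)
    window = window.T.unsqueeze(0)                           # reshape to (1, nodes, length)
    prediction, _ = model(window)                            # multi-sensor forecast for the following moments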
While the present invention has been described with reference to the accompanying drawings, the present invention is not limited to the above-described embodiments, which are illustrative only and not restrictive, and many modifications may be made by those skilled in the art without departing from the spirit of the present invention, within the scope of the appended claims.

Claims (6)

1. A method for predicting the time sequence of various measuring points on a gas turbine based on steady-state feature graph construction, characterized in that steady-state features of an input sample are extracted in an end-to-end manner and an association-relation network is constructed, the graph construction is guided by a steady-state loss, and a spatio-temporal neural-network sequence prediction model is established on the basis of the steady-state feature graph; the spatio-temporal neural-network sequence prediction model extracts the steady-state characteristics of the system under a fixed operating condition and performs sequence prediction with spatio-temporal convolution, thereby realizing adaptive steady-state feature learning and finally obtaining a multi-sensor time-series prediction curve of the gas turbine.
2. The method according to claim 1, wherein the spatio-temporal neural-network sequence prediction model based on the steady-state feature graph comprises a steady-state network construction module, a spatio-temporal convolution feature fusion module and a sequence prediction output feedback module;
the steady-state network construction module uses pooling, convolution and linear transformation operations to extract the steady-state feature information in a sample and build a steady-state association network, processes every input sequence of the system, and supplies the result as input to the subsequent spatio-temporal convolution feature fusion module; for the steady-state network constructed from the samples of each iteration, the graph variance is computed and used as an additional loss function to guide back-propagation and update the parameters of the steady-state network construction module;
the spatio-temporal convolution feature fusion module uses temporal convolution and graph convolution to extract the temporal and spatial features of a single sample in each iteration, and during operation receives the original sample as input and aggregates the sample features output by the steady-state network construction module;
the sequence prediction output feedback module realizes the iterative optimization of the network and the output of the prediction sequence: it receives the output of the spatio-temporal convolution feature fusion module, compares the predicted sequence with the real time series, combines the steady-state loss of the steady-state network construction module with the sequence prediction loss, optimizes the weights of all levels by back-propagation, and feeds the result back to the inputs of every stage of the modeling process.
3. The method according to claim 2, characterized in that it comprises the following steps:
step 1, data preprocessing:
the measurement data of the gas turbine multi-sensor are assembled, at equal time intervals, into multivariate time-series data; for the recorded time-series data, the data of all sensors from moment 0 to moment t are first taken as one sample; by analogy, sampling yields the windows 1 to t+1, 2 to t+2, ..., T_max to t+T_max, and the data in these windows are used as the training samples of the model; after the samples are determined, the values are normalized with the max-min normalization method;
x'_i = (x_i − Min(x_i)) / (Max(x_i) − Min(x_i))    (1)
in formula (1), x_i and x'_i respectively denote the original vector and the normalized vector at the i-th node, Min(x_i) denotes the vector formed by the minimum sample values of the node, Max(x_i) denotes the vector formed by the maximum sample values of the node, and N denotes the number of nodes;
step 2, constructing the steady-state association relation matrix:
the preprocessed data are first passed through the steady-state network construction module to extract steady-state features and build the steady-state association network; two feature extractors based on one-dimensional convolution extract steady-state features along the time sequence, and two steady-state feature vectors are constructed from the node features; the two steady-state feature vectors are expanded, through matrix multiplication and the linear transformation of a multilayer perceptron, into a steady-state association relation matrix in R^(N×N); for the steady-state network constructed from the samples of each iteration, the graph variance is computed and used as part of the final loss function for back-propagation to update the network parameters;
step 3, spatio-temporal sequence convolution processing:
multi-layer spatio-temporal convolution feature fusion is applied to the time-series data using the spatio-temporal convolution descriptors and the steady-state association relation matrix obtained in step 2, in combination with the spatio-temporal convolution feature fusion module; the feature fusion comprises graph-convolution feature fusion and temporal-convolution feature fusion, and the result of each spatio-temporal convolution is recorded, integrated and output; wherein:
in the graph-convolution feature fusion, the graph convolution module receives a residual link of the original input features together with the features of the previous round of spatio-temporal convolution, performs a high-dimensional graph convolution using the inter-node relations represented by the steady-state association network, and aggregates the information between nodes;
in the temporal-convolution feature fusion, a dilated causal temporal convolution aggregates information along the time direction of a single node, and the output of each round is passed to the output module and to the next spatio-temporal convolution layer;
step 4, back-propagation of the spatio-temporal convolution network:
the loss function is computed and the weights of all levels are optimized by back-propagation; the feedback result is transmitted to the inputs of every stage of the modeling process, realizing adaptive learning of the whole model and network and yielding a trained model;
step 5, obtaining the multi-sensor time-series prediction curve of the gas turbine:
the time series at moments 1 to t is taken as the input of the model, and the output of the model is the gas turbine multi-sensor time-series prediction curve corresponding to moment t+1.
4. The method of claim 3, wherein in step 1, the measurement data of the gas turbine multi-sensor comprises at least gas turbine casing vibration, turbine speed, fuel pressure and fuel tank temperature.
5. The method of claim 3, wherein, in step 2:
the steady-state feature extraction by the steady-state network construction module extracts two different steady-state feature vectors from the input sequence through mean-pooling smoothing and convolution operations, as follows: assume a set of N input sequences of length l, X = [x_1, x_2, ..., x_N] ∈ R^(N×l); the input sequences are first smoothed by mean pooling of length 1×x, and then two one-dimensional convolution kernels Conv_1 and Conv_2 of length 1×(l/x) extract the temporal features on each node, giving the two steady-state feature vector representations of the nodes, Gs_1 = tanh(Conv_1(pool(X))) and Gs_2 = tanh(Conv_2(pool(X)));
the steady-state association network is constructed by mapping the steady-state features into a steady-state association network relation representation using an Einstein-summation operation, for graph-convolution feature fusion; descriptor 1 and descriptor 2 are chosen to represent, respectively, the original steady-state association network and the Laplacian-matrix representation that uses a Laplacian transform to simplify the computation; wherein:
descriptor 1: a mapping function π maps the two steady-state feature vectors to the steady-state association relation matrix A_gs ∈ R^(N×N), where a matrix multiplication is used so that one association network is constructed per sample, that is:
A_gs = π(Gs_1, Gs_2)    (2)
in formula (2), A_gs is the original steady-state association network, descriptor 1;
descriptor 2: the graph-convolution operation is simplified using Laplacian normalization, and the identity matrix I and the degree matrix D are used for the auxiliary calculation, that is:
D = diag(Σ_j (A_gs + I)_ij)    (3)
in formula (3), D is the degree (measurement) matrix of the steady-state association network A_gs;
the degree matrix is used to apply the Laplacian normalization to the original matrix, giving the Laplacian matrix L, that is:
L = D^(-1/2) (A_gs + I) D^(-1/2)    (4)(5)
in formulas (4) and (5), L is the Laplacian-matrix descriptor 2;
during model training, the standard deviation of the generated steady-state association networks is computed for each batch of samples and used as the steady-state loss Loss_s, that is:
Loss_s = ((1/B) Σ_{b=1..B} (A_gs^(b) − Ā_gs)²)^(1/2)    (6)
in formula (6), B is the number of samples in the batch, Ā_gs is the mean of the steady-state graphs of the batch, and the standard deviation of the graphs is counted as the graph-construction loss; the graph-construction loss and the sequence prediction loss are combined into the final loss function L, that is:
L = (1 − β) Loss_p + β Loss_s    (7)
in formula (7), Loss_p is the mean-absolute-error (MAE) index of the sequence prediction, and the hyper-parameter β controls the proportion of the two loss terms.
6. The method of claim 3, wherein, in step 3:
the specific content of the graph-convolution feature fusion is as follows: a multi-layer graph-convolution operation is performed for feature fusion using the Laplacian-matrix descriptor 2 obtained from formula (5) and the original input sequence X, as follows:
H_i = β H_in + (1 − β) L H_(i−1)    (8)
H_out = Σ_i H_i W_i    (9)
in formulas (8) and (9), H_i is the hidden-layer feature of the i-th graph convolution, H_0 = H_in denotes the initial input of the graph convolution module, and the hyper-parameter β controls the proportion of the initial input features; the hidden layers are fused through the corresponding linear transformation matrices W_i to obtain the graph-convolution output H_out;
the specific content of the temporal-convolution feature fusion is as follows: the graph-convolution output is used as the temporal-convolution input, and a temporal-convolution operation is applied to the features; it comprises dilated convolution and multi-size convolution-kernel operations; for an n-layer spatio-temporal convolution, the dilation coefficient of the temporal convolution module in layer i is d_i = d · d_(i−1), where d is the dilated-convolution parameter set for the model;
multi-size convolution kernels extract temporal features of different periods, and a truncation operation aligns and integrates the outputs, defined as follows:
Z_f = tanh(Θ_(1×2) ⋆ z)    (10)
Z_out = Z_f ⊙ sigmoid(Θ_(1×6) ⋆ z)    (11)
in formulas (10) and (11), z = H_out is the graph-convolution output received by the temporal convolution; temporal convolutions of the two sizes 1×2 and 1×6 are used, the two activation functions tanh and sigmoid act as a gating mechanism that controls the proportion of feature output, ⋆ denotes the dilated causal convolution, ⊙ denotes the element-wise product, and Z_out is the final output of the temporal convolution.
CN202210062290.7A 2022-01-19 2022-01-19 Method for predicting time sequence of various measuring points on gas turbine based on steady-state feature picture Pending CN114444790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210062290.7A CN114444790A (en) 2022-01-19 2022-01-19 Method for predicting time sequence of various measuring points on gas turbine based on steady-state feature picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210062290.7A CN114444790A (en) 2022-01-19 2022-01-19 Method for predicting time sequence of various measuring points on gas turbine based on steady-state feature picture

Publications (1)

Publication Number Publication Date
CN114444790A true CN114444790A (en) 2022-05-06

Family

ID=81367739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210062290.7A Pending CN114444790A (en) 2022-01-19 2022-01-19 Method for predicting time sequence of various measuring points on gas turbine based on steady-state feature picture

Country Status (1)

Country Link
CN (1) CN114444790A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116835540A (en) * 2023-04-28 2023-10-03 福建省龙德新能源有限公司 Preparation method of phosphorus pentafluoride
CN116835540B (en) * 2023-04-28 2024-05-21 福建省龙德新能源有限公司 Preparation method of phosphorus pentafluoride
CN116772944A (en) * 2023-08-25 2023-09-19 克拉玛依市燃气有限责任公司 Intelligent monitoring system and method for gas distribution station
CN116772944B (en) * 2023-08-25 2023-12-01 克拉玛依市燃气有限责任公司 Intelligent monitoring system and method for gas distribution station

Similar Documents

Publication Publication Date Title
Li et al. The emerging graph neural networks for intelligent fault diagnostics and prognostics: A guideline and a benchmark study
CN112580263B (en) Turbofan engine residual service life prediction method based on space-time feature fusion
CN114444790A (en) Method for predicting time sequence of various measuring points on gas turbine based on steady-state feature picture
Shawel et al. Convolutional LSTM-based long-term spectrum prediction for dynamic spectrum access
US5859773A (en) Residual activation neural network
CN113434970B (en) Health index curve extraction and service life prediction method for mechanical equipment
US20180157771A1 (en) Real-time adaptation of system high fidelity model in feature space
CN114428803B (en) Air compression station operation optimization method, system, storage medium and terminal
Zhang et al. Adaptive spatio-temporal graph convolutional neural network for remaining useful life estimation
CN115694985B (en) TMB-based hybrid network flow attack prediction method
CN116557787B (en) Intelligent evaluation system and method for pipe network state
CN113094860A (en) Industrial control network flow modeling method based on attention mechanism
CN114169091A (en) Method for establishing prediction model of residual life of engineering mechanical part and prediction method
CN115953900A (en) Traffic flow prediction method based on multi-dimensional time and space dependence mining
CN117419828B (en) New energy battery temperature monitoring method based on optical fiber sensor
CN113780420A (en) Method for predicting concentration of dissolved gas in transformer oil based on GRU-GCN
CN112001115A (en) Soft measurement modeling method of semi-supervised dynamic soft measurement network
CN114819388A (en) Condenser vacuum degree prediction method and device based on frequency domain information guidance
CN114818847A (en) Steam turbine backpressure trend prediction method based on catboost algorithm
Huang et al. Attention-augmented recalibrated and compensatory network for machine remaining useful life prediction
CN106919759B (en) Modeling method of aero-engine performance based on fitting sensitivity and model application
CN112348158A (en) Industrial equipment state evaluation method based on multi-parameter deep distribution learning
US11531907B2 (en) Automated control of a manufacturing process
CN117093848A (en) Attention mechanism-based double-wall structure feature extraction method
CN116796189A (en) Aerosol extinction coefficient profile prediction method based on deep learning technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination