WO2022004039A1 - Prediction model learning method, prediction model learning device, and plant control system - Google Patents

Prediction model learning method, prediction model learning device, and plant control system

Info

Publication number
WO2022004039A1
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation value
prediction model
sensitivity direction
learning
variable
Prior art date
Application number
PCT/JP2021/004804
Other languages
English (en)
Japanese (ja)
Inventor
一幸 若杉
Original Assignee
三菱重工業株式会社
Priority date
Filing date
Publication date
Application filed by 三菱重工業株式会社
Publication of WO2022004039A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring

Definitions

  • This disclosure relates to a prediction model learning method, a prediction model learning device, and a plant control system.
  • Predictions of future events are made using prediction models that represent the relationships between explanatory variables and objective variables that are causally related to each other.
  • For example, in a plant, a prediction model is constructed with the control parameters that define the operating conditions as the explanatory variables and the plant efficiency as the objective variable, and the model is used to predict how the efficiency changes with respect to changes in the operating conditions, so that optimal operating conditions can be examined.
  • Such a prediction model can be constructed by machine learning using teacher data that represents the relationship between the explanatory variables and the objective variable based on past data. For example, when a neural network is used as the machine learning method, parameters such as the network weights and biases are learned so that the output of the prediction model for the training inputs approaches the values of the teacher data, and the prediction model is thereby constructed.
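  • As a hedged illustration only (not code from this publication), the following minimal sketch shows the standard setting described above: a small neural network whose weights and biases are fitted so that its output for the training inputs approaches the teacher data. The network size, optimizer, and synthetic teacher data are assumptions.

```python
import torch
import torch.nn as nn

# Toy teacher data: 3 explanatory variables, 1 objective variable (assumed for illustration).
X = torch.rand(100, 3)
y = X.sum(dim=1, keepdim=True)

net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)  # output approaches the teacher data
    loss.backward()
    opt.step()
```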
  • Event prediction using models built by machine learning has spread to a variety of applications in recent years; except for extremely low-risk applications such as product recommendation, sufficient reliability must be ensured so that the predictions do not lead to inappropriate decision making.
  • If the relationship between the explanatory variables and the objective variable learned by the prediction model differs from existing knowledge, it may cause inappropriate decision making. For example, existing knowledge says that ice cream sales increase as the temperature rises; if, because of a lack of data or similar causes, the prediction model learns the opposite relationship (the lower the temperature, the better ice cream sells), its recommendations are the reverse of reality and can lead to inappropriate decisions.
  • In Patent Document 1, the data region defined by the explanatory variable and the objective variable is divided into a plurality of areas and an index value indicating data shortage is calculated for each area, so that the user can recognize the areas lacking data and obtain the additional data needed to build a reliable prediction model.
  • In Patent Document 1, the index value indicating data shortage is calculated in consideration of the sensitivity of the objective variable to the explanatory variable in each area, and the areas where additional data must be acquired to improve the prediction accuracy of the prediction model are presented to the user.
  • However, Patent Document 1 merely presents to the user the areas where the data necessary for constructing a sufficiently reliable prediction model are insufficient. Even if the user can recognize the lacking data, the prediction model cannot be improved when acquiring that data is itself difficult, for example because data collection requires cost and time. Moreover, even when the prediction model is built from sufficient teacher data, it may fail to acquire the correct sensitivity because of the nature of the algorithm or multicollinearity in the data, and may therefore lead to inappropriate decisions.
  • At least one aspect of the present disclosure has been made in view of the above circumstances, and aims to provide a prediction model learning method, a prediction model learning device, and a plant control system capable of constructing a highly reliable prediction model.
  • To solve the above-mentioned problems, a prediction model learning method according to at least one aspect of the present disclosure comprises:
  • a first evaluation value calculation process of calculating a first evaluation value indicating a prediction error of the prediction model;
  • a second evaluation value calculation process of calculating a second evaluation value indicating the degree of agreement between a feature amount related to the sensitivity direction of at least a part of at least one explanatory variable with respect to an objective variable and a permissible range set for the feature amount based on known information; and
  • a prediction model update process of performing learning by updating the prediction model based on the first evaluation value and the second evaluation value.
  • To solve the above-mentioned problems, a prediction model learning device according to at least one aspect of the present disclosure is a learning device for a prediction model for predicting an objective variable from at least one explanatory variable, and comprises:
  • a first evaluation value calculation unit that calculates a first evaluation value indicating a prediction error of the prediction model;
  • a second evaluation value calculation unit that calculates a second evaluation value indicating the degree of agreement between a feature amount related to the sensitivity direction of at least a part of the at least one explanatory variable with respect to the objective variable and a permissible range set for the feature amount based on known information; and
  • a prediction model update unit that performs learning by updating the prediction model based on the first evaluation value and the second evaluation value.
  • To solve the above problems, a plant control system according to at least one aspect of the present disclosure comprises:
  • the prediction model learning device according to at least one aspect of the present disclosure;
  • a prediction unit that predicts a parameter corresponding to the objective variable by inputting input parameters corresponding to the at least one explanatory variable into the prediction model updated by the prediction model update unit;
  • an optimum value calculation unit that calculates an optimum value of the at least one explanatory variable based on the parameter predicted by the prediction unit; and
  • a control unit that controls the plant based on the optimum value.
  • According to at least one aspect of the present disclosure, a prediction model learning method, a prediction model learning device, and a plant control system capable of constructing a highly reliable prediction model are provided.
  • FIG. 8 shows the PD plot and the ICE plot obtained for each explanatory variable.
  • In the following description, expressions such as "same", "equal", and "homogeneous" that indicate that things are in an identical state represent not only a strictly identical state but also a state in which a tolerance or a difference exists to the extent that the same function is still obtained.
  • Similarly, an expression representing a shape, such as a quadrangular shape or a cylindrical shape, represents not only that shape in the geometrically strict sense but also a shape including an uneven portion, a chamfered portion, or the like within the range in which the same effect is obtained.
  • Expressions such as "comprising", "including", "equipped with", or "having" one component are not exclusive expressions that exclude the presence of other components.
  • FIG. 1 is an overall configuration diagram of a plant control system 100 according to at least one embodiment of the present disclosure.
  • The plant control system 100 controls the control end of the plant.
  • The plant control system 100 has a hardware configuration including an electronic arithmetic unit such as a computer; by installing a program for executing the control described below as software that cooperates with this hardware configuration, it is configured to function as the plant control system 100 according to at least one embodiment of the present disclosure.
  • FIG. 1 shows a functional configuration of such a plant control system 100 as a block diagram.
  • The plant control system 100 includes a prediction model storage unit 110, an explanatory variable acquisition unit 120, a prediction unit 130, an optimum value calculation unit 140, a control unit 145, and a learning device 150.
  • The prediction model storage unit 110 stores a prediction model M for predicting the objective variable y from at least one explanatory variable.
  • A prediction model M prepared in advance is stored in the prediction model storage unit 110, and the prediction model M can be updated by machine learning performed by the learning device 150.
  • The prediction model M is constructed using an algorithm, for example a neural network, that learns the parameters of the model so as to optimize an evaluation index indicating the performance of the model.
  • In the present embodiment, a prediction model M that takes parameters related to the operating conditions of the plant as the explanatory variables X_k and predicts a parameter related to the performance of the plant (for example, efficiency) as the objective variable y is described as an example.
  • When the k-th (k is an arbitrary natural number) one of the plurality of explanatory variables is indicated individually, it is written as "explanatory variable x_k".
  • The explanatory variable acquisition unit 120 acquires the at least one explanatory variable X_k to be input to the prediction model M.
  • The at least one explanatory variable X_k is a parameter relating to the operating conditions of the plant; for example, measured values such as the detection values of sensors installed in the plant, or control values for each component of the plant, can be used as these parameters.
  • The prediction unit 130 predicts the objective variable y from the at least one explanatory variable X_k acquired by the explanatory variable acquisition unit 120, using the prediction model M. Specifically, the prediction unit 130 reads the prediction model M by accessing the prediction model storage unit 110 and calculates the objective variable y by inputting the at least one explanatory variable X_k acquired by the explanatory variable acquisition unit 120 into the prediction model M. In the present embodiment, as described above, parameters related to the performance of the plant are predicted by inputting the parameters related to the operating conditions of the plant into the prediction model M.
  • The optimum value calculation unit 140 calculates the optimum value for controlling the control end of the plant based on the prediction result of the prediction unit 130.
  • The optimum value is calculated as the value of the explanatory variable X_k at which the objective variable y calculated by the prediction unit 130 is best. For example, when the objective variable y predicted by the prediction model M is the plant efficiency, the parameters corresponding to the explanatory variables X_k at which the plant efficiency is best are calculated as the optimum values.
  • The control unit 145 controls the plant based on the optimum values of the explanatory variables X_k calculated by the optimum value calculation unit 140. This enables plant control in which the objective variable y is best.
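  • A minimal sketch of what the optimum value calculation unit 140 and the control unit 145 could do is shown below: scan candidate operating conditions, predict the objective variable (e.g. plant efficiency) with the prediction model, and pick the candidate with the best predicted value. The candidate grid and the dummy predictor are illustrative assumptions, not part of the publication.

```python
import numpy as np

def find_optimum(model_predict, candidates):
    """Return the explanatory-variable vector whose predicted objective variable is highest."""
    predictions = np.array([model_predict(x) for x in candidates])
    return candidates[int(np.argmax(predictions))]

# Example: 3 control parameters, each sampled on a coarse grid (assumed setup).
grid = np.array(np.meshgrid(*[np.linspace(0.0, 1.0, 5)] * 3)).T.reshape(-1, 3)
best_conditions = find_optimum(lambda x: float(-((x - 0.3) ** 2).sum()), grid)  # dummy predictor
# 'best_conditions' would then be handed to the control unit as the optimum operating point.
```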
  • The learning device 150 updates the prediction model M stored in the prediction model storage unit 110 by machine learning. By repeating this machine learning at a predetermined timing, the reliability of the prediction model M is successively improved. In particular, the learning device 150 has the following configuration for learning a prediction model M that is unlikely to cause inappropriate decision making and has excellent reliability.
  • The learning device 150 includes a data set creation unit 152, a first evaluation value calculation unit 156, a second evaluation value calculation unit 158, a knowledge table storage unit 159, an analysis unit 160, and a prediction model update unit 162.
  • The data set creation unit 152 creates a data set DS including the plurality of explanatory variables X_k and the objective variable y by using the prediction model M stored in the prediction model storage unit 110.
  • The data set DS is created, for example, by inputting a plurality of explanatory variables X_k into the prediction model M, outputting the objective variable y, and associating these explanatory variables X_k with the objective variable y.
  • The data set creation unit 152 also outputs the teacher data TD used for the machine learning of the prediction model M.
  • The teacher data TD is prepared, for example, in a database (not shown) in which plant operation data is stored in advance, and the data set creation unit 152 outputs the teacher data TD acquired by accessing this database.
  • Like the data set DS created by the data set creation unit 152, the teacher data TD is prepared so as to include a plurality of explanatory variables X_k and an objective variable y.
  • The first evaluation value calculation unit 156 calculates the first evaluation value R1.
  • The first evaluation value R1 is an index indicating the prediction error of the prediction model M with respect to the teacher data.
  • The first evaluation value calculation unit 156 calculates, as the first evaluation value R1, the prediction error e_p obtained from the data set DS created by the data set creation unit 152 and the teacher data TD acquired by the teacher data acquisition unit 154.
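  • The publication only states that e_p is a prediction error between the model output and the teacher data; as one hedged possibility, it could be computed as a root-mean-square error, as in the sketch below.

```python
import numpy as np

def prediction_error(y_model: np.ndarray, y_teacher: np.ndarray) -> float:
    """First evaluation value R1: prediction error e_p between model output and teacher data
    (RMSE chosen here as an assumption)."""
    return float(np.sqrt(np.mean((y_model - y_teacher) ** 2)))
```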
  • The second evaluation value calculation unit 158 calculates the second evaluation value R2.
  • The second evaluation value R2 is an index relating to the sensitivity direction of the plurality of explanatory variables X_k of the prediction model M with respect to the objective variable y; more specifically, it indicates the degree of agreement between the feature amount relating to the sensitivity direction of the plurality of explanatory variables X_k with respect to the objective variable y and the permissible range set for that feature amount.
  • The second evaluation value R2 is calculated based on the knowledge table KT stored in the knowledge table storage unit 159.
  • The knowledge table KT defines sensitivity direction types t_sd for classifying the sensitivity direction.
  • The sensitivity direction of each of the plurality of explanatory variables X_k with respect to the objective variable y can be classified into one of the sensitivity direction types t_sd specified in the knowledge table KT.
  • FIG. 2 is a table showing an example classification of the sensitivity direction types t_sd and the feature amount and permissible range corresponding to each sensitivity direction type t_sd.
  • Various patterns can be considered for classifying the sensitivity direction type t_sd; in this embodiment, a case where it is classified into the six patterns "increase", "decrease", "convex downward", "convex upward", "negligible", and "unknown" is described as an example.
  • When the sensitivity direction type t_sd is "increase", the sensitivity direction is such that the objective variable y increases as the explanatory variable x_k increases while the other explanatory variables X_c, excluding x_k, among the plurality of explanatory variables X_k are held at fixed values (at this time, the other explanatory variables X_c excluding x_k are treated as constants).
  • When the sensitivity direction type t_sd is "decrease", the sensitivity direction is such that the objective variable y decreases as the explanatory variable x_k increases (at this time, the other explanatory variables X_c excluding x_k are treated as constants).
  • When the sensitivity direction type t_sd is "convex downward", with the other explanatory variables X_c excluding x_k held at fixed values, the objective variable y at first decreases as the explanatory variable x_k increases and then begins to increase partway (at this time, the other explanatory variables X_c excluding x_k are treated as constants). For this type, an upper limit value x_ipmax and a lower limit value x_ipmin are specified for the inflection point.
  • When the sensitivity direction type t_sd is "convex upward", with the other explanatory variables X_c excluding x_k held at fixed values, the objective variable y at first increases as the explanatory variable x_k increases and then begins to decrease partway (at this time, the other explanatory variables X_c excluding x_k are treated as constants).
  • When the sensitivity direction type t_sd is "negligible", the sensitivity direction is such that, with the other explanatory variables X_c excluding x_k held at fixed values, the change in the objective variable y caused by a change in the explanatory variable x_k is small enough to be negligible (at this time, the other explanatory variables X_c excluding x_k are treated as constants). This corresponds to the case where the inclination a is sufficiently small in the "increase" or "decrease" sensitivity direction types described above. In this case, an upper limit value a_max and a lower limit value a_min of the inclination a are defined.
  • When the sensitivity direction type t_sd is "unknown", the relationship between the explanatory variable x_k and the objective variable y when the other explanatory variables X_c excluding x_k are held at fixed values is unknown; therefore, no feature amount and no corresponding permissible range are defined, and the sensitivity direction error is always regarded as 0.
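  • One possible in-memory form of such a knowledge table KT is sketched below: each sensitivity direction type maps to the feature it constrains and a permissible range. The field names and numeric bounds are illustrative assumptions; the publication only specifies the six types and the kinds of limits (a_max/a_min, x_ipmax/x_ipmin).

```python
# Hypothetical knowledge table (values are placeholders, not from the publication).
KNOWLEDGE_TABLE = {
    "increase":        {"feature": "inclination a",         "range": {"a_min": 0.0,   "a_max": 10.0}},
    "decrease":        {"feature": "inclination a",         "range": {"a_min": -10.0, "a_max": 0.0}},
    "convex_downward": {"feature": "inflection point x_ip", "range": {"x_ipmin": 0.2, "x_ipmax": 0.8}},
    "convex_upward":   {"feature": "inflection point x_ip", "range": {"x_ipmin": 0.2, "x_ipmax": 0.8}},
    "negligible":      {"feature": "inclination a",         "range": {"a_min": -0.05, "a_max": 0.05}},
    "unknown":         {"feature": None,                    "range": None},  # sensitivity direction error always 0
}
```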
  • The second evaluation value calculation unit 158 calculates the sensitivity direction error e_sd of the prediction model by calculating, for each of the at least one explanatory variable X_k, the sensitivity direction error e_sdk with respect to the objective variable y.
  • FIG. 3 is a block diagram showing the configuration, within the second evaluation value calculation unit 158 of FIG. 1, for calculating the sensitivity direction error e_sdk of a certain explanatory variable x_k with respect to the objective variable y.
  • The second evaluation value calculation unit 158 includes a feature amount calculation unit 158a and a sensitivity direction error calculation unit 158b.
  • The feature amount calculation unit 158a calculates the feature amount based on the sensitivity direction type t_sd selected for the explanatory variable x_k and the data set DS created by the data set creation unit 152. The feature amount calculated here depends on the sensitivity direction type t_sd input to the feature amount calculation unit 158a, and is calculated as follows.
  • Here, x_k,j denotes the j-th of the m components of the explanatory variable x_k in the ICE plot.
  • F(X_k, X_c^(i)) is the vector of the values f(x_k,j, X_c^(i)), and F'(X_k, X_c^(i)) is the vector of the values f'(x_k,j, X_c^(i)), where f' is the derivative of f with respect to x_k.
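  • A hedged sketch of this feature-amount calculation is shown below: the model is evaluated along an ICE-style grid for x_k while the other variables X_c are held fixed, and finite-difference slopes approximate f'(x_k,j, X_c). The grid construction and the finite-difference scheme are assumptions.

```python
import numpy as np

def ice_values_and_slopes(predict, x_fixed: np.ndarray, k: int, grid: np.ndarray):
    """Return f(x_k,j, X_c) along the grid and finite-difference approximations of f'(x_k,j, X_c)."""
    rows = np.tile(x_fixed, (len(grid), 1))
    rows[:, k] = grid                                # vary only x_k; X_c stays constant
    f = np.array([float(predict(r)) for r in rows])  # model outputs along the ICE curve
    f_dash = np.diff(f) / np.diff(grid)              # slope between adjacent grid points
    return f, f_dash
```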
  • The sensitivity direction error calculation unit 158b calculates the sensitivity direction error e_sdk based on the feature amount calculated by the feature amount calculation unit 158a and the permissible range corresponding to that feature amount.
  • When the sensitivity direction type t_sd is "increase", "decrease", or "negligible", the sensitivity direction error e_sdk is obtained from the feature amount A_k corresponding to the sensitivity direction type t_sd and the permissible range (a_max, a_min) corresponding to the feature amount A_k by the following equation:
  • e_sdk = calDev(A_k, a_max, a_min)   (4)
  • Here, calDev is a function that outputs the root-mean-square amount by which the feature amount A_k deviates from the permissible range (a_max, a_min).
  • When the sensitivity direction type t_sd is "convex upward" or "convex downward", the sensitivity direction error e_sdk is obtained from the feature amounts A'_k and X_ipk corresponding to the sensitivity direction type t_sd and the permissible ranges (a'_max, a'_min, x_ipmax, x_ipmin) corresponding to A'_k and X_ipk by the following equation:
  • e_sdk = calDev(A'_k, a'_max, a'_min) + calDev(X_ipk, x_ipmax, x_ipmin)   (5)
  • Here, calDev outputs the root-mean-square amounts by which the feature amounts A'_k and X_ipk deviate from their permissible ranges (a'_max, a'_min) and (x_ipmax, x_ipmin), respectively.
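  • The sketch below gives one hedged reading of calDev and equations (4) and (5): calDev returns the root-mean-square amount by which values fall outside a permissible range (values inside the range contribute zero deviation, which is an interpretation, not stated verbatim in the publication).

```python
import numpy as np

def cal_dev(values, v_max: float, v_min: float) -> float:
    """Root-mean-square deviation of 'values' from the permissible range [v_min, v_max]."""
    v = np.asarray(values, dtype=float)
    dev = np.maximum(v - v_max, 0.0) + np.maximum(v_min - v, 0.0)
    return float(np.sqrt(np.mean(dev ** 2)))

def e_sdk_monotone(A_k, a_max, a_min):
    """Equation (4): 'increase', 'decrease', or 'negligible' types."""
    return cal_dev(A_k, a_max, a_min)

def e_sdk_convex(A_dash_k, a_dash_max, a_dash_min, X_ipk, x_ipmax, x_ipmin):
    """Equation (5): 'convex upward' or 'convex downward' types."""
    return cal_dev(A_dash_k, a_dash_max, a_dash_min) + cal_dev(X_ipk, x_ipmax, x_ipmin)
```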
  • In this way, the second evaluation value calculation unit 158 calculates the sensitivity direction error e_sdk for each explanatory variable x_k and, by synthesizing these, calculates the second evaluation value R2 indicating the sensitivity direction error e_sd of the prediction model M.
  • The analysis unit 160 calculates the total error e of the prediction model M based on the first evaluation value R1 calculated by the first evaluation value calculation unit 156 and the second evaluation value R2 calculated by the second evaluation value calculation unit 158.
  • The total error e is calculated as the linear sum of the prediction error e_p calculated as the first evaluation value R1 and the sensitivity direction error e_sd calculated as the second evaluation value R2.
  • The prediction model update unit 162 updates the prediction model M stored in the prediction model storage unit 110 based on the total error e calculated by the analysis unit 160. For example, the prediction model update unit 162 updates the prediction model M so that the total error e is minimized. When the prediction model M is constructed as a neural network, for example, the coefficients of the hidden layers between the input layer and the output layer of the neural network are updated so that the total error e is minimized.
  • The prediction model M is learned by executing such an update operation a predetermined number of times.
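  • The following is a minimal sketch, not the publication's implementation, of one such update step for a neural-network prediction model: the prediction error e_p (first evaluation value) and a sensitivity direction error e_sd (second evaluation value, here only for "increase"-type variables) are combined into a total error and minimized by gradient descent. The network architecture, ICE grid size, slope bounds, and the weighting coefficient r_sde are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(9, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
r_sde = 0.1  # assumed weighting coefficient for the sensitivity direction error

def e_sdk_increase(model, X, k, a_min=0.0, a_max=10.0, n=20):
    """Sensitivity direction error for an 'increase'-type variable x_k:
    penalise ICE slopes outside the permissible range [a_min, a_max]."""
    lo, hi = float(X[:, k].min()), float(X[:, k].max())
    grid = torch.linspace(lo, hi, n)
    Xg = X.unsqueeze(1).repeat(1, n, 1)   # (samples, grid points, features)
    Xg[:, :, k] = grid                    # vary only x_k; other variables stay fixed
    y = model(Xg.reshape(-1, X.shape[1])).reshape(X.shape[0], n)
    slopes = (y[:, 1:] - y[:, :-1]) / (grid[1:] - grid[:-1])       # finite-difference ICE slopes
    dev = torch.relu(a_min - slopes) + torch.relu(slopes - a_max)  # deviation from permissible range
    return torch.sqrt((dev ** 2).mean())

def train_step(X, y_teacher, increase_vars):
    optimizer.zero_grad()
    e_p = nn.functional.mse_loss(model(X).squeeze(-1), y_teacher)   # first evaluation value
    e_sd = sum(e_sdk_increase(model, X, k) for k in increase_vars)  # second evaluation value
    total = e_p + r_sde * e_sd                                      # total error, cf. equation (7)
    total.backward()
    optimizer.step()
    return float(total)
```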
  • Such a problem can be effectively solved by updating the prediction model M based on both the first evaluation value R1 and the second evaluation value R2. Because the second evaluation value R2 takes into account the sensitivity direction error e_sd of the prediction model M as described above, considering it together with the first evaluation value R1, which indicates the prediction error e_p, improves the prediction accuracy of the prediction model M even when it is difficult to learn the correct sensitivity, for example when the teacher data TD is insufficient. For example, when the data set DS used for learning is close to the teacher data TD, learning that considers only the first evaluation value R1 may overfit, but this can be suppressed by also considering the second evaluation value R2.
  • FIG. 4 is a flowchart showing the learning method of the prediction model M according to at least one embodiment of the present disclosure for each process.
  • First, the first evaluation value calculation unit 156 calculates the first evaluation value R1 (step S10). As described above, the first evaluation value R1 is calculated by obtaining the prediction error e_p based on the data set DS created by the data set creation unit 152 and the teacher data TD acquired by the teacher data acquisition unit 154.
  • The data set DS created by the data set creation unit 152 may be treated as batch data.
  • The batch data includes at least one data set DS and may include a plurality of data sets DS.
  • Next, the second evaluation value calculation unit 158 calculates the second evaluation value R2 (step S20).
  • The method of calculating the second evaluation value R2 will be described in more detail with reference to FIG. 5.
  • FIG. 5 is a flowchart showing the detailed process of step S20 of FIG.
  • First, the index k for designating an arbitrary one of the plurality of explanatory variables X_k is set to the initial value "1" (step S21).
  • Next, the sensitivity direction type t_sd is selected for the explanatory variable x_k (step S22).
  • The sensitivity direction type t_sd is selected based on the knowledge of the user, for example by determining which sensitivity direction type t_sd the explanatory variable x_k has with respect to the objective variable y when the other explanatory variables X_c excluding x_k are treated as constants.
  • Next, the feature amount calculation unit 158a calculates the feature amounts A_k, A'_k, and X_ipk (step S23).
  • The sensitivity direction error calculation unit 158b then calculates the sensitivity direction error e_sdk corresponding to the explanatory variable x_k using the feature amounts A_k, A'_k, and X_ipk calculated in step S23 (step S24).
  • The sensitivity direction error e_sdk in step S24 is calculated according to the above equation (4) or (5), depending on the sensitivity direction type t_sd selected in step S22.
  • Subsequently, the index k is incremented (step S25), and it is then determined whether or not the index k is equal to or greater than the upper limit value k_max (step S26).
  • If not, the process returns to step S22, and the sensitivity direction error e_sdk is calculated in the same way for the next explanatory variable x_k. This calculation loop is repeated until the sensitivity direction error e_sdk has been calculated for all the explanatory variables.
  • When the sensitivity direction error e_sdk has been calculated for all the explanatory variables, the analysis unit 160 synthesizes them to calculate the sensitivity direction error e_sd of the prediction model M (step S27).
  • The sensitivity direction error e_sd is obtained as the linear sum of the sensitivity direction errors e_sdk calculated for the respective explanatory variables, by the following equation:
  • e_sd = e_sd1 + e_sd2 + …   (6)
  • In equation (6), a predetermined weighting coefficient may be set for the sensitivity direction error e_sdk corresponding to each explanatory variable.
  • Then, the prediction model update unit 162 updates the prediction model M based on the first evaluation value R1 calculated in step S10 and the second evaluation value R2 calculated in step S20 (step S30).
  • The prediction model M is updated in step S30 so that the total error e obtained by the following equation is minimized:
  • e = e_p + r_sde × e_sd   (7)
  • In equation (7), the coefficient r_sde is an arbitrary constant.
  • In this way, the control end of the plant can be appropriately controlled by predicting the control parameters using the prediction model M.
  • FIG. 6 is a scatter matrix of the data set DS used for constructing the prediction model M by the learning device 150.
  • This scatter matrix is composed of 10 data sets DS, each including nine explanatory variables x1 to x9 and an objective variable y.
  • Each component of the scatter matrix is defined in steps of 0.01 between 0.0 and 1.0 so as to satisfy the following equation, and is formed by adding a small noise component.
  • The sensitivity direction type t_sd is selected for each of the nine explanatory variables x1 to x9 defined in the scatter matrix prepared in this way.
  • FIG. 7 shows an example of the sensitivity direction type t_sd selected for each of the explanatory variables x1 to x9.
  • In this example, the sensitivity direction type t_sd "convex upward" is selected for the explanatory variables x1 and x6, "convex downward" for x2 and x7, "decrease" for x3 and x8, "increase" for x4 and x9, and "negligible" for x5.
  • FIG. 7 also shows the permissible range corresponding to the sensitivity direction type t_sd selected for each of the explanatory variables x1 to x9.
  • FIG. 8 is a PD plot and an ICE plot obtained for each explanatory variable x1 to x9.
  • In FIG. 8, (a) and (f) show the sensitivity directions of the explanatory variables x1 and x6 with respect to the objective variable y; it was confirmed that both the PD plot and the ICE plot reflect the "convex upward" sensitivity direction type t_sd selected for x1 and x6 in FIG. 7.
  • Likewise, (b) and (g) show the sensitivity directions of x2 and x7 with respect to the objective variable y, and both the PD plot and the ICE plot were confirmed to reflect the "convex downward" sensitivity direction type t_sd selected for x2 and x7 in FIG. 7. Further, (c) and (h) show the sensitivity directions of x3 and x8 with respect to the objective variable y, and both the PD plot and the ICE plot reflect the "decrease" sensitivity direction type t_sd selected for x3 and x8 in FIG. 7.
  • FIG. 9 shows the verification results for the convergence of the prediction error e_p and the sensitivity direction error e_sd with respect to the number of epochs.
  • The sensitivity direction errors e_sdk corresponding to the explanatory variables x1 to x9 are shown individually, and each converges when the number of epochs is about 20.
  • The prediction error e_p of the prediction model M and the sensitivity direction error e_sd (the linear sum of the sensitivity direction errors e_sdk corresponding to x1 to x9) are also shown. The sensitivity direction error e_sd converges when the number of epochs is about 20, while the prediction error e_p converges when the number of epochs is about 10, i.e., it converges faster than the sensitivity direction error e_sd.
  • In the above description, the case where the prediction error e_p and the sensitivity direction error e_sd are calculated with the same frequency has been described, but the calculation frequency of the prediction error e_p and that of the sensitivity direction error e_sd may differ.
  • In the verification results shown in FIG. 9, the prediction error e_p converged in fewer epochs than the sensitivity direction error e_sd; therefore, to the extent that the convergence of the prediction error e_p does not become too slow, the calculation frequency of the prediction error e_p may be made lower than that of the sensitivity direction error. As a result, the number of operations can be reduced within a range in which the calculation accuracy of the prediction error e_p is still ensured.
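  • As a sketch of this idea only (the interval and the caching scheme are assumptions), the prediction error could be refreshed only every few epochs once it has converged, while the sensitivity direction error is still evaluated every epoch:

```python
def train(n_epochs, compute_e_p, compute_e_sd, update_model, r_sde=0.1, p_every=5):
    """Update loop with different calculation frequencies for e_p and e_sd (hypothetical helpers)."""
    e_p_cached = compute_e_p()
    for epoch in range(n_epochs):
        if epoch % p_every == 0:
            e_p_cached = compute_e_p()            # recomputed less frequently
        e_sd = compute_e_sd()                     # recomputed every epoch
        update_model(e_p_cached + r_sde * e_sd)   # total error, cf. equation (7)
```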
  • FIG. 10A shows the CCPP data set, which gives the relationship between AT (horizontal axis), representing the outside air temperature, and PE (vertical axis), representing the generated power; FIG. 10B shows the prediction model M obtained from the CCPP data set of FIG. 10A.
  • FIG. 11A shows the bike sharing data set, which gives the relationship between atemp (horizontal axis), representing the sensible temperature, and cnt (vertical axis), representing the number of rental bicycles; FIG. 11B shows the prediction model M obtained from the bike sharing data set of FIG. 11A.
  • In FIGS. 10B and 11B, (f) shows the prediction model M obtained by the above-described learning device 150, and (a) to (e) show learning models obtained by machine learning with other methods.
  • (a) is a learning model obtained by machine learning using Random Forest (RF).
  • (b) is a learning model obtained by machine learning using LassoCV.
  • (c) is a learning model obtained by machine learning using SVR (Support Vector Regression).
  • (d) is a learning model obtained by machine learning using TPOT (Tree-based Pipeline Optimization Tool).
  • (e) is a learning model obtained by machine learning using a neural network (NN).
  • FIG. 12 is a table comparing the evaluation results of the prediction model M obtained by each method of FIGS. 10B and 11B.
  • FIG. 12 shows RMSE (Root Mean Square Error) and R2 (Coefficient of Determination) as the evaluation items; the prediction model M obtained by the learning device 150 according to the present embodiment gave better results for all evaluation items than the prediction models obtained by the other learning methods. This indicates that the learning method of the present embodiment yields a prediction model M that is more reliable than conventional ones.
  • RMSE: Root Mean Square Error
  • R2: Coefficient of Determination
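  • For reference, the two evaluation items of FIG. 12 can be computed as follows; the use of scikit-learn is an assumption made for illustration, as the publication only names the metrics.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """RMSE and coefficient of determination R2 for a set of predictions."""
    return {
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "R2": float(r2_score(y_true, y_pred)),
    }
```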
  • The prediction model learning method according to at least one aspect of the present disclosure is a method of learning a prediction model (for example, the prediction model M of the above embodiment) for predicting an objective variable (for example, the objective variable y of the above embodiment) from at least one explanatory variable (for example, the explanatory variable X_k of the above embodiment), and comprises:
  • a first evaluation value calculation step (for example, step S10 of the above embodiment) of calculating a first evaluation value (for example, the first evaluation value R1 of the above embodiment) indicating a prediction error of the prediction model (for example, the prediction error e_p of the above embodiment);
  • a second evaluation value calculation step (for example, step S20 of the above embodiment) of calculating a second evaluation value (for example, the second evaluation value R2 of the above embodiment) indicating the degree of agreement between a feature amount related to the sensitivity direction of at least a part of the at least one explanatory variable with respect to the objective variable and a permissible range (for example, the permissible ranges a_max, a_min, a'_max, a'_min, x_ipmax, x_ipmin of the above embodiment) set for the feature amount based on known information; and
  • a prediction model update step (for example, step S30 of the above embodiment) of performing learning by updating the prediction model based on the first evaluation value and the second evaluation value.
  • According to the above method, the prediction model is updated by machine learning based on the second evaluation value in addition to the first evaluation value indicating the prediction error of the prediction model.
  • The second evaluation value is an index showing the degree of agreement between the feature amount related to the sensitivity direction of the explanatory variable with respect to the objective variable and the permissible range set for the feature amount based on known information.
  • That is, a sensitivity direction error (for example, the sensitivity direction error e_sd of the above embodiment) with respect to the permissible range of the feature amount is calculated as the second evaluation value.
  • The second evaluation value is thus a so-called sensitivity direction error (or sensitivity direction accuracy): an index for quantitatively evaluating whether or not the sensitivity direction of the prediction model deviates from the permissible sensitivity-direction range based on known information such as knowledge possessed by the user.
  • The known information includes a plurality of pre-classified sensitivity direction types (for example, the sensitivity direction type t_sd of the above embodiment) and the permissible range of the feature amount defined for each of the plurality of sensitivity direction types.
  • That is, a plurality of pre-classified sensitivity direction types are set for the sensitivity direction of the objective variable with respect to the explanatory variable.
  • The known information used in calculating the second evaluation value includes these sensitivity direction types and the permissible range of the feature amount specified for each sensitivity direction type.
  • The second evaluation value is calculated by synthesizing the degree of agreement for each explanatory variable using a preset weighting coefficient.
  • In this case, the degree of agreement with the permissible range of the sensitivity-direction feature amount is calculated for each explanatory variable, and by synthesizing these the second evaluation value is calculated as an index that comprehensively evaluates the sensitivity of each explanatory variable to the objective variable.
  • In the prediction model update step, the prediction model is updated so that the linear sum of the first evaluation value and the second evaluation value is minimized.
  • According to the above method, the prediction model is updated so that the linear sum of the first evaluation value and the second evaluation value is minimized; machine learning is thus performed so as to minimize both the prediction error (in other words, to improve the prediction accuracy) and the deviation from the permissible range of the sensitivity-direction feature amount, so that a prediction model with excellent reliability can be constructed.
  • The first evaluation value and the second evaluation value may each be calculated using batch data including a plurality of data sets (for example, the data set DS of the above embodiment) having the at least one explanatory variable and the objective variable.
  • In this case, the calculation load can be effectively reduced by calculating the first evaluation value and the second evaluation value using batch data that includes a plurality of data sets.
  • The calculation of the first evaluation value and the calculation of the second evaluation value may be performed at mutually different frequencies.
  • The prediction model learning device according to at least one aspect of the present disclosure is a learning device for a prediction model (for example, the prediction model M of the above embodiment) for predicting an objective variable (for example, the objective variable y of the above embodiment) from at least one explanatory variable (for example, the explanatory variable X_k of the above embodiment), and comprises:
  • a first evaluation value calculation unit (for example, the first evaluation value calculation unit 156 of the above embodiment) that calculates a first evaluation value (for example, the first evaluation value R1 of the above embodiment) indicating a prediction error of the prediction model (for example, the prediction error e_p of the above embodiment);
  • a second evaluation value calculation unit (for example, the second evaluation value calculation unit 158 of the above embodiment) that calculates a second evaluation value (for example, the second evaluation value R2 of the above embodiment) indicating the degree of agreement between a feature amount (for example, the feature amounts A_k, A'_k, X_ipk of the above embodiment) related to the sensitivity direction of at least a part of the at least one explanatory variable with respect to the objective variable and a permissible range set for the feature amount based on known information (for example, the knowledge table KT of the above embodiment); and
  • a prediction model update unit (for example, the prediction model update unit 162 of the above embodiment) that performs learning by updating the prediction model based on the first evaluation value and the second evaluation value.
  • According to the above configuration, the prediction model is updated by machine learning based on the second evaluation value in addition to the first evaluation value indicating the prediction error of the prediction model.
  • The second evaluation value is an index showing the degree of agreement between the feature amount related to the sensitivity direction of the explanatory variable with respect to the objective variable and the permissible range set for the feature amount based on known information.
  • The plant control system according to at least one aspect of the present disclosure comprises: the prediction model learning device according to the aspect (8) above; a prediction unit (for example, the prediction unit 130 of the above embodiment) that predicts the objective variable relating to the performance of the plant by inputting the at least one explanatory variable relating to the control parameters of the plant into the prediction model updated by the prediction model update unit; and a control unit (for example, the control unit 145 of the above embodiment) that controls the plant based on the objective variable predicted by the prediction unit.
  • According to the above configuration, the objective variable, which is a parameter related to the performance of the plant, is predicted using the prediction model updated by the learning method described above.
  • The control unit can then optimize the plant performance by performing plant control based on the objective variable predicted in this way.
  • 100: Plant control system; 110: Prediction model storage unit; 120: Explanatory variable acquisition unit; 130: Prediction unit; 140: Optimum value calculation unit; 145: Control unit; 150: Learning device; 152: Data set creation unit; 156: First evaluation value calculation unit; 158: Second evaluation value calculation unit; 158a: Feature amount calculation unit; 158b: Sensitivity direction error calculation unit; 159: Knowledge table storage unit; 160: Analysis unit; 162: Prediction model update unit; DS: Data set; M: Prediction model; R1: First evaluation value; R2: Second evaluation value; TD: Teacher data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Feedback Control In General (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

A prediction model for predicting an objective variable from at least one explanatory variable is updated on the basis of a first evaluation value and a second evaluation value. The first evaluation value is calculated as an index indicating a prediction error of the prediction model. The second evaluation value is calculated as an index indicating the degree of agreement between a feature related to the sensitivity direction of at least a subset of the explanatory variable with respect to the objective variable and a permissible range set for the feature on the basis of known information.
PCT/JP2021/004804 2020-06-30 2021-02-09 Prediction model learning method, prediction model learning device, and plant control system WO2022004039A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-113248 2020-06-30
JP2020113248A JP6831030B1 (ja) 2020-06-30 2020-06-30 Prediction model learning method, prediction model learning device, and plant control system

Publications (1)

Publication Number Publication Date
WO2022004039A1 true WO2022004039A1 (fr) 2022-01-06

Family

ID=74562408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/004804 WO2022004039A1 (fr) 2020-06-30 2021-02-09 Procédé d'entraînement de modèle prédictif, dispositif d'entraînement de modèle prédictif et système de commande d'installation

Country Status (2)

Country Link
JP (1) JP6831030B1 (fr)
WO (1) WO2022004039A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102485528B1 (ko) * 2021-11-02 2023-01-06 주식회사 에이젠글로벌 Method for evaluating the value of financial models and financial data for financial services, and apparatus for performing such a method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06243273A (ja) * 1993-02-18 1994-09-02 Nippon Steel Corp Neural network learning method
JP2019032649A (ja) * 2017-08-07 2019-02-28 ファナック株式会社 Control device and machine learning device
JP2019197257A (ja) * 2018-05-07 2019-11-14 株式会社Nttドコモ Information processing device
JP2020057165A (ja) * 2018-10-01 2020-04-09 株式会社椿本チエイン Abnormality determination device, signal feature amount predictor, abnormality determination method, learning model generation method, and learning model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013011715A1 (de) * 2013-07-15 2015-01-15 Fresenius Medical Care Deutschland Gmbh Method for controlling a blood treatment apparatus, and devices
JP7065685B2 (ja) * 2018-05-07 2022-05-12 株式会社日立製作所 Data shortage presentation system and data shortage presentation method

Also Published As

Publication number Publication date
JP6831030B1 (ja) 2021-02-17
JP2022011858A (ja) 2022-01-17

Similar Documents

Publication Publication Date Title
JP4393586B2 (ja) 多成分系材料の設計方法、最適化解析装置及び多成分系材料の最適化解析プログラムを記録した記録媒体
Arumugam et al. A novel and effective particle swarm optimization like algorithm with extrapolation technique
Ryberg et al. Metamodel-based multidisciplinary design optimization for automotive applications
Golmohammadi et al. Supplier selection based on a neural network model using genetic algorithm
Abbaszadeh Shahri et al. A modified firefly algorithm applying on multi-objective radial-based function for blasting
CN117610435A (zh) 土木建筑施工混合材料自动配比方法及***
CN110837939A (zh) 一种电网多目标项目筛选方法和***
WO2022004039A1 (fr) Procédé d'entraînement de modèle prédictif, dispositif d'entraînement de modèle prédictif et système de commande d'installation
Salmasnia et al. A robust intelligent framework for multiple response statistical optimization problems based on artificial neural network and Taguchi method
JP7481902B2 (ja) 管理計算機、管理プログラム、及び管理方法
Mahmoodi et al. A developed stock price forecasting model using support vector machine combined with metaheuristic algorithms
CN117875724B (zh) 一种基于云计算的采购风险管控方法及***
KR102054500B1 (ko) 설계 도면 제공 방법
CN116502455A (zh) 一种激光选区熔化技术的工艺参数确定方法及***
Gosavi et al. Maintenance optimization in a digital twin for Industry 4.0
Haque et al. Parameter and Hyperparameter Optimisation of Deep Neural Network Model for Personalised Predictions of Asthma
Chen et al. Prediction intervals for industrial data with incomplete input using kernel-based dynamic Bayesian networks
TW202046213A (zh) 計劃制定系統及其方法
Bourdache et al. Active preference elicitation by bayesian updating on optimality polyhedra
Wang Sensitivity analysis and evolutionary optimization for building design
Thedy et al. Reliability-based structural optimization using adaptive neural network multisphere importance sampling
Márquez-Grajales et al. A Surrogate-Assisted Symbolic Time-Series Discretization Using Multi-breakpoints and a Multi-objective Evolutionary Algorithm
US20240176311A1 (en) Method and apparatus for performing optimal control based on dynamic model
US20240241486A1 (en) Method and apparatus for performing optimal control
WO2024128089A1 (fr) Dispositif de traitement d'informations, système de commande, procédé et programme de calcul de valeur d'indice

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21833949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21833949

Country of ref document: EP

Kind code of ref document: A1