CN111625995B - Online time-space modeling method integrating forgetting mechanism and double ultralimit learning machines - Google Patents


Info

Publication number: CN111625995B (application CN202010450196.XA; earlier published as CN111625995A)
Authority: CN (China)
Prior art keywords: learning machine, model, ultralimit, online
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 徐康康 (Xu Kangkang), 杨海东 (Yang Haidong), 印四华 (Yin Sihua), 朱成就 (Zhu Chengjiu)
Current and original assignee: Guangdong University of Technology (the listed assignee may be inaccurate)
Application filed by Guangdong University of Technology


Classifications

    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods


Abstract

The invention discloses an online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines (i.e. extreme learning machines, ELMs). The method comprises the following steps: decoupling the space-time output variable into a time coefficient sequence and a spatial sequence according to the nonlinear partial differential equation of the curing heat process of a curing oven, a nonlinear distributed parameter system; establishing a first ultralimit learning machine model and a second ultralimit learning machine model based on the time coefficient sequence; updating the model parameters of the first and second ultralimit learning machine models with an online sequential learning algorithm integrating a forgetting mechanism; and integrating the spatial sequence with the updated first and second ultralimit learning machine models to reconstruct an online space-time model. The method addresses the problems that existing modeling methods for nonlinear distributed parameter systems such as curing ovens are mostly developed in an offline environment, cannot reflect the timeliness of online sequential training data, and have low computational efficiency, so that the online prediction of the curing-oven temperature better matches the dynamic change of the actual temperature.

Description

Online time-space modeling method integrating forgetting mechanism and double ultralimit learning machines
Technical Field
The invention relates to the technical field of space-time modeling of nonlinear distributed parameter systems, and in particular to an online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines.
Background
Many industrial processes, such as thermal processes, fluid flows and chemical engineering, depend not only on time but also on space. These systems are typically nonlinear distributed parameter systems (DPSs). Unlike lumped parameter systems (LPSs), they are usually described by one partial differential equation (PDE) or a set of PDEs with corresponding initial and boundary conditions, and the inputs, outputs and even the state parameters of nonlinear DPSs are coupled in space and time, giving such systems infinite-dimensional characteristics. Modeling such systems is therefore very difficult.
The curing oven is a key piece of equipment in the semiconductor back-end packaging industry for providing the temperature distribution required by the curing process, and it belongs to the class of nonlinear distributed parameter systems. Because the boundary conditions are very complex and the interior of the curing process is affected by unknown disturbances, an accurate partial differential equation description of the curing process is difficult to obtain. Yet the temperature distribution of the curing oven directly affects the curing quality of the chip, so modeling nonlinear distributed parameter systems such as the curing oven is of great significance for the online prediction of the curing-oven temperature.
Chinese patent publication No. CN109145346A, published on January 4, 2019, proposes a curing thermal process space-time modeling method based on dual least squares support vector machines for online prediction and control of the thermal process of a chip curing oven. It relies on space-time modeling based on principal component analysis (PCA), but a PCA-based space-time model is developed in an offline environment, i.e., all training data are collected and prepared before modeling, so the method cannot reflect the timeliness of online sequential training data. In the actual chip curing process, the system usually exhibits large-scale time-varying characteristics, which requires the space-time model to be updated online with new samples to maintain satisfactory performance. Although conventional space-time modeling methods achieve satisfactory modeling performance for the thermal process of the curing oven, some problems remain in the online implementation of the model, mainly in the following aspects:
1) Online updating: space-time models based on principal component analysis are developed in an offline environment, which often leads to model drift for large-scale time-varying systems.
2) Computational efficiency: for large-scale time-varying systems, there are large differences between old and new samples. If the space-time model only keeps adding new samples to the training sample set without processing the old ones, its ability to learn from the new training samples is limited, making it difficult to describe the characteristics of the time-varying system accurately. In addition, the growing number of samples increases the computational burden on the system and occupies a large amount of memory space.
Disclosure of Invention
In order to overcome the defects that existing modeling methods for nonlinear distributed parameter systems such as curing ovens are mostly developed in an offline environment, cannot reflect the timeliness of online sequential training data, have poor online model-updating capability and have low computational efficiency, the invention provides an online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines. The method takes the time-varying characteristics of the system into account; the established model has strong online updating capability and high accuracy, so that the online prediction of the curing-oven temperature better matches the dynamic change of the actual temperature.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
An online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines at least comprises the following steps:
s1, decoupling a space-time output variable into a time coefficient sequence and a space sequence according to a nonlinear partial differential equation of a curing heat process of a curing oven of a nonlinear distributed parameter system;
s2, establishing a first ultralimit learning machine model and a second ultralimit learning machine model based on the time coefficient sequence;
s3, updating model parameters of the first ultralimit learning machine model and the second ultralimit learning machine model by utilizing an online sequential learning algorithm of an integrated forgetting mechanism;
and S4, integrating the spatial sequence with the updated first and second ultralimit learning machine models to reconstruct an online space-time model.
Preferably, the nonlinear partial differential equation of the curing heat process of the curing oven, a nonlinear distributed parameter system, in step S1 is:

ρc ∂T(S,t)/∂t = k∇²T(S,t) + f(T(S,t)) + Q(S,t)

The Neumann boundary conditions and the initial condition are:

∂T(S,t)/∂n = 0 for S on the boundary, T(S,0) = T_0(S,0)

where T(S,t) represents the space-time output variable at time t and position S = (x,y,z) ∈ (0,S_0), ρ represents the density, k represents the thermal conductivity, c represents the specific heat coefficient, f(T(S,t)) is the unknown nonlinear thermal dynamics associated with the space-time output variable T, Q(S,t) represents the heat source, ∇² represents the Laplace space operator, and T_0(S,0) represents the initial space-time output variable;
The space-time output variable T(S,t) is decoupled into a time coefficient sequence and a spatial sequence with the expression:

T(S,t) ≈ ∑_{i=1}^{n} φ_i(S) a_i(t)

where a_i(t) denotes the ith time coefficient, the time coefficient sequence being {a_i(t)}_{i=1}^{n}, and φ_i(S) denotes the ith spatial basis function, the spatial sequence (basis functions, BFs) being {φ_i(S)}_{i=1}^{n}.

The expression for the ith time coefficient is:

a_i(t) = g_i(a_i(t-1)) + h_i(u(t-1))

where g_i(·) and h_i(·) both denote nonlinear functions.
Here, since the nonlinear partial differential equation of the curing heat process of the curing oven of the nonlinear distributed parameter system cannot be used directly for online prediction and control because of its space/time coupling, the space/time separation method is adopted to decouple the space-time output variable. The spatial sequence (BFs) {φ_i(S)}_{i=1}^{n} can be learned by the Karhunen-Loève (KL) method from collected space-time distribution data, and its first n orders can capture the dominant dynamic behavior of the DPS. Many data-based identification methods are available to estimate the low-order temporal model, which lays the foundation for reconstructing the space-time model through space/time synthesis after training.
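The space/time separation described here can be sketched numerically. Below is a minimal illustration (not the patent's implementation) of the Karhunen-Loève/POD idea via an SVD of snapshot data; the synthetic temperature field and all names are illustrative assumptions:

```python
import numpy as np

# Synthetic space-time snapshots: rows = spatial points, cols = time steps.
# (A stand-in for measured curing-oven temperatures, not the patent's data.)
rng = np.random.default_rng(0)
s = np.linspace(0.0, 1.0, 50)
t = np.arange(200)
T = (np.outer(np.sin(np.pi * s), np.cos(0.05 * t))
     + 0.3 * np.outer(np.sin(2 * np.pi * s), np.sin(0.02 * t)))

# Karhunen-Loeve (POD) via SVD: the leading left singular vectors play the
# role of the spatial basis functions phi_i(S); the time coefficients are
# recovered by projection, a_i(t) = phi_i^T T(:, t).
U, sing, Vt = np.linalg.svd(T, full_matrices=False)
n = 2                              # model order (first n dominant modes)
phi = U[:, :n]                     # spatial sequence {phi_i(S)}
a = phi.T @ T                      # time coefficient sequence {a_i(t)}

# Low-order reconstruction T ~ sum_i phi_i(S) a_i(t)
T_rec = phi @ a
err = np.linalg.norm(T - T_rec) / np.linalg.norm(T)
```

Because the synthetic field has exactly two modes, the order-2 reconstruction is essentially exact; on real data, n is chosen so the leading modes capture the dominant dynamics.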
Preferably, the expression of the first ultralimit learning machine model in step S2 is:

g(a(t-1)) = ∑_{σ=1}^{N_1} β_σ G_1(ω_σ a(t-1) + η_σ)

and the expression of the second ultralimit learning machine model is:

h(u(t-1)) = ∑_{δ=1}^{N_2} β′_δ G_2(ω′_δ u(t-1) + η′_δ)

where β_σ represents the output weight connecting the output node and the σth hidden node in the first ultralimit learning machine model, and β′_δ the output weight connecting the output node and the δth hidden node in the second model; ω_σ represents the input weight of the σth hidden node in the first model, and ω′_δ the input weight of the δth hidden node in the second model; σ indexes the hidden nodes of the first model and δ those of the second; N_1 and N_2 represent the numbers of hidden nodes in the first and second models; G_1 and G_2 represent the activation functions of the hidden layers in the first and second models; η_σ and η′_δ represent the thresholds of the hidden nodes in the first and second models; u(t) denotes the input on which the second ultralimit learning machine model is built; and the two nonlinear functions g_i(·) and h_i(·) satisfy g_i(·) = g(·), h_i(·) = h(·).
Preferably, based on the expressions of the first and second ultralimit learning machine models, the expression of the time coefficient a(t) is further written as:

a(t) = ∑_{σ=1}^{N_1} β_σ G_1(ω_σ a(t-1) + η_σ) + ∑_{δ=1}^{N_2} β′_δ G_2(ω′_δ u(t-1) + η′_δ)

Here the input weights ω_σ and ω′_δ and the hidden-node thresholds η_σ and η′_δ of the two models are all generated randomly, independently of the training data and of each other, while the output weights β_σ and β′_δ connecting the output node and the hidden nodes are determined from the input-output data. This further expression of the time coefficient a(t) is the mathematical description of the dual ultralimit learning machine designed by coupling two nonlinear structures; once generated, the random ω_σ, ω′_δ, η_σ and η′_δ remain fixed during subsequent learning.
It is further expressed as: a(t) = h^T(t)θ

where h(t) represents the hidden-layer vector of the first and second ultralimit learning machine models associated with the input-output data:

h(t) = [G_1(ω_1 a(t-1) + η_1), …, G_1(ω_{N_1} a(t-1) + η_{N_1}), G_2(ω′_1 u(t-1) + η′_1), …, G_2(ω′_{N_2} u(t-1) + η′_{N_2})]^T

and θ represents the unknown parameter vector of the first and second ultralimit learning machine models to be identified:

θ = [β_1, …, β_{N_1}, β′_1, …, β′_{N_2}]^T
The matrix form of the time coefficients a(t) is:

A = Hθ

where A = [a(2), a(3), …, a(L)]^T represents the output vector form of the time coefficient a(t), L represents the data length, and H = [h^T(2), …, h^T(L)]^T represents the regression matrix; then

θ̂ = H†A

where H† = (H^T H)^{-1}H^T is the Moore-Penrose generalized inverse of the matrix H, and θ̂ represents the unknown parameter vector estimated from A and H. Since the output vector form A and the regression matrix H of the time coefficients a(t) are known, only the case in which H^T H is nonsingular is considered here, for which H† = (H^T H)^{-1}H^T holds.
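The batch identification of the output weights θ from A = Hθ via the Moore-Penrose generalized inverse can be sketched as follows. This is a hedged toy example, not the patent's code: the scalar coefficient dynamics, the tanh activations and all dimensions are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N1, N2 = 300, 10, 10           # samples, hidden nodes of the two ELMs

# One scalar time coefficient a(t) driven by an input u(t) (toy stand-in
# for a(t) = g(a(t-1)) + h(u(t-1))).
u = np.sin(0.1 * np.arange(L))
a = np.zeros(L)
for k in range(1, L):
    a[k] = 0.8 * np.tanh(a[k - 1]) + 0.5 * np.tanh(u[k - 1])

# Random, fixed hidden-layer parameters of the two models.
w1, b1 = rng.normal(size=N1), rng.normal(size=N1)   # first model (past output)
w2, b2 = rng.normal(size=N2), rng.normal(size=N2)   # second model (input)

def h_vec(a_prev, u_prev):
    """Joint hidden-layer vector h(t) of the dual structure."""
    return np.concatenate([np.tanh(w1 * a_prev + b1),
                           np.tanh(w2 * u_prev + b2)])

# Regression matrix H (rows h^T(t)) and target vector A.
H = np.stack([h_vec(a[k - 1], u[k - 1]) for k in range(1, L)])
A = a[1:]

# Output weights from the Moore-Penrose pseudoinverse: theta = H+ A.
theta = np.linalg.pinv(H) @ A
pred = H @ theta
rmse = np.sqrt(np.mean((pred - A) ** 2))
```

Only the output weights are solved for; the hidden-layer parameters stay at their random values, which is what makes the batch problem linear.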
Preferably, let the initial training set block be {z(t), a(t)}_{t=2}^{L_0}, and let the initial model parameters of the first and second ultralimit learning machines be θ_0, where the subscript 0 denotes the initial vector or matrix associated with the initial training set block. Suppose the input/output data {z(L_0+1), a(L_0+1)} arrive; then:

H_1 = [H_0^T, h(L_0+1)]^T, A_1 = [A_0^T, a(L_0+1)]^T

Using the MP generalized inverse matrix, the updated parameter θ_1 is written as:

θ_1 = P_1 H_1^T A_1

where P_1 = (H_1^T H_1)^{-1}. Expressing θ_1 in terms of θ_0, P_1, h(L_0+1) and a(L_0+1) gives:

θ_1 = θ_0 + P_1 h(L_0+1)(a(L_0+1) - h^T(L_0+1)θ_0)

where P_0 = (H_0^T H_0)^{-1}. By the Woodbury formula,

P_1 = ((P_0)^{-1} + h(L_0+1)h^T(L_0+1))^{-1}
    = P_0 - P_0 h(L_0+1)(I + h^T(L_0+1)P_0 h(L_0+1))^{-1} h^T(L_0+1)P_0

When the (m+1)th data {z(L_0+m+1), a(L_0+m+1)} arrive,

P_{m+1} = P_m - P_m h(L_0+m+1)(I + h^T(L_0+m+1)P_m h(L_0+m+1))^{-1} h^T(L_0+m+1)P_m

θ_{m+1} = θ_m + P_{m+1} h(L_0+m+1)(a(L_0+m+1) - h^T(L_0+m+1)θ_m)

That is, the model parameter θ_{m+1} is updated using only the known parameter θ_m and the newly arrived data {z(L_0+m+1), a(L_0+m+1)}.
Here, online data usually arrive one by one or block by block. Because the model structure matches that of a general thermal system, the established first and second ultralimit learning machine models can simulate the space-time dynamic characteristics of the DPSs well. However, for a large time-varying system, a model developed in an offline environment should be updated online to maintain satisfactory model performance. The conventional way to realize online updating is to combine the existing data with the new data and retrain the model from scratch, which imposes a great computational burden in practical applications. Online updating of the space-time model therefore requires an online sequential learning algorithm that uses only the learned model parameters and the new data.
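The recursive least-squares update of this section (fold each new sample in with the Woodbury identity instead of retraining from scratch) can be sketched numerically. The data are synthetic and the feature matrix is generic, standing in for the hidden-layer regression matrix H; the sketch checks that the recursion reproduces the batch fit:

```python
import numpy as np

rng = np.random.default_rng(2)
d, L0, L = 8, 40, 120             # feature dim, initial block size, total samples
H_all = rng.normal(size=(L, d))   # stand-in for hidden-layer rows h^T(t)
theta_true = rng.normal(size=d)
A_all = H_all @ theta_true + 0.01 * rng.normal(size=L)

# Initial batch solution on the first L0 samples.
H0, A0 = H_all[:L0], A_all[:L0]
P = np.linalg.inv(H0.T @ H0)      # P_0
theta = P @ H0.T @ A0             # theta_0

# Sequential updates: each new sample is folded in via the Woodbury
# identity, touching only theta_m, P_m and the new data.
for k in range(L0, L):
    h, a_new = H_all[k], A_all[k]
    Ph = P @ h
    P = P - np.outer(Ph, Ph) / (1.0 + h @ Ph)          # P_{m+1}
    theta = theta + P @ h * (a_new - h @ theta)        # theta_{m+1}

# The recursion is algebraically equivalent to batch least squares on all L
# samples, so the two solutions coincide up to round-off.
theta_batch = np.linalg.pinv(H_all) @ A_all
gap = np.max(np.abs(theta - theta_batch))
```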
Preferably, suppose the new input-output data {z(L_0+1), a(L_0+1)} arrive and the data {z(2), a(2)} are the oldest with respect to the new input-output data; then:

H_1 = [h(3), …, h(L_0+1)]^T, A_1 = [a(3), …, a(L_0+1)]^T

Based on the MP generalized inverse matrix and the forgetting mechanism, the parameter θ_1 is:

θ_1 = P_1 H_1^T A_1

where P_1 = (H_1^T H_1)^{-1}. Further, each update first incorporates the newly arrived sample as in the online sequential recursion and then discards the oldest one. When the (m+1)th data {z(L_0+m+1), a(L_0+m+1)} arrive, the oldest data {z(m+2), a(m+2)} are discarded; denoting by P̃_m and θ̃_m the intermediate quantities after incorporating the new sample, the downdate is:

P_{m+1} = P̃_m + P̃_m h(m+2)(I - h^T(m+2)P̃_m h(m+2))^{-1} h^T(m+2)P̃_m

θ_{m+1} = θ̃_m - P_{m+1} h(m+2)(a(m+2) - h^T(m+2)θ̃_m)
here, for a large-scale time-varying system, the training samples are usually time-efficient, that is, the training samples have a certain period of validity, and the learning process is continuous, and since the samples are continuously added, the number of the trained samples in the whole system will be continuously increased, which will increase the dimension of the regression matrix H, thereby increasing the computational burden of the system and occupying a large amount of memory space. Therefore, in the online sequential learning process of the DPSs, the forgetting mechanism is embedded into the online learning process, the learning effect can be improved by discarding outdated training data, and therefore the adverse effect of the outdated training data on subsequent learning is reduced, and the embedding of the forgetting mechanism can ensure that online modeling has the advantages of improving the calculation work efficiency, saving a large amount of calculation work time and being free of storing outdated data.
Preferably, in step S3, the process of updating the model parameters of the first and second ultralimit learning machine models with the online sequential learning algorithm integrating the forgetting mechanism is as follows:

S301, initialization: set the training-data counter m to zero; select the activation function G_1 of the hidden layer in the first ultralimit learning machine model and the activation function G_2 of the hidden layer in the second ultralimit learning machine model; select the number N_1 of hidden nodes in the first model and the number N_2 of hidden nodes in the second model; randomly generate the input weights ω_σ of the first model, the input weights ω′_δ of the second model, the thresholds η_σ of the hidden nodes in the first model and the thresholds η′_δ of the hidden nodes in the second model.

Calculate the initial hidden-layer output matrix of the first and second ultralimit learning machine models:

H_0 = [h^T(2), …, h^T(L_0)]^T

and the initial unknown model parameters:

θ_0 = P_0 H_0^T A_0

where the initial unknown model parameters are the initial output weights of the model, P_0 = (H_0^T H_0)^{-1} and A_0 = [a(2), a(3), …, a(L_0)]^T.

S302, online sequential learning with the integrated forgetting mechanism: when the (m+1)th data {z(L_0+m+1), a(L_0+m+1)} arrive, compute the (m+1)th hidden-layer activation vector:

h(L_0+m+1) = [G_1(ω_1 a(L_0+m) + η_1), …, G_1(ω_{N_1} a(L_0+m) + η_{N_1}), G_2(ω′_1 u(L_0+m) + η′_1), …, G_2(ω′_{N_2} u(L_0+m) + η′_{N_2})]^T

S303, calculate the output weights of the model: update θ_{m+1} from the known parameter θ_m, the recursion for P_{m+1} and the newly arrived data, while discarding the oldest data.

S304, with the calculated model output weights, evaluate the first and second ultralimit learning machine models according to A = Hθ.

S305, increase the value of m by 1 and return to step S302.
Here, no analytical expression of the DPSs is required in the implementation process, which also makes the method well suited to general industrial applications.
Preferably, when the model parameters of the first and second ultralimit learning machine models are updated with the online sequential learning algorithm integrating the forgetting mechanism, the training data are received one by one or block by block.

Here, since the training data are received one by one or block by block, if the training data arrive block by block the recursive update takes the corresponding block form: the newly arrived block of hidden-layer vectors and time coefficients replaces the single vectors h(L_0+m+1) and a(L_0+m+1) in the formulas for P_{m+1} and θ_{m+1}.
Preferably, the number L_0 of initial training data is not less than the sum of the numbers of hidden neurons of the first and second ultralimit learning machine models (L_0 ≥ N_1 + N_2), which ensures that the rank of the initial regression matrix H_0 equals the sum of the numbers of hidden nodes of the two ultralimit learning machine models.
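This rank condition can be checked numerically: with fewer initial samples than hidden nodes, H_0^T H_0 is rank-deficient and P_0 = (H_0^T H_0)^{-1} in the initialization step does not exist. A small sketch with generic random matrices (illustrative dimensions only):

```python
import numpy as np

rng = np.random.default_rng(5)
N1, N2 = 8, 8
dim = N1 + N2                      # number of columns of H_0

# Fewer initial samples than hidden nodes: H_0^T H_0 cannot reach full
# rank, so its inverse (and hence P_0) is undefined.
H_short = rng.normal(size=(dim - 4, dim))
rank_short = np.linalg.matrix_rank(H_short.T @ H_short)

# With L_0 >= N_1 + N_2 samples, a generic H_0 has full column rank and
# the initialization theta_0 = P_0 H_0^T A_0 is well defined.
H_ok = rng.normal(size=(dim + 4, dim))
rank_ok = np.linalg.matrix_rank(H_ok.T @ H_ok)
```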
Preferably, the expression of the reconstructed online space-time model in step S4 is:

T̂(S,t) = ∑_{i=1}^{n} φ_i(S) â_i(t)

where â_i(t) represents a time coefficient in the time sequence after updating by the integrated forgetting mechanism and dual ultralimit learning machines, φ_i(S) represents the spatial sequence, and T̂(S,t) represents the space-time output variable of the reconstructed online space-time model.
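The space/time synthesis of step S4 is a simple outer-product reconstruction. A minimal sketch, with orthonormalized random basis functions standing in for the learned φ_i(S) and random values standing in for the predicted coefficients â_i(t) (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
ns, n, nt = 40, 3, 100            # spatial points, model order, time steps

# Orthonormal stand-ins for the spatial basis functions phi_i(S).
phi = np.linalg.qr(rng.normal(size=(ns, n)))[0]
a_hat = rng.normal(size=(n, nt))  # stand-in for predicted coefficients a_hat_i(t)

# Space/time synthesis: T_hat(S, t) = sum_i phi_i(S) * a_hat_i(t)
T_hat = phi @ a_hat

# With orthonormal basis functions the coefficients can be read back
# exactly by projection, mirroring the decoupling of step S1.
a_back = phi.T @ T_hat
```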
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides an online space-time modeling method integrating a forgetting mechanism and a double-overrun learning machine, which considers the limitation that a nonlinear partial differential equation of a curing furnace in a nonlinear distributed parameter system curing heat process cannot be used for online prediction and space/time coupling control, decouples a space-time output variable into a time coefficient sequence and a space sequence, establishes a first overrun learning machine model and a second overrun learning machine model based on the time coefficient sequence, updates model parameters of the first overrun learning machine model and the second overrun learning machine model by utilizing an online sequential learning algorithm of the integrated forgetting mechanism, embeds the forgetting mechanism into the online learning process, can improve the learning effect by giving up outdated training samples, updates the training data online, improves the online updating capability of the models, ensures that the online modeling has higher calculation work efficiency, A large amount of calculation working time is saved, outdated data does not need to be stored, and the reconstructed online space-time model is closer to the actual curing thermal process of the curing oven, so that the online prediction of the temperature of the curing oven is more matched with the dynamic change of the actual temperature.
Drawings
FIG. 1 is a flow chart of an online spatiotemporal modeling method integrating a forgetting mechanism and a dual ultralimit learning machine according to the present invention;
FIG. 2 is a schematic frame structure diagram of an internal structure of an actual curing oven according to an embodiment of the present invention;
FIG. 3 is a view showing an internal structure of a practical rapid curing oven system according to an embodiment of the present invention;
FIG. 4 is a layout diagram of sensors on a lead frame for data acquisition as set forth in an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a comparison between a predicted time coefficient and an actual time coefficient according to an embodiment of the present invention;
FIG. 6 is a graph showing the actual temperature profile of a curing oven system according to an embodiment of the present invention;
FIG. 7 is a distribution diagram of the absolute relative error ARE in the embodiment of the present invention;
FIG. 8 is a graph of predicted versus actual temperature T distribution at the sensor 6 using an initial off-line model in an embodiment of the present invention;
FIG. 9 is a graph of the predicted and actual absolute relative error ARE at the sensor 6 using an initial off-line model in an embodiment of the present invention;
FIG. 10 is a graph comparing predicted and actual temperature distributions on the sensor 6 using an online update model in an embodiment of the present invention;
FIG. 11 is a graph of the predicted and actual absolute relative error ARE at the sensor 6 using an online update model in an embodiment of the present invention;
FIG. 12 is a graph comparing predicted and actual temperature distributions on the sensor 11 using an initial off-line model in an embodiment of the present invention;
FIG. 13 is a graph of the predicted and actual absolute relative error ARE at the sensor 11 using an initial off-line model in an embodiment of the present invention;
FIG. 14 is a graph comparing predicted and actual temperature distributions on the sensor 11 using an online update model in an embodiment of the present invention;
FIG. 15 is a schematic diagram of the absolute relative error ARE predicted at the sensor 11 using the online update model of the present application in an embodiment of the present invention;
FIG. 16 is a schematic diagram of the absolute relative error ARE predicted by the online update model according to the embodiment of the present invention;
FIG. 17 is a schematic representation of the absolute relative error ARE of the OS-ELM based spatio-temporal model prediction in an embodiment of the present invention;
FIG. 18 is a comparison between simulation times using the method of the present application and the OS-ELM-based method in an embodiment of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
An online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines is disclosed; a flow diagram is shown in FIG. 1. The method comprises the following steps:
s1, decoupling a space-time output variable into a time coefficient sequence and a space sequence according to a nonlinear partial differential equation of a curing heat process of a curing oven of a nonlinear distributed parameter system; the method specifically comprises the following steps: the nonlinear partial differential equation of the curing process of the curing oven of the nonlinear distributed parameter system is as follows:
Figure BDA0002507427140000101
the Neumann boundary conditions and initial conditions are:
Figure BDA0002507427140000102
where T (S, T) represents (x, y, z) ∈ (0, S) at time T and position S0) P represents density, k represents thermal coefficient, c represents specific heat coefficient, f (T (S, T)) is an unknown nonlinear thermal dynamics related to the spatio-temporal output variable T, Q (S, T) represents a heat source,
Figure BDA0002507427140000103
representing the Laplace space operator, T0(S,0) representing an initial spatiotemporal output variable;
the spatial-temporal output variable T (S, T) is decoupled into a time coefficient sequence and a space sequence, and the expression is as follows:
Figure BDA0002507427140000111
wherein, ai(t) the ith time coefficient of the time coefficient series of
Figure BDA0002507427140000112
φi(S) denotes the ith spatial position, the spatial sequence BFSs is
Figure BDA0002507427140000113
The expression for the ith time coefficient is:
ai(t)=gi(ai(t-1))+hi(u(t-1))
wherein, gi(. and h)iBoth refer to non-linear functions.
S2, establishing a first ultralimit learning machine model and a second ultralimit learning machine model based on the time coefficient sequence. The expression of the first ultralimit learning machine model is:

g(a(t-1)) = ∑_{σ=1}^{N_1} β_σ G_1(ω_σ a(t-1) + η_σ)

and the expression of the second ultralimit learning machine model is:

h(u(t-1)) = ∑_{δ=1}^{N_2} β′_δ G_2(ω′_δ u(t-1) + η′_δ)

where β_σ represents the output weight connecting the output node and the σth hidden node in the first ultralimit learning machine model, and β′_δ the output weight connecting the output node and the δth hidden node in the second model; ω_σ represents the input weight of the σth hidden node in the first model, and ω′_δ the input weight of the δth hidden node in the second model; σ indexes the hidden nodes of the first model and δ those of the second; N_1 and N_2 represent the numbers of hidden nodes in the first and second models; G_1 and G_2 represent the activation functions of the hidden layers in the first and second models; η_σ and η′_δ represent the thresholds of the hidden nodes in the first and second models; u(t) denotes the input on which the second ultralimit learning machine model is built; and the two nonlinear functions g_i(·) and h_i(·) satisfy g_i(·) = g(·), h_i(·) = h(·).
Based on the expressions of the first and second ultralimit learning machine models, the time coefficient a(t) is further expressed as:
a(t) = Σ_{σ=1}^{N1} β_σ G_1(ω_σ a(t-1) + η_σ) + Σ_{δ=1}^{N2} β'_δ G_2(ω'_δ u(t-1) + η'_δ)
Here the input weights ω_σ and ω'_δ and the hidden-node thresholds η_σ and η'_δ of the two models are all generated randomly, independently of the training data and of each other, while the output weights β_σ and β'_δ connecting the hidden nodes to the output nodes are determined from the input-output data. According to this mathematical description of a dual ultralimit learning machine built by coupling two nonlinear structures, once the randomly generated ω_σ, ω'_δ, η_σ and η'_δ have been set, these values remain fixed during subsequent learning.
The model is further expressed as: a(t) = h^T(t)θ
where h(t) is the regressor vector of the first and second ultralimit learning machine models associated with the input-output data:
h(t) = [G_1(ω_1 a(t-1) + η_1), ..., G_1(ω_{N1} a(t-1) + η_{N1}), G_2(ω'_1 u(t-1) + η'_1), ..., G_2(ω'_{N2} u(t-1) + η'_{N2})]^T
and θ is the unknown parameter vector of the two models to be identified:
θ = [β_1, ..., β_{N1}, β'_1, ..., β'_{N2}]^T
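For illustration, the regressor vector h(t) of the dual ultralimit learning machine can be assembled as in the following minimal sketch; the sigmoid form of G_1 and G_2 and the node counts N1 = 5, N2 = 4 are assumptions of the sketch, not prescribed by the method:

```python
import numpy as np

def hidden_vector(a_prev, u_prev, w1, b1, w2, b2):
    """Assemble h(t) by stacking the two hidden layers:
    G1 applied to w1*a(t-1)+b1 and G2 applied to w2*u(t-1)+b2.
    Both activations are taken as sigmoids here (an assumption)."""
    g = lambda x: 1.0 / (1.0 + np.exp(-x))       # G1 = G2 = sigmoid
    return np.concatenate([g(w1 * a_prev + b1),  # first model part, length N1
                           g(w2 * u_prev + b2)]) # second model part, length N2

rng = np.random.default_rng(0)
N1, N2 = 5, 4
w1, b1 = rng.standard_normal(N1), rng.standard_normal(N1)  # random, then fixed
w2, b2 = rng.standard_normal(N2), rng.standard_normal(N2)
h = hidden_vector(0.3, 1.2, w1, b1, w2, b2)
print(h.shape)  # (9,) — length N1 + N2, matching the parameter vector theta
```

The stacked vector has length N1 + N2, which is why the parameter vector θ concatenates the output weights of both models.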
The matrix form of the time coefficients a(t) is:
A = Hθ
where A = [a(2), a(3), ..., a(L)]^T is the output vector of the time coefficient a(t), L is the data length of the vector, and H = [h^T(2) ... h^T(L)]^T is the regression matrix;
then
θ̂ = H^† A
where H^† = (H^T H)^{-1} H^T is the Moore-Penrose generalized inverse of the matrix H, and θ̂ is the estimate of the unknown parameter vector obtained from A and H.
Since the output vector A and the regression matrix H of the time coefficients a(t) are known, only the case in which H^T H is nonsingular is considered here, which satisfies:
θ̂ = (H^T H)^{-1} H^T A
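A minimal sketch of this batch identification step θ̂ = (H^T H)^{-1} H^T A on synthetic data; the dimensions and the random regression matrix are assumptions used only to exercise the formula:

```python
import numpy as np

# Estimate theta from A = H theta via the Moore-Penrose generalized
# inverse, assuming H^T H is nonsingular (the case the text considers).
rng = np.random.default_rng(1)
L, n_hidden = 50, 9                     # L data points, N1 + N2 hidden nodes
H = rng.standard_normal((L, n_hidden))  # regression matrix [h^T(2) ... h^T(L)]^T
theta_true = rng.standard_normal(n_hidden)
A = H @ theta_true                      # output vector [a(2) ... a(L)]^T

theta_hat = np.linalg.inv(H.T @ H) @ H.T @ A   # theta = (H^T H)^{-1} H^T A
print(np.allclose(theta_hat, theta_true))       # True (noiseless data)
```

In practice np.linalg.lstsq or np.linalg.pinv is preferred numerically over forming (H^T H)^{-1} explicitly.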
S3, updating the model parameters of the first and second ultralimit learning machine models by using an online sequential learning algorithm with an integrated forgetting mechanism. First, the initial training set block is:
{z(t), a(t)}_{t=2}^{L0}
and the initial model parameters of the first and second ultralimit learning machines are:
θ_0 = P_0 H_0^T A_0
The subscript 0 denotes an initial vector or matrix associated with the initial training set block {z(t), a(t)}_{t=2}^{L0}. Suppose the input-output data pair {z(L0+1), a(L0+1)} arrives; then:
H_1 = [H_0^T h(L0+1)]^T, A_1 = [A_0^T a(L0+1)]^T
Using the MP generalized inverse matrix, the updated parameter θ_1 is written as:
θ_1 = P_1 H_1^T A_1
where
P_1 = (H_1^T H_1)^{-1}
Expressing θ_1 in terms of θ_0, P_1, h(L0+1) and a(L0+1) gives:
θ_1 = θ_0 + P_1 h(L0+1)(a(L0+1) - h^T(L0+1)θ_0)
where
P_1^{-1} = P_0^{-1} + h(L0+1)h^T(L0+1)
By using the Woodbury formula,
P_1 = (P_0^{-1} + h(L0+1)h^T(L0+1))^{-1}
    = P_0 - P_0 h(L0+1)(I + h^T(L0+1)P_0 h(L0+1))^{-1} h^T(L0+1)P_0
When the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives,
P_{m+1} = P_m - P_m h(L0+m+1)(I + h^T(L0+m+1)P_m h(L0+m+1))^{-1} h^T(L0+m+1)P_m
θ_{m+1} = θ_m + P_{m+1} h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m)
That is, the model parameter θ_{m+1} is updated from the known parameter θ_m and the newly arrived data {z(L0+m+1), a(L0+m+1)}.
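The recursive update above (without forgetting yet) can be sketched as follows; the synthetic data and dimensions are assumptions, and the final check confirms that the recursion reproduces the batch least-squares solution on all data seen so far:

```python
import numpy as np

# One recursive step: when a new pair (h_new, a_new) arrives, P and
# theta are refreshed from the previous values only, without revisiting
# the old data.
rng = np.random.default_rng(2)
n = 6
H0 = rng.standard_normal((20, n))       # initial regression matrix
A0 = rng.standard_normal(20)            # initial outputs
P = np.linalg.inv(H0.T @ H0)            # P_0
theta = P @ H0.T @ A0                   # theta_0 = P_0 H_0^T A_0

h_new = rng.standard_normal(n)
a_new = rng.standard_normal()

# Woodbury form: P_1 = P_0 - P_0 h (I + h^T P_0 h)^{-1} h^T P_0
Ph = P @ h_new
P = P - np.outer(Ph, Ph) / (1.0 + h_new @ Ph)
# theta_1 = theta_0 + P_1 h (a_new - h^T theta_0)
theta = theta + P @ h_new * (a_new - h_new @ theta)

# Agrees with the batch solution on all 21 pairs
H1 = np.vstack([H0, h_new]); A1 = np.append(A0, a_new)
print(np.allclose(theta, np.linalg.lstsq(H1, A1, rcond=None)[0]))  # True
```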
Now integrate the forgetting mechanism. When the new input-output data pair {z(L0+1), a(L0+1)} arrives, let {z(2), a(2)} denote the oldest input-output data relative to the new pair, which is to be discarded; then:
H_1 = [h^T(3) ... h^T(L0+1)]^T, A_1 = [a(3), ..., a(L0+1)]^T
Based on the MP generalized inverse matrix and the forgetting mechanism, the parameter θ_1 is:
θ_1 = θ_0 + P_1[h(L0+1)(a(L0+1) - h^T(L0+1)θ_0) - h(2)(a(2) - h^T(2)θ_0)]
where
P_1 = (P_0^{-1} + h(L0+1)h^T(L0+1) - h(2)h^T(2))^{-1}
Further, when the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives, the oldest data pair {z(m+2), a(m+2)} is discarded; then:
P_{m+1} = (P_m^{-1} + h(L0+m+1)h^T(L0+m+1) - h(m+2)h^T(m+2))^{-1}
θ_{m+1} = θ_m + P_{m+1}[h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m) - h(m+2)(a(m+2) - h^T(m+2)θ_m)]
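A sketch of one step of the forgetting-mechanism update on synthetic data; for clarity P_{m+1} is formed by direct inversion rather than by two applications of the Woodbury formula (one rank-one update, one downdate), and all dimensions are assumptions:

```python
import numpy as np

# One sliding-window step: absorb the newest pair (h_new, a_new) and
# discard the oldest pair (h_old, a_old).
rng = np.random.default_rng(3)
n, win = 5, 15
H = rng.standard_normal((win + 1, n))      # rows 0..win: old window + new row
A = rng.standard_normal(win + 1)
H0, A0 = H[:win], A[:win]                  # current window
h_old, a_old = H[0], A[0]                  # oldest pair, to be dropped
h_new, a_new = H[win], A[win]              # newly arrived pair

P = np.linalg.inv(H0.T @ H0)
theta = P @ H0.T @ A0

# P_{m+1} = (P_m^{-1} + h_new h_new^T - h_old h_old^T)^{-1}
P1 = np.linalg.inv(np.linalg.inv(P)
                   + np.outer(h_new, h_new) - np.outer(h_old, h_old))
# theta_{m+1} = theta_m + P_{m+1}[h_new(a_new - h_new^T theta)
#                                - h_old(a_old - h_old^T theta)]
theta1 = theta + P1 @ (h_new * (a_new - h_new @ theta)
                       - h_old * (a_old - h_old @ theta))

# Identical to batch least squares on the shifted window (rows 1..win)
print(np.allclose(theta1, np.linalg.lstsq(H[1:], A[1:], rcond=None)[0]))  # True
```

The final check shows the combined add-and-discard recursion is exactly the least-squares solution over the shifted window, which is the property the forgetting mechanism relies on.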
Based on the above derivation, the step of updating the model parameters of the first and second ultralimit learning machine models by using the online sequential learning algorithm with the integrated forgetting mechanism specifically comprises:
S301, initialization: set the training-data counter m to zero; select the activation function G_1 of the hidden layer in the first ultralimit learning machine model, the activation function G_2 of the hidden layer in the second ultralimit learning machine model, the number N1 of hidden nodes in the first model and the number N2 of hidden nodes in the second model; randomly generate the input weights ω_σ connecting the input to the hidden nodes in the first model, the input weights ω'_δ in the second model, the thresholds η_σ of the hidden nodes in the first model and the thresholds η'_δ of the hidden nodes in the second model.
Calculate the initial hidden-layer output matrix of the first and second ultralimit learning machine models:
H_0 = [h^T(2) ... h^T(L0)]^T
and the initial unknown model parameters: θ_0 = P_0 H_0^T A_0
where the initial unknown model parameters are the initial output weights of the model, P_0 = (H_0^T H_0)^{-1} and A_0 = [a(2), a(3), ..., a(L0)]^T.
S302, performing online sequential learning with the integrated forgetting mechanism:
when the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives, compute the (m+1)-th hidden-layer output vector:
h(L0+m+1) = [G_1(ω_1 a(L0+m) + η_1), ..., G_1(ω_{N1} a(L0+m) + η_{N1}), G_2(ω'_1 u(L0+m) + η'_1), ..., G_2(ω'_{N2} u(L0+m) + η'_{N2})]^T
and the updated matrix:
P_{m+1} = (P_m^{-1} + h(L0+m+1)h^T(L0+m+1) - h(m+2)h^T(m+2))^{-1}
S303, calculating the output weight of the model:
θ_{m+1} = θ_m + P_{m+1}[h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m) - h(m+2)(a(m+2) - h^T(m+2)θ_m)]
S304, using the calculated model output weight, evaluating the first and second ultralimit learning machine models according to A = Hθ;
S305, increasing the value of m by 1 and returning to step S302.
And S4, integrating the spatial sequence with the updated first and second ultralimit learning machine models to reconstruct an online space-time model.
The expression of the reconstructed online spatio-temporal model is:
T̂(S,t) = Σ_{i=1}^{n} â_i(t) φ_i(S)
where â_i(t) represents a time coefficient of the time series after updating by the integrated forgetting mechanism and the dual ultralimit learning machines, φ_i(S) represents the spatial basis function, and T̂(S,t) represents the space-time output variable of the reconstructed online spatio-temporal model.
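The space-time synthesis step reduces to a single matrix product when the time coefficients and the sampled basis functions are arranged as matrices; the numbers of basis functions, sensors, and time steps below are assumptions of the sketch:

```python
import numpy as np

# T_hat(S_j, t) = sum_i a_hat_i(t) * phi_i(S_j): with time coefficients
# in an (L x n) matrix and basis values sampled at N sensor locations in
# an (N x n) matrix, reconstruction is one matrix product.
rng = np.random.default_rng(4)
n, N, L = 3, 16, 100
a_hat = rng.standard_normal((L, n))   # updated time coefficients a_hat_i(t)
phi = rng.standard_normal((N, n))     # spatial basis functions phi_i(S_j)

T_hat = a_hat @ phi.T                 # (L x N): predicted field over time
print(T_hat.shape)                    # (100, 16)
```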
In addition, when the model parameters of the first and second ultralimit learning machine models are updated by the online sequential learning algorithm with the integrated forgetting mechanism, the training data may be received either one by one or block by block; in the block-by-block case the update formulas hold with the row h^T(·) and the scalar a(·) replaced by the corresponding data blocks. The number L0 of initial training data should be no less than the sum of the numbers of hidden neurons of the first and second ultralimit learning machine models, which ensures that the rank of the initial regression matrix H_0 equals the sum of the numbers of hidden nodes of the two ultralimit learning machine models.
The online spatio-temporal modeling method described above was tested in real time on a curing oven. The structure of an actual rapid curing oven system is shown in fig. 2, and fig. 3 shows the corresponding internal schematic framework, where 1 denotes a heater, 2 a sensor, 3 the cavity, and 4 the lead frame. Referring to fig. 3, the top of the curing oven has four identical heaters 1 controlled by pulse-width modulation (PWM) signals; see fig. 4. Sixteen thermocouples S1-S16 were placed uniformly on the lead frame 4 as sensors to collect data, and these sensors 2 collected about 2100 time-series samples for model training and online learning at a fixed sampling interval Δt = 10 s.
In order to verify the model established by the proposed online spatio-temporal modeling method and to compare it with other modeling methods, the following error indices are used:
1) space-time prediction error (e):
e(S,t) = T(S,t) - T̂(S,t)
2) absolute relative error (ARE):
ARE(S,t) = |T(S,t) - T̂(S,t)| / |T(S,t)|
3) root mean square error (RMSE):
RMSE = ( (1/(N·L)) Σ_{i=1}^{N} Σ_{t=1}^{L} e(S_i,t)² )^{1/2}
4) time normalized absolute error (TNAE):
TNAE(S) = (1/L) Σ_{t=1}^{L} |e(S,t)|
where N is the number of spatial measurement locations and L is the number of time samples.
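A sketch of computing the four error indices on arrays of measured and predicted fields; the exact normalizations used here (global RMSE over space and time, per-location TNAE) are assumptions consistent with common usage in spatio-temporal modeling, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
N, L = 16, 200                              # sensors x time steps (assumed)
T_true = 100 + rng.standard_normal((N, L))  # measured field
T_pred = T_true + 0.1 * rng.standard_normal((N, L))

e = T_true - T_pred                          # space-time prediction error
ARE = np.abs(e) / np.abs(T_true)             # absolute relative error
RMSE = np.sqrt(np.mean(e ** 2))              # RMSE over all of S and t
TNAE = np.mean(np.abs(e), axis=1)            # time-normalized abs. error per location
print(ARE.shape, TNAE.shape)                 # (16, 200) (16,)
```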
In the online spatio-temporal modeling, the collected samples are divided into two parts: the first 800 samples are used to construct an initial spatio-temporal model, and the last 1300 samples are used for online learning.
First, the initial spatio-temporal model is constructed. The spatial basis functions (BFs) are calculated with the Karhunen-Loève (KL) method, the order is selected as 3, and the space-time samples are then projected onto the BFs to obtain the time coefficients {a_i(t)}_{i=1}^3.
The low-order temporal model is estimated using the time coefficients and the corresponding input signals. Finally, the initial space-time model is reconstructed by the space-time synthesis method.
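The KL (Karhunen-Loève) step can be sketched with an SVD of the snapshot matrix: the dominant spatial basis functions are the leading left singular vectors, and the time coefficients are the projections of the snapshots onto that basis. The synthetic rank-3 field below is an assumption used to verify that the projection recovers the field exactly:

```python
import numpy as np

rng = np.random.default_rng(6)
N, L, n = 16, 800, 3
true_phi = np.linalg.qr(rng.standard_normal((N, n)))[0]   # orthonormal modes
true_a = rng.standard_normal((L, n))
X = true_phi @ true_a.T                                   # snapshot matrix (N x L)

U, s, _ = np.linalg.svd(X, full_matrices=False)
phi = U[:, :n]                     # spatial BFs: first n left singular vectors
a = X.T @ phi                      # time coefficients a_i(t) by projection

print(np.allclose(phi @ a.T, X))   # True: the rank-3 field is exactly recovered
```

With noisy real snapshots the recovery is only approximate, and the order n is chosen from the decay of the singular values (here the KL order is 3).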
To evaluate the performance of the initial model, the last 1300 samples were tested; the actual and predicted time coefficients are compared in fig. 5, which shows very satisfactory agreement between the predicted and measured dynamics. In addition, the predicted temperature distribution and the corresponding ARE index for the 2100th sample were simulated; the temperature profile is shown in fig. 6 and the profile of the ARE index in fig. 7. As can be seen from figs. 6 and 7, the initial spatio-temporal model is a good approximation of the actual DPS (distributed parameter system).
In the online learning process, the 1300 samples are assumed to arrive block by block with a fixed block size of 10, so they can be divided into 130 consecutive blocks. The window length is fixed at 1400 samples: once the number of training samples reaches 1400, the oldest samples are deleted by the forgetting mechanism so that the number of samples stays at 1400. To evaluate the performance of the proposed online method against the initial offline model, the predicted and actual temperature T and the absolute relative error ARE at sensor 6 are shown in figs. 8 and 9 for the initial offline model and in figs. 10 and 11 for the online updated model; the corresponding results at sensor 11 are shown in figs. 12 and 13 (offline model) and figs. 14 and 15 (online updated model). Comparing these curves shows that the online updated model proposed in the present application matches the dynamic change of the actual temperature better, which also verifies the effectiveness of the proposed method.
To verify the superiority of the proposed modeling method with the integrated forgetting mechanism, fig. 16 shows the ARE distribution of the 2100th test sample predicted by the online updated model of the present application, and fig. 17 shows the ARE predicted by an OS-ELM-based spatio-temporal model. As can be seen from figs. 16 and 17, both online models exhibit better performance than the initial offline model; however, the model established by the method of the present application achieves higher accuracy than the OS-ELM-based spatio-temporal model.
To further compare the performance of the models constructed by the method of the present application with the OS-ELM model, table 1 shows the performance comparison data of the models constructed by the method of the present application based on TNAE with the OS-ELM model, and table 2 shows the performance comparison data of the models constructed by the method of the present application based on RMSE with the OS-ELM model.
TABLE 1
Figure BDA0002507427140000181
TABLE 2
Figure BDA0002507427140000182
As can be seen from tables 1 and 2, the model constructed by the method provided by the present application has better model performance. Fig. 18 is a schematic diagram comparing simulation time between the method of the present application and the OS-ELM-based method, and it can be seen from fig. 18 that embedding a forgetting mechanism into online space-time learning can improve learning efficiency, save computation, and release memory space.
The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. This need not be, nor should it be exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. An online space-time modeling method integrating a forgetting mechanism and dual ultralimit learning machines, characterized by at least comprising the following steps:
S1, decoupling a space-time output variable into a time coefficient sequence and a spatial sequence according to a nonlinear partial differential equation of the curing thermal process of a curing oven, a nonlinear distributed parameter system;
the nonlinear partial differential equation of the curing thermal process of the curing oven in step S1 is:
ρc ∂T(S,t)/∂t = k∇²T(S,t) + f(T(S,t)) + Q(S,t)
with the Neumann boundary conditions and initial condition:
∂T(S,t)/∂n = 0 on the boundary, T(S,0) = T_0(S)
where T(S,t) represents the temperature at time t and spatial position S = (x,y,z) in the spatial domain, ρ represents the density, k the thermal conductivity coefficient, c the specific heat coefficient, f(T(S,t)) an unknown nonlinear thermal dynamic associated with the space-time output variable T, Q(S,t) the heat source, and ∇² the Laplace operator;
the space-time output variable T(S,t) is decoupled into a time coefficient sequence and a spatial sequence as:
T(S,t) = Σ_{i=1}^{n} a_i(t) φ_i(S)
where a_i(t) is the i-th time coefficient of the time-coefficient sequence {a_i(t)}_{i=1}^n, φ_i(S) denotes the i-th spatial basis function, and the sequence of spatial basis functions (BFs) is {φ_i(S)}_{i=1}^n;
the expression for the i-th time coefficient is:
a_i(t) = g_i(a_i(t-1)) + h_i(u(t-1))
where g_i(·) and h_i(·) both denote nonlinear functions;
S2, establishing a first ultralimit learning machine model and a second ultralimit learning machine model based on the time coefficient sequence;
S3, updating model parameters of the first and second ultralimit learning machine models by using an online sequential learning algorithm with an integrated forgetting mechanism;
S4, integrating the spatial sequence with the updated first and second ultralimit learning machine models to reconstruct an online space-time model.
s2, establishing a first ultralimit learning machine model and a second ultralimit learning machine model based on the time coefficient sequence;
s3, updating model parameters of the first ultralimit learning machine model and the second ultralimit learning machine model by utilizing an online sequential learning algorithm of an integrated forgetting mechanism;
and S4, integrating the spatial sequence with the updated first and second ultralimit learning machine models to reconstruct an online space-time model.
2. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 1, characterized in that the expression of the first ultralimit learning machine model in step S2 is:
g(a(t-1)) = Σ_{σ=1}^{N1} β_σ G_1(ω_σ a(t-1) + η_σ)
and the expression of the second ultralimit learning machine model is:
h(u(t-1)) = Σ_{δ=1}^{N2} β'_δ G_2(ω'_δ u(t-1) + η'_δ)
where β_σ represents the output weight connecting the σ-th hidden node to the output node in the first ultralimit learning machine model, β'_δ the output weight connecting the δ-th hidden node to the output node in the second model, ω_σ and ω'_δ the corresponding input weights of the two models, σ denotes the σ-th hidden node of the first model and δ the δ-th hidden node of the second model, N1 and N2 the numbers of hidden nodes in the first and second models, G_1 and G_2 the activation functions of the hidden layers of the first and second models, η_σ and η'_δ the thresholds of the hidden nodes of the first and second models, u(t) the input on which the second ultralimit learning machine model is established, and the two nonlinear functions g_i(·) and h_i(·) satisfy g_i(·) = g(·), h_i(·) = h(·).
3. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 2, characterized in that, based on the expressions of the first and second ultralimit learning machine models, the time coefficient a(t) is further expressed as:
a(t) = Σ_{σ=1}^{N1} β_σ G_1(ω_σ a(t-1) + η_σ) + Σ_{δ=1}^{N2} β'_δ G_2(ω'_δ u(t-1) + η'_δ)
Here the input weights ω_σ and ω'_δ and the hidden-node thresholds η_σ and η'_δ of the two models are all generated randomly, independently of the training data and of each other, while the output weights β_σ and β'_δ connecting the hidden nodes to the output nodes are determined from the input-output data;
the model is further expressed as: a(t) = h^T(t)θ
where h(t) is the regressor vector of the first and second ultralimit learning machine models associated with the input-output data:
h(t) = [G_1(ω_1 a(t-1) + η_1), ..., G_1(ω_{N1} a(t-1) + η_{N1}), G_2(ω'_1 u(t-1) + η'_1), ..., G_2(ω'_{N2} u(t-1) + η'_{N2})]^T
and θ is the unknown parameter vector of the two models to be identified:
θ = [β_1, ..., β_{N1}, β'_1, ..., β'_{N2}]^T
The matrix form of the time coefficients a(t) is:
A = Hθ
where A = [a(2), a(3), ..., a(L)]^T is the output vector of the time coefficient a(t), L is the data length of the vector, and H = [h^T(2) ... h^T(L)]^T is the regression matrix;
then
θ̂ = H^† A
where H^† = (H^T H)^{-1} H^T is the Moore-Penrose generalized inverse of the matrix H, and θ̂ is the estimate of the unknown parameter vector obtained from A and H.
4. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 3, characterized in that the initial training set block is:
{z(t), a(t)}_{t=2}^{L0}
and the initial model parameters of the first and second ultralimit learning machines are:
θ_0 = P_0 H_0^T A_0
The subscript 0 denotes an initial vector or matrix associated with the initial training set block {z(t), a(t)}_{t=2}^{L0}. Suppose the input-output data pair {z(L0+1), a(L0+1)} arrives; then:
H_1 = [H_0^T h(L0+1)]^T, A_1 = [A_0^T a(L0+1)]^T
Using the MP generalized inverse matrix, the updated parameter θ_1 is written as:
θ_1 = P_1 H_1^T A_1
where
P_1 = (H_1^T H_1)^{-1}
Expressing θ_1 in terms of θ_0, P_1, h(L0+1) and a(L0+1) gives:
θ_1 = θ_0 + P_1 h(L0+1)(a(L0+1) - h^T(L0+1)θ_0)
where
P_1^{-1} = P_0^{-1} + h(L0+1)h^T(L0+1)
By using the Woodbury formula,
P_1 = (P_0^{-1} + h(L0+1)h^T(L0+1))^{-1}
    = P_0 - P_0 h(L0+1)(I + h^T(L0+1)P_0 h(L0+1))^{-1} h^T(L0+1)P_0
When the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives,
P_{m+1} = P_m - P_m h(L0+m+1)(I + h^T(L0+m+1)P_m h(L0+m+1))^{-1} h^T(L0+m+1)P_m
θ_{m+1} = θ_m + P_{m+1} h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m)
that is, the model parameter θ_{m+1} is updated from the known parameter θ_m and the newly arrived data {z(L0+m+1), a(L0+m+1)}.
5. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 4, characterized in that, when the new input-output data pair {z(L0+1), a(L0+1)} arrives, {z(2), a(2)} denotes the oldest input-output data relative to the new pair, which is to be discarded; then:
H_1 = [h^T(3) ... h^T(L0+1)]^T, A_1 = [a(3), ..., a(L0+1)]^T
Based on the MP generalized inverse matrix and the forgetting mechanism, the parameter θ_1 is:
θ_1 = θ_0 + P_1[h(L0+1)(a(L0+1) - h^T(L0+1)θ_0) - h(2)(a(2) - h^T(2)θ_0)]
where
P_1 = (P_0^{-1} + h(L0+1)h^T(L0+1) - h(2)h^T(2))^{-1}
Further, when the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives, the oldest data pair {z(m+2), a(m+2)} is discarded; then:
P_{m+1} = (P_m^{-1} + h(L0+m+1)h^T(L0+m+1) - h(m+2)h^T(m+2))^{-1}
θ_{m+1} = θ_m + P_{m+1}[h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m) - h(m+2)(a(m+2) - h^T(m+2)θ_m)]
6. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 5, characterized in that the step S3 of updating the model parameters of the first and second ultralimit learning machine models by the online sequential learning algorithm with the integrated forgetting mechanism comprises:
S301, initialization: set the training-data counter m to zero; select the activation function G_1 of the hidden layer in the first ultralimit learning machine model, the activation function G_2 of the hidden layer in the second ultralimit learning machine model, the number N1 of hidden nodes in the first model and the number N2 of hidden nodes in the second model; randomly generate the input weights ω_σ connecting the input to the hidden nodes in the first model, the input weights ω'_δ in the second model, the thresholds η_σ of the hidden nodes in the first model and the thresholds η'_δ of the hidden nodes in the second model;
calculate the initial hidden-layer output matrix of the first and second ultralimit learning machine models:
H_0 = [h^T(2) ... h^T(L0)]^T
and the initial unknown model parameters: θ_0 = P_0 H_0^T A_0
where the initial unknown model parameters are the initial output weights of the model, P_0 = (H_0^T H_0)^{-1} and A_0 = [a(2), a(3), ..., a(L0)]^T;
S302, performing online sequential learning with the integrated forgetting mechanism:
when the (m+1)-th data pair {z(L0+m+1), a(L0+m+1)} arrives, compute the (m+1)-th hidden-layer output vector:
h(L0+m+1) = [G_1(ω_1 a(L0+m) + η_1), ..., G_1(ω_{N1} a(L0+m) + η_{N1}), G_2(ω'_1 u(L0+m) + η'_1), ..., G_2(ω'_{N2} u(L0+m) + η'_{N2})]^T
and the updated matrix:
P_{m+1} = (P_m^{-1} + h(L0+m+1)h^T(L0+m+1) - h(m+2)h^T(m+2))^{-1}
S303, calculating the output weight of the model:
θ_{m+1} = θ_m + P_{m+1}[h(L0+m+1)(a(L0+m+1) - h^T(L0+m+1)θ_m) - h(m+2)(a(m+2) - h^T(m+2)θ_m)]
S304, using the calculated model output weight, evaluating the first and second ultralimit learning machine models according to A = Hθ;
S305, increasing the value of m by 1 and returning to step S302.
7. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 6, characterized in that the training data are received one by one or block by block when the model parameters of the first and second ultralimit learning machine models are updated by the online sequential learning algorithm with the integrated forgetting mechanism.
8. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 7, characterized in that the number L0 of initial training data is not less than the sum of the numbers of hidden neurons of the first and second ultralimit learning models.
9. The online spatio-temporal modeling method integrating a forgetting mechanism and dual ultralimit learning machines according to claim 8, characterized in that the expression of the reconstructed online spatio-temporal model of step S4 is:
T̂(S,t) = Σ_{i=1}^{n} â_i(t) φ_i(S)
where â_i(t) represents a time coefficient of the time series after updating by the integrated forgetting mechanism and the dual ultralimit learning machines, φ_i(S) represents the spatial basis function, and T̂(S,t) represents the space-time output variable of the reconstructed online spatio-temporal model.
CN202010450196.XA 2020-05-25 2020-05-25 Online time-space modeling method integrating forgetting mechanism and double ultralimit learning machines Active CN111625995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010450196.XA CN111625995B (en) 2020-05-25 2020-05-25 Online time-space modeling method integrating forgetting mechanism and double ultralimit learning machines

Publications (2)

Publication Number Publication Date
CN111625995A CN111625995A (en) 2020-09-04
CN111625995B true CN111625995B (en) 2022-06-24

Family

ID=72259050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010450196.XA Active CN111625995B (en) 2020-05-25 2020-05-25 Online time-space modeling method integrating forgetting mechanism and double ultralimit learning machines

Country Status (1)

Country Link
CN (1) CN111625995B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548368B (en) * 2022-01-20 2022-10-11 广东产品质量监督检验研究院(国家质量技术监督局广州电气安全检验所、广东省试验认证研究院、华安实验室) Modeling method and prediction method of lithium battery temperature field prediction model based on multilayer nuclear overrun learning machine

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108717505A (en) * 2018-05-29 2018-10-30 广东工业大学 A kind of solidification thermal process space-time modeling method based on K-RVFL
CN108763759A (en) * 2018-05-29 2018-11-06 广东工业大学 A kind of solidification thermal process space-time modeling method based on ISOMAP
CN109145346A (en) * 2018-05-29 2019-01-04 广东工业大学 Solidification thermal process space-time modeling method based on dual least square method supporting vector machine
CN110377942A (en) * 2019-06-10 2019-10-25 广东工业大学 A kind of multi-model space-time modeling method based on limited gauss hybrid models

Non-Patent Citations (3)

Title
Guo Wei et al., "Online sequential extreme learning machine with generalized regularization and forgetting mechanism", Control and Decision, No. 02, Feb. 2017 *
Ma Zhiyuan et al., "Online incremental extreme learning machine and its performance research", Application Research of Computers, No. 12, Dec. 2017, pp. 3533-3534 *
Zhang Xingyu et al., "ELM-based temperature modeling method for a chip curing oven", Manufacturing Automation, No. 09, May 2015 *

Also Published As

Publication number Publication date
CN111625995A (en) 2020-09-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant