CN109190702A - Sparse Bayesian network incremental learning method based on continuous type industrial data - Google Patents

Sparse Bayesian network incremental learning method based on continuous type industrial data Download PDF

Info

Publication number
CN109190702A
CN109190702A · CN201811009985.9A
Authority
CN
China
Prior art keywords
sample
bayesian network
data
regularization
continuous type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811009985.9A
Other languages
Chinese (zh)
Inventor
周春蕾
张友卫
高阳
张天诚
帅云峰
孙栓柱
綦小龙
李逗
李春岩
潘苗
杨晨琛
王其祥
高进
王明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
State Grid Corp of China SGCC
State Grid Jiangsu Electric Power Co Ltd
Jiangsu Fangtian Power Technology Co Ltd
Original Assignee
Nanjing University
State Grid Corp of China SGCC
State Grid Jiangsu Electric Power Co Ltd
Jiangsu Fangtian Power Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University, State Grid Corp of China SGCC, State Grid Jiangsu Electric Power Co Ltd, Jiangsu Fangtian Power Technology Co Ltd filed Critical Nanjing University
Priority to CN201811009985.9A priority Critical patent/CN109190702A/en
Publication of CN109190702A publication Critical patent/CN109190702A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention discloses a sparse Bayesian network incremental learning method based on continuous industrial data. The method comprises: step 1, data preprocessing: the training samples are subjected to preprocessing such as regularization, ensuring that each feature has mean 0 and variance 1; step 2, structure learning: based on a sliding window, incremental learning of the structure is carried out in the form of sample batches; step 3, parameter learning: using the relationship-coefficient matrix obtained by structure learning and the regularized samples, the mean and variance of the mixed Gaussian distribution of each feature under the current state, that is, the parameters of the continuous Bayesian network, are obtained; step 4, network update: newly arriving samples are preprocessed using the regularization parameters of the overall sample, and the sliding window is used to form sample batches of the same size for updating the structure and parameters.

Description

Sparse Bayesian network incremental learning method based on continuous type industrial data
Technical field
The invention belongs to the technical field of computer applications, and specifically relates to a sparse Bayesian network incremental learning method based on continuous industrial data.
Background technique
With the development of China's industrial technology, more and more modern industrial equipment has been put into actual production, creating enormous economic value for the state and society. However, while upgraded industrial equipment has freed up a great deal of manual labor, it has also made monitoring a complete industrial process considerably more difficult. Although much large-scale industrial equipment has been automated, its normal operation still requires continuous human regulation. Because of the complexity of the equipment, when a component behaves abnormally, only workers who know the operating process thoroughly and have long experience can trace the fault back to its source. Therefore, learning the dependencies among the internal components of equipment from complex industrial data, and precisely locating the source of abnormal data, is a direction well worth studying.
A Bayesian network is a complete model of variables and the relationships between them. By establishing a probability model over the variables, it expresses their dependencies and mines implicit causal relationships from data, so it can be used to answer probability queries about any of the related variables. Moreover, a Bayesian network can handle uncertain problems in a logical and intelligible way and can reason over uncertain information, so it is widely applied in data mining.
A Bayesian network consists of two main parts: the network structure and the network parameters. The network structure is a directed acyclic graph representing the dependencies between variables, and the network parameters represent the probability distribution of each variable given its parent nodes. For Bayesian network learning over discrete variables there are relatively mature methods; common ones are constraint-based methods and score-function-based methods, which perform greedy or heuristic search over the graph space and then carry out parameter learning on the optimal structure found, maintaining a conditional probability table for each variable. However, these methods require a large amount of time for high-dimensional data. In industrial data, a single piece of equipment often has many parameters, and the data are usually continuous. Moreover, for certain special equipment, the dependencies between device parameters change over time. Based on the above facts, we propose a Bayesian network incremental learning method based on continuous data.
Summary of the invention
In view of the above deficiencies of the prior art, the purpose of the present invention is to provide a sparse Bayesian network incremental learning method based on continuous industrial data. It is intended to establish, from the operating-state parameters of large industrial equipment, a model of the dependencies and dependent probabilities among the parameters under the current operating condition, to present the causal relationships between parameters intuitively, and further to trace abnormal data back to its source, effectively reducing the manual regulation of device parameters.
To achieve the above purpose, the present invention adopts the following technical scheme. A sparse Bayesian network incremental learning method based on continuous industrial data comprises: step 1, data preprocessing: the training samples are subjected to preprocessing such as regularization, ensuring a mean of 0 and a variance of 1; step 2, structure learning: based on a sliding window, incremental learning of the structure is carried out in the form of sample batches; step 3, parameter learning: using the relationship-coefficient matrix obtained by structure learning and the regularized samples, the mean and variance of the mixed Gaussian distribution of each feature under the current state, i.e. the continuous Bayesian network parameters, are obtained; step 4, network update: newly arriving samples are preprocessed using the regularization parameters of the overall sample, and the sliding window is used to form sample batches of the same size for updating the structure and parameters.
Preferably, step 1 specifically comprises the following steps. Step 1.1: reject the abnormal data in the raw data. Step 1.2: regularize the original samples as a whole, and record the regularization parameter information of the overall sample, namely its mean and variance.
Preferably, step 2 specifically comprises the following steps. Step 2.1: according to the size of the sliding window, decompose the original large-scale sample into several partially overlapping training batches. Step 2.2: initialize parameters such as the penalty terms, which are related to the sample size and dimensionality and regulate the sparsity of the network structure. Step 2.3: using the sparse Bayesian network algorithm, search for the parent nodes of each node in turn, with a breadth-first-search check added to avoid creating cycles; after several iterations, the parent nodes and the corresponding relationship-coefficient matrix stabilize, yielding the network structure under the current data. Step 2.4: move the sliding window and fine-tune the structure, taking the network structure and relationship-coefficient matrix produced in steps 2.2 and 2.3 as the initial state.
Preferably, the specific calculation process of step 3 is as follows. Step 3.1: from the relationship-coefficient matrix obtained by structure learning, obtain the mean of each node's mixed Gaussian distribution given its parent nodes. Step 3.2: calculate the covariance of every combination of parent nodes. Step 3.3: using the parent nodes' relationship coefficients as weights, accumulate the covariances to obtain the variance of the mixed Gaussian distribution.
Preferably, step 4 specifically comprises the following steps. Step 4.1: regularize new samples using the mean and variance of the overall sample. Step 4.2: form a new window from the new samples and part of the old samples according to their temporal order, and update the structure and parameters on the basis of the original network.
Compared to the prior art, the technical solution provided by the invention has the following beneficial effects:
Unlike most Bayesian network learning algorithms, which are based on discrete data, the present sparse Bayesian network incremental learning algorithm for continuous data replaces the simple conditional probability table with a mixed Gaussian distribution, which adapts better to the complexity of real-world industrial data. Moreover, common Bayesian network learning algorithms must first find candidate parent nodes and may omit correct parents during the search; the present invention needs no candidate-parent search and thereby eliminates this drawback of previous methods. Finally, the incremental learning method proposed here not only accelerates convergence on large samples, but can also handle streaming data with partial concept drift, automatically updating the structure over time. Based on the above, the invention solves the problems of excessive industrial data volume and streaming-data updates, and thus better meets the demands of causality analysis and anomaly tracing on massive continuous industrial data.
Detailed description of the invention
The drawings described herein provide a further understanding of the invention and constitute a part of it; the illustrative embodiments of the invention and their descriptions explain the invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is the overall flow chart of the sparse Bayesian network incremental learning technique based on continuous industrial data implemented by the invention;
Fig. 2 is the detailed flow chart of the sparse Bayesian network incremental learning technique based on continuous industrial data implemented by the invention;
Fig. 3 is the Bayesian network structure diagram obtained by the sparse Bayesian network incremental learning technique based on continuous industrial data implemented by the invention.
Specific embodiment
To make the technical problems to be solved, the technical solutions, and the advantages clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
In the claims, the specification, and the above drawings of the invention, unless otherwise specifically limited, terms such as "first", "second", or "third" are used to distinguish different objects, not to describe a particular order.
In the claims, the specification, and the above drawings of the invention, unless otherwise specifically limited, locality terms such as "center", "transverse", "longitudinal", "horizontal", "vertical", "top", "bottom", "inner", "outer", "upper", "lower", "front", "rear", "left", "right", "clockwise", and "counterclockwise" indicate orientations or positional relationships based on the drawings; they are used merely to simplify the description of the invention, do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a specific orientation, and therefore cannot be interpreted as limiting the specific scope of protection of the invention.
In the claims, the specification, and the above drawings of the invention, unless otherwise specifically limited, terms such as "fixed" or "fixedly connected" should be understood broadly as any connection without displacement or relative rotation between the two parts, including non-removable fixed connections, removable fixed connections, integral connections, and fixed connections through other devices or elements.
In the claims, the specification, and the above drawings of the invention, terms such as "comprise", "have", and their variants mean "including but not limited to".
As depicted in Fig. 1 and Fig. 2, the sparse Bayesian network incremental learning technique based on continuous industrial data provided by the invention comprises the following steps:
Step 1, data preprocessing: the training samples undergo preprocessing such as regularization, ensuring a mean of 0 and a variance of 1. This specifically includes the following steps:
Step 1.1: exclude the abnormal data from the raw data according to prior knowledge. Abnormal data are data recorded while the equipment is in an abnormal state, such as a shutdown state or a fault state. This yields the sample data matrix rawD of size N × p;
Step 1.2: regularize the original sample so that, over the whole sample, every dimension has mean 0 and variance 1. The preprocessed data are denoted D, also of size N × p, and the regularization parameters of the whole sample, its mean and variance, are recorded.
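As a minimal sketch of the step-1 preprocessing (not taken from the patent; the function names `regularize` and `regularize_new` are illustrative), standardizing each column and keeping the overall-sample statistics so that new samples in step 4 can be transformed with the same parameters could look like:

```python
import numpy as np

def regularize(raw: np.ndarray):
    """Standardize each column to mean 0 / variance 1 and keep the
    overall-sample parameters so newly arriving samples (step 4) can be
    transformed with the *same* statistics."""
    mean = raw.mean(axis=0)
    std = raw.std(axis=0)
    std[std == 0] = 1.0          # guard against constant columns
    D = (raw - mean) / std
    return D, mean, std

def regularize_new(raw_s: np.ndarray, mean: np.ndarray, std: np.ndarray):
    """Transform a new sample with the stored overall-sample parameters."""
    return (raw_s - mean) / std
```

The key design point is that the mean and standard deviation are computed once on the whole training sample and reused, rather than being recomputed per window.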
Step 2, structure learning: based on a sliding window, incremental learning of the structure is carried out in the form of sample batches. This specifically includes the following steps:
Step 2.1: in the SBN algorithm, first suppose the number of variables is p and represent the learned directed acyclic graph with a p × p matrix DAG, where element DAG_ij = 1 means there is a directed edge i → j between variables i and j, and otherwise no such edge exists. Second, a p × p matrix P provides path information for the training process: P represents whether a path exists between nodes (the transitive closure of the relation), with element P_ij = 1 meaning there is a path from variable i to variable j, and 0 otherwise. Finally, training learns a (p − 1) × p matrix B, recording the relationship coefficients of each node with respect to its parent nodes. Because no node can be its own parent (there are no self-loops), the coefficient matrix B is equivalent to a p × p coefficient matrix with its diagonal elements deleted and the lower triangle shifted up. Finally, the window size batchSize of the sample batches is set;
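A sketch of the step-2.1 setup (illustrative names; the step size of the window decomposition is an assumption, since the patent only says the batches partially overlap) could be:

```python
import numpy as np

def init_structures(p: int):
    """Allocate the three matrices of step 2.1."""
    DAG = np.zeros((p, p), dtype=int)   # DAG[i, j] = 1  <=>  directed edge i -> j
    P   = np.zeros((p, p), dtype=int)   # P[i, j] = 1    <=>  directed path i -> ... -> j
    B   = np.zeros((p - 1, p))          # column i: coefficients of node i w.r.t. the other p-1 nodes
    return DAG, P, B

def sliding_batches(D: np.ndarray, batch_size: int, step: int):
    """Decompose the N x p sample matrix into partially overlapping training batches."""
    return [D[start:start + batch_size]
            for start in range(0, len(D) - batch_size + 1, step)]
```

With `step < batch_size` the batches overlap, which is what lets each window warm-start from the previous window's coefficient matrix.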
Step 2.2: Bayesian network structure learning. The core of the whole learning process is to optimize, for each variable i (the objective appears as a formula image in the original patent and is reconstructed here from the surrounding description):

    min_{β_i}  ‖x_i − x_{/i} β_i‖₂² + λ₁ ‖β_i‖₁
    s.t.  β_{ji} × P_{ij} = 0,  i, j = 1, 2, …, p,  i ≠ j

where x_i is the sample vector corresponding to variable i, x_{/i} is the sample matrix after removing variable i, and β_i is the i-th column of the relationship-coefficient matrix B, i.e. the column corresponding to variable i. The optimization objective above minimizes the sum of the error between the true values and the coefficient-fitted values and an L1 regularization penalty, where λ₁ controls the number of non-zero values in matrix B, i.e. the sparsity of the network structure: the larger λ₁ is, the fewer non-zero values there are in B and the sparser the network structure. During the iterations, β_{ji} × P_{ij} must be kept equal to 0 to guarantee the acyclicity of the network structure, where P_ij is the current connectivity from variable i to variable j; connectivity is determined with the BFS (Breadth-First Search) method.
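The BFS connectivity check mentioned above can be sketched as follows (illustrative code, not from the patent): before allowing an edge j → i, one asks whether a path i → … → j already exists, since adding j → i would then close a cycle.

```python
from collections import deque

def has_path(adj, src, dst):
    """Breadth-first search on an adjacency matrix: is there a directed
    path src -> ... -> dst?  Used to maintain P so that an edge whose
    addition would close a cycle can be forbidden."""
    p = len(adj)
    seen = [False] * p
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in range(p):
            if adj[u][v] and not seen[v]:
                if v == dst:
                    return True
                seen[v] = True
                queue.append(v)
    return False
```

With this check, keeping P_ij = 1 exactly when `has_path(DAG, i, j)` holds makes the constraint β_{ji} × P_{ij} = 0 enforceable during training.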
The above formula is converted with a Lagrangian into:

    f_i(β_i) = ‖x_i − x_{/i} β_i‖₂² + λ₁ ‖β_i‖₁ + λ₂ Σ_{j≠i} |β_{ji} × P_{ij}|

where x_{/i} represents the sample matrix with variable i removed. Two penalty terms λ₁ and λ₂ appear in the formula: λ₁ is still the L1 regularization penalty, used to control sparsity, while λ₂ drives |β_{ji} × P_{ij}| toward zero, preventing cycles from forming in the directed graph. It can be proven that when λ₂ satisfies a sufficient condition (given as a formula image in the original patent and not reproduced here), no cycle will form during training.
Step 2.3: once the values of λ₁ and λ₂ are given, the relationship-coefficient matrix B can be computed with the BCD (Block Coordinate Descent) algorithm. BCD is a block-wise optimization method: for matrix B, it fixes all remaining columns and updates β_i in turn, i.e. it optimizes each f_i(β_i) successively until a preset convergence condition is met; specifically, an L2-norm change of less than 0.001 is used as the convergence condition. Each individual f_i(β_i) optimization can take a form similar to LASSO (Least Absolute Shrinkage and Selection Operator) optimization and be iterated with the shooting algorithm.
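A minimal sketch of the shooting (coordinate-wise soft-thresholding) LASSO update described above, not taken from the patent: the per-coefficient penalty weights are my own device for folding the λ₂ acyclicity term into the same update, by making the L1 weight very large wherever P forbids an edge.

```python
import numpy as np

def soft_threshold(z: float, t: float) -> float:
    return np.sign(z) * max(abs(z) - t, 0.0)

def shooting_lasso(X, y, lam, penalty, n_iter=100, tol=1e-3):
    """Coordinate-wise ('shooting') solver for
    min_b ||y - X b||^2 + lam * sum_j penalty[j] * |b_j|.
    Setting penalty[j] very large where P forbids the edge drives b_j
    to zero, which is one way to realize the lambda_2 term."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual without coordinate j
            z = X[:, j] @ r
            beta[j] = soft_threshold(z, lam * penalty[j]) / col_sq[j]
        if np.linalg.norm(beta - beta_old) < tol:  # L2-change convergence test, as in step 2.3
            break
    return beta
```

Each pass touches one coordinate at a time, which matches the column-by-column BCD scheme of the patent.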
Finally, the value and the sign of each coefficient are computed separately by a recursive update, continuously approaching the convergence condition: the value of a coefficient is used to judge whether a node and its parent are related, and the sign of a coefficient is used to judge whether the relationship between a node and its parent is positive or negative (the closed-form update formulas appear as images in the original patent and are not reproduced here).
This process is repeated for each subsequent window. Since a coefficient matrix has already been learned from the previous window, only a very small number of iterations is needed to converge to a new coefficient matrix: initial training may need tens of iterations in total to converge, whereas the incremental-update stage converges after only one or two iterations. Then it suffices to insert an all-zero diagonal into matrix B and set the non-zero values to 1 to construct the adjacency matrix DAG formed by 0s and 1s.
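The final conversion from B to the adjacency matrix DAG can be sketched as follows (illustrative code; the threshold `eps` is an assumption for deciding what counts as non-zero):

```python
import numpy as np

def coefficients_to_dag(B: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Re-insert the (all-zero) diagonal that was removed from the
    (p-1) x p coefficient matrix, then binarize:
    DAG[i, j] = 1 iff the coefficient of parent i in the regression
    of node j is non-zero."""
    p = B.shape[1]
    full = np.zeros((p, p))
    for j in range(p):
        col = B[:, j]
        full[:j, j] = col[:j]        # rows above the diagonal
        full[j + 1:, j] = col[j:]    # rows below the diagonal, shifted down
    return (np.abs(full) > eps).astype(int)
```

This is the inverse of the "delete the diagonal and splice the lower triangle upwards" construction described in step 2.1.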
Step 3, parameter learning: using the relationship-coefficient matrix obtained by structure learning and the regularized samples, obtain the mean and variance of the mixed Gaussian distribution of each feature under the current state, i.e. the continuous Bayesian network parameters. This specifically includes the following steps:
Step 3.1: using the directed acyclic graph DAG and the relationship-coefficient matrix B learned in step 2, compute for each node Y the parameters of its Gaussian distribution given its parent nodes X₁, …, X_k: P(Y | X₁, …, X_k) = N(β₀ + β₁X₁ + … + β_kX_k, σ²). For the mean of the Gaussian distribution, the parents of Y and their corresponding coefficients are read from the column of B corresponding to Y, giving the mean β₀ + β₁X₁ + … + β_kX_k.
Step 3.2: the variance σ² of the Gaussian distribution is obtained by estimating the parameters of a likelihood function (given in the original patent as a formula image and not reproduced here). In that expression, X denotes the feature currently examined, x the sample vector of that feature, U_k the sample-value vector of the feature corresponding to the k-th parent of X, and [m] the value of the m-th element of a vector. The goal of parameter learning is to learn θ_{X|U} = {β₀, β₁, …, β_k, σ²}, where {β₀, β₁, …, β_k} is obtained in step 3.1, and the variance σ² is obtained by taking the partial derivative of the likelihood estimation function and setting it to zero, which gives (reconstructed here; this is the standard maximum-likelihood estimate for a linear-Gaussian conditional distribution):

    σ² = Cov_D[X; X] − Σ_{j,k} β_j β_k Cov_D[U_j; U_k]

where

    Cov_D[X; Y] = E_D[X · Y] − E_D[X] E_D[Y]

The mixed Gaussian variance σ² of each feature node is obtained from the above formulas. In actual operation, parameter learning does not need to run continuously; performing it only for the specific nodes involved, when a dependent-probability analysis or causal inference is needed, effectively saves computing resources.
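The step-3 parameter computation can be sketched as follows (illustrative code, not from the patent; it assumes `B_full` is the coefficient matrix already extended to p × p with a zero diagonal, and uses the standard linear-Gaussian maximum-likelihood variance):

```python
import numpy as np

def linear_gaussian_params(D: np.ndarray, B_full: np.ndarray, node: int):
    """Parameters of the linear-Gaussian distribution of `node` given
    its parents: the mean coefficients come straight from column `node`
    of B_full, and
        sigma^2 = Cov[X, X] - sum_{j,k} beta_j beta_k Cov[U_j, U_k]
    (biased sample covariances over the data matrix D)."""
    parents = np.flatnonzero(B_full[:, node])
    betas = B_full[parents, node]
    x = D[:, node]
    U = D[:, parents]
    var = float(np.cov(x, bias=True))        # scalar Cov[X, X]
    for a, ba in enumerate(betas):
        for b, bb in enumerate(betas):
            var -= ba * bb * float(np.cov(U[:, a], U[:, b], bias=True)[0, 1])
    return betas, max(var, 0.0)
```

Because only the requested node's column of B is touched, this matches the remark above that parameter learning can be deferred to the specific nodes under investigation.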
Step 4, network update: newly arriving samples are preprocessed with the regularization parameters of the overall sample, and the sliding window is used to form sample batches of the same size for updating the structure and parameters. This specifically includes the following steps:
Step 4.1: for a newly arriving sample rawS, generate the preprocessed sample S using the mean and variance obtained when the overall sample was regularized in step 1.2, and append S to the end of the sample set;
Step 4.2: move the sliding window forward by one to form a new data batch. A few iterations of structure learning on the new data, starting from the original structure, converge to a stable relationship-coefficient matrix B_{t+1} and produce the corresponding adjacency matrix DAG_{t+1}.
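The step-4 update loop can be sketched as follows (illustrative code; `refine_fn` stands in for the hypothetical warm-started SBN iterations, which the patent does not spell out as code):

```python
import numpy as np

def incremental_update(window, new_raw, mean, std, B, refine_fn):
    """Slide the window by one sample and warm-start structure learning.
    `refine_fn(batch, B_init)` re-runs a few structure-learning
    iterations from the previous coefficient matrix; because the window
    changes by a single row, one or two iterations usually suffice."""
    s = (new_raw - mean) / std            # reuse the *overall* statistics (step 4.1)
    window = np.vstack([window[1:], s])   # drop the oldest row, append the newest
    B_new = refine_fn(window, B)          # warm start from the previous B
    return window, B_new
```

The warm start is the essential design choice: the previous B is already close to the new optimum, so the incremental stage is cheap compared with training from scratch.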
Next, the invention is further elaborated with a specific embodiment:
To clearly demonstrate the implementation process, as shown in Fig. 3, nine heat-supply parameters of a 1000 MW coal-fired heating unit are chosen, using historical data for the whole of 2017 sampled at 1-minute intervals, as the illustration.
1. Data preprocessing
1.1 Rejecting noisy data
Noisy data occur in the raw data in two situations: first, variable attributes that are entirely 0 or whose value never changes; second, shutdown-period records in which the generator power is 0. For convenience of illustration, this example extracts part of the samples and attributes from the raw data and only illustrates the implementation process. Part of the sample data after noise removal is shown in Table 1.
Table 1. Part of the sample data of the example unit after noise removal
1.2 Data regularization
The data after noise removal are regularized to mean 0 and variance 1; the results are shown in Table 2.
Table 2. Regularization results of the example unit's sample data
2. Structure learning
This example selects 1000 data records as training samples; accordingly, a 9 × 9 matrix DAG and an 8 × 9 relationship-coefficient matrix B are set up. After training, the resulting Bayesian network structure is shown in Fig. 3. To express the relationship-coefficient matrix B more clearly, Table 3 shows the matrix extended to 9 × 9 by inserting an all-zero diagonal; the diagonal (inserted) elements are marked "-" in the table. These elements do not exist in the original relationship-coefficient matrix and are shown only to illustrate the relationship coefficients between each node and its parents.
Table 3. Relationship-coefficient matrix learned for the example unit's structure (after extension)
3. Parameter learning
3.1 Mixed Gaussian mean calculation
The mean of the mixed Gaussian distribution is obtained directly from the relationship-coefficient matrix B. Taking the 1st node (feature) as an example, it corresponds to column 0 of the matrix, so its mixed Gaussian mean is expressed as X0 = 0.3157X1 − 0.2702X2 − 0.3049X4 + 0.1792X5; the linear expression of every other node with respect to its parents is obtained analogously.
3.2 Mixed Gaussian variance calculation
The mixed Gaussian variance is calculated with the formula of step 3.2, computing the variance of each node separately. The mixed Gaussian variances of the nine nodes (features) are shown in Table 4.
Table 4. Mixed Gaussian variances learned for the example unit's parameters
4. Network update
4.1 Regularization of streaming samples
The newly arriving sample rawS is regularized; the overall-sample regularization mean and standard deviation are shown in Table 5.
Table 5. Regularization mean and standard deviation for the example unit's new sample
4.2 Incremental update
In this example, the sliding window size is 1000. When new data arrive, the sliding window moves forward by one sample. Owing to the particularity of the algorithm itself, every coefficient must still be recomputed during the incremental update. After recomputation with the new sample, the network structure is not adjusted, and the adjustment of the relationship-coefficient matrix B is shown in Table 6 (Table 6 shows the extended relationship-coefficient matrix).
Table 6. Relationship-coefficient matrix of the example unit after the network update on arrival of the new sample (after extension)
As can be seen from Table 6, the change in the relationship-coefficient matrix is small, because a single sample is tiny relative to the whole sample, so the network structure keeps good stability. Although a single update is not obvious, as streaming data keep arriving, the structure is slowly updated toward the target.
The preferred embodiments of the present invention have been shown and described above. As stated, it should be understood that the invention is not limited to the form disclosed herein, should not be regarded as excluding other embodiments, and can be used in various other combinations, modifications, and environments; it can be changed within the scope of the inventive concept described herein, by the above teachings, or by the skill or knowledge of the related field. All changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the invention shall fall within the protection scope of the appended claims.

Claims (5)

1. A sparse Bayesian network incremental learning method based on continuous industrial data, characterized by comprising:
step 1, data preprocessing: subjecting the training samples to preprocessing such as regularization, ensuring a mean of 0 and a variance of 1;
step 2, structure learning: based on a sliding window, carrying out incremental learning of the structure in the form of sample batches;
step 3, parameter learning: using the relationship-coefficient matrix obtained by structure learning and the regularized samples, obtaining the mean and variance of the mixed Gaussian distribution of each feature under the current state, i.e. the continuous Bayesian network parameters;
step 4, network update: preprocessing newly arriving samples with the regularization parameters of the overall sample, and using the sliding window to form sample batches of the same size for updating the structure and parameters.
2. The sparse Bayesian network incremental learning method based on continuous industrial data according to claim 1, characterized in that step 1 specifically comprises the following steps:
step 1.1: rejecting the abnormal data in the raw data;
step 1.2: regularizing the original samples as a whole, and recording the regularization parameter information of the overall sample, such as its mean and variance.
3. The sparse Bayesian network incremental learning method based on continuous industrial data according to claim 1, characterized in that step 2 specifically comprises the following steps:
step 2.1: according to the size of the sliding window, decomposing the original large-scale sample into several partially overlapping training batches;
step 2.2: initializing parameters such as the penalty terms, which are related to the sample size and dimensionality and regulate the sparsity of the network structure;
step 2.3: using the sparse Bayesian network (Sparse Bayesian Network, SBN) algorithm, searching for the parent nodes of each node in turn, with a breadth-first-search method added to avoid creating cycles; after several iterations, the parent nodes and the corresponding relationship-coefficient matrix stabilize, and the network structure under the current data is obtained;
step 2.4: moving the sliding window, and fine-tuning the structure with the network structure and relationship-coefficient matrix generated in steps 2.2 and 2.3 as the initial state.
4. The sparse Bayesian network incremental learning method based on continuous industrial data according to claim 1, characterized in that the specific calculation process of step 3 is as follows:
step 3.1: from the relationship-coefficient matrix obtained by structure learning, obtaining the mean of each node's mixed Gaussian distribution given its parent nodes;
step 3.2: calculating the covariance of every combination of parent nodes;
step 3.3: using the parent nodes' relationship coefficients as weights, accumulating the covariances to obtain the variance of the mixed Gaussian distribution.
5. The sparse Bayesian network incremental learning method based on continuous type industrial data according to claim 1, characterized in that step 4 specifically comprises the following steps:
Step 4.1: apply regularization preprocessing to the new samples using the mean and variance of the population sample;
Step 4.2: form a new window from the new samples and part of the old samples according to their temporal order, and update the structure and parameters on the basis of the original network.
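The incremental step of claim 5 can be sketched as: standardize the incoming batch with the *population* statistics recorded in step 1.2, then splice it onto the tail of the previous window so the update sees both old and new data. The function name and the `overlap` parameter are illustrative:

```python
import numpy as np

def incremental_window(old_window, new_samples, mu, var, overlap):
    """Step 4.1: regularize new samples with the recorded population
    mean/variance; step 4.2: form the new window from the tail of the
    old window plus the regularized new samples, in temporal order."""
    z_new = (new_samples - mu) / np.sqrt(var + 1e-12)
    return np.vstack([old_window[-overlap:], z_new])
```

The returned window would then be fed back into the structure-learning step, with the previously learned network and coefficient matrix as the initial state, so only structure and parameter fine-tuning is performed rather than learning from scratch.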
CN201811009985.9A 2018-08-31 2018-08-31 Sparse Bayesian network incremental learning method based on continuous type industrial data Pending CN109190702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811009985.9A CN109190702A (en) 2018-08-31 2018-08-31 Sparse Bayesian network incremental learning method based on continuous type industrial data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811009985.9A CN109190702A (en) 2018-08-31 2018-08-31 Sparse Bayesian network incremental learning method based on continuous type industrial data

Publications (1)

Publication Number Publication Date
CN109190702A true CN109190702A (en) 2019-01-11

Family

ID=64917630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811009985.9A Pending CN109190702A (en) 2018-08-31 2018-08-31 Sparse Bayesian network incremental learning method based on continuous type industrial data

Country Status (1)

Country Link
CN (1) CN109190702A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020191722A1 (en) * 2019-03-28 2020-10-01 日本电气株式会社 Method and system for determining causal relationship, and computer program product
US11537910B2 (en) 2019-03-28 2022-12-27 Nec Corporation Method, system, and computer program product for determining causality
CN110928922A (en) * 2019-11-27 2020-03-27 开普云信息科技股份有限公司 Public policy analysis model deployment method and system based on big data mining

Similar Documents

Publication Publication Date Title
He et al. Variational quantum compiling with double Q-learning
Ortiz et al. Dimensional synthesis of mechanisms using differential evolution with auto-adaptive control parameters
CN107357757B (en) Algebraic application problem automatic solver based on deep reinforcement learning
CN109190702A (en) Sparse Bayesian network incremental learning method based on continuous type industrial data
CN117077671B (en) Interactive data generation method and system
Lu et al. A new hybrid algorithm for bankruptcy prediction using switching particle swarm optimization and support vector machines
Rodríguez-Fdez et al. An instance selection algorithm for regression and its application in variance reduction
CN110502739A (en) The building of the machine learning model of structuring input
CN108536844B (en) Text-enhanced network representation learning method
Li et al. A novel adaptive weight algorithm based on decomposition and two-part update strategy for many-objective optimization
CN113988418A (en) Visualization method for energy load prediction
Liu et al. Gradient‐Sensitive Optimization for Convolutional Neural Networks
CN111311000B (en) User consumption behavior prediction model training method, device, equipment and storage medium
CN116415177A (en) Classifier parameter identification method based on extreme learning machine
Nitti Hybrid probabilistic logic programming
CN111813837B (en) Method for intelligently detecting data quality
CN113867724A (en) Method and system for automatically generating GUI (graphical user interface) code, server and medium
Goodwin et al. A vector quantization approach to scenario generation for stochastic NMPC
McFall An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries
Briol et al. Rejoinder: Probabilistic integration: A role in statistical computation?
Dou et al. Research on Power Network Regulation Mechanism Based on Knowledge Mapping
Sun et al. Remote supervision relation extraction method of power safety regulations knowledge graph based on ResPCNN-ATT
US20240220688A1 (en) Numerical Simulation Method By Deep Learning And Associated Recurrent Neural Network
Nishino et al. A sparse parameter learning method for probabilistic logic programs
CN111506962B (en) Complex system reliability calculation method based on BN and UGF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111