CN113222046B - Feature alignment self-encoder fault classification method based on filtering strategy - Google Patents
Feature alignment self-encoder fault classification method based on filtering strategy
- Publication number
- CN113222046B (application CN202110575512.0A)
- Authority
- CN
- China
- Prior art keywords
- encoder
- self
- labeled
- model
- unlabeled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N20/00 — Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a fault classification method of a feature alignment self-encoder based on a filtering strategy. First, labeled data are used to pre-train a stacked self-encoder for reconstruction, and unlabeled data whose reconstruction error is obviously larger than that of the labeled data are filtered out. A feature alignment self-encoder classification model is then constructed with the labeled data and the filtered unlabeled data. A cross entropy training loss function based on the Sinkhorn distance is designed for the feature alignment self-encoder classification model; it allows the model to use both labeled and unlabeled data in the fine-tuning stage, which not only enables deep mining of the data information but also improves the generalization ability of the network model. Meanwhile, the introduction of the filtering strategy significantly improves the robustness of the model.
Description
Technical Field
The invention belongs to the field of industrial process control, and particularly relates to a feature alignment self-encoder fault classification method based on a filtering strategy.
Background
Modern industrial processes are developing towards large scale and high complexity. Ensuring the safety of the production process is one of the key problems in the field of industrial process control. Fault diagnosis is a key technology for guaranteeing the safe operation of an industrial process and is of great significance for improving product quality and production efficiency. Fault classification is one link of fault diagnosis: by learning from historical fault information, it automatically identifies and judges fault types, thereby helping production personnel quickly locate and repair faults and avoid further losses. With the continuous development of modern measurement means, a great deal of data has been accumulated in industrial production. These data describe the actual conditions of each stage of production, provide valuable resources for reading, analyzing and optimizing the manufacturing process, and are the source of intelligence for realizing intelligent manufacturing. How to reasonably utilize the data accumulated in the manufacturing process to establish data-driven intelligent analysis models that better serve intelligent decision making and quality control is therefore a topic of great concern in industry. Data-driven fault classification methods use intelligent analysis technologies such as machine learning and deep learning to deeply mine, model and analyze industrial data, providing a data-driven fault diagnosis mode for users and industries. Most existing data-driven fault classification methods are supervised learning methods; when sufficient labeled data are available, such models can achieve excellent performance. However, it is difficult to obtain large amounts of labeled data in certain industrial scenarios.
Thus, there is often a large amount of unlabeled data and only a small amount of labeled data. In order to effectively utilize the unlabeled data to improve the classification performance of the model, fault classification methods based on semi-supervised learning have gradually received attention. However, most existing semi-supervised fault classification methods rely on a data assumption: semi-supervised methods based on statistical learning, graph-based semi-supervised methods, and methods that label the unlabeled data via co-training, self-training and the like all assume that the labeled and unlabeled samples come from the same distribution. This assumption has its limitation. Data collected from an industrial process often contain a large amount of noise and abnormal points, and the working conditions may drift; labeled data are usually screened and annotated manually by experts in the process field, while unlabeled samples are not screened, so abnormal data whose distribution differs from that of the labeled data are very likely to occur among the unlabeled data. When the distribution of the unlabeled data is inconsistent with that of the labeled data, the performance of a semi-supervised algorithm degrades and may even fall below that of a supervised algorithm trained only on the labeled data. Therefore, it is desirable to provide a robust semi-supervised learning method, so that the model can still accurately classify faults when the labeled and unlabeled data have inconsistent distributions.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a feature alignment self-encoder fault classification method based on a filtering strategy, which comprises the following steps:
a feature alignment self-encoder fault classification method based on a filtering strategy comprises the following steps:
step one: collecting normal working condition data and various fault data of an industrial process to obtain a training data set for modeling: a labeled sample set S_l = {(x_1, y_1), …, (x_m, y_m)} and an unlabeled sample set S_u = {x_1, …, x_n}, wherein x represents an input sample, y represents a sample label, m represents the number of labeled samples, and n represents the number of unlabeled samples;
step two: constructing a stacking self-encoder model for reconstruction, and training the stacking self-encoder model by using a labeled sample set;
step three: filtering the label-free sample set by using the trained stacked self-encoder model, and constructing a feature alignment self-encoder classification model;
step four: and acquiring field working data, inputting the feature alignment self-encoder classification model, and outputting a corresponding fault category.
Further, the second step is specifically divided into the following sub-steps:
(2.1) constructing a stacked self-encoder model for reconstruction, comprising a multi-layer encoder and a decoder, wherein the output of the model is the reconstruction of the input, and the calculation formulas are as follows:

z_k = f(w_e^k · z_{k-1} + b_e^k), z_0 = x, k = 1, …, K (1)

x̂ = g(w_d · z_K + b_d) (2)

wherein x represents the input, z_k represents the extracted k-th layer features, k represents the k-th layer of the stacked self-encoder, {w_e^k, b_e^k} and {w_d, b_d} respectively represent the weight vectors and bias vectors of the encoder and the decoder, f and g are activation functions, and x̂ represents the reconstruction of the input by the model;
(2.2) training the stacked self-encoder model on the labeled sample set constructed in step one with a stochastic gradient descent algorithm, wherein the model training loss function is defined as the reconstruction error of the input, represented by the following formula:

L_rec = (1/m) · Σ_{i=1}^{m} ||x_i^l − x̂_i^l||² (3)

wherein x_i^l represents the i-th labeled input sample and x̂_i^l represents its reconstruction by the stacked self-encoder;
(2.3) calculating the reconstruction errors E_l = [e_1^l, …, e_m^l] of the labeled samples by using the trained stacked self-encoder model, wherein the reconstruction error of a single sample is calculated with reference to the following formula:

e_i^l = ||x_i^l − x̂_i^l||² (4)
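As an illustration of steps (2.1)–(2.3), the sketch below trains a minimal single-layer linear auto-encoder by gradient descent on synthetic data and then computes the per-sample reconstruction errors of formula (4). It is a toy example under stated assumptions (linear activations, hand-derived gradients, made-up data), not the patent's multi-layer stacked network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "labeled" process data: 200 samples, 8 variables near a 3-D subspace.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8)) + 0.05 * rng.normal(size=(200, 8))

d_in, d_hid, lr = 8, 3, 0.01
W_e = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoder weights
W_d = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoder weights

def reconstruct(X):
    z = X @ W_e          # encoder: z = W_e x (linear for hand-derivable gradients)
    return z @ W_d       # decoder: x_hat = W_d z

losses = []
for _ in range(500):
    Z = X @ W_e
    X_hat = Z @ W_d
    R = X_hat - X                              # residual
    losses.append(np.mean(np.sum(R**2, axis=1)))
    G = 2 * R / X.shape[0]                     # gradient of the mean squared error
    grad_Wd = Z.T @ G
    grad_We = X.T @ (G @ W_d.T)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

# per-sample reconstruction errors e_i = ||x_i - x_hat_i||^2, as in formula (4)
errors = np.sum((reconstruct(X) - X) ** 2, axis=1)
```

The decreasing training loss mirrors formula (3), and `errors` is the vector E_l used later to estimate the filtering threshold.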
further, the third step is specifically divided into the following sub-steps:
(3.1) based on the reconstruction errors E_l of the labeled samples, estimating the χ² distribution parameters g and h:

g·h = mean(E_l) (5)

2g²·h = variance(E_l) (6)

(3.2) calculating the detection statistic used for filtering abnormal unlabeled samples: according to the χ² distribution parameters g and h, the threshold q of the reconstruction error at a given confidence level is obtained by consulting the χ² distribution table;
(3.3) calculating the reconstruction errors E_u = [e_1^u, …, e_n^u] of the unlabeled samples, wherein the reconstruction error of a single sample is calculated as in formula (4);
(3.4) filtering out the samples whose reconstruction error is larger than the threshold q in the unlabeled data set to obtain a filtered unlabeled sample set S_uf = {x_1^u, …, x_r^u}, wherein r is the number of remaining unlabeled samples;
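The moment matching and threshold lookup of steps (3.1)–(3.4) can be sketched as follows. Two assumptions of this sketch are not from the patent: the χ² quantile is approximated with the Wilson–Hilferty formula instead of consulting a distribution table, and the reconstruction errors are synthetic:

```python
import numpy as np
from statistics import NormalDist

def chi2_threshold(errors_labeled, confidence=0.99):
    """Estimate the scaled-chi^2 parameters via g*h = mean and 2*g^2*h = variance
    (formulas (5)-(6)) and return the error threshold q at the given confidence."""
    m, v = np.mean(errors_labeled), np.var(errors_labeled)
    g = v / (2.0 * m)          # scale parameter
    h = 2.0 * m**2 / v         # degrees of freedom
    z = NormalDist().inv_cdf(confidence)
    # Wilson-Hilferty approximation of the chi^2 quantile (replaces a table lookup)
    chi2_q = h * (1.0 - 2.0 / (9.0 * h) + z * np.sqrt(2.0 / (9.0 * h))) ** 3
    return g * chi2_q

rng = np.random.default_rng(1)
e_labeled = rng.chisquare(df=5, size=1000) * 0.3              # labeled-sample errors
e_unlabeled = np.concatenate([rng.chisquare(df=5, size=900) * 0.3,
                              rng.chisquare(df=5, size=100) * 3.0])  # 10% anomalies
q = chi2_threshold(e_labeled, confidence=0.99)
kept = e_unlabeled[e_unlabeled <= q]   # filtered unlabeled set S_uf, step (3.4)
```

With these synthetic errors the threshold lands near the 99th percentile of the labeled-error distribution, so most of the strongly corrupted samples are rejected while normal unlabeled samples pass.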
(3.5) constructing a feature alignment self-encoder classification model, and training it with the labeled sample set and the filtered unlabeled sample set. The training process comprises unsupervised pre-training and supervised fine tuning. In the unsupervised pre-training stage, a stacked self-encoder is trained with the labeled samples and the filtered unlabeled samples together, the method being the same as steps (2.1)–(2.3); the supervised fine tuning adds a fully connected neural network layer on top of the pre-trained stacked self-encoder as the category output, so as to obtain the deep extracted features and class labels of the labeled samples and the deep extracted features and predicted class label outputs of the unlabeled samples, with the specific calculation formulas:

z_i^l = f_SAE(x_i^l), ŷ_i^l = softmax(w_c · z_i^l + b_c) (7)

z_j^u = f_SAE(x_j^u), ŷ_j^u = softmax(w_c · z_j^u + b_c) (8)

wherein z_i^l represents the deep extracted features of the i-th labeled sample, ŷ_i^l represents the predicted class label of the i-th labeled sample, {w_c, b_c} represent the weight vector and bias vector of the fully connected neural network layer, z_j^u represents the deep extracted features of the j-th unlabeled sample, and ŷ_j^u represents its predicted class label output;
(3.6) assuming the number of classes is F, obtaining the deep extracted features Z_f^l and Z_f^u of the labeled samples and unlabeled samples corresponding to each class f ∈ F;
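A minimal sketch of the fine-tuning head of (3.5) and the per-class grouping of (3.6), assuming a generic softmax output layer over encoder features; the random stand-in features and weights are illustrative only, and the patent's actual encoder is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(a):
    a = a - a.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

n_classes, d_feat = 6, 3
w_c = rng.normal(scale=0.1, size=(d_feat, n_classes))  # fully connected layer weights
b_c = np.zeros(n_classes)                              # and bias

Z = rng.normal(size=(10, d_feat))      # stand-in deep features from the stacked self-encoder
probs = softmax(Z @ w_c + b_c)         # class probability outputs, in the style of (7)-(8)
y_hat = probs.argmax(axis=1)           # predicted class labels

# group the deep features by (predicted) class, as needed for the alignment in (3.6)
feats_by_class = {f: Z[y_hat == f] for f in range(n_classes)}
```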
(3.7) calculating the training loss function of the feature alignment self-encoder classification model with the following formula:

L = crossentropy(y^l, ŷ^l) + α · Σ_{f∈F} S(Z_f^l, Z_f^u) + β · Ω(w) (9)

wherein crossentropy represents the cross entropy loss function, S(·,·) represents the Sinkhorn distance function used for measuring the distance between the feature distributions of labeled and unlabeled data belonging to the same class, α is the weight of the Sinkhorn distance, Ω(w) is the L2 regularization penalty term of the network parameters, and β is its weight.
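The Sinkhorn distance S(·,·) used in the loss can be computed with Cuturi-style entropy-regularized optimal transport. The sketch below assumes uniform sample weights and scales the regularizer by the largest pairwise cost so that the Gibbs kernel does not underflow — implementation conveniences of this sketch, not details prescribed by the patent:

```python
import numpy as np

def sinkhorn_distance(A, B, reg=0.1, n_iter=200):
    """Entropy-regularized transport cost between feature clouds A (p x d)
    and B (q x d), each with uniform weights."""
    p, q = len(A), len(B)
    a, b = np.full(p, 1.0 / p), np.full(q, 1.0 / q)
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # squared Euclidean costs
    K = np.exp(-C / (reg * C.max()))   # Gibbs kernel; reg is relative to the cost scale
    u = np.ones(p)
    for _ in range(n_iter):            # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]    # (approximate) optimal transport plan
    return float((P * C).sum())

rng = np.random.default_rng(3)
F_l = rng.normal(size=(30, 4))                       # labeled features of one class
F_u_near = F_l + 0.01 * rng.normal(size=(30, 4))     # nearly identical distribution
F_u_far = F_l + 5.0                                  # strongly shifted distribution
d_near = sinkhorn_distance(F_l, F_u_near)
d_far = sinkhorn_distance(F_l, F_u_far)
```

A shifted feature cloud yields a much larger distance than a nearly identical one, which is exactly the signal the alignment term in formula (9) penalizes.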
The invention has the following beneficial effects:
the method comprises the steps of firstly carrying out filtering operation on abnormal non-label data inconsistent with the distribution of labeled samples, and then constructing a semi-supervised classification model of the feature alignment self-encoder by using the labeled data and the filtered non-label data. The method improves the robustness of the model and reduces the problem of performance reduction of the classification model caused by inconsistent distribution of the samples. In addition, the generalization ability and the classification performance of the semi-supervised deep learning network model are improved by designing a new training loss function with good generalization ability.
Drawings
FIG. 1 is a schematic diagram of a stacked self-encoder;
FIG. 2 is a TE process flow diagram;
FIG. 3 is a schematic diagram of data log reconstruction errors;
FIG. 4 is a diagram illustrating classification accuracy for different algorithms.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, and the objects and effects of the present invention will become more apparent, it being understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The invention discloses a feature alignment self-encoder fault classification method based on a filtering strategy. First, a stacked self-encoder is pre-trained for reconstruction using the labeled data, and a reconstruction-error threshold q is estimated from the labeled samples. Then, abnormal unlabeled data whose reconstruction error is obviously larger than q are filtered out. Further, a feature alignment self-encoder classification model is constructed using the labeled data set and the filtered unlabeled data set. A cross entropy training loss function based on the Sinkhorn distance is designed for this classification model; it allows the model to use both labeled and unlabeled data in the fine-tuning stage, which not only enables deep mining of the data information but also improves the generalization ability of the network model. Meanwhile, the introduction of the filtering strategy significantly improves the robustness of the model.
The method comprises the following specific steps:
step one: collecting normal working condition data and various fault data of an industrial process to obtain a training data set for modeling: a labeled sample set S_l = {(x_1, y_1), …, (x_m, y_m)} and an unlabeled sample set S_u = {x_1, …, x_n}, wherein x represents an input sample, y represents a sample label, m represents the number of labeled samples, and n represents the number of unlabeled samples;
step two: constructing a stacking self-encoder model for reconstruction, and training the stacking self-encoder model by utilizing a labeled sample set; the method is specifically divided into the following substeps:
(2.1) constructing a stacked self-encoder model for reconstruction, comprising a multi-layer encoder and a decoder, wherein the output of the model is the reconstruction of the input, and the calculation formulas are as follows:

z_k = f(w_e^k · z_{k-1} + b_e^k), z_0 = x, k = 1, …, K (1)

x̂ = g(w_d · z_K + b_d) (2)

wherein x represents the input, z_k represents the extracted k-th layer features, k represents the k-th layer of the stacked self-encoder, {w_e^k, b_e^k} and {w_d, b_d} respectively represent the weight vectors and bias vectors of the encoder and the decoder, f and g are activation functions, and x̂ represents the reconstruction of the input by the model;
(2.2) training the stacked self-encoder model on the labeled sample set constructed in step one with a stochastic gradient descent algorithm, wherein the model training loss function is defined as the reconstruction error of the input, represented by the following formula:

L_rec = (1/m) · Σ_{i=1}^{m} ||x_i^l − x̂_i^l||² (3)

wherein x_i^l represents the i-th labeled input sample and x̂_i^l represents its reconstruction by the stacked self-encoder;
(2.3) calculating the reconstruction errors E_l = [e_1^l, …, e_m^l] of the labeled samples by using the trained stacked self-encoder model, wherein the reconstruction error of a single sample is calculated with reference to the following formula:

e_i^l = ||x_i^l − x̂_i^l||² (4)
step three: filtering the label-free sample set by using the trained stacked self-encoder model, and constructing a feature alignment self-encoder classification model;
the third step is specifically divided into the following substeps:
(3.1) based on the reconstruction errors E_l of the labeled samples, estimating the χ² distribution parameters g and h:

g·h = mean(E_l) (5)

2g²·h = variance(E_l) (6)

(3.2) calculating the detection statistic used for filtering abnormal unlabeled samples: according to the χ² distribution parameters g and h, the threshold q of the reconstruction error at a given confidence level is obtained by consulting the χ² distribution table;
(3.3) calculating the reconstruction errors E_u = [e_1^u, …, e_n^u] of the unlabeled samples, wherein the reconstruction error of a single sample is calculated as in formula (4);
(3.4) filtering out the samples whose reconstruction error is larger than the threshold q in the unlabeled data set to obtain a filtered unlabeled sample set S_uf = {x_1^u, …, x_r^u}, wherein r is the number of remaining unlabeled samples;
(3.5) constructing a feature alignment self-encoder classification model, and training it with the labeled sample set and the filtered unlabeled sample set. The training process can be divided into unsupervised pre-training and supervised fine tuning.
In the unsupervised pre-training stage, the labeled samples and the filtered unlabeled samples are used together to train a stacked self-encoder: a stacked self-encoder model for reconstruction is constructed, and then trained with both the labeled and the unlabeled samples.
The supervised fine tuning adds a fully connected neural network layer on top of the pre-trained stacked self-encoder as the category output, so as to obtain the deep extracted features and class labels of the labeled samples and the deep extracted features and predicted class label outputs of the unlabeled samples, with the specific calculation formulas:

z_i^l = f_SAE(x_i^l), ŷ_i^l = softmax(w_c · z_i^l + b_c) (7)

z_j^u = f_SAE(x_j^u), ŷ_j^u = softmax(w_c · z_j^u + b_c) (8)

wherein z_i^l represents the deep extracted features of the i-th labeled sample, ŷ_i^l represents the predicted class label of the i-th labeled sample, {w_c, b_c} represent the weight vector and bias vector of the fully connected neural network layer, z_j^u represents the deep extracted features of the j-th unlabeled sample, and ŷ_j^u represents its predicted class label output;
(3.6) assuming the number of classes is F, obtaining the deep extracted features Z_f^l and Z_f^u of the labeled samples and unlabeled samples corresponding to each class f ∈ F;
(3.7) calculating the training loss function of the feature alignment self-encoder classification model with the following formula:

L = crossentropy(y^l, ŷ^l) + α · Σ_{f∈F} S(Z_f^l, Z_f^u) + β · Ω(w) (9)

wherein crossentropy represents the cross entropy loss function; S(·,·) represents the Sinkhorn distance function used for measuring the distance between the feature distributions of labeled data and unlabeled data belonging to the same class; α is the weight of the Sinkhorn distance; Ω(w) is the L2 regularization penalty term of the network parameters; and β is its weight. The objective of the newly designed training loss function based on the Sinkhorn distance is to align, in the fine-tuning stage, the features extracted by the stacked self-encoder for labeled and unlabeled data belonging to the same class, so that their distributions become close.
Step four: and acquiring field working data, inputting the feature alignment self-encoder classification model, and outputting a corresponding fault category.
The validity of the method of the invention is verified below with a specific industrial process example. All data were collected on the Tennessee Eastman (TE) chemical process simulation platform, which, as a typical chemical process research object, is widely used in the fields of fault diagnosis and fault classification. The TE process is illustrated in FIG. 2; its main equipment includes a continuous stirred-tank reactor, a gas-liquid separation column, a centrifugal compressor, a dephlegmator and a reboiler. The modeled process data contain 16 process variables and 10 fault categories; the detailed process variable and fault information descriptions are shown in Tables 1 and 2, respectively.
TABLE 1
Numbering | Name of variable | Numbering | Name of variable
---|---|---|---
1 | A feed flow | 11 | Product separator temperature
2 | D feed flow | 13 | Product separator pressure
3 | E feed flow | 14 | Product separator bottoms flow
4 | Total feed flow | 16 | Stripper pressure
5 | Recycle flow | 18 | Stripper temperature
6 | Reactor feed flow | 19 | Stripper flow
9 | Reactor temperature | 21 | Reactor cooling water outlet temperature
10 | Purge rate | 22 | Condenser cooling water outlet temperature
TABLE 2
Fault numbering | Description | Fault type
---|---|---
1 | A/C feed flow ratio change (stream 4) | Step change
5 | Condenser cooling water inlet temperature change | Step change
7 | Material C pressure loss (stream 4) | Step change
10 | Material C temperature change (stream 4) | Random variation
14 | Reactor cooling water valve | Sticking
The collected data contain a total of 3600 samples from 6 classes, with 600 samples per class. The data were divided into training data (300 labeled and 3000 unlabeled samples) and test data (300 labeled samples). In order to simulate the situation in which the distribution of the unlabeled data is inconsistent with that of the labeled data, Gaussian noise was added to a certain proportion of the original unlabeled data.
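The noise-injection protocol described above can be sketched as follows; the stand-in data and the noise standard deviation are assumptions of this sketch, since the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(4)
X_unlabeled = rng.normal(size=(3000, 16))   # stand-in for the 3000 unlabeled samples

def corrupt_fraction(X, ratio, noise_std=3.0, rng=rng):
    """Add Gaussian noise to a given fraction of the unlabeled samples,
    simulating distribution inconsistency with the labeled data."""
    X = X.copy()
    n_bad = int(round(ratio * len(X)))
    idx = rng.choice(len(X), size=n_bad, replace=False)   # samples to corrupt
    X[idx] += rng.normal(scale=noise_std, size=(n_bad, X.shape[1]))
    return X, idx

# e.g. a 30% distribution-inconsistency ratio
X_30, idx = corrupt_fraction(X_unlabeled, ratio=0.30)
```

Sweeping `ratio` over 0–0.9 reproduces the experimental axis of FIG. 4, where accuracy is compared at increasing inconsistency ratios.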
Fig. 3 shows the log reconstruction errors, under the stacked self-encoder reconstruction model, of the labeled data, the normal unlabeled data, and the abnormal unlabeled data that do not conform to the distribution of the labeled data. As is apparent from Fig. 3, the reconstruction errors of the labeled data and the normal unlabeled data are relatively close, while the reconstruction error of the abnormal unlabeled data is significantly larger. This is the basis on which the filtering strategy of the feature alignment self-encoder detects abnormally distributed unlabeled data.
Fig. 4 shows the classification accuracy of three algorithms under different ratios of distribution inconsistency between the labeled and unlabeled data. MLP is a supervised neural network classification model; Tri-training is a neural network classification model obtained by co-training; Filtered FA-SAE is the feature alignment self-encoder classification model based on the filtering strategy provided by the invention. Tri-training and Filtered FA-SAE are semi-supervised deep learning network models. As can be seen from the figure, the classification performance of the semi-supervised algorithms is in most cases superior to that of the supervised algorithm. As the ratio of distribution inconsistency between the labeled and unlabeled data increases, however, the performance of the semi-supervised algorithms degrades; when the inconsistency reaches 90%, the classification accuracy of the Tri-training method even falls below that of the supervised MLP method. In contrast, the Filtered FA-SAE method provided by the invention achieves better classification performance than both the MLP and Tri-training methods under all degrees of distribution inconsistency.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art may still make various changes to the form and details of the embodiments and substitute equivalents for elements thereof. All modifications, equivalents and the like that come within the spirit and principle of the invention are intended to be included within its scope.
Claims (2)
1. A feature alignment self-encoder fault classification method based on a filtering strategy is characterized by comprising the following steps:
step one: collecting normal working condition data and various fault data of an industrial process to obtain a training data set for modeling: a labeled sample set S_l = {(x_1, y_1), …, (x_m, y_m)} and an unlabeled sample set S_u = {x_1, …, x_n}, wherein x represents an input sample, y represents a sample label, m represents the number of labeled samples, and n represents the number of unlabeled samples;
step two: constructing a stacking self-encoder model for reconstruction, and training the stacking self-encoder model by utilizing a labeled sample set;
step three: filtering the label-free sample set by using the trained stacked self-encoder model, and constructing a feature alignment self-encoder classification model;
the third step is specifically divided into the following substeps:
(3.1) based on the reconstruction errors E_l of the labeled samples, estimating the χ² distribution parameters g and h:

g·h = mean(E_l)

2g²·h = variance(E_l)

(3.2) calculating the detection statistic used for filtering abnormal unlabeled samples: according to the χ² distribution parameters g and h, the threshold q of the reconstruction error at a given confidence level is obtained by consulting the χ² distribution table;
(3.3) calculating the reconstruction errors E_u = [e_1^u, …, e_n^u] of the unlabeled samples, wherein the reconstruction error of a single sample is calculated as follows:

e_j^u = ||x_j^u − x̂_j^u||²

(3.4) filtering out the samples whose reconstruction error is larger than the threshold q in the unlabeled data set to obtain a filtered unlabeled sample set S_uf = {x_1^u, …, x_r^u}, wherein r is the number of remaining unlabeled samples;
(3.5) constructing a feature alignment self-encoder classification model, and training it with the labeled sample set and the filtered unlabeled sample set; the training process comprises unsupervised pre-training and supervised fine tuning; in the unsupervised pre-training stage, a stacked self-encoder is trained with the labeled samples and the filtered unlabeled samples; the supervised fine tuning adds a fully connected neural network layer on top of the pre-trained stacked self-encoder as the category output, so as to obtain the deep extracted features and class labels of the labeled samples and the deep extracted features and predicted class label outputs of the unlabeled samples, with the specific calculation formulas:

z_i^l = f_SAE(x_i^l), ŷ_i^l = softmax(w_c · z_i^l + b_c)

z_j^u = f_SAE(x_j^u), ŷ_j^u = softmax(w_c · z_j^u + b_c)

wherein z_i^l represents the deep extracted features of the i-th labeled sample, ŷ_i^l represents the predicted class label of the i-th labeled sample, {w_c, b_c} represent the weight vector and bias vector of the fully connected neural network layer, z_j^u represents the deep extracted features of the j-th unlabeled sample, and ŷ_j^u represents its predicted class label output;
(3.6) the number of classes being F, obtaining the deep extracted features Z_f^l and Z_f^u of the labeled samples and unlabeled samples corresponding to each class f ∈ F;
(3.7) calculating the training loss function of the feature alignment self-encoder classification model with the following formula:

L = crossentropy(y^l, ŷ^l) + α · Σ_{f∈F} S(Z_f^l, Z_f^u) + β · Ω(w)

wherein crossentropy represents the cross entropy loss function, S(·,·) represents the Sinkhorn distance function used for measuring the distance between the feature distributions of labeled and unlabeled data belonging to the same class, α is the weight of the Sinkhorn distance, Ω(w) is the L2 regularization penalty term of the network parameters, and β is its weight;
step four: and acquiring field working data, inputting the feature alignment self-encoder classification model, and outputting a corresponding fault category.
2. The method for classifying the fault of the feature-aligned self-encoder based on the filtering strategy as claimed in claim 1, wherein the second step is specifically divided into the following sub-steps:
(2.1) constructing a stacked self-encoder model for reconstruction, comprising a multi-layer encoder and a decoder, wherein the output of the model is the reconstruction of the input, and the calculation formula is as follows:
wherein x represents the input, zkRepresenting the extracted k-th layer features, k representing the k-th layer of the stacked self-encoder,andweight vectors and bias vectors representing the encoder and decoder, respectively;
(2.2) training the stacked self-encoder model on the labeled sample set constructed in step one with the stochastic gradient descent algorithm, wherein the model training loss function is defined as the input reconstruction error, given by the following formula:

L_rec = (1 / N_l) · Σ_{i=1}^{N_l} ||x_i^l − x̂_i^l||²

wherein x_i^l represents the ith labeled input sample, x̂_i^l represents its reconstruction by the stacked self-encoder, and N_l is the number of labeled samples;
(2.3) calculating the reconstruction error of each labeled sample with the trained stacked self-encoder model, wherein the reconstruction error of a single sample is calculated by the following formula:

e_i = ||x_i^l − x̂_i^l||²
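The per-sample reconstruction error, and a filtering rule built on it, can be sketched as follows; the quantile-based threshold is a hypothetical choice for illustration and not necessarily the exact rule claimed by the patent:

```python
import numpy as np

def reconstruction_errors(X, X_hat):
    """Per-sample squared reconstruction error e_i = ||x_i - x_hat_i||^2."""
    return ((X - X_hat) ** 2).sum(axis=1)

def filter_by_error(X_unlabeled, err_unlabeled, err_labeled, q=0.95):
    # Hypothetical filtering rule: keep unlabeled samples whose reconstruction
    # error does not exceed the q-quantile of the labeled-sample errors,
    # discarding abnormal samples poorly explained by the trained model.
    threshold = np.quantile(err_labeled, q)
    keep = err_unlabeled <= threshold
    return X_unlabeled[keep], keep

rng = np.random.default_rng(2)
err_l = rng.random(100) * 0.1            # labeled-sample errors (simulated)
X_u = rng.random((5, 3))                 # unlabeled samples (simulated)
err_u = np.array([0.01, 0.02, 5.0, 0.03, 0.04])
kept, mask = filter_by_error(X_u, err_u, err_l)
print(mask)                              # the outlier with error 5.0 is rejected
```

Only unlabeled samples that the reconstruction model explains about as well as the labeled training data pass the filter and enter the feature-alignment stage.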
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110575512.0A CN113222046B (en) | 2021-05-26 | 2021-05-26 | Feature alignment self-encoder fault classification method based on filtering strategy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113222046A CN113222046A (en) | 2021-08-06 |
CN113222046B true CN113222046B (en) | 2022-06-24 |
Family
ID=77098593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110575512.0A Active CN113222046B (en) | 2021-05-26 | 2021-05-26 | Feature alignment self-encoder fault classification method based on filtering strategy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113222046B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114298220B (en) * | 2021-12-28 | 2022-09-16 | 浙江大学 | Fault classification method based on context attention dynamic feature extractor |
CN114819108B (en) * | 2022-06-22 | 2022-10-04 | 中国电力科学研究院有限公司 | Fault identification method and device for comprehensive energy system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026058A (en) * | 2019-12-16 | 2020-04-17 | 浙江大学 | Semi-supervised deep learning fault diagnosis method based on Watherstein distance and self-encoder |
CN112183581A (en) * | 2020-09-07 | 2021-01-05 | 华南理工大学 | Semi-supervised mechanical fault diagnosis method based on self-adaptive migration neural network |
Non-Patent Citations (2)
Title |
---|
Semi-Supervised Bearing Fault Diagnosis and Classification Using Variational Autoencoder-Based Deep Generative Models;Shen Zhang et al.;《IEEE SENSORS JOURNAL》;20210301;第6476-6486页 * |
Semi-supervised dynamic soft sensor modeling method based on recurrent neural networks; Shao Weiming et al.; Journal of Electronic Measurement and Instrumentation (《电子测量与仪器学报》); 20191115 (No. 11); full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||