CN111985571A - Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on improved random forest algorithm - Google Patents


Info

Publication number
CN111985571A
CN111985571A (application CN202010872318.4A; granted publication CN111985571B)
Authority
CN
China
Prior art keywords
sample
characteristic
value
prediction
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010872318.4A
Other languages
Chinese (zh)
Other versions
CN111985571B (en)
Inventor
邓威
唐海国
朱吉然
张帝
游金梁
彭涛
康童
叶丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, State Grid Hunan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010872318.4A priority Critical patent/CN111985571B/en
Publication of CN111985571A publication Critical patent/CN111985571A/en
Application granted granted Critical
Publication of CN111985571B publication Critical patent/CN111985571B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm. The optimal voting weight a is adjusted adaptively according to the proportion of correctly predicted samples, with a prediction accuracy of 100% as the target, and the voting results are weighted by the optimal value a so as to achieve optimal prediction accuracy.

Description

Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on improved random forest algorithm
Technical Field
The invention belongs to the field of intelligent low-voltage fault prediction, and particularly relates to a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm.
Background
The random forest algorithm is a supervised learning algorithm in artificial intelligence. A distribution network emergency-repair fault amount prediction method based on the random forest algorithm has already been proposed, in which the predicted fault amount serves as a basis for reasonably allocating emergency-repair resources and teams. The prior art has also proposed improving the random forest algorithm with grey projection to predict short-term system load. In the field of power distribution network fault amount prediction, a fault prediction model based on a grey projection random forest classification algorithm predicts the number of faults in different areas and different voltage classes from information such as historical faults, load, weather and area classification, providing a reference for the reasonable allocation of each unit's emergency-repair resources and teams; however, its prediction accuracy still needs to be improved.
Disclosure of Invention
The invention provides a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm, in which the improved random forest algorithm adaptively adjusts the weights of the incidence matrix and the voting weights with 100% prediction accuracy as the target, effectively improving prediction accuracy.
The technical scheme provided by the invention is as follows:
In one aspect, a method for predicting faults of a low-voltage intelligent monitoring terminal based on an improved random forest algorithm comprises the following steps:
Step 1: obtaining historical eigenvalues of the A-phase, B-phase and C-phase voltages and currents, and constructing a historical sample feature set;
The feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of eigenvalues contained in each sample, n is the number of historical samples, and s_im is the m-th eigenvalue of the i-th historical sample;
Step 2: calculating a correlation judgment matrix Z of size n × m from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;

Z = [ z_11  z_12  ...  z_1m
      z_21  z_22  ...  z_2m
      ...
      z_n1  z_n2  ...  z_nm ]

where z_ij is the correlation between the j-th eigenvalue of the historical samples and the j-th column of eigenvalues of the sample set to be predicted, i = 1, 2, ..., n; j = 1, 2, ..., m,
[equation image: z_ij is computed as a correlation coefficient from the eigenvalues x_ej and y_kj and the column means x̄_j and ȳ_j defined below]
where n is the number of historical samples, m is the number of sample eigenvalues, and l is the number of samples to be predicted; x_ej is the j-th eigenvalue of the e-th historical sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted;

x̄_j = (1/n) Σ_{e=1}^{n} x_ej,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the historical sample feature set and of the sample feature set to be predicted, respectively;
Step 3: constructing a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th eigenvalue; the initial values are random, with 0 ≤ w_j ≤ 1;
Step 4: computing the element-wise product of the feature weight matrix W and the correlation judgment matrix Z to obtain the correlation decision matrix U;

U = [ w_1·z_11  w_2·z_12  ...  w_m·z_1m
      ...
      w_1·z_n1  w_2·z_n2  ...  w_m·z_nm ]
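As an illustration of Steps 2-4, the sketch below builds Z, W and U with NumPy. The patent's exact correlation formula survives only as an equation image, so the similarity z_ij = 1/(1 + |s_ij − ȳ_j|) is a stand-in chosen for this sketch rather than the patented formula, and all variable names and sizes are illustrative.

```python
import numpy as np

def judgment_matrix(X_hist, Y_pred):
    """Stand-in for Step 2: z_ij grows as the j-th eigenvalue of history
    sample i approaches the column mean of the samples to be predicted."""
    y_bar = Y_pred.mean(axis=0)              # ybar_j: mean of each prediction column
    return 1.0 / (1.0 + np.abs(X_hist - y_bar))

rng = np.random.default_rng(0)
X_hist = rng.random((100, 6))                # n = 100 history samples, m = 6 eigenvalues
Y_pred = rng.random((10, 6))                 # l = 10 samples to be predicted
Z = judgment_matrix(X_hist, Y_pred)          # Step 2: correlation judgment matrix
W = rng.random(X_hist.shape)                 # Step 3: random initial weights in [0, 1]
U = W * Z                                    # Step 4: element-wise product, u_ij = w_j * z_ij
```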
Step 5: randomly selecting d features from the historical sample feature set as the training sample set to obtain the weighted voting value a of the decision tree,
[equation image: a is computed from the parameter adjustment factor λ and the column correlations r_j]
where λ is a parameter adjustment factor whose initial value is random, with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th column of eigenvalues in the training sample set and the j-th column of eigenvalues in the sample set to be predicted; x_fj is the j-th eigenvalue of the f-th training sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[equation image: r_j is computed as a correlation coefficient from x_fj, y_kj and the column means x̄_j and ȳ_j defined below]

x̄_j = (1/t) Σ_{f=1}^{t} x_fj,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the training sample set and of the sample set to be predicted, respectively; t is the number of training samples;
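Continuing the sketch, one plausible reading of Step 5 is that a scales the mean column correlation of the d selected features by λ; since the exact formula is available only as an equation image, this form is an assumption.

```python
d = 4                                        # number of randomly selected features
cols = rng.choice(Z.shape[1], size=d, replace=False)
r = Z[:, cols].mean(axis=0)                  # stand-in for the column correlations r_j
lam = rng.random()                           # parameter adjustment factor, 0 <= lam <= 1
a = lam * float(r.mean())                    # assumed form of the weighted voting value a
```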
Step 6: selecting a similar historical sample feature set from the historical sample feature vector set using U;
The threshold is adjusted adaptively according to the set number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of eigenvalues; if an element of the U{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 1, 2, ..., n; j = 1, 2, ..., m, the element qualifies, and within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are selected cumulatively to form the similar historical sample feature set S_u.
Fewer than m elements of a given row may satisfy the condition, so rows are selected until each retained row contributes m elements; the t × m qualifying elements finally form a t × m matrix:

S_u = [ s_11  s_12  ...  s_1m
        ...
        s_t1  s_t2  ...  s_tm ]

The number of similar samples equals the number of elements of the t × m matrix, and the threshold η_q is adjusted adaptively according to t: the adjustment criterion is that, taking v elements per row over g rows of the matrix U{S_i}, the number of elements greater than the threshold η_q, i.e. the number of similar samples, equals t × m, as in the sketch below;
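The selection itself can be sketched as follows (continuing the example above): the threshold implied by the requested number of similar samples t is realized here by keeping the t rows whose weakest weighted element is largest, so that all t × m retained elements clear the threshold z_ij·w_j·s_ij ≥ η.

```python
def select_similar_samples(U, S, t):
    """Step 6 sketch: keep t full rows whose elements z_ij*w_j*s_ij all clear
    an adaptively implied threshold eta, and return their eigenvalues."""
    scores = U * S                           # elements z_ij * w_j * s_ij
    row_min = scores.min(axis=1)             # weakest element of each row
    keep = np.argsort(row_min)[-t:]          # the t rows that clear the highest bar
    eta = row_min[keep].min()                # adaptive threshold implied by t
    return S[keep], eta

S_u, eta = select_similar_samples(U, X_hist, t=20)   # t x m similar-sample matrix
```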
Step 7: training the decision trees of the random forest using the selected similar historical sample feature set and the corresponding fault categories to obtain a trained random forest;
Step 8: weighting the fault prediction result of each decision tree in the random forest using the weighted voting value a of the decision tree and the correlation decision matrix U, and adjusting a, with 100% prediction accuracy as the target, to obtain the final prediction model;
Step 9: inputting the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
With 100% prediction accuracy as the target, the initial value of a is substituted into f_RF(X) and I(·), U is calculated, and W is updated adaptively based on the Z values obtained in Step 2;

f_RF(X) = argmax_{i ∈ {1, ..., c}} Σ_l a · I(f_l^tree(X) = i)

where f_RF(X) is the final prediction model, I(·) counts how many times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree in the trained random forest is i, and c is the number of fault prediction result categories of the whole random forest,

prediction accuracy = (number of samples whose final fault prediction is correct) / (number of prediction samples)

where the number of times f_l^tree(X) = i is the correct prediction is taken as the number of samples whose final fault prediction is correct, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
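The weighted vote of Steps 8-9 translates into the sketch below, assuming each trained tree exposes a predict method (as scikit-learn trees do), integer class labels 0, ..., c−1, and one voting weight a_l per tree; the outer loop that re-tunes a until the accuracy target is met is omitted.

```python
def forest_predict(trees, a, X, c):
    """f_RF(X): weighted majority vote over the c fault categories."""
    votes = np.zeros((X.shape[0], c))
    for tree, a_l in zip(trees, a):
        for s, cls in enumerate(tree.predict(X)):
            votes[s, int(cls)] += a_l        # adds a_l * I(f_l_tree(X) = i)
    return votes.argmax(axis=1)

def prediction_accuracy(y_hat, y_true):
    """Correctly predicted samples over the number of prediction samples."""
    return float(np.mean(y_hat == y_true))
```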
Further, the process of training the decision tree in the random forest is as follows:
(1) setting parameters;
The number of historical sample features is taken as the feature dimension of each decision tree; the number of times the voltage and current eigenvalues need to be judged is taken as the number of decision trees; the set number of decision time intervals is taken as the number of levels of each decision tree's nodes; and the minimum number of samples on a node is set to the number of sampling times in one day;
the actual number of samples is the number of eigenvalues multiplied by the number of sampling times; the minimum information gain on a node is 1; and the root node of each decision tree corresponds to the fault amount of the similar historical sample feature set; an illustrative configuration follows;
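One illustrative way to collect these settings is a plain configuration mapping; the key names and numbers below are assumptions for the sketch, not values fixed by the patent.

```python
params = {
    "feature_dim": 6,        # number of historical sample features per tree
    "n_trees": 30,           # times the voltage/current eigenvalues are judged
    "node_levels": 24,       # set number of decision time intervals
    "min_samples_node": 96,  # sampling times in one day (e.g. 15-minute sampling)
    "min_info_gain": 1,      # minimum information gain on a node
}
```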
(2) selecting a sample;
selecting a training subset X_i from the historical sample set X as the samples of the root node;
(3) dividing the characteristics;
If the current node has reached the termination condition, i.e. the node has been trained or marked as a leaf node and no more node eigenvalues are available for decision-making, the current node is set as a leaf node whose prediction output is the class c_i with the largest count in the current node's sample set, with probability p_i denoting the proportion of class c_i in the current sample set;
if the current node has not reached the termination condition, z features are randomly selected without replacement from the Z-dimensional features, and these z features are used to search for the one-dimensional feature k with the best classification effect and its threshold t_h;
when the one-dimensional feature k with the best classification effect is computed, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th dimension eigenvalue is smaller than the feature threshold of the corresponding discrimination type are divided into the left node, and the rest into the right node; the other nodes are then trained in the same way, yielding a weak classifier;
for example, voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure are used as the discrimination types, the A-phase, B-phase and C-phase voltage and current values are used as eigenvalues, and the thresholds are set to the feature thresholds of voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure; a split-search sketch is given below;
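A minimal split-search sketch for step (3), using Gini impurity as the measure of "classification effect"; the patent does not name the criterion, so Gini is an assumption, and all names are illustrative.

```python
def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(X, y, feat_idx):
    """Among the z chosen features, find the feature k and threshold t_h with
    the lowest weighted impurity; x[:, k] < t_h goes left, the rest right."""
    best_k, best_th, best_score = None, None, np.inf
    for k in feat_idx:
        for t_h in np.unique(X[:, k]):
            left = X[:, k] < t_h
            if not left.any() or left.all():
                continue                     # skip splits that leave a node empty
            score = (left.mean() * gini(y[left])
                     + (~left).mean() * gini(y[~left]))
            if score < best_score:
                best_k, best_th, best_score = k, t_h, score
    return best_k, best_th
```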
(4) continuously dividing;
repeating steps (2) and (3) until all nodes are trained or labeled as leaf nodes;
(5) outputting the prediction;
each leaf node of the t trees outputs a predicted value; the final predicted value is the class c_i with the largest sum of accumulated prediction probabilities over all trees; when the weak classifiers reach a certain number, a strong classifier is obtained through a voting strategy, yielding the decision trees of the random forest (a training sketch follows).
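Putting the pieces together, the forest of Step 7 can be sketched with scikit-learn's unpruned CART trees; the bootstrap-plus-random-feature scheme shown is standard bagging, and the sizes and label array y_u are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

def train_forest(S_u, y_u, n_trees, z, seed=0):
    """Step 7 sketch: one unpruned CART tree per bootstrap draw of the
    similar-sample set, with each split restricted to z random features."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(S_u), len(S_u))      # bootstrap sample
        tree = DecisionTreeClassifier(max_features=z)  # random feature subset per split
        trees.append(tree.fit(S_u[idx], y_u[idx]))
    return trees
```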
Further, the weak classifiers reaching a certain number means that the weak classifiers satisfy the margin (boundary) function.
A random forest is a collection of tree classifiers, defined as {h(x, θ_i), i = 1, 2, 3, ...},
where h(x, θ_i) is the meta-classifier of the model, a classification and regression tree constructed by the CART algorithm without pruning; x is the training data set of the random forest, a multi-dimensional vector set; and θ_i is a set of independent, identically distributed random vectors extracted from x by the bagging algorithm. θ_i determines the classification capability of the corresponding decision tree.
The random forest model can be described as a set of weak classifiers:
{h_1(x), h_2(x), ..., h_k(x)}
that is, a classifier set consisting of k (k > 1) sub-classifiers; inputting a prediction vector x yields a prediction output y, and for a sample data set (x, y) the margin (boundary) function is defined as:

margin(x, y) = av_k I(h_k(x) = y) − max_{j≠y} av_k I(h_k(x) = j)

where I(func) is an indicator function that takes the value 1 when the condition func is satisfied and 0 otherwise, and av_k(·) denotes averaging over the set. For a given sample vector, the margin function computes the average number of correct votes and the maximum number of votes among the wrong predictions, and takes the difference of the two. Clearly, the larger the value of the margin function, the stronger the prediction capability of the classifier set and the higher its confidence; a sketch is given below.
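The margin function translates directly into code; the small sketch below evaluates it over an ensemble of classifiers with predict methods (names are illustrative).

```python
def margin(trees, x, y_true, classes):
    """Average vote for the true class minus the largest average vote for any
    wrong class; a large positive value means a confident, correct ensemble."""
    votes = np.array([tree.predict(x.reshape(1, -1))[0] for tree in trees])
    av = {c: float(np.mean(votes == c)) for c in classes}  # av_k I(h_k(x) = c)
    wrong = max(v for c, v in av.items() if c != y_true)
    return av[y_true] - wrong
```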
On the other hand, a low pressure intelligent monitoring terminal fault prediction device based on improve random forest algorithm, its characterized in that includes:
a historical sample feature set construction unit: obtaining A, B, C historical characteristic values of three-phase voltage and current, and constructing a historical sample characteristic set;
the feature vector of the historical sample is Si,Si=[si1,si2,...,sim]1,2, ·, n; m represents the number of eigenvalues contained in each sample, n represents the size of the historical sample, simAn mth feature value representing an ith history sample;
an association judgment matrix calculation unit: calculating a correlation judgment matrix Z with the size of n x m by calculating the correlation between the historical sample feature vector and the sample feature vector to be predicted;
Figure BDA0002651500650000051
wherein Z isijThe correlation between the jth eigenvalue of the ith to nth history samples and the jth column eigenvalue of the sample set to be predicted is shown, i is 2, …, n, j is 1, …, m,
Figure BDA0002651500650000052
wherein n represents the number of historical samples, m represents the number of sample characteristic values, and l represents the number of samples to be predicted; x is the number ofejRepresents the jth characteristic value, y, of the e-th history samplekjRepresenting the jth characteristic value of the kth sample to be predicted;
Figure BDA0002651500650000061
respectively averaging jth eigenvalue of all samples in the historical sample characteristic set and the sample characteristic set to be predicted;
the characteristic weight matrix construction unit is used for constructing a weight matrix unit by utilizing the weight of each characteristic value;
W=[W1,W2,...Wi,...,Wn]Twherein W isi=[w1,w2,...wj...,wm],wjThe weight of the jth characteristic value is obtained, the initial value is a random value, and the weight value is more than or equal to 0 and less than or equal to 1;
an association decision matrix calculation unit: performing dot product calculation on the characteristic weight matrix W and the association judgment matrix Z to obtain an association decision matrix U;
a weighted vote value calculation unit of the decision tree: randomly selecting d characteristics from a historical sample characteristic set as a training sample set, obtaining a weighted voting value a of a decision tree,
Figure BDA0002651500650000062
wherein, λ is a parameter adjusting factor, the initial value is a random value, and the random value is more than or equal to 0 and less than or equal to 1; r isjRepresenting the correlation between the j-th column of the feature vector in the training sample set and the j-th column of the feature vector in the sample set to be predicted; x is the number offjRepresents the jth characteristic value of the f training sample, ykjRepresents the j characteristic value of the kth sample to be predicted, wherein f is 1, 2. k is 1,2,. l; j is 1,2,. said, m;
Figure BDA0002651500650000064
Figure BDA0002651500650000065
respectively averaging jth characteristic values of all samples in the training sample set and the sample set to be predicted; t represents the number of training samples;
a similar historical sample feature set selection unit: selecting a similar historical sample feature set from the historical sample feature vector set by using a U;
adaptively adjusting the threshold according to the number of the set similar historical samples, and setting a threshold eta for each column of characteristic valuesqQ 1, 2.. times, m, if U { S }iThe elements in the matrix are more than or equal to a set threshold value etaqI.e. zijwjsij≥ηqI ═ 2.., n; j 1,2, m, in U { S }iIn the first g rows of the matrix, g is more than or equal to t, in the first v columns, v is less than or equal to m, and z is met through accumulative selectionijwjsij≥ηqT x m elements of (1) to form a similar historical sample feature set Su
A random forest training unit: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
a prediction model acquisition unit: weighting the fault prediction result of each decision tree in the random forest using the weighted voting value a of the decision tree and the correlation decision matrix U, and adjusting a, with 100% prediction accuracy as the target, to obtain the final prediction model;
a result prediction unit: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
with 100% prediction accuracy as the target, the initial value of a is substituted into f_RF(X) and I(·), U is calculated, and W is updated adaptively based on the Z values obtained by the correlation judgment matrix calculation unit;

f_RF(X) = argmax_{i ∈ {1, ..., c}} Σ_l a · I(f_l^tree(X) = i)

where f_RF(X) is the final prediction model, I(·) counts how many times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree in the trained random forest is i, and c is the number of fault prediction result categories of the whole random forest,

prediction accuracy = (number of samples whose final fault prediction is correct) / (number of prediction samples)

where the number of times f_l^tree(X) = i is the correct prediction is taken as the number of samples whose final fault prediction is correct, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
In another aspect, a computer storage medium comprises a computer program which, when executed by a processor, implements the above method for predicting faults of a low-voltage intelligent monitoring terminal based on an improved random forest algorithm.
On one hand, the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm comprises a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to enable the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm to execute the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm.
Advantageous effects
The invention provides a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm. The optimal voting weight a is adjusted adaptively according to the proportion of correctly predicted samples, with 100% prediction accuracy as the target, and the voting results are weighted by the optimal value a so as to achieve optimal prediction accuracy.
Drawings
FIG. 1 is a schematic flow diagram of a process according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the following figures and examples.
As shown in fig. 1, a method for predicting a fault of a low-voltage intelligent monitoring terminal based on an improved random forest algorithm includes:
Step 1: obtaining historical eigenvalues of the A-phase, B-phase and C-phase voltages and currents, and constructing a historical sample feature set;
The feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of eigenvalues contained in each sample, n is the number of historical samples, and s_im is the m-th eigenvalue of the i-th historical sample;
Step 2: calculating a correlation judgment matrix Z of size n × m from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;

Z = [ z_11  z_12  ...  z_1m
      z_21  z_22  ...  z_2m
      ...
      z_n1  z_n2  ...  z_nm ]

where z_ij is the correlation between the j-th eigenvalue of the historical samples and the j-th column of eigenvalues of the sample set to be predicted, i = 1, 2, ..., n; j = 1, 2, ..., m,
[equation image: z_ij is computed as a correlation coefficient from the eigenvalues x_ej and y_kj and the column means x̄_j and ȳ_j defined below]
where n is the number of historical samples, m is the number of sample eigenvalues, and l is the number of samples to be predicted; x_ej is the j-th eigenvalue of the e-th historical sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted;

x̄_j = (1/n) Σ_{e=1}^{n} x_ej,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the historical sample feature set and of the sample feature set to be predicted, respectively;
Step 3: constructing a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th eigenvalue; the initial values are random, with 0 ≤ w_j ≤ 1;
Step 4: computing the element-wise product of the feature weight matrix W and the correlation judgment matrix Z to obtain the correlation decision matrix U;

U = [ w_1·z_11  w_2·z_12  ...  w_m·z_1m
      ...
      w_1·z_n1  w_2·z_n2  ...  w_m·z_nm ]
Step 5: randomly selecting d features from the historical sample feature set as the training sample set to obtain the weighted voting value a of the decision tree,
[equation image: a is computed from the parameter adjustment factor λ and the column correlations r_j]
where λ is a parameter adjustment factor whose initial value is random, with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th column of eigenvalues in the training sample set and the j-th column of eigenvalues in the sample set to be predicted; x_fj is the j-th eigenvalue of the f-th training sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[equation image: r_j is computed as a correlation coefficient from x_fj, y_kj and the column means x̄_j and ȳ_j defined below]

x̄_j = (1/t) Σ_{f=1}^{t} x_fj,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the training sample set and of the sample set to be predicted, respectively; t is the number of training samples;
Step 6: selecting a similar historical sample feature set from the historical sample feature vector set using U;
The threshold is adjusted adaptively according to the set number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of eigenvalues; if an element of the U{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 1, 2, ..., n; j = 1, 2, ..., m, the element qualifies, and within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are selected cumulatively to form the similar historical sample feature set S_u.
Fewer than m elements of a given row may satisfy the condition, so rows are selected until each retained row contributes m elements; the t × m qualifying elements finally form a t × m matrix:

S_u = [ s_11  s_12  ...  s_1m
        ...
        s_t1  s_t2  ...  s_tm ]

The number of similar samples equals the number of elements of the t × m matrix, and the threshold η_q is adjusted adaptively according to t: the adjustment criterion is that, taking v elements per row over g rows of the matrix U{S_i}, the number of elements greater than the threshold η_q, i.e. the number of similar samples, equals t × m;
Step 7: training the decision trees of the random forest using the selected similar historical sample feature set and the corresponding fault categories to obtain a trained random forest;
Step 8: weighting the fault prediction result of each decision tree in the random forest using the weighted voting value a of the decision tree and the correlation decision matrix U, and adjusting a, with 100% prediction accuracy as the target, to obtain the final prediction model;
Step 9: inputting the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
With 100% prediction accuracy as the target, the initial value of a is substituted into f_RF(X) and I(·), U is calculated, and W is updated adaptively based on the Z values obtained in Step 2;

f_RF(X) = argmax_{i ∈ {1, ..., c}} Σ_l a · I(f_l^tree(X) = i)

where f_RF(X) is the final prediction model, I(·) counts how many times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree in the trained random forest is i, and c is the number of fault prediction result categories of the whole random forest,

prediction accuracy = (number of samples whose final fault prediction is correct) / (number of prediction samples)

where the number of times f_l^tree(X) = i is the correct prediction is taken as the number of samples whose final fault prediction is correct, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
The process of training the decision tree in the random forest is as follows:
(1) setting parameters;
The number of historical sample features is taken as the feature dimension of each decision tree; the number of times the voltage and current eigenvalues need to be judged is taken as the number of decision trees; the set number of decision time intervals is taken as the number of levels of each decision tree's nodes; and the minimum number of samples on a node is set to the number of sampling times in one day;
the actual number of samples is the number of eigenvalues multiplied by the number of sampling times; the minimum information gain on a node is 1; and the root node of each decision tree corresponds to the fault amount of the similar historical sample feature set;
(2) selecting a sample;
selecting a training subset X_i from the historical sample set X as the samples of the root node;
(3) dividing the characteristics;
If the current node has reached the termination condition, i.e. the node has been trained or marked as a leaf node and no more node eigenvalues are available for decision-making, the current node is set as a leaf node whose prediction output is the class c_i with the largest count in the current node's sample set, with probability p_i denoting the proportion of class c_i in the current sample set;
if the current node has not reached the termination condition, z features are randomly selected without replacement from the Z-dimensional features, and these z features are used to search for the one-dimensional feature k with the best classification effect and its threshold t_h;
when the one-dimensional feature k with the best classification effect is computed, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th dimension eigenvalue is smaller than the feature threshold of the corresponding discrimination type are divided into the left node, and the rest into the right node; the other nodes are then trained in the same way, yielding a weak classifier;
for example, voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure are used as the discrimination types, the A-phase, B-phase and C-phase voltage and current values are used as eigenvalues, and the thresholds are set to the feature thresholds of voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure;
(4) continuously dividing;
repeating steps (2) and (3) until all nodes are trained or labeled as leaf nodes;
(5) outputting the prediction;
each leaf node of the t trees outputs a predicted value; the final predicted value is the class c_i with the largest sum of accumulated prediction probabilities over all trees; when the weak classifiers reach a certain number, a strong classifier is obtained through a voting strategy, yielding the decision trees of the random forest.
Wherein, the weak classifiers reaching a certain number means that the weak classifiers satisfy the margin (boundary) function.
A random forest is a collection of tree classifiers, defined as {h(x, θ_i), i = 1, 2, 3, ...},
where h(x, θ_i) is the meta-classifier of the model, a classification and regression tree constructed by the CART algorithm without pruning; x is the training data set of the random forest, a multi-dimensional vector set; and θ_i is a set of independent, identically distributed random vectors extracted from x by the bagging algorithm. θ_i determines the classification capability of the corresponding decision tree.
The random forest model can be described as a set of weak classifiers:
{h_1(x), h_2(x), ..., h_k(x)}
that is, a classifier set consisting of k (k > 1) sub-classifiers; inputting a prediction vector x yields a prediction output y, and for a sample data set (x, y) the margin (boundary) function is defined as:

margin(x, y) = av_k I(h_k(x) = y) − max_{j≠y} av_k I(h_k(x) = j)

where I(func) is an indicator function that takes the value 1 when the condition func is satisfied and 0 otherwise, and av_k(·) denotes averaging over the set. For a given sample vector, the margin function computes the average number of correct votes and the maximum number of votes among the wrong predictions, and takes the difference of the two. Clearly, the larger the value of the margin function, the stronger the prediction capability of the classifier set and the higher its confidence.
A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm comprises:
a historical sample feature set construction unit: obtaining historical eigenvalues of the A-phase, B-phase and C-phase voltages and currents, and constructing a historical sample feature set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of eigenvalues contained in each sample, n is the number of historical samples, and s_im is the m-th eigenvalue of the i-th historical sample;
a correlation judgment matrix calculation unit: calculating a correlation judgment matrix Z of size n × m from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;

Z = [ z_11  z_12  ...  z_1m
      z_21  z_22  ...  z_2m
      ...
      z_n1  z_n2  ...  z_nm ]

where z_ij is the correlation between the j-th eigenvalue of the historical samples and the j-th column of eigenvalues of the sample set to be predicted, i = 1, 2, ..., n; j = 1, 2, ..., m,
[equation image: z_ij is computed as a correlation coefficient from the eigenvalues x_ej and y_kj and the column means x̄_j and ȳ_j defined below]
where n is the number of historical samples, m is the number of sample eigenvalues, and l is the number of samples to be predicted; x_ej is the j-th eigenvalue of the e-th historical sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted;

x̄_j = (1/n) Σ_{e=1}^{n} x_ej,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the historical sample feature set and of the sample feature set to be predicted, respectively;
a feature weight matrix construction unit: constructing the weight matrix from the weight of each eigenvalue;
W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th eigenvalue; the initial values are random, with 0 ≤ w_j ≤ 1;
a correlation decision matrix calculation unit: computing the element-wise product of the feature weight matrix W and the correlation judgment matrix Z to obtain the correlation decision matrix U;
a weighted voting value calculation unit of the decision tree: randomly selecting d features from the historical sample feature set as the training sample set to obtain the weighted voting value a of the decision tree,
[equation image: a is computed from the parameter adjustment factor λ and the column correlations r_j]
where λ is a parameter adjustment factor whose initial value is random, with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th column of eigenvalues in the training sample set and the j-th column of eigenvalues in the sample set to be predicted; x_fj is the j-th eigenvalue of the f-th training sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[equation image: r_j is computed as a correlation coefficient from x_fj, y_kj and the column means x̄_j and ȳ_j defined below]

x̄_j = (1/t) Σ_{f=1}^{t} x_fj,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the training sample set and of the sample set to be predicted, respectively; t is the number of training samples;
a similar historical sample feature set selection unit: selecting a similar historical sample feature set from the historical sample feature vector set using U;
the threshold is adjusted adaptively according to the set number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of eigenvalues; if an element of the U{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 1, 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are selected cumulatively to form the similar historical sample feature set S_u;
A random forest training unit: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
a prediction model acquisition unit: weighting the fault prediction result of each decision tree in the random forest using the weighted voting value a of the decision tree and the correlation decision matrix U, and adjusting a, with 100% prediction accuracy as the target, to obtain the final prediction model;
a result prediction unit: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
with 100% prediction accuracy as the target, the initial value of a is substituted into f_RF(X) and I(·), U is calculated, and W is updated adaptively based on the Z values obtained by the correlation judgment matrix calculation unit;

f_RF(X) = argmax_{i ∈ {1, ..., c}} Σ_l a · I(f_l^tree(X) = i)

where f_RF(X) is the final prediction model, I(·) counts how many times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree in the trained random forest is i, and c is the number of fault prediction result categories of the whole random forest,

prediction accuracy = (number of samples whose final fault prediction is correct) / (number of prediction samples)

where the number of times f_l^tree(X) = i is the correct prediction is taken as the number of samples whose final fault prediction is correct, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
It should be understood that the functional unit modules in the embodiments of the present invention may be integrated into one processing unit, or each unit module may exist alone physically, or two or more unit modules are integrated into one unit module, and may be implemented in the form of hardware or software.
The embodiment of the invention also provides a computer storage medium comprising a computer program which, when executed by a processor, implements the above low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm. For the beneficial effects, refer to those of the method; they are not repeated here.
The embodiment of the invention also provides low-voltage intelligent monitoring terminal fault prediction equipment based on the improved random forest algorithm, which comprises a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm executes the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (6)

1. A low-voltage intelligent monitoring terminal fault prediction method based on an improved random forest algorithm is characterized by comprising the following steps:
Step 1: obtaining historical eigenvalues of the A-phase, B-phase and C-phase voltages and currents, and constructing a historical sample feature set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of eigenvalues contained in each sample, n is the number of historical samples, and s_im is the m-th eigenvalue of the i-th historical sample;
Step 2: calculating a correlation judgment matrix Z of size n × m from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;

Z = [ z_11  z_12  ...  z_1m
      z_21  z_22  ...  z_2m
      ...
      z_n1  z_n2  ...  z_nm ]

where z_ij is the correlation between the j-th eigenvalue of the historical samples and the j-th column of eigenvalues of the sample set to be predicted, i = 1, 2, ..., n; j = 1, 2, ..., m,
[equation image: z_ij is computed as a correlation coefficient from the eigenvalues x_ej and y_kj and the column means x̄_j and ȳ_j defined below]
where n is the number of historical samples, m is the number of sample eigenvalues, and l is the number of samples to be predicted; x_ej is the j-th eigenvalue of the e-th historical sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted;

x̄_j = (1/n) Σ_{e=1}^{n} x_ej,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the historical sample feature set and of the sample feature set to be predicted, respectively;
Step 3: constructing a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th eigenvalue; the initial values are random, with 0 ≤ w_j ≤ 1;
Step 4: computing the element-wise product of the feature weight matrix W and the correlation judgment matrix Z to obtain the correlation decision matrix U;

U = [ w_1·z_11  w_2·z_12  ...  w_m·z_1m
      ...
      w_1·z_n1  w_2·z_n2  ...  w_m·z_nm ]
Step 5: randomly selecting d features from the historical sample feature set as the training sample set to obtain the weighted voting value a of the decision tree,
[equation image: a is computed from the parameter adjustment factor λ and the column correlations r_j]
where λ is a parameter adjustment factor whose initial value is random, with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th column of eigenvalues in the training sample set and the j-th column of eigenvalues in the sample set to be predicted; x_fj is the j-th eigenvalue of the f-th training sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[equation image: r_j is computed as a correlation coefficient from x_fj, y_kj and the column means x̄_j and ȳ_j defined below]

x̄_j = (1/t) Σ_{f=1}^{t} x_fj,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the training sample set and of the sample set to be predicted, respectively; t is the number of training samples;
Step 6: selecting a similar historical sample feature set from the historical sample feature vector set using U;
the threshold is adjusted adaptively according to the set number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of eigenvalues; if an element of the U{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 1, 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are selected cumulatively to form the similar historical sample feature set S_u;
Step 7: training the decision trees of the random forest using the selected similar historical sample feature set and the corresponding fault categories to obtain a trained random forest;
Step 8: weighting the fault prediction result of each decision tree in the random forest using the weighted voting value a of the decision tree and the correlation decision matrix U, and adjusting a, with 100% prediction accuracy as the target, to obtain the final prediction model;
Step 9: inputting the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
with 100% prediction accuracy as the target, the initial value of a is substituted into f_RF(X) and I(·), U is calculated, and W is updated adaptively based on the Z values obtained in Step 2;

f_RF(X) = argmax_{i ∈ {1, ..., c}} Σ_l a · I(f_l^tree(X) = i)

where f_RF(X) is the final prediction model, I(·) counts how many times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree in the trained random forest is i, and c is the number of fault prediction result categories of the whole random forest,

prediction accuracy = (number of samples whose final fault prediction is correct) / (number of prediction samples)

where the number of times f_l^tree(X) = i is the correct prediction is taken as the number of samples whose final fault prediction is correct, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
2. The method for predicting the fault of the low-voltage intelligent monitoring terminal based on the improved random forest algorithm according to claim 1, characterized in that the process of training the decision trees in the random forest is as follows:
(1) setting parameters;
the number of historical sample features is taken as the feature dimension of each decision tree; the number of times the voltage and current eigenvalues need to be judged is taken as the number of decision trees; the set number of decision time intervals is taken as the number of levels of each decision tree's nodes; and the minimum number of samples on a node is set to the number of sampling times in one day;
the actual number of samples is the number of eigenvalues multiplied by the number of sampling times; the minimum information gain on a node is 1; and the root node of each decision tree corresponds to the fault amount of the similar historical sample feature set;
(2) selecting a sample;
selecting a training subset X_i from the historical sample set X as the samples of the root node;
(3) dividing the characteristics;
if the current node has reached the termination condition, i.e. the node has been trained or marked as a leaf node and no more node eigenvalues are available for decision-making, the current node is set as a leaf node whose prediction output is the class c_i with the largest count in the current node's sample set, with probability p_i denoting the proportion of class c_i in the current sample set;
if the current node has not reached the termination condition, z features are randomly selected without replacement from the Z-dimensional features, and these z features are used to search for the one-dimensional feature k with the best classification effect and its threshold t_h;
when the one-dimensional feature k with the best classification effect is computed, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th dimension eigenvalue is smaller than the feature threshold of the corresponding discrimination type are divided into the left node, and the rest into the right node; the other nodes are then trained in the same way, yielding a weak classifier;
(4) continuously dividing;
repeating steps (2) and (3) until all nodes are trained or labeled as leaf nodes;
(5) outputting the prediction;
each leaf node of the t trees outputs a predicted value; the final predicted value is the class c_i with the largest sum of accumulated prediction probabilities over all trees; when the weak classifiers reach a certain number, a strong classifier is obtained through a voting strategy, yielding the decision trees of the random forest.
3. The method for predicting the fault of the low-voltage intelligent monitoring terminal based on the improved random forest algorithm according to claim 2, characterized in that the weak classifiers reaching a certain number means that the weak classifiers satisfy the margin (boundary) function.
4. A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm, characterized by comprising:
a historical sample feature set construction unit: obtaining historical eigenvalues of the A-phase, B-phase and C-phase voltages and currents, and constructing a historical sample feature set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of eigenvalues contained in each sample, n is the number of historical samples, and s_im is the m-th eigenvalue of the i-th historical sample;
a correlation judgment matrix calculation unit: calculating a correlation judgment matrix Z of size n × m from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;

Z = [ z_11  z_12  ...  z_1m
      z_21  z_22  ...  z_2m
      ...
      z_n1  z_n2  ...  z_nm ]

where z_ij is the correlation between the j-th eigenvalue of the historical samples and the j-th column of eigenvalues of the sample set to be predicted, i = 1, 2, ..., n; j = 1, 2, ..., m,
[equation image: z_ij is computed as a correlation coefficient from the eigenvalues x_ej and y_kj and the column means x̄_j and ȳ_j defined below]
where n is the number of historical samples, m is the number of sample eigenvalues, and l is the number of samples to be predicted; x_ej is the j-th eigenvalue of the e-th historical sample, and y_kj is the j-th eigenvalue of the k-th sample to be predicted;

x̄_j = (1/n) Σ_{e=1}^{n} x_ej,   ȳ_j = (1/l) Σ_{k=1}^{l} y_kj

are the means of the j-th eigenvalue over all samples of the historical sample feature set and of the sample feature set to be predicted, respectively;
a feature weight matrix construction unit: constructing the weight matrix from the weight of each eigenvalue;
W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th eigenvalue; the initial values are random, with 0 ≤ w_j ≤ 1;
an association decision matrix calculation unit: performing dot product calculation on the characteristic weight matrix W and the association judgment matrix Z to obtain an association decision matrix U;
a weighted vote value calculation unit of the decision tree: randomly selecting d characteristics from a historical sample characteristic set as a training sample set to obtain a weighted voting value a of a decision tree,
[formula image: the weighted voting value a expressed in terms of the adjustment factor λ and the column correlations r_j]
wherein λ is a parameter adjustment factor whose initial value is a random value with 0 ≤ λ ≤ 1; r_j represents the correlation between the j-th column of feature vectors in the training sample set and the j-th column of feature vectors in the sample set to be predicted; x_fj represents the j-th characteristic value of the f-th training sample, and y_kj represents the j-th characteristic value of the k-th sample to be predicted, where f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[formula image: r_j expressed in terms of x_fj, y_kj and the column means x̄_j, ȳ_j]
x̄_j = (1/t) Σ_{f=1..t} x_fj and ȳ_j = (1/l) Σ_{k=1..l} y_kj are the averages of the j-th characteristic value over all samples in the training sample set and the sample set to be predicted, respectively; t represents the number of training samples;
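The exact r_j formula likewise exists only as an image; the stand-in below shows one way such a column correlation and the voting weight a might be computed from λ and r_j, with the averaging over j an assumption.

```python
import numpy as np

def column_correlation(X_train, Y_pred):
    """Stand-in for r_j (the claimed formula is a formula image):
    similarity of the j-th training-set feature column and the j-th
    to-be-predicted feature column via their means and spreads."""
    diff = np.abs(X_train.mean(axis=0) - Y_pred.mean(axis=0))
    scale = X_train.std(axis=0) + Y_pred.std(axis=0) + 1e-12
    return 1.0 / (1.0 + diff / scale)

def voting_weight(r, lam):
    """Weighted voting value a of one decision tree, combining the
    adjustment factor lambda (random initial value in [0, 1]) with
    the column correlations r_j; averaging over j is an assumption."""
    return lam * float(np.mean(r))
```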
a similar historical sample feature set selection unit: selecting a similar historical sample feature set from the historical sample feature vector set by using U;
adaptively adjusting the threshold according to the set number of similar historical samples, and setting a threshold η_q for each column of characteristic values, q = 1, 2, ..., m; if the elements in the matrix U{S_i} are greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the matrix U{S_i}, the t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are cumulatively selected to form the similar historical sample feature set S_u;
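One plausible reading of this selection rule, sketched below; the adaptive relaxation of η_q (multiplying by 0.95 until at least t rows qualify) and the assumption of nonnegative entries in U{S_i} are illustrative, not fixed by the claim.

```python
import numpy as np

def select_similar_samples(U_S, t, eta):
    """Keep rows of the weighted matrix U{S_i} (entries z_ij * w_j * s_ij)
    whose values meet the per-column thresholds eta_q in every column,
    relaxing the thresholds adaptively until at least t similar
    historical samples qualify. Assumes U_S has nonnegative entries."""
    while True:
        qualifying = np.flatnonzero((U_S >= eta).all(axis=1))
        if qualifying.size >= t:
            return qualifying[:t]            # g >= t rows, truncated to t
        eta = eta * 0.95                     # adaptively lower eta_q
```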
A random forest training unit: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
a prediction model acquisition unit: weighting the fault prediction result of each decision tree in the random forest by using the weighted voting value a of the decision tree and the association decision matrix U, and adjusting a with a prediction accuracy of 100% as the target to obtain the final prediction model;
a result prediction unit: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
with a prediction accuracy of 100% as the target, substituting the initial value of a and I(·) into f_RF(X) to calculate U, and adaptively updating W based on the Z value obtained by the association judgment matrix calculation unit;
f_RF(X) = argmax_i Σ_l a·I(f_l^tree(X) = i), i = 1, ..., c
wherein f_RF(X) represents the final prediction model, I(·) represents the number of expressions in the parentheses that hold, f_l^tree(X) = i indicates that the fault prediction result of the l-th decision tree in the trained random forest is i, and c represents the number of fault prediction result categories of the whole random forest,
prediction accuracy = (number of samples whose final fault prediction is accurate) / (number of prediction samples)
wherein the number of times that f_l^tree(X) = i, i.e. the number of accurate predictions, is taken as the number of samples whose final fault prediction is accurate, and the number of feature vectors of the prediction samples is taken as the number of prediction samples.
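A minimal sketch of the weighted forest vote f_RF(X) and the accuracy target described above; the per-tree weights a_l and the helper names are assumptions.

```python
import numpy as np

def f_rf(tree_preds, a, c):
    """Final prediction model f_RF(X): each of the l trees votes for its
    predicted class i with weight a_l, and the class with the largest
    accumulated weighted vote over i = 1..c is returned."""
    scores = np.zeros(c)
    for pred, weight in zip(tree_preds, a):
        scores[pred] += weight               # a_l * I(f_l^tree(X) = i)
    return int(np.argmax(scores))

def prediction_accuracy(y_hat, y_true):
    """Accuracy target used when tuning a and W: accurately predicted
    samples over the number of prediction samples."""
    return float(np.mean(np.asarray(y_hat) == np.asarray(y_true)))
```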
5. A computer storage medium comprising a computer program, wherein the computer program, when executed by a processor, implements the method for predicting the fault of the low-voltage intelligent monitoring terminal based on the improved random forest algorithm according to any one of claims 1 to 3.
6. A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm is characterized by comprising a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm executes the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm according to any one of claims 1 to 3.
CN202010872318.4A 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment Active CN111985571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010872318.4A CN111985571B (en) 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN111985571A true CN111985571A (en) 2020-11-24
CN111985571B CN111985571B (en) 2022-09-09

Family

ID=73440950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010872318.4A Active CN111985571B (en) 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN111985571B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080170554A1 (en) * 2005-03-02 2008-07-17 Zte Corporation Method and Equipment for Realizing Smart Antenna in Wcdma System
US20080211576A1 (en) * 2007-02-26 2008-09-04 Harris Corporation Linearization of RF Power Amplifiers Using an Adaptive Subband Predistorter
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN111046931A (en) * 2019-12-02 2020-04-21 北京交通大学 Turnout fault diagnosis method based on random forest

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jun Liu et al., "Identification of sunflower leaf diseases based on random forest algorithm", 2019 International Conference on Intelligent Computing, Automation and Systems (ICICAS) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733903A (en) * 2020-12-30 2021-04-30 许昌学院 Air quality monitoring and alarming method, system, device and medium based on SVM-RF-DT combination
CN112733903B (en) * 2020-12-30 2023-11-17 许昌学院 SVM-RF-DT combination-based air quality monitoring and alarming method, system, device and medium
CN113361607A (en) * 2021-06-08 2021-09-07 云南电网有限责任公司电力科学研究院 Medium-voltage distribution network line problem analysis method and device
CN113361607B (en) * 2021-06-08 2023-01-20 云南电网有限责任公司电力科学研究院 Medium-voltage distribution network line problem analysis method and device
CN114912372A (en) * 2022-06-17 2022-08-16 山东黄金矿业科技有限公司充填工程实验室分公司 High-precision filling pipeline fault early warning method based on artificial intelligence algorithm
CN114912372B (en) * 2022-06-17 2024-01-26 山东黄金矿业科技有限公司充填工程实验室分公司 High-precision filling pipeline fault early warning method based on artificial intelligence algorithm
CN115184674A (en) * 2022-07-01 2022-10-14 苏州清研精准汽车科技有限公司 Insulation test method and device, electronic terminal and storage medium
CN114912721A (en) * 2022-07-18 2022-08-16 国网江西省电力有限公司经济技术研究院 Method and system for predicting energy storage peak shaving demand
CN114912721B (en) * 2022-07-18 2022-12-13 国网江西省电力有限公司经济技术研究院 Method and system for predicting energy storage peak shaving demand
CN116910668A (en) * 2023-09-11 2023-10-20 国网浙江省电力有限公司余姚市供电公司 Lightning arrester fault early warning method, device, equipment and storage medium
CN116910668B (en) * 2023-09-11 2024-04-02 国网浙江省电力有限公司余姚市供电公司 Lightning arrester fault early warning method, device, equipment and storage medium
CN117408574A (en) * 2023-12-13 2024-01-16 南通至正电子有限公司 Chip production monitoring management method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN111985571B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN111985571B (en) Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment
US10999247B2 (en) Density estimation network for unsupervised anomaly detection
CN109800875A (en) Chemical industry fault detection method based on particle group optimizing and noise reduction sparse coding machine
CN111695611B (en) Bee colony optimization kernel extreme learning and sparse representation mechanical fault identification method
CN114925612A (en) Transformer fault diagnosis method for optimizing hybrid kernel extreme learning machine based on sparrow search algorithm
CN112766603A (en) Traffic flow prediction method, system, computer device and storage medium
CN114118596A (en) Photovoltaic power generation capacity prediction method and device
CN112884149A (en) Deep neural network pruning method and system based on random sensitivity ST-SM
CN111275074B (en) Power CPS information attack identification method based on stacked self-coding network model
CN117310533A (en) Service life acceleration test method and system for proton exchange membrane fuel cell
Li The hybrid credit scoring strategies based on knn classifier
CN116720038A (en) Fault diagnosis method, equipment and storage medium for ball screw of train door
CN116599211A (en) Control method of intelligent switch module based on multi-sensor information fusion
CN115713144A (en) Short-term wind speed multi-step prediction method based on combined CGRU model
CN116108975A (en) Method for establishing short-term load prediction model of power distribution network based on BR-SOM clustering algorithm
CN113205182B (en) Real-time power load prediction system based on sparse pruning method
CN111985524A (en) Improved low-voltage transformer area line loss calculation method
CN114091183A (en) Tri-Training and deep learning based fault diagnosis method for high-power direct-current charging equipment of electric automobile
Shah Greedy Pruning for Continually Adapting Networks
Remeikis et al. Text categorization using neural networks initialized with decision trees
CN117998364B (en) XGBoost WSN intrusion detection system based on mixed feature selection
Li et al. Approximate policy iteration with unsupervised feature learning based on manifold regularization
Nojima et al. Multiobjective evolutionary data mining for performance improvement of evolutionary multiobjective optimization
CN117805607B (en) DC level difference matching test method for power plant DC system
CN115587644B (en) Photovoltaic power station performance parameter prediction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant