CN111209434A - Substation equipment inspection system and method based on multi-source heterogeneous data fusion - Google Patents
Substation equipment inspection system and method based on multi-source heterogeneous data fusion Download PDFInfo
- Publication number
- CN111209434A (application CN202010022973.0A)
- Authority
- CN
- China
- Prior art keywords
- equipment
- acquisition module
- data acquisition
- data
- classification result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/75 — Information retrieval of video data; clustering; classification
- G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/254 — Pattern recognition; fusion techniques of classification results, e.g. of results related to same input data
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
- G06N7/02 — Computing arrangements based on specific mathematical models using fuzzy logic
- G06Q10/20 — Administration of product repair or maintenance
- G06Q50/06 — ICT specially adapted for energy or water supply
Abstract
The invention discloses a substation equipment inspection system based on multi-source heterogeneous data fusion. The system comprises a multi-source heterogeneous data acquisition module, which in turn comprises a video image data acquisition module and a sensing data acquisition module; a multi-source heterogeneous data storage module, which converts the various data acquired by the two acquisition modules into a unified format and stores them; and a multi-source heterogeneous data fusion module comprising a deep neural network classification module and a classification result fusion module. Multiple cameras collect video data and multiple sensors collect sensing data, forming multiple data sources for each device. Two deep neural networks then identify and classify the video data and the sensing data respectively, and the two classification results are fused by a fuzzy integral algorithm. By combining the various data sources and the respective strengths of the two classifiers, a better recognition result is obtained.
Description
Technical Field
The invention relates to the field of power equipment detection, and in particular to a substation equipment inspection system and method based on multi-source heterogeneous data fusion.
Background
Inspection of substation equipment and its environment is an important means of ensuring normal substation operation. With the development of the smart grid, the number of unattended substations keeps growing. Existing substation inspection robots carry various sensors, instruments and cameras to inspect outdoor high-voltage equipment and the environment of unattended substations, obtaining real-time data on the main equipment and environment. Background analysis algorithms can then detect defects and abnormalities in the power equipment in time, providing a necessary guarantee for safe substation operation.
In practice, substation scenes are complex: the appearance of the same target varies greatly under different illumination, occlusion, viewing angles and so on, and current mainstream techniques provide good recognition performance only in simple environments, so target detection suffers from false detections and missed detections. To improve detection precision, monitoring often has to describe a target with multiple state parameters and analyze the measured parameters jointly, improving the redundancy and complementarity of the information. Raising detection accuracy in this way is a problem that urgently needs to be solved.
Disclosure of Invention
To address these problems in the prior art, the invention provides a substation equipment inspection system and method based on multi-source heterogeneous data fusion. The technical scheme is as follows: multiple cameras and multiple sensors acquire multi-aspect parameter information for every device in the station, avoiding the false and missed detections of fault information caused by insufficient sensing; classifiers preliminarily diagnose the information acquired by the cameras and sensors, estimating the likelihood that the fault under diagnosis belongs to each fault type; and, taking full account of how strongly each classifier correlates with each fault type, a fuzzy integral fusion method performs decision-level fusion diagnosis, thereby improving detection accuracy.
The specific scheme provided by the invention is as follows:
a transformer substation equipment inspection method based on multi-source heterogeneous data fusion comprises the following specific steps:
(1) Multi-source heterogeneous data acquisition: monitoring cameras comprehensively monitor all equipment in the station, obtaining multiple video image streams of each device; sensors acquire sensing data of all station equipment from different aspects, yielding multi-aspect parameters of each device.
(2) Multi-source heterogeneous data storage: the various data acquired by the video image data acquisition module and the sensing data acquisition module are converted into a unified format and stored.
(3) Multi-source heterogeneous data fusion, comprising deep neural network classification and classification result fusion: data features are extracted and target states identified from the collected video image data and sensing data, and all identification results are fused by fuzzy integration to obtain the final equipment state recognition result.
For deep neural network classification, a convolutional neural network and a deep stochastic configuration network are established, with network model parameters set to initial values. The number of layers and the loss function of each network model are determined; during back propagation, the loss function drives the correction of the model parameters until it converges and the model is established. Data samples are collected at preset moments, the true recognition state at each moment is recorded, and the samples are fed into the convolutional neural network and the deep stochastic configuration network, yielding a first and a second classification result respectively.
For classification result fusion, the equipment state recognition result obtained from the data collected by the video image data acquisition module is recorded as the first classification result, the result obtained from the data collected by the sensing data acquisition module as the second classification result, and the two are fused with a fuzzy integral algorithm to obtain the final equipment state recognition result.
Further, the convolutional neural network is built as follows:
(1) Divide the video image data proportionally into a training set, a test set and a verification set, and label the data sets;
(2) Design the structure of the deep convolutional neural network and set the parameters of the convolutional, pooling and fully-connected layers. Pooling uses max-pooling and the classifier uses softmax regression; the network has four layers, the first and third being convolutional layers and the other two being pooling layers;
(3) Use the training set as input and train the network by backpropagation with stochastic gradient descent, optimizing the loss function of the whole model; stop training when the error on the training set has converged, and save the parameters of each layer;
(4) Set a category confidence threshold, feed the test and verification sets into the network, and fine-tune it to obtain the output.
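The four-layer structure described above (convolution → max-pooling → convolution → max-pooling, followed by softmax regression) can be sketched as a forward pass. A minimal NumPy sketch; the frame size, filter counts and the two-state output are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def conv2d(x, w):
    """Valid convolution: x (c_in, H, W), w (c_out, c_in, kh, kw); ReLU output."""
    c_out, c_in, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((c_out, H - kh + 1, W - kw + 1))
    for f in range(c_out):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[f, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[f])
    return np.maximum(out, 0.0)

def maxpool(x, s=2):
    """Non-overlapping max-pooling over s x s windows (the max-pooling layer)."""
    c, H, W = x.shape
    return x[:, :H - H % s, :W - W % s].reshape(c, H // s, s, W // s, s).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
frame = rng.random((1, 12, 12))            # toy single-channel "video frame"
w1 = rng.standard_normal((4, 1, 3, 3))     # layer 1: convolution
w2 = rng.standard_normal((8, 4, 3, 3))     # layer 3: convolution
w_fc = rng.standard_normal((2, 8))         # softmax regression over 2 states

h = maxpool(conv2d(frame, w1))             # layer 2: pooling -> (4, 5, 5)
h = maxpool(conv2d(h, w2))                 # layer 4: pooling -> (8, 1, 1)
probs = softmax(w_fc @ h.ravel())          # class probabilities (e.g. normal/abnormal)
```

Training these weights with stochastic gradient descent, as step (3) describes, would replace the random initial values above.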
Further, the deep stochastic configuration network is built as follows:
(1) Divide the sensing data proportionally into a training set, a test set and a verification set, and label the data sets;
(2) Design the network structure and set the parameters of the convolutional, pooling and fully-connected layers, using LeakyReLU as the activation function;
(3) Use the training set as input and train the network by stochastic gradient descent with an adaptive learning rate, optimizing the loss function of the whole model; stop training when the error on the training set has converged, and save the parameters of each layer.
Specifically, a low-rank decomposition strategy is adopted: given the computational complexity of a convolutional layer before decomposition and the required speed-up factor, the number of output feature maps of the first convolutional layer after each decomposition is computed, and the network is trained with layer-by-layer low-rank decomposition to reduce the amount of computation.
Specifically, a network pruning strategy is also adopted: the low-rank-decomposed model is retrained, k-means clustering is applied to the weights of the remaining connections in each layer, the clustering result is fine-tuned, and redundant connections in the convolutional and fully-connected layers are removed to reduce storage.
(4) Set a category confidence threshold, feed the test and verification sets into the network, and fine-tune it to obtain the output.
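The two compression ideas above — low-rank decomposition of a layer's weights and weight sharing via clustering — can be illustrated on a single flattened convolution weight matrix. A hedged NumPy sketch; the matrix shape, rank r and cluster count k are illustrative assumptions (the patent derives the rank from the layer's complexity and the required speed-up factor):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 27))   # e.g. 64 filters, each a flattened 3x3x3 kernel

# Low-rank decomposition: the best rank-r factorization W ~= A @ B via SVD.
r = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                # (64, r)
B = Vt[:r]                          # (r, 27)
params_before = W.size              # 1728 weights
params_after = A.size + B.size      # 512 + 216 = 728 weights

# Weight sharing: quantize the retained weights to k shared values with a
# simple 1-D k-means (a stand-in for the patent's k-means clustering step).
def kmeans_1d(x, k, iters=25):
    c = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        a = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(a == j):
                c[j] = x[a == j].mean()
    return c[a]

A_quant = kmeans_1d(A.ravel(), k=16).reshape(A.shape)
```

After quantization, each weight in `A_quant` is one of at most 16 shared values, so only cluster indices plus a small codebook need to be stored.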
Further, the fuzzy integration algorithm comprises the following steps:
(1) Preprocess the first and second classification results with fuzzy techniques, perform a preliminary fault diagnosis, and determine the possible fault classification results, which serve as the fault candidate types. The selected membership function is:
where x is the input data to be processed, y is the processed value, and e is the natural constant.
(2) From the possible fault classification results, form the fault candidate type set D = {d1, d2, …, dN}, where each d is a fault candidate type;
(3) For each fault candidate type, form the set of directly associated types Di-direct = {dm, …, dn} and the set of next-level associated types Di-indirect = {dk, …, dl};
(4) Determine the fuzzy densities gi = g({xi}), i = 1, 2, …, n, where gi is the fuzzy density of the i-th information source, i.e. its weight;
(5) Compute the fuzzy measure g recursively: g(A1) = g({x1}) and g(Ai) = g({xi}) + g(Ai-1) + λ·g({xi})·g(Ai-1), i = 2, …, n, where λ is an intermediate parameter;
(6) From the topology information and the diagnosis conclusion of each type, form the set of support degrees of the directly associated types for each fault candidate type, Fi-direct = {fm, …, fn}, and of the next-level associated types, Fi-indirect = {fk, …, fl};
(7) Compute the fuzzy integral value ei according to the fuzzy integral formula; ei is the fault-type likelihood index given by the comprehensive diagnosis. These indices form the set E = {e1, e2, …, eN} over the fault candidate types, from which the fault type, and hence the final classification result, is determined.
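Steps (4)–(7) can be sketched as follows. This is a minimal Python sketch that assumes the standard Sugeno λ-fuzzy-measure construction (λ chosen so the measure of the whole source set equals 1) and the usual max–min form of the fuzzy integral; the patent's own integral formula is an image not reproduced in the text, so both of these choices are assumptions:

```python
import numpy as np

def solve_lambda(g):
    """Solve 1 + lam = prod_i (1 + lam * g_i) for lam > -1, lam != 0, so that
    the lambda-fuzzy measure of the whole set equals 1 (textbook construction)."""
    g = np.asarray(g, dtype=float)
    if abs(g.sum() - 1.0) < 1e-12:
        return 0.0                      # densities already additive
    f = lambda lam: np.prod(1 + lam * g) - 1 - lam
    # Root lies in (-1, 0) when densities sum above 1, in (0, inf) otherwise.
    lo, hi = (-1 + 1e-9, -1e-12) if g.sum() > 1 else (1e-9, 1e6)
    for _ in range(200):                # bisection on the bracketed root
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def sugeno_integral(f_vals, g):
    """Max-min fuzzy integral of support degrees f_vals w.r.t. densities g,
    using the recursion g(A_i) = g_i + g(A_{i-1}) + lam * g_i * g(A_{i-1})."""
    lam = solve_lambda(g)
    order = np.argsort(f_vals)[::-1]    # supports in descending order
    fs = np.asarray(f_vals, dtype=float)[order]
    gs = np.asarray(g, dtype=float)[order]
    G = gs[0]                           # g(A_1) = g_1
    e = min(fs[0], G)
    for i in range(1, len(fs)):
        G = gs[i] + G + lam * gs[i] * G
        e = max(e, min(fs[i], G))
    return e
```

For example, with densities [0.5, 0.5] the measure is additive (λ = 0), and the integral of supports [0.8, 0.6] evaluates to 0.6.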
A substation equipment inspection system based on multi-source heterogeneous data fusion comprises:
a multi-source heterogeneous data acquisition module, comprising a video image data acquisition module and a sensing data acquisition module: the video image data acquisition module uses monitoring cameras to comprehensively monitor all equipment in the station and obtain multiple video image streams of each device, while the sensing data acquisition module uses sensors to acquire sensing data of all station equipment from different aspects, yielding multi-aspect parameters of each device;
a multi-source heterogeneous data storage module, which converts the various data acquired by the video image data acquisition module and the sensing data acquisition module into a unified format and stores them;
a multi-source heterogeneous data fusion module, comprising a deep neural network classification module and a classification result fusion module, which obtain the recognition result of the equipment state.
The deep neural network classification module extracts features from the data acquired by the video image data acquisition module and the sensing data acquisition module through respective deep neural network models and obtains a recognition result of each device's state.
The classification result fusion module records the equipment state recognition result obtained from the video image data as the first classification result and the one obtained from the sensing data as the second classification result, and fuses the two with a fuzzy integral algorithm to obtain the final equipment state recognition result.
The transformer substation equipment inspection system and method based on multi-source heterogeneous data fusion have the following beneficial effects:
the method comprises the steps of adopting a plurality of cameras to collect various video data, adopting a plurality of sensors to collect various sensing data, forming various data sources of each device, then adopting two deep neural networks to respectively identify and classify the video data and the sensing data for the various data sources, fusing two identification and classification results through a fuzzy integral algorithm, combining various data and combining respective characteristics and advantages of the two identification and classification results, and obtaining a better identification effect.
Drawings
FIG. 1 is a flow chart of a substation equipment inspection system based on multi-source heterogeneous data fusion according to the present invention;
FIG. 2 is a block diagram of a data processing flow of a substation equipment inspection system based on multi-source heterogeneous data fusion according to the present invention;
Detailed Description of the Preferred Embodiments
The substation equipment inspection system and method based on multi-source heterogeneous data fusion will be further explained and illustrated below with reference to the drawings and specific embodiments, but this explanation does not limit the technical solution of the invention.
Referring to figures 1-2, a substation equipment inspection system based on multi-source heterogeneous data fusion comprises:
a multi-source heterogeneous data acquisition module, comprising a video image data acquisition module and a sensing data acquisition module: the video image data acquisition module uses monitoring cameras to comprehensively monitor all equipment in the station and obtain multiple video image streams of each device, while the sensing data acquisition module uses sensors to acquire sensing data of all station equipment from different aspects, yielding multi-aspect parameters of each device;
a multi-source heterogeneous data storage module, which converts the various data acquired by the video image data acquisition module and the sensing data acquisition module into a unified format and stores them;
a multi-source heterogeneous data fusion module, comprising a deep neural network classification module and a classification result fusion module, which obtain the recognition result of the equipment state.
The deep neural network classification module extracts features from the data acquired by the video image data acquisition module and the sensing data acquisition module through respective deep neural network models and obtains a recognition result of each device's state.
The classification result fusion module records the equipment state recognition result obtained from the video image data as the first classification result and the one obtained from the sensing data as the second classification result, and fuses the two with a fuzzy integral algorithm to obtain the final equipment state recognition result.
As a further optimization, the monitoring cameras are arranged in multiple positions and at multiple angles around the equipment to acquire multi-angle video data of each device; the sensors monitor, among other things, the substation's SF6 pressure gauges, oil level gauges, device panel liquid crystal displays and indicator lights, acquiring various sensing information for each device.
To acquire more comprehensive video data, the monitoring cameras are organized in a three-level structure of close-range, mid-range and far-range levels: a close-range camera acquires close-up data of a device, a mid-range camera covers the area of a preset number of close-range cameras, and a far-range camera covers the area of a preset number of mid-range cameras.
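The three-level camera coverage can be modeled as a small tree; a sketch in which the fan-out k = 4 stands in for the patent's unspecified "preset number":

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Camera:
    level: str                          # "close", "mid" or "far"
    children: List["Camera"] = field(default_factory=list)

def build_hierarchy(n_far: int, k: int = 4) -> List[Camera]:
    """Each far-range camera covers the area of k mid-range cameras;
    each mid-range camera covers the area of k close-range cameras."""
    return [
        Camera("far", [
            Camera("mid", [Camera("close") for _ in range(k)])
            for _ in range(k)
        ])
        for _ in range(n_far)
    ]

def count_level(cams: List[Camera], level: str) -> int:
    """Count cameras of a given level anywhere in the hierarchy."""
    total = sum(1 for c in cams if c.level == level)
    return total + sum(count_level(c.children, level) for c in cams)
```

With 2 far-range cameras and k = 4, the station would have 8 mid-range and 32 close-range cameras.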
To integrate the various collected data and uncover the relations among large amounts of data so as to identify equipment states, the data are classified with deep neural networks: a convolutional neural network performs feature target recognition and target state recognition on the data collected by the video image data acquisition module, while a deep stochastic configuration network performs state recognition on the data collected by the sensing data acquisition module.
As a further optimization, the deep neural network classification module specifically:
establishes a convolutional neural network and a deep stochastic configuration network, with network model parameters set to initial values;
determines the number of layers and the loss function of each network model, where the loss function drives the correction of the model parameters during back propagation until it converges and the model is established;
collects data samples at preset moments, records the true recognition state at each moment, and feeds the samples into the convolutional neural network and the deep stochastic configuration network, yielding the first and second classification results respectively.
The classification result fusion module fuses the two equipment state recognition results produced by the convolutional neural network and the deep stochastic configuration network to obtain an optimized classification result. The fuzzy integral algorithm proceeds as follows:
A matrix is built from each of the first and second classification results. Each column of a matrix holds the recognition results of different samples for the same device, and each row holds the recognition results for different devices in the same sample; a normal state is recorded as 0 and an abnormal state as 1 (likewise, an open state as 0 and a closed state as 1).
The recognition results of multiple samples of each deep neural network are arranged in table form, as shown in Table 1:

| Recognized state | Device 1 | Device 2 | ...... | Device n |
| Sample 1         |          |          |        |          |
| Sample 2         |          |          |        |          |
| ......           |          |          |        |          |
| Sample m         |          |          |        |          |

TABLE 1 Deep neural network output data collation results
For each device, in both the first and the second classification results, the probability that the true value is i when the classification result is j is obtained; i and j denote the device state, with value range {0, 1}. This probability is denoted pij, and the corresponding classifier output is denoted Sij.
The use of the fuzzy integration algorithm on the state recognition result of one device proceeds as follows:
TABLE 2 data collation results for a device
In this embodiment, Sij takes the value 0 or 1, with a corresponding pij: p11 denotes the probability that the true device state is 1 when the classification result is 1, p01 the probability that the true state is 0 when the classification result is 1, p00 the probability that the true state is 0 when the classification result is 0, and p10 the probability that the true state is 1 when the classification result is 0.
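The probabilities pij can be estimated by counting over the collated samples of Table 1. A hypothetical sketch (the patent gives no explicit counting formula in the text):

```python
import numpy as np

def estimate_pij(pred, true):
    """Estimate p[i, j] = P(true state = i | classification result = j)
    from per-sample 0/1 predictions and ground-truth states of one device."""
    pred, true = np.asarray(pred), np.asarray(true)
    p = np.zeros((2, 2))
    for j in (0, 1):
        mask = pred == j
        if mask.any():                      # leave the column at 0 if j never predicted
            for i in (0, 1):
                p[i, j] = np.mean(true[mask] == i)
    return p

# Example: five samples of one device's state.
p = estimate_pij(pred=[1, 1, 1, 0, 0], true=[1, 0, 1, 0, 0])
```

Each column of `p` is a conditional distribution over the true state and therefore sums to 1.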
λ in the fuzzy integral is acquired from the credibility of each classification result of each device, as follows:
In this embodiment, for the two deep-neural-network classification results of device 1, p00 represents the probability (i.e. the credibility) that a recognition result of 0 is correct, and p11 the probability that a recognition result of 1 is correct. The intermediate parameter λ is obtained from the formula λ + 1 = (1 + p00)(1 + p11).
The fuzzy measure g in the fuzzy integral is acquired as follows:
g(A1) = g({y1}) = g1
g(Ai) = gi + g(Ai-1) + λ·gi·g(Ai-1), 1 < i ≤ n;
The fuzzy integral value e, i.e. the final classification result, is then obtained from the fuzzy integral formula.
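Putting the embodiment's formulas together for one device: λ from λ + 1 = (1 + p00)(1 + p11), the recursive fuzzy measure, and a final combination of the two supports. The patent's final integral formula is an image not reproduced in the text, so the max–min (Sugeno-style) combination used here is an assumption:

```python
def fuse_device_state(f1, f2, p00, p11):
    """Fuse the two classifiers' support degrees f1, f2 for one device,
    using the classifiers' credibilities p00, p11 as fuzzy densities."""
    lam = (1 + p00) * (1 + p11) - 1           # lambda + 1 = (1 + p00)(1 + p11)
    # Sort supports descending, keeping each paired with its density.
    (fa, ga), (fb, gb) = sorted([(f1, p00), (f2, p11)], key=lambda t: -t[0])
    gA1 = ga                                  # g(A_1) = g_1
    gA2 = gb + gA1 + lam * gb * gA1           # g(A_2) = g_2 + g(A_1) + lam*g_2*g(A_1)
    return max(min(fa, gA1), min(fb, gA2))    # assumed max-min integral

e = fuse_device_state(f1=0.9, f2=0.7, p00=0.8, p11=0.6)
```

With the illustrative inputs above, the fused value is capped by the first classifier's credibility: e = max(min(0.9, 0.8), min(0.7, g(A2))) = 0.8.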
the invention also discloses a transformer substation equipment inspection method based on multi-source heterogeneous data fusion, which comprises the following steps:
the multi-source heterogeneous data acquisition comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all equipment in the total station to acquire a plurality of video image data of each equipment; the sensing data acquisition module acquires sensing data from different aspects of all the equipment of the total station by adopting a sensor and is used for acquiring multi-aspect parameters of each equipment;
the multi-source heterogeneous data storage converts the various data acquired by the video image data acquisition module and the sensing data acquisition module into unified formats and stores them;
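As an illustration of the format-unification step, a minimal sketch follows; the record schema and field names are assumptions for illustration, not the patent's actual storage format:

```python
import json
import time

def normalize(source: str, payload) -> str:
    """Wrap heterogeneous readings (image references, sensor values) in a
    single record schema before storage, so downstream fusion code can
    consume video and sensor data uniformly."""
    record = {
        "source": source,          # e.g. "video" or "sensor"
        "timestamp": time.time(),  # acquisition time
        "payload": payload,        # source-specific content
    }
    return json.dumps(record)

# A sensor reading and a video frame reference stored in the same format:
row = normalize("sensor", {"device": "transformer-1", "temp_c": 61.5})
```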
multi-source heterogeneous data fusion, comprising: deep neural network classification and classification result fusion,
the deep neural network classification is to extract characteristics of the collected video image data and the collected sensing data through a deep neural network model respectively and obtain the recognition result of each equipment state;
the classification result fusion records the identification result of the equipment state obtained from the data acquired by the video image data acquisition module as a first classification result, records the identification result of the equipment state obtained from the data acquired by the sensing data acquisition module as a second classification result, and fuses the first classification result and the second classification result by a fuzzy integral algorithm to obtain the final equipment state identification result.
The deep neural network classification comprises a convolutional neural network and a deep stochastic configuration network, wherein the convolutional neural network performs feature target recognition and target state recognition on the data collected by the video image data acquisition module, and the deep stochastic configuration network performs state recognition on the data collected by the sensing data acquisition module.
In this embodiment, given the large data volume, high feature dimension, and short search-response-time requirement of work-ticket video images, a multi-layer compressed feature extraction algorithm based on a structure-adaptive convolutional neural network is adopted, comprising the following steps: (1) read the training sample images and test sample images; (2) normalize the size, pose, and illumination of all images; (3) randomly select processed training images and generate new training samples by weighted averaging, so as to augment the sample set; (4) initialize the network structure of the convolutional neural network and set two index values that control network growth: the average error of the system and the recognition rate of the training samples; (5) feed the processed training samples into the initial network and judge whether it shows a convergence trend within a specified number of training iterations; if the network does not converge, add a global branch on top of the initial network, fix the original network structure, and train only the newly added branch.
If the newly added branch still fails to converge, add another branch, and so on, until some branch meets the convergence condition; then train the whole network until the average error of the system reaches the expected value, completing the global network expansion. If the initial network does show a convergence trend, no global expansion is performed, and the initial network is trained until the average error of the system reaches the expected value. (6) After global network learning is finished, if the recognition rate of the training samples has not reached the expected value, perform a local expansion: add a local branch, fix the original global network structure, and train only the newly added local branch until the recognition rate of the training samples reaches the expected value. If the recognition rate already meets the expected value after global learning, no local expansion is performed. This realizes compressed image feature extraction while guaranteeing the recognition rate.
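The growth loop of steps (5)-(6) can be sketched as control flow. The ToyNet class below is a stand-in whose convergence and accuracy behavior is invented purely to exercise the loop; it is not a real CNN, and all names are illustrative:

```python
def train_adaptive(network, target_error, target_acc, max_branches=5):
    """Grow global branches until a convergence trend appears, train to the
    target average error, then grow local branches until the training-sample
    recognition rate reaches its expected value."""
    branches = 0
    # (5) global expansion: add branches while the net shows no convergence trend
    while not network.converging() and branches < max_branches:
        network.add_global_branch()   # original structure stays fixed;
        branches += 1                 # only the new branch is trained
    network.train_until(avg_error=target_error)

    # (6) local expansion: add local branches until the recognition rate is met
    while network.accuracy() < target_acc and branches < max_branches:
        network.add_local_branch()
        branches += 1
    return network

class ToyNet:
    """Minimal stand-in: pretends to converge after two global branches and
    to gain 10% accuracy per local branch."""
    def __init__(self):
        self.global_branches = 0
        self.local_branches = 0
    def converging(self):
        return self.global_branches >= 2
    def add_global_branch(self):
        self.global_branches += 1
    def add_local_branch(self):
        self.local_branches += 1
    def train_until(self, avg_error):
        pass  # real training would run here
    def accuracy(self):
        return 0.8 + 0.1 * self.local_branches

net = train_adaptive(ToyNet(), target_error=0.01, target_acc=0.95)
```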
In this embodiment, picture samples of meters at the operation site of the primary substation equipment are collected, the current situation of typical meter application scenes is analyzed, the configuration, form, and visual characteristics of the substation's SF6 pressure gauges, oil level gauges, and the like are investigated, and reading schemes for the values and states of digital meters and dial meters are analyzed and proposed. Normal working intervals and abnormal state intervals of the different meters are determined according to the equipment management and operation regulations, providing support for training intelligent image recognition algorithms for the meters.
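The interval check described above can be sketched as follows; the interval values below are hypothetical placeholders, not the regulation values referenced in the text:

```python
# Map each meter type to its normal working interval (made-up example values).
NORMAL_INTERVALS = {
    "sf6_pressure_mpa": (0.50, 0.65),   # hypothetical SF6 gauge range
    "oil_level_pct":    (30.0, 80.0),   # hypothetical oil level range
}

def meter_state(meter: str, reading: float) -> str:
    """Classify a recognized meter reading against its normal interval."""
    lo, hi = NORMAL_INTERVALS[meter]
    return "normal" if lo <= reading <= hi else "abnormal"

state_a = meter_state("sf6_pressure_mpa", 0.58)   # inside the interval
state_b = meter_state("oil_level_pct", 12.0)      # below the interval
```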
The secondary equipment comprises the substation SF6 pressure gauge, oil level gauge, device panel liquid crystal display and indicator lamps, switching handle, air switch, and secondary pressing plate. The technical embodiment is similar to that of the primary equipment. The secondary pressing plates studied in this project take several forms: 1. dual-position open-close type; 2. dual-position plug-in type; 3. three-position switching type. Reading and checking of the pressing-plate state is realized by comparing the shape and state of the plate.
In this embodiment, the analysis and identification of video face images comprises image capture, face detection and positioning, image preprocessing, feature extraction, and identification, realized as follows: (1) a camera collects face images, or photos are used to build face image files; these files are processed to generate a face database that serves as the identification basis; (2) the camera acquires video of the monitored scene, and a face detection algorithm applied to the video stream obtains face images and records the corresponding position, time, and other information; (3) the current face image is compared with the stored data in the database. The most critical part of face recognition lies in feature extraction and classification.
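Steps (1)-(3) can be sketched end-to-end with stand-ins; a histogram "feature extractor" and random "face" images replace the real detector and learned embedding model, so every name here is an illustrative assumption:

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def identify(query: np.ndarray, database: dict) -> str:
    """Step (3): nearest-neighbor match of the query face against the database."""
    q = extract_features(query)
    return min(database, key=lambda name: np.linalg.norm(q - database[name]))

# Step (1): build the face database from enrolled images (random stand-ins).
rng = np.random.default_rng(0)
faces = {name: rng.integers(0, 256, (64, 64)) for name in ("alice", "bob")}
db = {name: extract_features(img) for name, img in faces.items()}

# Step (2)-(3): a frame containing an enrolled face is matched to its identity.
match = identify(faces["alice"], db)
```

A real system would replace both stand-ins with a trained face detector and a discriminative feature extractor, but the enroll/detect/match structure is the same.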
The present invention is not limited to the above-described embodiments; those skilled in the art may make various modifications based on the above conception without creative effort, and such modifications fall within the scope of the present invention.
Claims (9)
1. A substation equipment inspection system based on multi-source heterogeneous data fusion, comprising:
the multi-source heterogeneous data acquisition module comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all equipment in the total station to acquire a plurality of video image data of each equipment; the sensing data acquisition module acquires sensing data from different aspects of all the equipment of the total station by adopting a sensor and is used for acquiring multi-aspect parameters of each equipment;
the multi-source heterogeneous data storage module is used for converting the various data acquired by the video image data acquisition module and the sensing data acquisition module into unified formats and storing them;
a multi-source heterogeneous data fusion module comprising: a deep neural network classification module and a classification result fusion module,
the deep neural network classification module extracts characteristics of data acquired by the video image data acquisition module and the sensing data acquisition module through a deep neural network model respectively to acquire an identification result of each equipment state;
the classification result fusion module records the identification result of the equipment state obtained by the data acquired by the video image data acquisition module as a first classification result, records the identification result of the equipment state obtained by the data acquired by the sensing data acquisition module as a second classification result, and fuses the first classification result and the second classification result by adopting a fuzzy integral algorithm to obtain a final equipment state identification result.
2. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 1, wherein the monitoring cameras are arranged in multiple orientations and at multiple angles with respect to the equipment, for acquiring multi-angle video data of each device; the sensors comprise fire-fighting sensing equipment (flame detectors, fire-fighting pipeline pressure sensors, independent smoke and fire detectors, and the like), environment sensors (temperature and humidity sensors, water sensors, SF6 sensors, and the like), and in-transformer sensors (vibration monitoring sensors, current sensors, and the like), for acquiring various sensing information of each device.
3. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 2, wherein the monitoring cameras form a three-level structure divided into close-range, intermediate-range, and far-range levels; a close-range camera collects close-range data of equipment, the data collection range of an intermediate-range camera covers the area collected by a preset number of close-range cameras, and the data collection range of a far-range camera covers the area collected by a preset number of intermediate-range cameras.
4. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 1, wherein the deep neural network classification module comprises a convolutional neural network and a deep stochastic configuration network, the convolutional neural network being used for feature target recognition and target state recognition on data collected by the video image data acquisition module, and the deep stochastic configuration network being used for state recognition on data collected by the sensing data acquisition module.
5. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 1, wherein the equipment is divided into transformers, wires and cables, and switch-type equipment; the equipment states of transformers and of wires and cables are normal and abnormal, and the equipment states of switch-type equipment are open and closed.
6. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 1, wherein the classification result fusion module specifically comprises:
(6.1) respectively establishing a matrix according to the first classification result and the second classification result, wherein each column of each matrix represents the identification result of different samples on the same device, the device state is normal and is recorded as 0, the device state is abnormal and is recorded as 1, the device state is open and is recorded as 0, the device state is closed and is recorded as 1, and each row of each matrix represents the identification result of different devices in the same sample;
(6.2) obtaining, according to a formula, the probability that the real value is i when the classification result is j in the first and second classification results of each device, where i and j denote the device state and take values in {0, 1}; this probability is denoted pij;
(6.3) obtaining λ in the fuzzy integral according to the credibility of the first classification result of each device, wherein the formula is as follows:
(6.4) acquiring the g value in the fuzzy integral, wherein the formulas are as follows:
g(A1)=g({y1})=g1;
g(Ai)=gi+g(Ai-1)+λ·gi·g(Ai-1), 1<i≤n;
(6.5) acquiring the fuzzy integral value, namely the final classification result, wherein the formula is as follows:
7. The substation equipment inspection system based on multi-source heterogeneous data fusion of claim 1, wherein the deep neural network classification module specifically comprises:
7.1, establishing a convolutional neural network and a deep stochastic configuration network, and setting the network model parameters to initial values;
7.2, determining the number of layers and the loss function of the network models, the loss function being used to correct the model parameters during back propagation until it converges, thereby establishing the models;
7.3, acquiring data samples at preset moments, recording the true state values at the corresponding moments, and inputting the data samples into the convolutional neural network and the deep stochastic configuration network respectively, to obtain the first classification result and the second classification result correspondingly.
8. A transformer substation equipment inspection method based on multi-source heterogeneous data fusion is characterized by comprising the following steps:
the multi-source heterogeneous data acquisition comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all equipment in the total station to acquire a plurality of video image data of each equipment; the sensing data acquisition module acquires sensing data from different aspects of all the equipment of the total station by adopting a sensor and is used for acquiring multi-aspect parameters of each equipment;
the multi-source heterogeneous data storage converts the various data acquired by the video image data acquisition module and the sensing data acquisition module into unified formats and stores them;
multi-source heterogeneous data fusion, comprising: deep neural network classification and classification result fusion,
the deep neural network classification is to extract characteristics of the collected video image data and the collected sensing data through a deep neural network model respectively and obtain the recognition result of each equipment state;
the classification result fusion records the identification result of the equipment state obtained from the data acquired by the video image data acquisition module as a first classification result, records the identification result of the equipment state obtained from the data acquired by the sensing data acquisition module as a second classification result, and fuses the first classification result and the second classification result by a fuzzy integral algorithm to obtain the final equipment state identification result.
9. The substation equipment inspection method based on multi-source heterogeneous data fusion of claim 8, wherein the deep neural network classification comprises a convolutional neural network and a deep stochastic configuration network, the convolutional neural network being used for feature target recognition and target state recognition on data collected by the video image data acquisition module, and the deep stochastic configuration network being used for state recognition on data collected by the sensing data acquisition module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010022973.0A CN111209434B (en) | 2020-01-09 | 2020-01-09 | Substation equipment inspection system and method based on multi-source heterogeneous data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111209434A true CN111209434A (en) | 2020-05-29 |
CN111209434B CN111209434B (en) | 2024-02-13 |
Family
ID=70786095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010022973.0A Active CN111209434B (en) | 2020-01-09 | 2020-01-09 | Substation equipment inspection system and method based on multi-source heterogeneous data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111209434B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102819239A (en) * | 2011-06-08 | 2012-12-12 | 同济大学 | Intelligent fault diagnosis method of numerical control machine tool |
CN103630244A (en) * | 2013-12-18 | 2014-03-12 | 重庆大学 | Equipment fault diagnosis method and system of electric power system |
US20190114481A1 (en) * | 2017-10-18 | 2019-04-18 | The Trustees Of Columbia University In The City Of New York | Methods and systems for pattern characteristic detection |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582399A (en) * | 2020-05-15 | 2020-08-25 | 吉林省森祥科技有限公司 | Multi-sensor information fusion method for sterilization robot |
CN112082094A (en) * | 2020-08-12 | 2020-12-15 | 苏州聆听智能科技有限公司 | Heterogeneous sensor-based pipeline abnormal signal positioning detection method and device |
CN112184678A (en) * | 2020-09-30 | 2021-01-05 | 国网北京市电力公司 | Image recognition method, image recognition device, computer-readable storage medium and processor |
CN112465030A (en) * | 2020-11-28 | 2021-03-09 | 河南大学 | Multi-source heterogeneous information fusion fault diagnosis method based on two-stage transfer learning |
CN112667717A (en) * | 2020-12-23 | 2021-04-16 | 贵州电网有限责任公司电力科学研究院 | Transformer substation inspection information processing method and device, computer equipment and storage medium |
CN112766795A (en) * | 2021-01-29 | 2021-05-07 | 长兴云尚科技有限公司 | Pipe gallery intelligent information management method and system based on cloud processing |
CN112766795B (en) * | 2021-01-29 | 2024-05-07 | 长兴云尚科技有限公司 | Cloud processing-based pipe gallery intelligent information management method and system |
CN112700840A (en) * | 2021-02-03 | 2021-04-23 | 山东中医药大学 | Multi-mode human body action recognition scheme based on dual-channel heterogeneous neural network |
CN113076991A (en) * | 2021-03-30 | 2021-07-06 | 中国人民解放军93114部队 | Multi-target information comprehensive processing method and device based on nonlinear integral algorithm |
CN113076991B (en) * | 2021-03-30 | 2024-03-08 | 中国人民解放军93114部队 | Nonlinear integration algorithm-based multi-target information comprehensive processing method and device |
CN113129285A (en) * | 2021-04-20 | 2021-07-16 | 国网山东省电力公司安丘市供电公司 | Method and system for verifying regional protection pressing plate |
CN113110351A (en) * | 2021-04-28 | 2021-07-13 | 广东省科学院智能制造研究所 | Industrial production field heterogeneous state data acquisition system and method |
CN113297972B (en) * | 2021-05-25 | 2022-03-22 | 国网湖北省电力有限公司检修公司 | Transformer substation equipment defect intelligent analysis method based on data fusion deep learning |
CN113297972A (en) * | 2021-05-25 | 2021-08-24 | 国网湖北省电力有限公司检修公司 | Transformer substation equipment defect intelligent analysis method based on data fusion deep learning |
CN113438451A (en) * | 2021-06-21 | 2021-09-24 | 易成功(厦门)信息科技有限公司 | Unified standardization processing platform and method for multi-terminal multi-source data |
CN113438451B (en) * | 2021-06-21 | 2022-04-19 | 易成功(厦门)信息科技有限公司 | Unified standardization processing platform and method for multi-terminal multi-source data |
CN113327219B (en) * | 2021-06-21 | 2022-01-28 | 易成功(厦门)信息科技有限公司 | Image processing method and system based on multi-source data fusion |
CN113327219A (en) * | 2021-06-21 | 2021-08-31 | 易成功(厦门)信息科技有限公司 | Image processing method and system based on multi-source data fusion |
CN114755528A (en) * | 2022-03-04 | 2022-07-15 | 重庆邮电大学 | Method for positioning faults at two ends of power distribution network |
CN114697771A (en) * | 2022-03-16 | 2022-07-01 | 电子科技大学 | Multi-sensor heterogeneous data synchronous acquisition and matching system |
CN114863324A (en) * | 2022-04-18 | 2022-08-05 | 山东浪潮科学研究院有限公司 | Fire monitoring method, equipment and medium for new energy power equipment |
CN114863324B (en) * | 2022-04-18 | 2024-05-07 | 山东浪潮科学研究院有限公司 | Fire monitoring method, equipment and medium for new energy power equipment |
CN116109599A (en) * | 2023-02-17 | 2023-05-12 | 湖北清江水电开发有限责任公司 | Carbon brush sparking monitoring system of generator |
CN116109599B (en) * | 2023-02-17 | 2024-06-11 | 湖北清江水电开发有限责任公司 | Carbon brush sparking monitoring system of generator |
CN117312828A (en) * | 2023-09-28 | 2023-12-29 | 光谷技术有限公司 | Public facility monitoring method and system |
CN117312828B (en) * | 2023-09-28 | 2024-06-14 | 光谷技术有限公司 | Public facility monitoring method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111209434B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111209434A (en) | Substation equipment inspection system and method based on multi-source heterogeneous data fusion | |
CN110598736B (en) | Power equipment infrared image fault positioning, identifying and predicting method | |
CN110827251A (en) | Power transmission line locking pin defect detection method based on aerial image | |
CN108711148B (en) | Tire defect intelligent detection method based on deep learning | |
CN112734692A (en) | Transformer equipment defect identification method and device | |
CN112070134A (en) | Power equipment image classification method and device, power equipment and storage medium | |
CN115270965A (en) | Power distribution network line fault prediction method and device | |
CN112697798B (en) | Infrared image-oriented diagnosis method and device for current-induced thermal defects of power transformation equipment | |
CN115395646B (en) | Intelligent operation and maintenance system of digital twin traction substation | |
CN112016772A (en) | Natural disaster early warning system and method | |
CN116681962A (en) | Power equipment thermal image detection method and system based on improved YOLOv5 | |
CN114743089A (en) | Image recognition GIS fault diagnosis device and method based on SSA-SVM | |
CN112419243A (en) | Power distribution room equipment fault identification method based on infrared image analysis | |
CN117032165A (en) | Industrial equipment fault diagnosis method | |
CN117277566A (en) | Power grid data analysis power dispatching system and method based on big data | |
CN115880472A (en) | Intelligent diagnosis and analysis system for electric power infrared image data | |
CN115830302A (en) | Multi-scale feature extraction and fusion power distribution network equipment positioning identification method | |
CN116244600A (en) | Method, system and equipment for constructing GIS intermittent discharge mode identification model | |
CN110045691A (en) | A kind of multitasking fault monitoring method of multi-source heterogeneous big data | |
CN115311601A (en) | Fire detection analysis method based on video analysis technology | |
CN114220084A (en) | Distribution equipment defect identification method based on infrared image | |
CN115598459A (en) | Power failure prediction method for 10kV feeder line fault of power distribution network | |
Han et al. | Research on Portable Infrared Diagnosis Device of Substation Equipment | |
CN117612047B (en) | Unmanned aerial vehicle inspection image recognition method for power grid based on AI large model | |
CN113780224B (en) | Unmanned inspection method and system for transformer substation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||