CN111209434B - Substation equipment inspection system and method based on multi-source heterogeneous data fusion - Google Patents

Substation equipment inspection system and method based on multi-source heterogeneous data fusion

Info

Publication number
CN111209434B
CN111209434B (application CN202010022973.0A)
Authority
CN
China
Prior art keywords
equipment
classification result
data
acquisition module
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010022973.0A
Other languages
Chinese (zh)
Other versions
CN111209434A (en)
Inventor
王晨麟
陆烨
陈少达
万涛
范叶平
杨彬彬
肖学权
郝战
王浩
王天鹏
周钰山
付饶
滕松
孙鹏
杨德胜
马冬
郭瑞祥
尚守卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Xuzhou Power Supply Co
State Grid Information and Telecommunication Co Ltd
Anhui Jiyuan Software Co Ltd
Original Assignee
State Grid Xuzhou Power Supply Co
State Grid Information and Telecommunication Co Ltd
Anhui Jiyuan Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Xuzhou Power Supply Co, State Grid Information and Telecommunication Co Ltd, Anhui Jiyuan Software Co Ltd filed Critical State Grid Xuzhou Power Supply Co
Priority to CN202010022973.0A priority Critical patent/CN111209434B/en
Publication of CN111209434A publication Critical patent/CN111209434A/en
Application granted granted Critical
Publication of CN111209434B publication Critical patent/CN111209434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/02Computing arrangements based on specific mathematical models using fuzzy logic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Algebra (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Pure & Applied Mathematics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)

Abstract

The invention discloses a substation equipment inspection system based on multi-source heterogeneous data fusion, which comprises: a multi-source heterogeneous data acquisition module consisting of a video image data acquisition module and a sensing data acquisition module; a multi-source heterogeneous data storage module for converting and storing the various data acquired by the video image data acquisition module and the sensing data acquisition module; and a multi-source heterogeneous data fusion module comprising deep neural network classification and classification result fusion. The invention adopts a plurality of cameras to collect video data and a plurality of sensors to collect sensing data, forming multiple data sources for each device; two deep neural networks then identify and classify the video data and the sensing data respectively for these data sources, the two identification and classification results are fused by a fuzzy integral algorithm, and a better recognition effect is obtained by combining the multiple data with the respective characteristics and advantages of the two classification results.

Description

Substation equipment inspection system and method based on multi-source heterogeneous data fusion
Technical Field
The invention relates to the field of power equipment detection, in particular to a substation equipment inspection system and method based on multi-source heterogeneous data fusion.
Background
Inspection of substation equipment and environment is an important means of ensuring normal operation of a substation. With the development of smart grids, the number of unattended substations keeps increasing. Existing substation inspection robots are equipped with various sensors, instruments, cameras and the like; they inspect the outdoor high-voltage equipment and the environment of unattended substations and acquire various data on the main equipment and the environment in real time, so that defects and anomalies of the power equipment can be found in time by background analysis algorithms, providing a necessary guarantee for the safe operation of the substation.
In practice, the real scene of a substation is generally complex: the appearance of the same target differs greatly under different illumination, occlusion, viewing angles and other conditions, and current mainstream techniques only provide good recognition performance in simple environments, so false detections and missed detections occur in target detection. In monitoring, improving the detection accuracy of a target often requires describing multiple state parameters; comprehensively analysing these measured parameters, improving the redundancy and complementarity of the information, and thereby raising the detection accuracy is a problem that currently needs to be solved.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a substation equipment inspection system and method based on multi-source heterogeneous data fusion. The technical scheme is as follows: multiple cameras and multiple sensors are used to acquire multi-aspect parameter information for every device of the total station, which addresses the false detections and missed detections of fault information caused by insufficient sensors; the information acquired by the cameras and sensors is preliminarily diagnosed by classifiers to determine the likelihood that the fault to be diagnosed belongs to each fault type; and, fully considering the degree of association between each classifier and the different fault types, decision-level fusion diagnosis is performed with a fuzzy integral fusion method, thereby improving the detection accuracy.
The specific scheme provided by the invention is as follows:
a substation equipment inspection method based on multi-source heterogeneous data fusion comprises the following specific steps:
(1) Multi-source heterogeneous data acquisition: monitoring cameras comprehensively monitor all equipment of the total station, obtaining multiple video image data for each device, and sensors acquire sensing data from different aspects of all station equipment, obtaining multi-aspect parameters for each device;
(2) Multi-source heterogeneous data storage: the various data acquired by the video image data acquisition module and the sensing data acquisition module are converted and stored;
(3) Multi-source heterogeneous data fusion, comprising deep neural network classification and classification result fusion: data features are extracted from the collected video image data and sensing data and the target states are identified, and fuzzy integration is used to fuse all the identification results to obtain the final equipment state identification result.
The deep neural network classification establishes a convolutional neural network and a deep random configuration network, with the network model parameters set to initial values; the number of layers and the loss function of each network model are determined, the loss function being used to correct the model parameters during back-propagation until it converges and the model is established; data samples are collected at preset moments and the true values of the setting states at the corresponding moments are recorded, and the data samples are input into the convolutional neural network and the deep random configuration network respectively, correspondingly yielding a first classification result and a second classification result.
The classification result fusion records the identification result of the equipment state obtained from the data collected by the video image data acquisition module as a first classification result and the identification result of the equipment state obtained from the data collected by the sensing data acquisition module as a second classification result, and a fuzzy integral algorithm is adopted to fuse the first classification result with the second classification result to obtain the final equipment state identification result.
Further, the convolutional neural network specifically comprises the following steps:
(1) Divide the video image data proportionally into a training set, a test set and a validation set, and make labels for the data set;
(2) Design the structure of the deep convolutional neural network and set the parameters of the convolutional, pooling and fully connected layers; max-pooling is used as the pooling method, softmax regression is used as the classifier, and the network model is set to four layers, in which the first and third layers are convolutional layers and the other two layers are pooling layers;
(3) Take the training-set data as the input of the convolutional neural network, train it with a back-propagation algorithm using stochastic gradient descent to optimize the loss function of the whole model, stop training when the error of the network on the training set has converged, and save the parameters of each layer of the network;
(4) Set a category confidence threshold, input the test-set and validation-set data into the convolutional neural network, and fine-tune the network to obtain the output.
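To make steps (2) and (3) above concrete, a minimal PyTorch sketch of such a four-layer network and one stochastic-gradient-descent training step is given below. The channel counts, kernel sizes, 128x128 input resolution, two state classes and learning rate are illustrative assumptions, not values specified in the patent.

# Minimal sketch: two convolutional layers (layers 1 and 3), two max-pooling layers
# (layers 2 and 4) and a softmax-regression head, trained with back-propagation and SGD.
import torch
import torch.nn as nn

class InspectionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),   # layer 1: convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                              # layer 2: max-pooling
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # layer 3: convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                              # layer 4: max-pooling
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)  # softmax regression head

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)       # raw scores; softmax is applied inside the loss

model = InspectionCNN()
criterion = nn.CrossEntropyLoss()       # softmax + negative log-likelihood
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # stochastic gradient descent

frames = torch.randn(8, 3, 128, 128)    # dummy batch standing in for video frames
labels = torch.randint(0, 2, (8,))      # dummy equipment-state labels
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()                         # back-propagation
optimizer.step()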
Further, the specific steps of the deep random configuration network are as follows:
(1) Dividing sensing acquisition data into a training set, a testing set and a verification set according to a proportion, and making a label of the data set;
(2) Designing the structure of the deep convolutional neural network, setting parameters of a convolutional layer, a pooling layer and a full-connection layer, and selecting an activation function to adopt a LeakyReLU;
(3) The data of the training set is used as the input of the convolutional neural network, the convolutional neural network is trained by adopting a learning rate self-adaptive random gradient descent method, the loss function of the whole model is optimized, when the error of the convolutional neural network on the training set is completely converged, the training is stopped, and the parameters of all layers of the convolutional neural network are saved;
specifically, a low-rank decomposition strategy is adopted: from the known computational complexity of a convolutional layer before low-rank decomposition and the required acceleration factor, the number of output feature maps of the first convolutional layer after each low-rank decomposition is calculated, and the convolutional neural network is trained with layer-by-layer low-rank decomposition so as to reduce the amount of computation;
specifically, a network pruning strategy is adopted: the convolutional neural network model after low-rank decomposition is retrained, k-means clustering is performed on the weights of the remaining connections of each layer of the network, the clustering results are fine-tuned, and redundant connections of the convolutional and fully connected layers are removed so as to reduce storage.
(4) Setting a category confidence threshold, inputting the data of the test set and the verification set into the convolutional neural network, and performing fine adjustment on the convolutional neural network to obtain output.
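As an illustration of the k-means weight-sharing described above, the following Python sketch clusters one layer's weights and replaces each weight by its cluster centroid, so that only a small centroid codebook plus cluster indices needs to be stored. The layer shape, the 16 clusters and the use of scikit-learn are assumptions made for the example; in the scheme above the network would subsequently be fine-tuned with the shared weights to recover accuracy.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 64))        # weights of one remaining (unpruned) layer

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
labels = kmeans.fit_predict(weights.reshape(-1, 1))   # cluster the scalar weights
centroids = kmeans.cluster_centers_.ravel()

shared = centroids[labels].reshape(weights.shape)     # shared-weight approximation
print("codebook size:", centroids.size,
      "mean absolute error:", np.abs(weights - shared).mean())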
Further, the fuzzy integration algorithm specifically comprises the following steps:
(1) Preprocess the first classification result and the second classification result with a fuzzy technique, perform a preliminary fault diagnosis, and determine the possible fault classification results, which are taken as the fault candidate types. The selected membership function maps the input data x to be processed to a processed value y and uses the natural constant e.
(2) According to the possible fault classification results, form the fault candidate type set D = {d_1, d_2, …, d_N}, where d denotes a fault candidate type;
(3) According to the possible fault classification results, form for each fault candidate type the set of directly associated types D_i-direct = {d_m, …, d_n} and the set of inter-level (indirectly) associated types D_i-indirect = {d_k, …, d_l};
(4) Determine the fuzzy densities, i.e. g_i = g({x_i}), i = 1, 2, …, n, where g_i is the fuzzy density of the i-th information source, i.e. its weight;
(5) Determine λ according to λ + 1 = (1 + g_1)(1 + g_2)·…·(1 + g_n); then, according to g(X_1) = g({x_1}) and g(X_i) = g({x_i}) + g(X_{i-1}) + λ·g({x_i})·g(X_{i-1}), i = 2, …, n, find the fuzzy measure g, where λ is an intermediate quantity;
(6) According to the topology information and the diagnosis conclusion of each type, form the set F_i-direct = {f_m, …, f_n} of support degrees of the directly associated types for the fault candidate type, and the set F_i-indirect = {f_k, …, f_l} of support degrees of the inter-level associated types for the fault candidate type;
(7) According to the fuzzy integral formula, calculate the fuzzy integral value e_i; e_i is the fault-type possibility index given by the comprehensive diagnosis. These indexes form the fault candidate possibility index set E = {e_1, e_2, …, e_N}, and the fault type is determined according to this set, thereby giving the final classification result.
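To make steps (4)-(7) concrete, a minimal Python sketch of the λ-fuzzy-measure construction and a Sugeno (max-min) fuzzy integral for one fault candidate type is given below. The support degrees and fuzzy densities are made-up example values, the λ normalization mirrors the two-classifier formula λ + 1 = (1 + p_00)(1 + p_11) used later in the embodiment, and the max-min Sugeno integral is an assumption, since the patent does not reproduce its fuzzy-integral formula.

import numpy as np

def fuse(support, density):
    """Fuse support degrees f_i with fuzzy densities g_i via a Sugeno-type fuzzy integral."""
    support = np.asarray(support, dtype=float)
    density = np.asarray(density, dtype=float)
    lam = np.prod(1.0 + density) - 1.0       # lambda + 1 = prod(1 + g_i), as in the embodiment
    order = np.argsort(-support)             # sort sources by decreasing support
    f, g = support[order], density[order]
    g_acc = g[0]                             # g(A_1) = g_1
    e = min(f[0], g_acc)
    for i in range(1, len(f)):
        g_acc = g[i] + g_acc + lam * g[i] * g_acc   # g(A_i) = g_i + g(A_{i-1}) + lambda*g_i*g(A_{i-1})
        e = max(e, min(f[i], g_acc))                # e = max_i min(f_i, g(A_i))
    return e

supports = [0.7, 0.4]    # support of the video branch and the sensing branch for one fault type
densities = [0.9, 0.8]   # credibility (fuzzy density) of each classifier
print(fuse(supports, densities))   # fault-type possibility index e_i for this candidate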
Substation equipment inspection system based on heterogeneous data fusion of multisource includes:
the multi-source heterogeneous data acquisition module comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all the devices of the total station, and acquires a plurality of video image data of each device; the sensing data acquisition module acquires sensing data from different aspects of all equipment of the total station by adopting a sensor and is used for acquiring multiple-aspect parameters of each equipment;
the multi-source heterogeneous data storage module is used for converting and storing various data acquired by the video image data acquisition module and the sensing data acquisition module;
a multi-source heterogeneous data fusion module comprising: the deep neural network classification module and the classification result fusion module are used for acquiring the recognition result of the equipment state;
the deep neural network classification module extracts characteristics of the data acquired by the video image data acquisition module and the sensing data acquisition module through the deep neural network model respectively to acquire the identification result of each equipment state;
the classification result fusion module is used for marking the recognition result of the equipment state obtained by the data acquired by the video image data acquisition module as a first classification result, marking the recognition result of the equipment state obtained by the data acquired by the sensing data acquisition module as a second classification result, and fusing the first classification result and the second classification result by adopting a fuzzy integration algorithm to obtain a final equipment state recognition result.
The substation equipment inspection system and method based on multi-source heterogeneous data fusion have the following beneficial effects:
Multiple cameras collect multiple video data and multiple sensors collect multiple sensing data, forming several data sources for each device; two deep neural networks then identify and classify the video data and the sensing data respectively, the two identification and classification results are fused by a fuzzy integral algorithm, and a better recognition effect is obtained by combining the multiple data sources with the respective characteristics and advantages of the two classification results.
Drawings
Fig. 1 is a flow block diagram of a substation equipment inspection system based on multi-source heterogeneous data fusion;
fig. 2 is a data processing flow block diagram of a substation equipment inspection system based on multi-source heterogeneous data fusion;
description of the embodiments
The substation equipment inspection system and method based on multi-source heterogeneous data fusion according to the present invention will be further explained and illustrated with reference to the accompanying drawings and specific embodiments; however, this explanation and illustration should not be taken to limit the technical scheme of the present invention.
Referring to fig. 1-2, a substation equipment inspection system based on multi-source heterogeneous data fusion includes:
the multi-source heterogeneous data acquisition module comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all the devices of the total station, and acquires a plurality of video image data of each device; the sensing data acquisition module acquires sensing data from different aspects of all equipment of the total station by adopting a sensor and is used for acquiring multiple-aspect parameters of each equipment;
the multi-source heterogeneous data storage module is used for converting and storing various data acquired by the video image data acquisition module and the sensing data acquisition module;
a multi-source heterogeneous data fusion module comprising: the deep neural network classification module and the classification result fusion module are used for acquiring the recognition result of the equipment state;
the deep neural network classification module extracts characteristics of the data acquired by the video image data acquisition module and the sensing data acquisition module through the deep neural network model respectively to acquire the identification result of each equipment state;
the classification result fusion module is used for marking the recognition result of the equipment state obtained by the data acquired by the video image data acquisition module as a first classification result, marking the recognition result of the equipment state obtained by the data acquired by the sensing data acquisition module as a second classification result, and fusing the first classification result and the second classification result by adopting a fuzzy integration algorithm to obtain a final equipment state recognition result.
As a further optimization of the scheme, the monitoring cameras are arranged in multiple orientations and at multiple angles around the equipment and are used to acquire multi-angle video data of each device; the sensors collect sensing data from the substation SF6 pressure gauges, oil level gauges, device panel liquid crystal displays, indicator lamps and the like, and are used to acquire various sensing information for each device.
In order to acquire more comprehensive video data, the monitoring cameras are organized into a three-level structure of near-view, middle-view and far-view levels: the near-view cameras collect near-view data of the equipment, the data-collection range of a middle-view camera covers the areas collected by a preset number of near-view cameras, and the data-collection range of a far-view camera covers the areas collected by a preset number of middle-view cameras.
In order to fuse the multiple acquired data and capture the many relationships among them so as to identify the equipment state, a deep neural network is used to classify the data. The deep neural network comprises a convolutional neural network and a deep random configuration network: the convolutional neural network performs feature target recognition and target state recognition on the data acquired by the video image data acquisition module, and the deep random configuration network performs state recognition on the data acquired by the sensing data acquisition module.
As a further optimization of the above scheme, the deep neural network classification module specifically includes:
establishing a convolutional neural network and a deep random configuration network, wherein network model parameters are set as initial values;
determining the layer number and a loss function in a network model, wherein the loss function is used for correcting model parameters by the network during back propagation until the loss function converges and the model establishment is completed;
and acquiring data samples according to preset moments, and recording the true values of the setting states at the corresponding moments, wherein the data samples are respectively input into a convolutional neural network and a deep random configuration network, and a first classification result and a second classification result are correspondingly obtained.
In the classification result fusion module, the two equipment-state recognition results obtained by the convolutional neural network and the deep random configuration network are fused to obtain an optimized classification result. A fuzzy integral algorithm is adopted, with the following specific steps:
A matrix is established for the first classification result and for the second classification result respectively; each column of a matrix represents the identification results of different samples for the same device (a normal device state is recorded as 0 and an abnormal state as 1; an open state is recorded as 0 and a closed state as 1), and each row represents the identification results of different devices in the same sample;
the recognition results of the multiple samples of each deep neural network are organized into a table format, as shown in table 1:
Identified state | Device 1 | Device 2 | ...... | Device n
Sample 1         |          |          |        |
Sample 2         |          |          |        |
......           |          |          |        |
Sample m         |          |          |        |
TABLE 1 Organized results of the deep neural network output data
According to the formula, the probability that the true value is i when the classification result is j is obtained for the first classification result and the second classification result of each device, where i and j represent the device state and take the values 0 and 1; this probability is denoted p_ij and the corresponding classification-result entries are denoted S_ij.
The process of using the fuzzy integration algorithm for the state recognition result of one device is described as follows:
TABLE 2 data arrangement results for an apparatus
In the present embodiment, S_ij takes the value 0 or 1 and corresponds to p_ij: p_01 represents the probability that the true value of the device state is 0 when the classification result is 1, p_11 the probability that the true value is 1 when the classification result is 1, p_00 the probability that the true value is 0 when the classification result is 0, and p_10 the probability that the true value is 1 when the classification result is 0.
λ in the fuzzy integral is obtained from the credibility of each classification result for each device. In the present embodiment, for the classification results of the two deep neural network identifications of a device, p_00 represents the probability of being correct, i.e. the credibility, when the identified classification result is 0, and p_11 represents the probability of being correct, i.e. the credibility, when the identified classification result is 1. The formula for obtaining the intermediate quantity λ is: λ + 1 = (1 + p_00)(1 + p_11).
The fuzzy measure value g in the fuzzy integration is obtained by the formulas:
g(A_1) = g({y_1}) = g_1;
g(A_i) = g_i + g(A_{i-1}) + λ·g_i·g(A_{i-1}), 1 < i ≤ n;
the fuzzy integral value e, namely the final classification result, is then obtained from the fuzzy integral formula.
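As a worked illustration of the formulas above, with assumed credibility values p_00 = 0.9 and p_11 = 0.8 (illustrative numbers, not data from the patent), λ + 1 = (1 + 0.9)(1 + 0.8) = 3.42, so λ = 2.42; taking g_1 = p_00 and g_2 = p_11, the fuzzy measure accumulates as g(A_1) = g_1 = 0.9 and g(A_2) = g_2 + g(A_1) + λ·g_2·g(A_1) = 0.8 + 0.9 + 2.42·0.8·0.9 ≈ 3.44, and the fuzzy integral of the classifier outputs with respect to this measure then yields the fused state decision e for the device.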
the invention also discloses a substation equipment inspection method based on multi-source heterogeneous data fusion, which comprises the following steps:
the multi-source heterogeneous data acquisition comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all the devices of the total station, and acquires a plurality of video image data of each device; the sensing data acquisition module acquires sensing data from different aspects of all equipment of the total station by adopting a sensor and is used for acquiring multiple-aspect parameters of each equipment;
the multi-source heterogeneous data storage is used for converting and storing various data acquired by the video image data acquisition module and the sensing data acquisition module;
a multi-source heterogeneous data fusion comprising: the deep neural network classification and the fusion of classification results,
the deep neural network is used for classifying, extracting characteristics of the collected video image data and the collected sensing data through a deep neural network model respectively, and obtaining a recognition result of each equipment state;
the classification result fusion module is used for marking the recognition result of the equipment state obtained by the data acquired by the video image data acquisition module as a first classification result, marking the recognition result of the equipment state obtained by the data acquired by the sensing data acquisition module as a second classification result, and fusing the first classification result and the second classification result by adopting a fuzzy integration algorithm to obtain a final equipment state recognition result.
The deep neural network classification comprises a convolutional neural network and a deep random configuration network, wherein the convolutional neural network is used for carrying out characteristic target identification and target state identification on data acquired by the video image data acquisition module, and the deep random configuration network is used for carrying out state identification on the data acquired by the sensing data acquisition module.
In this embodiment, in view of the large data volume, high feature dimension and short retrieval time of video images, a multi-level compressed feature extraction algorithm based on a structure-adaptive convolutional neural network is adopted, with the following steps:
(1) read the training sample images and the test sample images;
(2) normalize the size, pose and illumination of all images;
(3) randomly select weighted averages of the processed training images to generate new training samples, so that the sample set is augmented;
(4) initialize the network structure of the convolutional neural network and set the two index values that control network growth: the system average error and the recognition rate on the training samples;
(5) feed the processed training samples into the initial network and judge whether it shows a convergence trend within the specified number of training iterations; if it does not converge, add a global branch on top of the initial network, fix the original network structure and train only the newly added branch; if the newly added branch still does not converge, add another branch, and so on, until some added branch satisfies the convergence condition, then train the whole network until the system average error reaches the expected value, completing the global expansion; if the initial network already shows a convergence trend, no global expansion is performed and the initial network is trained until the system average error reaches the expected value;
(6) after the global network learning is finished, if the recognition rate on the training samples has not reached the expected value, perform local expansion, i.e. add a local branch, fix the original global network structure and train only the newly added local branch until the recognition rate reaches the expected value, completing the learning of the whole network; if the recognition rate has already reached the expected value after global learning, no local expansion is performed. In this way, compressed image feature extraction is realized while the recognition rate is guaranteed.
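A heavily simplified Python sketch of this growth control loop follows. Only the two-phase control flow is mirrored: global branches are added until the system average error reaches its target, then local branches are added until the training recognition rate reaches its target; the network, its training and the thresholds are trivial stand-ins, not the patent's implementation.

import random
random.seed(0)

class AdaptiveNet:
    """Stand-in for the structure-adaptive CNN: branches are only counted, not really trained."""
    def __init__(self):
        self.global_branches = 1
        self.local_branches = 0
    def train_newest_branch(self):
        # placeholder for training only the newly added branch with the rest frozen;
        # returns a pretend "system average error" that tends to shrink as branches are added
        return random.uniform(0.0, 0.5) / (self.global_branches + self.local_branches)
    def recognition_rate(self):
        # placeholder for the recognition rate on the training samples
        return min(1.0, 0.7 + 0.05 * (self.global_branches + self.local_branches))

net, target_error, target_rate = AdaptiveNet(), 0.1, 0.95
while net.train_newest_branch() > target_error:   # phase 1: global expansion
    net.global_branches += 1
while net.recognition_rate() < target_rate:       # phase 2: local expansion
    net.local_branches += 1
print(net.global_branches, "global branches,", net.local_branches, "local branches")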
In this embodiment, picture samples of meters at the operation sites of the substation primary equipment are collected, the typical application scenarios of each meter type are analysed, the configuration, form and visual features of substation SF6 pressure gauges, oil level gauges and the like are investigated, and different value- and state-reading schemes for numerical meters and scale-type meters are analysed and proposed. The normal working intervals and abnormal state intervals of the different meters are determined according to the equipment management and operation rules, providing support for training the intelligent image recognition algorithms for the meters.
The secondary equipment comprises substation SF6 pressure gauges, oil level gauges, device panel liquid crystal displays and indicator lamps, switching handles, air circuit breakers and secondary pressing plate switching devices. The technical embodiment is similar to that of the primary equipment. The secondary pressing plates of this project were found to have the following patterns: (1) two-position open/close type; (2) two-position plug-in type; (3) three-position switching type. Reading and checking of the pressing-plate state is realized by comparing the shape and state of the pressing plate.
In this embodiment, the analysis and recognition of video face images includes image capture, face detection and positioning, image preprocessing, and feature extraction and recognition, implemented as follows: (1) face images are acquired by cameras or formed from photo files, and these image files are processed to generate a face database serving as the recognition basis; (2) video of the monitored scene is obtained by a camera, a face detection algorithm is applied to the video stream to obtain face images, and the corresponding information such as position and time is recorded; (3) the current face image is compared with the stored data in the database. The most critical parts of face recognition are feature extraction and classification. The feature description of the face acts as a bridge between low-level information and high-level understanding and occupies an important position in face recognition; the discriminative power and complexity of the features are the most important factors to consider in the feature extraction method. For personnel tracking based on identity recognition, the global cameras and panoramic cameras deployed at the station end can automatically recognize moving targets (people, vehicles and the like) in the large-picture scene and track the recognized targets in detail without losing the large-picture video; the most important problem in this target tracking scenario is handling the switching among multiple cameras and the correspondence between viewing angles.
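A minimal sketch of the face-detection step (2) above, using OpenCV's bundled Haar cascade, is given below; the choice of detector, the video source index 0 and the detection parameters are assumptions made for illustration, since the patent does not name a specific algorithm.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)               # stand-in for the monitoring-camera stream

ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:              # record position (and time) for identity matching
        print("face at", x, y, "size", w, "x", h)
capture.release()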
The present invention is not limited to the above-described specific embodiments, and various modifications may be made by those skilled in the art without inventive effort from the above-described concepts, and are within the scope of the present invention.

Claims (8)

1. Substation equipment inspection system based on heterogeneous data of multisource fuses, characterized by comprising:
the multi-source heterogeneous data acquisition module comprises a video image data acquisition module and a sensing data acquisition module, wherein the video image data acquisition module adopts a monitoring camera to comprehensively monitor all the devices of the total station, and acquires a plurality of video image data of each device; the sensing data acquisition module acquires sensing data from different aspects of all equipment of the total station by adopting a sensor and is used for acquiring multiple-aspect parameters of each equipment;
the multi-source heterogeneous data storage module is used for converting and storing various data acquired by the video image data acquisition module and the sensing data acquisition module;
a multi-source heterogeneous data fusion module comprising: a deep neural network classification module and a classification result fusion module,
the deep neural network classification module extracts characteristics of the data acquired by the video image data acquisition module and the sensing data acquisition module through the deep neural network model respectively to acquire the identification result of each equipment state;
the classification result fusion module is used for marking the recognition result of the equipment state obtained by the data acquired by the video image data acquisition module as a first classification result, marking the recognition result of the equipment state obtained by the data acquired by the sensing data acquisition module as a second classification result, and fusing the first classification result and the second classification result by adopting a fuzzy integration algorithm to acquire a final equipment state recognition result;
the classification result fusion module specifically comprises:
(6.1) respectively establishing a matrix according to the first classification result and the second classification result, wherein each column of each matrix for the matrix established according to the first classification result and the matrix established according to the second classification result represents the identification result of different samples on the same equipment, wherein the equipment state is normally marked as 0, the equipment state is abnormally marked as 1, the equipment state is opened marked as 0, the equipment state is closed marked as 1, and each row of each matrix represents the identification result of different equipment in the same sample;
(6.2) obtaining, according to the formula, the probability that the true value is i when the classification result is j in the first classification result and the second classification result, wherein i and j represent the device state and take the values 0 and 1, this probability being denoted p_ij;
(6.3) obtaining lambda in the fuzzy integral according to the credibility of the first classification result of each device, wherein the formula is as follows:
(6.4) obtaining a g value in fuzzy integration, wherein the formula is as follows:
g(A_1) = g({y_1}) = g_1;
g(A_i) = g_i + g(A_{i-1}) + λ·g_i·g(A_{i-1}), 1 < i ≤ n;
(6.5) obtaining a fuzzy integral value, namely a final classification result, wherein the formula is as follows:
2. the substation equipment state analysis system based on multi-source heterogeneous data fusion according to claim 1, wherein the monitoring camera adopts multi-azimuth and multi-angle setting for equipment and is used for acquiring multi-angle video data of each equipment; the sensor comprises fire-fighting sensing equipment, an environment sensor and a sensor in a transformer, wherein the sensor is used for acquiring various sensing information of each equipment, the fire-fighting sensing equipment comprises a flame detector, a fire-fighting pipeline pressure sensor and an independent smoke-sensing fire detector, the environment sensor comprises a temperature and humidity sensor, a water immersion sensor and an SF6 sensor, and the sensor in the transformer comprises a vibration monitoring sensor and a current sensor.
3. The substation equipment state analysis system based on multi-source heterogeneous data fusion according to claim 2, wherein the monitoring cameras are divided into three-level structures, namely a near view level, a middle view level and a far view level, the near view level cameras are used for collecting near view data of equipment, the range of the collected data of the middle view level cameras is a range of areas collected by a preset number of near view level cameras, and the range of the collected data of the far view level cameras is a range of areas collected by a preset number of middle view level cameras.
4. The substation equipment state analysis system based on multi-source heterogeneous data fusion according to claim 1, wherein the deep neural network classification module comprises a convolutional neural network and a deep random configuration network, the convolutional neural network is used for carrying out feature target recognition and target state recognition on data acquired by the video image data acquisition module, and the deep random configuration network is used for carrying out state recognition on the data acquired by the sensing data acquisition module.
5. The substation equipment state analysis system based on multi-source heterogeneous data fusion according to claim 1, wherein the equipment is divided into transformers, wires and cables, and switchgear; the equipment states of the transformers and of the wires and cables are normal and abnormal, and the equipment states of the switchgear are open and closed.
6. The substation equipment state analysis system based on multi-source heterogeneous data fusion according to claim 1, wherein the deep neural network classification module specifically comprises:
7.1, establishing a convolutional neural network and a deep random configuration network, wherein network model parameters are set as initial values;
7.2, determining the layer number and the loss function in the network model, wherein the loss function is used for correcting model parameters by the network during back propagation until the loss function converges and the model establishment is completed;
and 7.3, collecting data samples according to preset moments, and recording the true values of the setting states at the corresponding moments, wherein the data samples are respectively input into a convolutional neural network and a deep random configuration network, and correspondingly obtain a first classification result and a second classification result.
7. The substation equipment inspection method based on multi-source heterogeneous data fusion is characterized by comprising the following steps of:
the method comprises the steps of collecting multi-source heterogeneous data, comprehensively monitoring all equipment of a total station by adopting a monitoring camera, and obtaining a plurality of video image data of each equipment; acquiring sensing data from different aspects of all equipment of the total station by adopting a sensor, and acquiring multiple-aspect parameters of each equipment;
the multi-source heterogeneous data storage performs format conversion on the acquired plurality of video image data and sensing data and stores them;
a multi-source heterogeneous data fusion comprising: the deep neural network classification and the fusion of classification results,
the deep neural network is used for classifying, extracting characteristics of the collected video image data and the collected sensing data through a deep neural network model respectively, and obtaining a recognition result of each equipment state;
the classification results are fused, the identification result of the equipment state obtained by the video image data is recorded as a first classification result, and the first classification result is transmittedSensory dataThe obtained equipment state recognition result is recorded as a second classification result, and a fuzzy integration algorithm is adopted to fuse the first classification result with the second classification result so as to obtain a final equipment state recognition result;
the classification result fusion specifically comprises the following steps:
(6.1) respectively establishing a matrix according to the first classification result and the second classification result, wherein each column of each matrix for the matrix established according to the first classification result and the matrix established according to the second classification result represents the identification result of different samples on the same equipment, wherein the equipment state is normally marked as 0, the equipment state is abnormally marked as 1, the equipment state is opened marked as 0, the equipment state is closed marked as 1, and each row of each matrix represents the identification result of different equipment in the same sample;
(6.2) obtaining, according to a formula, the probability that the true value is i when the classification result is j in the first classification result and the second classification result of each device, wherein i and j represent the device state and take the values 0 and 1, this probability being denoted p_ij;
(6.3) obtaining lambda in the fuzzy integral according to the credibility of the first classification result of each device, wherein the formula is as follows:
(6.4) obtaining a g value in fuzzy integration, wherein the formula is as follows:
g(A_1) = g({y_1}) = g_1;
g(A_i) = g_i + g(A_{i-1}) + λ·g_i·g(A_{i-1}), 1 < i ≤ n;
(6.5) obtaining a fuzzy integral value, namely a final classification result, wherein the formula is as follows:
8. the substation equipment inspection method based on multi-source heterogeneous data fusion according to claim 7, wherein the deep neural network classification comprises characteristic target identification and target state identification of collected video image data by adopting a convolutional neural network, and state identification of collected sensing data by adopting a deep random configuration network.
CN202010022973.0A 2020-01-09 2020-01-09 Substation equipment inspection system and method based on multi-source heterogeneous data fusion Active CN111209434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010022973.0A CN111209434B (en) 2020-01-09 2020-01-09 Substation equipment inspection system and method based on multi-source heterogeneous data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010022973.0A CN111209434B (en) 2020-01-09 2020-01-09 Substation equipment inspection system and method based on multi-source heterogeneous data fusion

Publications (2)

Publication Number Publication Date
CN111209434A CN111209434A (en) 2020-05-29
CN111209434B true CN111209434B (en) 2024-02-13

Family

ID=70786095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010022973.0A Active CN111209434B (en) 2020-01-09 2020-01-09 Substation equipment inspection system and method based on multi-source heterogeneous data fusion

Country Status (1)

Country Link
CN (1) CN111209434B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582399B (en) * 2020-05-15 2023-07-18 吉林省森祥科技有限公司 Multi-sensor information fusion method for sterilization robot
CN112082094A (en) * 2020-08-12 2020-12-15 苏州聆听智能科技有限公司 Heterogeneous sensor-based pipeline abnormal signal positioning detection method and device
CN112184678A (en) * 2020-09-30 2021-01-05 国网北京市电力公司 Image recognition method, image recognition device, computer-readable storage medium and processor
CN112465030B (en) * 2020-11-28 2022-06-07 河南财政金融学院 Multi-source heterogeneous information fusion fault diagnosis method based on two-stage transfer learning
CN112667717B (en) * 2020-12-23 2023-04-07 贵州电网有限责任公司电力科学研究院 Transformer substation inspection information processing method and device, computer equipment and storage medium
CN112766795B (en) * 2021-01-29 2024-05-07 长兴云尚科技有限公司 Cloud processing-based pipe gallery intelligent information management method and system
CN112700840A (en) * 2021-02-03 2021-04-23 山东中医药大学 Multi-mode human body action recognition scheme based on dual-channel heterogeneous neural network
CN113076991B (en) * 2021-03-30 2024-03-08 中国人民解放军93114部队 Nonlinear integration algorithm-based multi-target information comprehensive processing method and device
CN113129285B (en) * 2021-04-20 2022-11-25 国网山东省电力公司安丘市供电公司 Method and system for verifying regional protection pressing plate
CN113110351A (en) * 2021-04-28 2021-07-13 广东省科学院智能制造研究所 Industrial production field heterogeneous state data acquisition system and method
CN113297972B (en) * 2021-05-25 2022-03-22 国网湖北省电力有限公司检修公司 Transformer substation equipment defect intelligent analysis method based on data fusion deep learning
CN113327219B (en) * 2021-06-21 2022-01-28 易成功(厦门)信息科技有限公司 Image processing method and system based on multi-source data fusion
CN113438451B (en) * 2021-06-21 2022-04-19 易成功(厦门)信息科技有限公司 Unified standardization processing platform and method for multi-terminal multi-source data
CN114755528A (en) * 2022-03-04 2022-07-15 重庆邮电大学 Method for positioning faults at two ends of power distribution network
CN114697771B (en) * 2022-03-16 2023-07-21 电子科技大学 Multi-sensor heterogeneous data synchronous acquisition matching system
CN114863324B (en) * 2022-04-18 2024-05-07 山东浪潮科学研究院有限公司 Fire monitoring method, equipment and medium for new energy power equipment
CN116109599B (en) * 2023-02-17 2024-06-11 湖北清江水电开发有限责任公司 Carbon brush sparking monitoring system of generator
CN117312828B (en) * 2023-09-28 2024-06-14 光谷技术有限公司 Public facility monitoring method and system


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819239A (en) * 2011-06-08 2012-12-12 同济大学 Intelligent fault diagnosis method of numerical control machine tool
CN103630244A (en) * 2013-12-18 2014-03-12 重庆大学 Equipment fault diagnosis method and system of electric power system
US20190114481A1 (en) * 2017-10-18 2019-04-18 The Trustees Of Columbia University In The City Of New York Methods and systems for pattern characteristic detection

Also Published As

Publication number Publication date
CN111209434A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111209434B (en) Substation equipment inspection system and method based on multi-source heterogeneous data fusion
CN111353413B (en) Low-missing-report-rate defect identification method for power transmission equipment
CN110827251B (en) Power transmission line locking pin defect detection method based on aerial image
CN110598736B (en) Power equipment infrared image fault positioning, identifying and predicting method
CN110717481B (en) Method for realizing face detection by using cascaded convolutional neural network
CN110929918A (en) 10kV feeder line fault prediction method based on CNN and LightGBM
CN112070134A (en) Power equipment image classification method and device, power equipment and storage medium
CN115270965A (en) Power distribution network line fault prediction method and device
CN117151649B (en) Construction method management and control system and method based on big data analysis
CN113191429A (en) Power transformer bushing fault diagnosis method and device
CN113361686A (en) Multilayer heterogeneous multi-mode convolutional neural network integrated robot inspection method
CN114359695A (en) Insulator breakage identification method based on uncertainty estimation
CN117032165A (en) Industrial equipment fault diagnosis method
CN113486950B (en) Intelligent pipe network water leakage detection method and system
CN115035328A (en) Converter image increment automatic machine learning system and establishment training method thereof
CN112434887B (en) Water supply network risk prediction method combining network kernel density estimation and SVM
CN112419243A (en) Power distribution room equipment fault identification method based on infrared image analysis
CN115830302A (en) Multi-scale feature extraction and fusion power distribution network equipment positioning identification method
CN114283367B (en) Artificial intelligent open fire detection method and system for garden fire early warning
CN114882410B (en) Tunnel dome lamp fault detection method and system based on improved positioning loss function
CN115880472A (en) Intelligent diagnosis and analysis system for electric power infrared image data
CN112508946B (en) Cable tunnel anomaly detection method based on antagonistic neural network
CN114220084A (en) Distribution equipment defect identification method based on infrared image
CN116403163B (en) Method and device for identifying opening and closing states of handles of cut-off plug doors
Wang et al. Cable temperature alarm threshold setting method based on convolutional neural network

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant