CN113920097B - Power equipment state detection method and system based on multi-source image


Info

Publication number: CN113920097B
Application number: CN202111199043.3A
Authority: CN (China)
Prior art keywords: power equipment, visible light, thermal imaging, imaging picture
Legal status: Active (granted)
Other versions: CN113920097A (Chinese)
Inventors: 尚博文, 冯光, 徐铭铭, 孙芊, 王鹏, 徐恒博, 牛荣泽, 王倩, 李宗峰, 张建宾, 陈明, 谢芮芮, 李丰君, 董轩
Current Assignee: Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
Original Assignee: Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
Events: application filed by Electric Power Research Institute of State Grid Henan Electric Power Co Ltd; priority to CN202111199043.3A; publication of CN113920097A; application granted; publication of CN113920097B.


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/0096Radiation pyrometry, e.g. infrared or optical thermometry for measuring wires, electrical contacts or electronic systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J2005/0077Imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

A power equipment state detection method and system based on multi-source images combines thermal imaging and visible light imaging in a robot inspection scenario, so that equipment faults can be warned of in advance from the thermal information of the power equipment. Because the resources of mobile devices are limited, a D-YOLOv4 model is proposed to identify and locate the power equipment in the visible light image; the pixel mapping between the thermal image and the visible light image is completed through edge-detection-based image registration and affine transformation, the power equipment region in the infrared image is obtained, and the power grid operation safety standard is used as the state detection criterion for each piece of power equipment. By applying image fusion and deep learning together, the infrared thermal image and the visible light image are combined, a pixel mapping relation is established, and the key regions and points of attention are found, so that the inspection robot obtains the hot spot temperature of each target device during electric power inspection, and the intelligence of inspection is improved through analysis specific to each device type.

Description

Power equipment state detection method and system based on multi-source image
Technical Field
The invention belongs to the technical field of power equipment state detection, and particularly relates to a power equipment state detection method and system based on a multi-source image.
Background
Compared with manual inspection, an electric power inspection robot system offers a higher level of automation and safety. At present, the visible light camera and the thermal imager are the common detection instruments on an inspection robot; they can examine electrical equipment in a non-contact manner without a power outage. Real-time fault diagnosis based on image processing is therefore of great significance and provides decision support for equipment inspection.
However, the massive measurement data generated by continuous monitoring poses a great challenge to traditional fault diagnosis methods, which struggle to process it in real time. Given the huge number of pictures produced by continuous inspection, manual screening is time-consuming, labor-intensive, and prone to omissions, while the traditional pipeline of hand-crafted feature extraction plus a classifier suffers from low accuracy and weak generalization. With continued research, convolutional neural networks have achieved breakthrough results in computer vision. In view of the strong data analysis capability of deep learning, how to use deep convolutional neural networks for fault diagnosis, so as to improve detection efficiency and reduce safety hazards, has become a pressing problem to be solved.
In the prior art, changes in the overall electrical performance and insulation level of equipment, caused for example by defects in the equipment itself, oxidation and corrosion of contact surfaces, loose bolts, or scattered wire strands, manifest as temperature changes of the power equipment and can therefore be revealed by thermal imaging. Target detection based on convolutional neural networks is a major branch of computer vision that aims to find and label specific targets in an image, but applying thermal images directly to target detection faces several limitations. First, infrared images carry little texture information, have low resolution, and are noisy. Second, the color distribution of thermal imaging pixels is unstable: a pseudo-color image is generated from temperature, so regions at similar temperatures take on nearly identical colors; equipment detail is washed out, features are not distinctive, and pictures of the same equipment in different states also differ. In particular, when equipment fails, the contour of the faulty part dominates while the overall contour information weakens, so the power equipment cannot be reliably identified and located, which easily causes missed or even false detections; the more severe the fault, the worse the detection effect, contrary to the detection requirements. Third, safety and stability are basic requirements for power system operation, so infrared images covering all kinds of fault conditions are difficult to collect. If infrared images were identified directly, the convolutional neural network's demand for a large fault data set could hardly be met, faults absent from the training set would easily be missed or misdetected, and the training effect could not be guaranteed. By contrast, visible light imaging, with its high resolution and rich texture information, has clear advantages in object detection, is widely used in many monitoring fields, and is unaffected by the running state of the equipment; on the other hand, this also means a visible light image lacks the thermal fault information of the power equipment. Therefore, given their complementarity, the information advantages of the infrared image and the visible light image are combined, a correspondence between the two is established, and multi-source information is matched, fused, and comprehensively utilized. Compared with single-source information, multi-source information improves the comprehensiveness of target perception.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention aims to provide a power equipment state detection method and system based on multi-source images, which combine the advantages of thermal imaging and visible light imaging in a robot inspection scenario and warn of equipment faults in advance through the thermal information of the power equipment.
The invention adopts the following technical scheme.
A multi-source image-based power device state detection method, comprising:
step 1, acquiring a thermal imaging picture and a visible light imaging picture of power equipment;
step 2, establishing an improved single-stage detector model based on a convolutional neural network; taking the visible light imaging picture as input data of the improved single-stage detector model and outputting the pixel region of the power equipment;
step 3, respectively extracting edge contours of the power equipment from the thermal imaging picture and the visible light imaging picture; extracting stable characteristic point pairs between a thermal imaging picture and a visible light imaging picture according to the edge profile;
step 4, registering the thermal imaging picture and the visible light imaging picture by utilizing affine transformation according to the stable characteristic point pairs, namely establishing a mapping relation between the pixels of the thermal imaging picture and the pixels of the visible light imaging picture;
step 5, positioning a pixel area of the power equipment in the visible light imaging picture to a temperature area of the power equipment in the thermal imaging picture according to the mapping relation, and extracting a target temperature area from the pixel area;
step 6, traversing the equipment temperatures in the target temperature region and taking their maximum value as the hot spot temperature of the power equipment;
step 7, comparing the hot spot temperature with a set hot spot temperature limit; if the hot spot temperature is greater than or equal to the limit, determining that the state of the power equipment is abnormal and raising a temperature alarm.
Preferably, in step 1, when the thermal imaging picture and the visible light imaging picture of the power equipment are acquired, the position, shooting direction, and shooting angle of the two image capture devices are kept consistent.
Preferably, in step 2, the improvement of the convolutional-neural-network-based single-stage detector model YOLOv4 comprises:
step 2.1, modifying the backbone structure CSPDarknet53 of the single-stage detector model YOLOv4: the residual network ResNet structure in the convolution layers corresponding to the preselected feature maps of CSPDarknet53 is replaced by the dense modules of DenseNet; a dense module comprises dense blocks and transition layers connected alternately, the current dense block takes the feature information output by every preceding dense block as input, the feature information it outputs is input to every following dense block, and the feature information s_n output by the n-th dense block satisfies the relation

$$s_n = H_n[s_0, s_1, s_2, \ldots, s_{n-1}]$$

where s_0, s_1, s_2, ..., s_{n-1} represent the feature information of layer 0, layer 1, layer 2, ..., layer n-1 respectively,
and H_n is the combined operation (batch normalization, activation function, and convolution) applied by the n-th dense block to its input data;
step 2.2, clustering the number and size of the candidate boxes of the power equipment pixel region of the single-stage detector model YOLOv4 based on the k-means++ algorithm, and selecting the candidate box sizes according to the relation

$$\mathrm{avg\_IoU}_k = \frac{1}{n}\sum_{i=1}^{n}\max_{1\le j\le k}\mathrm{IoU}(x_i,\mathrm{cen}_j)$$

where
n represents the total number of candidate boxes,
x_i represents the i-th candidate box, i = 1, 2, ..., n,
k represents the number of cluster centers,
cen_j represents the size of the j-th cluster center, i.e., the j-th candidate box, j = 1, 2, ..., k,
avg_IoU_k represents the matching degree between the anchor boxes and the candidate boxes when the number of cluster centers is k, ranging from 0 to 1;
taking the point corresponding to the maximum element of the set A = {Δavg_IoU_k | avg_IoU_k ≥ 70%, k ∈ [K_1, K_2]} as the clustering result, with the k sizes corresponding to this result used as the candidate box sizes of the power equipment pixel region of the single-stage detector model YOLOv4; wherein Δavg_IoU_k = avg_IoU_k - avg_IoU_{k-1} represents the increase of avg_IoU_k, and the number of cluster centers k takes all integers from K_1 to K_2;
step 2.3, improving the neck structure PANet of the single-stage detector model YOLOv4 and fusing the feature information s_n into the network detection layer through lateral connections and down-sampling;
step 2.4, performing sparse training on the single-stage detector model YOLOv4 modified in steps 2.1 to 2.3 according to a loss function Loss with a sparse regularization penalty, the sparse-training loss satisfying the relation

$$\mathrm{Loss} = \mathrm{Loss}_{\mathrm{YOLOv4}} + \lambda \sum_{\gamma} g(\gamma)$$

where
Loss_YOLOv4 represents the loss function of normal training,
g(γ) represents the regularization penalty function of the scale factor γ,
λ represents the balance factor;
step 2.5, determining the pruning proportion according to the model accuracy and pruning the channels; the pruning proportion may take any value from 1% to 99% of the scale factor γ distribution.
Preferably, in step 2.1, the preselected feature map sizes in the backbone structure CSPDarknet53 are 19×19, 38×38, and 76×76, respectively.
Preferably, in step 2, the improved single-stage detector model outputs the position coordinates (x_0, y_0) of the power equipment and the width w and height h of the power equipment pixel region; (x_0, y_0) is the center point of the pixel region, and the pixel region is the rectangle whose diagonal connects the points (x_0 - w/2, y_0 - h/2) and (x_0 + w/2, y_0 + h/2).
Preferably, step 3 comprises:
Step 3.1, respectively extracting edge contours of power equipment in the thermal imaging picture and the visible light imaging picture by using an edge detection method; the edge detection method adopts a Sobel operator;
Step 3.2, extracting thermal imaging feature points and thermal imaging local feature descriptors from the edge outline of the thermal imaging picture by using an acceleration robustness feature algorithm to form a thermal imaging feature point set, and extracting visible imaging feature points and visible imaging local feature descriptors from the edge outline of the visible imaging picture to form a visible imaging feature point set; wherein each local feature descriptor is a 64-dimensional feature vector;
Step 3.3, matching the thermal imaging local feature descriptor and the visible light imaging local feature descriptor by using a k-dimensional-tree algorithm and a k-nearest neighbor algorithm to obtain a plurality of feature point pairs;
and 3.4, executing a random sampling consistency algorithm to filter the characteristic point pairs, and screening out stable characteristic point pairs.
Preferably, in step 3.4, a stable feature point pair comprises a pixel position coordinate (x, y) in the thermal imaging picture and the corresponding pixel position coordinate (x', y') in the visible light imaging picture.
Preferably, step 4 comprises:
step 4.1, calculating the confidence coefficient of each stable characteristic point pair, and selecting 3 stable characteristic point pairs with the highest confidence coefficient;
and step 4.2, registering the thermal imaging picture and the visible light imaging picture by affine transformation using the 3 selected stable feature point pairs, the affine transformation between the two pictures satisfying the relation

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = a \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$

where
a denotes the object scaling caused by the difference between the thermal imaging and visible light image acquisition devices,
θ denotes the object rotation angle caused by the difference between the acquisition devices,
t_x and t_y denote respectively the horizontal and vertical translations of the object in the thermal image relative to the visible image, caused by the difference between the acquisition devices;
x and y denote the pixel position coordinates in the thermal imaging picture,
x' and y' denote the pixel position coordinates in the visible light imaging picture.
Preferably, step 5 comprises:
step 5.1, locating the pixel region of the power equipment in the visible light imaging picture onto the temperature region of the power equipment in the thermal imaging picture according to the mapping relation, namely converting the pixel region of the power equipment in the visible light imaging picture into the temperature region T_{m×n} of the power equipment in the thermal imaging picture, satisfying the relation

$$T_{m\times n}=\begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{m1} & \cdots & T_{mn} \end{bmatrix}$$

wherein T_{cd} represents the temperature corresponding to each pixel of the pixel region, 1 ≤ c ≤ m, 1 ≤ d ≤ n;
step 5.2, extracting the target temperature region T_{p×q} of the power equipment in the thermal imaging picture from the temperature region T_{m×n} of the power equipment in the thermal imaging picture, the target temperature region T_{p×q} satisfying the relation

$$T_{p\times q}=\begin{bmatrix} T_{11} & \cdots & T_{1q} \\ \vdots & \ddots & \vdots \\ T_{p1} & \cdots & T_{pq} \end{bmatrix}$$

wherein T_{ij} ∈ T_{m×n}, 1 ≤ i ≤ p ≤ m, 1 ≤ j ≤ q ≤ n.
Preferably, in step 7, the improved single-stage detector model further outputs a power device type, and determines different hot spot temperature limits according to different power device types.
A multi-source image based power device status detection system, comprising: the device comprises a picture acquisition module, a picture area processing module, a picture registering module, a hot spot temperature detection module and an equipment state early warning module;
The image acquisition module is used for acquiring thermal imaging images and visible light imaging images of the power equipment and respectively inputting the images into the image area processing module and the image registration module;
The picture area processing module comprises a pixel area processing unit and a temperature area processing unit; the pixel area processing unit is used for taking a visible light imaging picture as input data and outputting a pixel area of the power equipment based on the improved single-stage detector model; the temperature region processing unit is used for extracting a target temperature region of the power equipment in the thermal imaging picture from a pixel region of the power equipment in the visible light imaging picture according to the registration result provided by the picture registration module;
the image registration module is used for respectively extracting edge contours of the power equipment from the thermal imaging image and the visible light imaging image; extracting stable characteristic point pairs between a thermal imaging picture and a visible light imaging picture according to the edge profile; registering the thermal imaging picture and the visible light imaging picture by utilizing affine transformation according to the stable characteristic point pairs; the registration result output by the picture registration module is input data of the temperature area processing unit;
The hot spot temperature detection module is used for traversing the equipment temperature in the target temperature area, and taking the maximum value of the equipment temperature in the target temperature area as the hot spot temperature of the power equipment; the hot spot temperature output by the hot spot temperature detection module is input data of the equipment state early warning module;
the equipment state early warning module is used for comparing the hot spot temperature with a set hot spot temperature limit value, and if the hot spot temperature is greater than or equal to the hot spot temperature limit value, determining that the state of the power equipment is abnormal, and carrying out temperature warning.
Compared with the prior art, the invention applies image fusion and deep learning together: the infrared thermal image and the visible light image are combined, a pixel mapping relation is established, and the key regions and points of attention are found out, so that the inspection robot obtains the maximum temperature of each target device during electric power inspection, device-specific analysis is achieved, and the intelligence of inspection is improved.
The beneficial effects of the invention include:
1. Temperature analysis at the individual device level: the specific temperature of each device is analyzed against the safe-operating temperature threshold assigned to it by the power grid operation defect level standard. This raises the intelligence of monitoring and avoids identifying devices only by the highest temperature in the whole infrared image, thereby improving the sensitivity of fault detection; at the same time, refining temperature to the individual level highlights problem devices, significantly reducing the manual processing workload and improving working efficiency;
2. Interference resistance: the rich texture information of the visible light image enhances the robustness of identifying all kinds of equipment, while the infrared image supplies per-pixel temperature information; combining the two data sources ensures the reliability and accuracy of equipment fault early warning;
3. Real-time detection: the YOLOv4 model is improved to achieve a better detection effect with 25.8% of the parameters of the original model, yielding a lightweight model suitable for mobile deployment and real-time detection and meeting the goals of few parameters and high accuracy.
Drawings
FIG. 1 is a block diagram of steps of a method for detecting a status of a power device based on a multi-source image according to the present invention;
FIG. 2 is a schematic diagram of a 3-layer dense block in an improved single-stage detector model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a structure of PANet of an improved single-stage detector model according to an embodiment of the present invention;
FIG. 4 is a flow chart of network pruning in an improved single-stage detector model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a picture registration process according to an embodiment of the present invention;
FIG. 6 is a convolution kernel of the Sobel operator in an embodiment of the present invention.
Detailed Description
The application is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present application, and are not intended to limit the scope of the present application.
Referring to fig. 1, a method for detecting a state of a power device based on a multi-source image includes steps 1 to 7, specifically as follows:
and step 1, acquiring thermal imaging pictures and visible light imaging pictures of the power equipment.
In the preferred embodiment of the invention, a 500 kV substation disconnecting switch is taken as the research object, and 517 thermal imaging pictures and visible light imaging pictures acquired during inspection by the inspection robot are used to construct a training data set as the input data source for detection.
Specifically, in step 1, when the thermal imaging picture and the visible light picture of the power equipment are acquired, the position, the shooting direction and the shooting angle of the picture shooting device are consistent.
In the preferred embodiment of the invention, in order to avoid the difference of the thermal imaging picture and the visible light imaging picture in the aspects of size, visual angle, visual field and the like, the picture is shot by respectively utilizing an infrared imaging instrument and a visible light camera when the inspection robot inspects, and the infrared imaging instrument and the visible light camera are positioned at the same position and in the same shooting direction. In addition, as long as the positions of the infrared camera and the visible light camera are unchanged during inspection shooting, various geometric transformation relations of translation, rotation and scaling between the thermal imaging picture and the visible light picture are always applicable, re-solving is not needed, and the real-time performance of detection is not affected.
Step 2, establishing an improved single-stage detector model based on a convolutional neural network; and taking the visible light imaging picture as input data of the improved single-stage detector model, and outputting a pixel area of the power equipment.
Convolutional neural networks are a widely used deep learning framework that mines deep image features with modest computation through structures such as convolution kernels and pooling. Compared with traditional object detection methods, convolutional neural networks have the great advantage of strong self-learning and detection capability; for target detection they fall into single-stage detectors and two-stage detectors. Two-stage detectors such as Fast R-CNN achieve high accuracy but generally need more inference time. Single-stage detectors mainly pursue computational efficiency and suit real-time detection tasks, so the single-stage detector model YOLOv4, whose overall real-time performance and accuracy are outstanding, is selected as the base network. The application object of the preferred embodiment of the invention is an inspection robot, so the focus is on detection precision, mobile deployment, and real-time monitoring. Because YOLOv4 has many parameters and a large model size, mobile deployment is difficult, and detection latency is closely tied to model size. Therefore, with model accuracy preserved, reducing model parameters is taken as the optimization target, YOLOv4 is improved, and the D-YOLOv4 model is proposed for power equipment target detection.
YOLOv4 uses CSPDarknet53 as the backbone network, in which the CSPNet architecture improves the learning ability of the network. The neck employs a path aggregation network (PANet) and spatial pyramid pooling (SPP) for multi-scale feature extraction and feature map fusion. The head predicts the class and bounding box of the target. YOLOv4 has excellent detection performance and is widely applied in industry and academia, but it demands considerable computing power and hardware: the model is 256 MB in size and requires 127 billion floating point operations (BFLOPs), which means that when it is applied to an Internet-of-Things device, embedded device, or other mobile device for edge computation or real-time detection, its large parameter count and complex computation are the main obstacle.
In the preferred embodiment of the invention, during substation inspection the data processing is preferably completed at the distributed monitoring nodes on the inspection robot, avoiding the delay that remote data processing would add through data transmission. Therefore, to reduce model parameters, improve feature expression capability, and shorten computation time, the anchor box parameters and network architecture of the model are optimized, and the final optimized model is determined through sparse training and channel pruning.
Specifically, in step 2, the improvement of the convolutional-neural-network-based single-stage detector model YOLOv4 comprises:
Step 2.1, modifying the backbone structure CSPDarknet53 of the single-stage detector model YOLOv4: in the convolution layers corresponding to the preselected feature maps of CSPDarknet53, the residual network ResNet structure is replaced by the dense blocks of DenseNet.
The dense convolutional network (DenseNet) effectively enhances feature reuse and mitigates gradient vanishing. Compared with the residual network (ResNet), DenseNet uses denser skip connections, replaces the "add" operation with the "concatenate" operation, and achieves better performance with fewer parameters.
The backbone of YOLOv4 is CSPDarknet53, built mainly on residual blocks. To improve model efficiency, in the preferred embodiment of the invention the ResNet structure in the convolution layers corresponding to the 19×19, 38×38, and 76×76 feature maps is changed into dense blocks; a dense module comprises dense blocks and transition layers connected alternately. As shown in FIG. 2, the current dense block takes the feature information output by every preceding dense block as input, and the feature information it outputs is input to every following dense block, where the feature information s_n output by the n-th dense block satisfies the relation

$$s_n = H_n[s_0, s_1, s_2, \ldots, s_{n-1}]$$

where s_0, s_1, s_2, ..., s_{n-1} represent the feature information of layer 0, layer 1, layer 2, ..., layer n-1 respectively,
and H_n is the combined operation (batch normalization, activation function, and convolution) applied by the n-th dense block to its input data.
In a preferred embodiment of the invention, the combined operation is BN-Mish-Conv(1×1)-BN-Mish-Conv(3×3); the Conv(1×1) kernel reduces the number of parameters and realizes cross-layer information integration, while the Conv(3×3) kernel performs feature extraction. The specific parameters of these dense blocks are shown in Table 1.
TABLE 1 Dense block parameters in the backbone (input size: 608×608 pixels)
Detailed data are described taking the dense block at 76×76 resolution as an example. The numbers of 1×1 and 3×3 convolution kernels are 64 and 32, respectively: the 1×1 convolution reduces the input of the 3×3 kernels to 76×76×64, and the corresponding 3×3 output is 76×76×32. Thus the five layer inputs in this dense block are 76×76×128, 76×76×160, 76×76×192, 76×76×224, and 76×76×256 in order, and all layer outputs are 76×76×32.
Dense connections enhance feature reuse and make parameters more efficient.
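As a concrete illustration, the following is a minimal sketch of the dense block described above, written in PyTorch under stated assumptions (nn.Mish requires PyTorch 1.9 or later; the BN-Mish-Conv(1×1)-BN-Mish-Conv(3×3) layer order, bottleneck width 64, growth rate 32, and the 5-layer 76×76 configuration follow the text; class and variable names are ours, not the patent's):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One H_n: BN -> Mish -> Conv(1x1) -> BN -> Mish -> Conv(3x3)."""
    def __init__(self, in_channels, bottleneck=64, growth=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.Mish(),
            # 1x1 kernel: fewer parameters, cross-layer information integration
            nn.Conv2d(in_channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.Mish(),
            # 3x3 kernel: feature extraction
            nn.Conv2d(bottleneck, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.body(x)

class DenseBlock(nn.Module):
    """s_n = H_n[s_0, ..., s_{n-1}]: each layer sees all previous outputs."""
    def __init__(self, in_channels, num_layers=5, growth=32):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth, growth=growth)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# For the 76x76 scale: inputs grow 128 -> 160 -> 192 -> 224 -> 256 channels,
# each layer emits 32, and the block output has 128 + 5*32 = 288 channels.
block = DenseBlock(in_channels=128)
print(block(torch.randn(1, 128, 76, 76)).shape)  # torch.Size([1, 288, 76, 76])
```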
Step 2.2, clustering the number and size of the candidate boxes of the power equipment pixel region of the single-stage detector model YOLOv4 based on the k-means++ algorithm, and selecting the candidate box sizes according to the relation

$$\mathrm{avg\_IoU}_k = \frac{1}{n}\sum_{i=1}^{n}\max_{1\le j\le k}\mathrm{IoU}(x_i,\mathrm{cen}_j)$$

where n denotes the total number of candidate boxes, x_i the i-th candidate box (i = 1, 2, ..., n), k the number of cluster centers, cen_j the size of the j-th cluster center, i.e., the j-th candidate box (j = 1, 2, ..., k), and avg_IoU_k the matching degree between the anchor boxes and the candidate boxes when the number of cluster centers is k, ranging from 0 to 1. It follows that a larger avg_IoU_k indicates a better clustering result.
Since the invention must weigh computational complexity against accuracy, the point corresponding to the maximum element of the set A = {Δavg_IoU_k | avg_IoU_k ≥ 70%, k ∈ [K_1, K_2]} is taken as the clustering result, and the k sizes corresponding to it are used as the candidate box sizes of the power equipment pixel region of the single-stage detector model YOLOv4, where Δavg_IoU_k = avg_IoU_k - avg_IoU_{k-1} denotes the increase of avg_IoU_k, and the number of cluster centers k takes all integers from K_1 to K_2.
The original target candidate box sizes of the single-stage detector model YOLOv4 were tuned to the COCO public data set and therefore lack pertinence here. Given that power equipment has distinctive aspect-ratio characteristics, the k-means++ algorithm is used to cluster the anchor box sizes and number, enhancing the network's adaptability to power equipment sizes.
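A minimal numpy sketch of this anchor selection, under stated assumptions: boxes are (width, height) pairs anchored at a common origin, plain random seeding stands in for the k-means++ initialization, and all helper names are ours:

```python
import numpy as np

def iou_wh(box, centers):
    """IoU between one (w, h) box and each center, both anchored at the origin."""
    inter = np.minimum(box[0], centers[:, 0]) * np.minimum(box[1], centers[:, 1])
    union = box[0] * box[1] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def avg_iou(boxes, centers):
    # avg_IoU_k: mean over all boxes of the best-matching center's IoU
    return float(np.mean([iou_wh(b, centers).max() for b in boxes]))

def kmeans_anchors(boxes, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to the center with the highest IoU (distance = 1 - IoU)
        assign = np.array([iou_wh(b, centers).argmax() for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                centers[j] = boxes[assign == j].mean(axis=0)
    return centers

def select_anchors(boxes, k_min=3, k_max=12):
    """Pick the k maximizing the avg_IoU increment among k with avg_IoU >= 70%."""
    centers = {k: kmeans_anchors(boxes, k) for k in range(k_min, k_max + 1)}
    score = {k: avg_iou(boxes, c) for k, c in centers.items()}
    deltas = {k: score[k] - score[k - 1]
              for k in range(k_min + 1, k_max + 1) if score[k] >= 0.70}
    best_k = max(deltas, key=deltas.get)
    return best_k, centers[best_k]
```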
Step 2.3, improving the neck structure PANet of the single-stage detector model YOLOv4 and fusing the feature information s_n into the network detection layer through lateral connections and down-sampling; adding a skip-connection mechanism improves the direction of information transfer between scale layers, reduces the computation of the detection layer, and strengthens information interaction between different scales.
For recognition tasks, higher-resolution features provide more accurate localization signals, which matters for small objects, while deep exploitation of low-resolution features yields deeper semantic information. Based on this idea, PANet establishes a top-down and a bottom-up path, extracts semantic feature information of different scales from the up-sampled and the down-sampled feature information respectively, and integrates them to ease information flow between the multi-scale features.
Thanks to the integration of semantic feature information, the target detection layer predicts better. PANet is the neck of YOLOv4; for a 608×608 input, the final output sizes are 1/8 (76×76), 1/16 (38×38), and 1/32 (19×19) of the original, i.e., the minimum receptive field is 8×8. A dense grid helps localization signals and small-object prediction but adds substantial computation: the 76×76 grid costs 4 and 16 times as much as the 38×38 and 19×19 grids, respectively. In the preferred embodiment of the invention the infrared information is fully used, but it becomes inaccurate when the distance is too large; besides, equipment farther from the robot can still be detected once the robot comes closer, so small-object detection is not required. In summary, the neck structure is improved as shown in FIG. 3: stacked convolution layers form a feature hierarchy, with interlayer connections realized by down-sampling and up-sampling. To simplify the model further, the hierarchy level drawn with dashed lines in the figure is omitted; to still use the feature information extracted by that level in target prediction, it is fused into the next-depth scale by down-sampling and cross-layer connection, as sketched below. The prediction part thus retains accurate localization information without bearing the heavy computation that high resolution would impose at the prediction stage.
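A minimal PyTorch sketch of the down-sampling and cross-scale concatenation just described; the channel counts are placeholders and the module name is ours:

```python
import torch
import torch.nn as nn

class DownFuse(nn.Module):
    """Fuse a high-resolution feature map into the next-depth scale."""
    def __init__(self, c_high):
        super().__init__()
        # stride-2 3x3 convolution halves the spatial size (e.g. 76x76 -> 38x38)
        self.down = nn.Conv2d(c_high, c_high, kernel_size=3, stride=2, padding=1)

    def forward(self, high_res, low_res):
        # lateral connection: concatenate along channels after down-sampling
        return torch.cat([self.down(high_res), low_res], dim=1)

fuse = DownFuse(c_high=128)
out = fuse(torch.randn(1, 128, 76, 76), torch.randn(1, 256, 38, 38))
print(out.shape)  # torch.Size([1, 384, 38, 38])
```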
Step 2.4, sparse training is implemented through the scale factor of each channel in the BN layers; that is, the single-stage detector model YOLOv4 modified in steps 2.1 to 2.3 is trained sparsely with a loss function carrying a sparse regularization penalty, the sparse-training loss satisfying the relation

$$\mathrm{Loss} = \mathrm{Loss}_{\mathrm{YOLOv4}} + \lambda \sum_{\gamma} g(\gamma)$$

where Loss_YOLOv4 denotes the loss function of normal training, g(γ) the regularization penalty function of the scale factor γ, and λ the balance factor.
In the preferred embodiment of the present invention, L1 regularization is selected to achieve network sparsity. The penalty function drives the scale factors of unimportant channels toward zero; the value of each scale factor measures the importance of its channel, and channels are pruned accordingly.
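A minimal PyTorch sketch of the sparse-training loss, assuming g(γ) = |γ| (the L1 choice stated above) applied to every BN scale factor; function and argument names are ours:

```python
import torch.nn as nn

def sparse_training_loss(model, loss_yolov4, balance=1e-4):
    """Loss = Loss_YOLOv4 + lambda * sum(|gamma|) over all BN scale factors."""
    l1_penalty = sum(m.weight.abs().sum() for m in model.modules()
                     if isinstance(m, nn.BatchNorm2d))
    # the L1 term drives the scale factors of unimportant channels toward zero
    return loss_yolov4 + balance * l1_penalty
```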
Step 2.5, determining the pruning proportion according to the model accuracy and pruning the channels; the pruning proportion may take any value from 1% to 99% of the scale factor γ distribution.
In the preferred embodiment of the invention, the purpose of network pruning is to remove parts that contribute little and obtain a more efficient, simplified model. The main steps, shown in FIG. 4, are sparse training, channel pruning, and fine-tuning; channel pruning, which builds on sparse training, determines the final network channel parameters.
After pruning the accuracy of the model drops, so fine-tuning is needed to restore detection performance. On the basis of sparse training, pruning rates of 80%, 60%, 40%, and 20% were tried; to balance model size and accuracy, 40% was chosen as the pruning rate. The mAP@0.5 of the fine-tuned model reaches 92.86%, and it is used as the final optimized network model D-YOLOv4.
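A minimal PyTorch sketch of turning a chosen pruning rate into a global threshold over the sparse-trained BN scale factors (the 40% rate follows the embodiment; the actual channel removal and fine-tuning are separate steps, and the function name is ours):

```python
import torch
import torch.nn as nn

def gamma_threshold(model, prune_rate=0.40):
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    # channels whose |gamma| falls below this quantile are candidates for pruning
    return torch.quantile(gammas, prune_rate)
```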
In the preferred embodiment of the invention, the D-YOLOv4 model for power equipment is obtained by improving the single-stage detector model YOLOv4 and is compared with other typical target detection networks on the substation disconnecting switch data set collected in this design; the experimental results are detailed in Table 2.
Table 2 D-YOLOv4 model evaluation index comparison results
Experiments show that the D-YOLOv4 model has clear advantages on the power equipment data set, achieving a better detection effect with 25.8% of the parameters of the original model.
Further, in step 2, the improved single-stage detector model outputs the position coordinates (x_0, y_0) of the power equipment and the width w and height h of the power equipment pixel region; (x_0, y_0) is the center point of the pixel region, and the pixel region is the rectangle whose diagonal connects the points (x_0 - w/2, y_0 - h/2) and (x_0 + w/2, y_0 + h/2).
Step 3, respectively extracting edge contours of the power equipment from the thermal imaging picture and the visible light imaging picture; and extracting stable characteristic point pairs between the thermal imaging picture and the visible light imaging picture according to the edge profile.
Preferably, as shown in fig. 5, step 3 includes:
step 3.1, respectively extracting edge contours of power equipment in the thermal imaging picture and the visible light imaging picture by using an edge detection method; the edge detection method adopts a Sobel operator. The convolution kernel of the Sobel operator is shown in fig. 6.
It should be noted that in the preferred embodiment of the present invention, the Sobel operator is used as an edge detection method, which is a non-limiting preferred choice.
Because of the difference in imaging principles between visible light and thermal imaging (the thermal image is color-filled according to temperature), the RGB values of the two pictures differ greatly and their pixels are not correlated, so directly extracting and matching feature points yields large errors. The overall contour of the power equipment, however, is common to both pictures. Extracting the object's edge contour with an edge detection algorithm therefore highlights what the two images share, solves the feature matching problem, reduces the computation of subsequent feature detection, and speeds up the calculation.
In the edge detection step, the Sobel operator is used to extract the edge contours of the visible and infrared images; the edge detection result directly conditions the subsequent image analysis.
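A minimal OpenCV sketch of this edge extraction step (3×3 Sobel kernels as in FIG. 6; the function name is ours):

```python
import cv2

def sobel_edges(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    magnitude = cv2.magnitude(gx, gy)
    return cv2.convertScaleAbs(magnitude)  # 8-bit edge map for feature detection
```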
Step 3.2, extracting thermal imaging feature points and thermal imaging local feature descriptors from the thermal imaging picture edge contour by utilizing an acceleration robustness feature (speeded-up robust features, SURF) algorithm to form a thermal imaging feature point set, and extracting visible imaging feature points and visible imaging local feature descriptors from the visible imaging picture edge contour to form a visible imaging feature point set; wherein each local feature descriptor is a 64-dimensional feature vector.
The SURF algorithm is a local feature description operator that maintains invariance to image scaling, rotation, and even affine transformation. It mainly comprises scale-space extremum detection, key point localization, orientation assignment, and key point description. SURF uses box filters to approximate Gaussian filtering and the Hessian matrix, thereby greatly accelerating image processing.
Moreover, after edge detection is added, the number of key points increases markedly, which means the matching process has more choices and stronger robustness. More importantly, the number of key points in the visible image is significantly greater than in the infrared image, which again demonstrates the advantage of using the visible rather than the thermal image for object detection.
Step 3.3, performing preliminary matching on the thermal imaging local feature descriptor and the visible light imaging local feature descriptor by using a k-dimensional-tree algorithm and a k-nearest neighbor algorithm to obtain a plurality of feature point pairs;
Feature matching is to compare feature descriptors, and select more similar feature points from the key point sets of different pictures as key point pairs. The SURF descriptor is a 64-dimensional feature vector, and the k-tree algorithm (KD-tree) and the k-nearest neighbor algorithm (k-NN) are often combined with the SURF algorithm to achieve preliminary feature matching.
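A minimal OpenCV sketch of this preliminary matching, assuming an opencv-contrib build in which SURF (a patented, non-free algorithm) is available under cv2.xfeatures2d; the Hessian threshold and the 0.7 ratio-test constant are our assumptions, not values from the patent:

```python
import cv2

def surf_match(edge_thermal, edge_visible, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-dim descriptors
    kp_t, des_t = surf.detectAndCompute(edge_thermal, None)
    kp_v, des_v = surf.detectAndCompute(edge_visible, None)
    # FLANN with a KD-tree index (algorithm 1), then k-NN with k = 2
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    pairs = flann.knnMatch(des_t, des_v, k=2)
    # keep matches clearly better than their runner-up (Lowe's ratio test)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return kp_t, kp_v, good
```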
And 3.4, executing a random sampling consistency algorithm to filter the characteristic point pairs, and screening out stable characteristic point pairs.
Further, in step 3.4, the one stable characteristic point pair includes a pixel point position coordinate (x, y) in the thermal imaging picture and a pixel point position coordinate (x ', y') in the visible light imaging picture.
Preliminary matching pairs are obtained by applying the KD-tree and k-NN algorithms to the detected feature descriptors, but false matches are unavoidable. Therefore, a random sample consensus (RANSAC) algorithm is executed to filter out the mismatched pairs and retain stable feature point pairs for solving the affine transformation.
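A minimal OpenCV sketch of the RANSAC filtering combined with solving the 4-parameter model of step 4 (scale a, rotation θ, translations t_x and t_y); cv2.estimateAffinePartial2D restricts the fit to exactly this scale-rotation-translation family, and the reprojection threshold is our assumption:

```python
import cv2
import numpy as np

def solve_affine(kp_t, kp_v, good_matches):
    src = np.float32([kp_t[m.queryIdx].pt for m in good_matches])  # thermal points
    dst = np.float32([kp_v[m.trainIdx].pt for m in good_matches])  # visible points
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    # M is 2x3: [[a*cos(t), -a*sin(t), t_x], [a*sin(t), a*cos(t), t_y]]
    return M, inliers
```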
And 4, registering the thermal imaging picture and the visible light imaging picture by utilizing affine transformation according to the stable characteristic point pairs, namely establishing a mapping relation between the pixels of the thermal imaging picture and the pixels of the visible light imaging picture.
Translation, rotation, scaling, and similar relations exist between the pictures, and all of them belong to affine transformation. In view of this, registration of the two different-source images is accomplished by solving an affine transformation matrix; affine transformation is a basic and widely used linear transformation model.
Specifically, step 4 includes:
step 4.1, calculating the confidence coefficient of each stable characteristic point pair, and selecting 3 stable characteristic point pairs with the highest confidence coefficient;
And step 4.2, registering the thermal imaging picture and the visible light imaging picture by affine transformation using the 3 selected stable feature point pairs, the affine transformation between the two pictures satisfying the relation

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = a \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$

where
a denotes the object scaling caused by the difference between the thermal imaging and visible light image acquisition devices,
θ denotes the object rotation angle caused by the difference between the acquisition devices,
t_x and t_y denote respectively the horizontal and vertical translations of the object in the thermal image relative to the visible image, caused by the difference between the acquisition devices;
x and y denote the pixel position coordinates in the thermal imaging picture,
x' and y' denote the pixel position coordinates in the visible light imaging picture.
And obtaining the mapping relation between the pixel point temperature value of the thermal imaging picture and the pixel point coordinates of the visible light imaging picture by solving the affine transformation model.
The final goal of image registration is to establish a pixel map. If the positions of the visible light camera and the thermal imager are unchanged, the geometric transformation relation is always applicable. Therefore, registration operation is not needed for each frame, and the real-time performance of equipment state evaluation is not affected.
And 5, positioning the pixel area of the power equipment in the visible light imaging picture to the temperature area of the power equipment in the thermal imaging picture according to the mapping relation, and extracting a target temperature area in the pixel area.
Because the thermal imaging map is essentially a temperature matrix, the raw data collected by the thermal imaging sensor are generally converted, following the data conversion method of the instrument's technical manual, into a temperature value for each pixel; in the same way the temperature corresponding to each visible light pixel can be calculated.
The pixel area of the power equipment in the visible light imaging picture is represented by a pixel point coordinate matrix, the temperature area of the power equipment in the thermal imaging picture is represented by a pixel point temperature value matrix, and the corresponding relation between the pixel point coordinate matrix and the pixel point temperature value matrix is obtained by establishing a mapping relation between the pixel point temperature value of the thermal imaging picture and the pixel point coordinates of the visible light imaging picture.
Specifically, step 5 includes:
Step 5.1, locating the pixel region of the power equipment in the visible light imaging picture onto the temperature region of the power equipment in the thermal imaging picture according to the mapping relation, namely converting the pixel region of the power equipment in the visible light imaging picture into the temperature region T_{m×n} of the power equipment in the thermal imaging picture, which satisfies the relation

$$T_{m\times n}=\begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{m1} & \cdots & T_{mn} \end{bmatrix}$$

wherein T_{cd} represents the temperature corresponding to each pixel of the pixel region, 1 ≤ c ≤ m, 1 ≤ d ≤ n;
Step 5.2, extracting the target temperature region T_{p×q} of the power equipment in the thermal imaging picture from the temperature region T_{m×n} of the power equipment in the thermal imaging picture, the target temperature region T_{p×q} satisfying the relation

$$T_{p\times q}=\begin{bmatrix} T_{11} & \cdots & T_{1q} \\ \vdots & \ddots & \vdots \\ T_{p1} & \cdots & T_{pq} \end{bmatrix}$$

wherein T_{ij} ∈ T_{m×n}, 1 ≤ i ≤ p ≤ m, 1 ≤ j ≤ q ≤ n.
And step 6, traversing the device temperatures in the target temperature region, and taking their maximum value T_max as the hot spot temperature of the power device.
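A minimal numpy/OpenCV sketch of steps 5 and 6 together, assuming the affine model M of step 4 maps thermal coordinates to visible coordinates (so its inverse carries the D-YOLOv4 box into the temperature matrix); function and argument names are ours:

```python
import cv2
import numpy as np

def hot_spot_temperature(M, box, temp_matrix):
    x0, y0, w, h = box                       # D-YOLOv4 output: center, width, height
    M_inv = cv2.invertAffineTransform(np.asarray(M, dtype=np.float32))
    corners = np.float32([[[x0 - w / 2, y0 - h / 2]],
                          [[x0 + w / 2, y0 + h / 2]]])
    (x1, y1), (x2, y2) = cv2.transform(corners, M_inv).reshape(2, 2)
    r0, r1 = sorted((int(round(y1)), int(round(y2))))
    c0, c1 = sorted((int(round(x1)), int(round(x2))))
    region = temp_matrix[max(r0, 0):r1, max(c0, 0):c1]  # target temperature region
    return float(region.max())                          # hot spot temperature T_max
```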
In the preferred embodiment of the present invention, the hot spot temperatures in 3 scenes are detailed in Table 3.
Table 3 Hot spot temperature comparison in 3 scenes
And step 7, comparing the hot spot temperature with the set hot spot temperature limit; if the hot spot temperature is greater than or equal to the limit, the state of the power equipment is determined to be abnormal, a temperature alarm is raised, and staff are prompted to check in time.
Specifically, in step 7, the improved single-stage detector model also outputs the power equipment types, and determines different hot spot temperature limits according to different power equipment types.
The abnormal state of the power equipment comprises general faults, major faults and emergency faults, and different levels of faults respectively correspond to different thresholds.
In the preferred embodiment of the invention, the hot spot temperature limit value is set according to the national network safety regulation requirement.
In the preferred embodiment of the invention, the defect grade standard for power equipment hot spot temperatures used by grid enterprises serves as the state evaluation basis, and the operating state is judged in combination with the detected hot spot temperature of the specific equipment. When the hot spot temperature of the target equipment region differs greatly from the highest temperature of the whole infrared image, that is, when the highest temperature lies outside the target power equipment region, the algorithm proposed here is more targeted and accurate: it measures the individual hot spot temperature of a specific piece of power equipment and evaluates its operating state, thereby improving the intelligence of inspection.
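A minimal sketch of the step 7 comparison; the limit values below are placeholders for illustration only, not the values of any grid safety standard, which must be taken from the applicable regulation per device type and defect level:

```python
HOT_SPOT_LIMITS_C = {            # hypothetical placeholder limits, in Celsius
    "disconnecting_switch": 90.0,
    "transformer_bushing": 80.0,
}

def check_state(device_type, hot_spot_c):
    limit = HOT_SPOT_LIMITS_C[device_type]
    if hot_spot_c >= limit:
        # abnormal state: raise a temperature alarm for staff to check in time
        print(f"ALARM: {device_type} hot spot {hot_spot_c:.1f} C >= limit {limit:.1f} C")
        return "abnormal"
    return "normal"
```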
A power equipment state detection system based on multi-source images, comprising: a picture acquisition module, a picture area processing module, a picture registration module, a hot spot temperature detection module, and an equipment state early warning module;
The picture acquisition module is used for acquiring the thermal imaging picture and the visible light imaging picture of the power equipment and inputting them into the picture area processing module and the picture registration module respectively;
The picture area processing module comprises a pixel area processing unit and a temperature area processing unit; the pixel area processing unit is used for taking the visible light imaging picture as input data and outputting a pixel area of the power equipment based on the improved single-stage detector model; the temperature region processing unit is used for extracting a target temperature region of the power equipment in the thermal imaging picture from the pixel region of the power equipment in the visible light imaging picture according to the registration result provided by the picture registration module;
The picture registration module is used for respectively extracting the edge contours of the power equipment from the thermal imaging picture and the visible light imaging picture; extracting stable feature point pairs between the thermal imaging picture and the visible light imaging picture according to the edge contours; and registering the thermal imaging picture and the visible light imaging picture by affine transformation according to the stable feature point pairs; the registration result output by the picture registration module is the input data of the temperature region processing unit;
The hot spot temperature detection module is used for traversing the equipment temperature in the target temperature area, and taking the maximum value of the equipment temperature in the target temperature area as the hot spot temperature of the power equipment; the hot spot temperature output by the hot spot temperature detection module is input data of the equipment state early warning module;
The equipment state early warning module is used for comparing the hot spot temperature with the set hot spot temperature limit and, if the hot spot temperature is greater than or equal to the limit, determining that the state of the power equipment is abnormal and raising a temperature alarm.
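To make the data flow between the five modules concrete, here is a minimal Python skeleton; the detector, registrar, and limits objects are hypothetical stand-ins for the components described above, not the patented implementation.

```python
# Minimal skeleton wiring the five modules together; internals are stand-ins.
class PowerEquipmentStateDetector:
    def __init__(self, detector, registrar, limits):
        self.detector = detector    # improved single-stage detector model
        self.registrar = registrar  # edge-contour + feature-pair registration
        self.limits = limits        # hot spot temperature limit per equipment type

    def run(self, visible_img, thermal_img):
        # Picture area processing: pixel regions + equipment types.
        boxes = self.detector.detect(visible_img)
        # Picture registration: affine mapping between the two pictures.
        transform = self.registrar.register(thermal_img, visible_img)
        alarms = []
        for box in boxes:
            region = transform.visible_box_to_thermal(box.xywh, thermal_img)
            hot_spot = float(region.max())          # hot spot temperature module
            if hot_spot >= self.limits[box.label]:  # equipment state early warning
                alarms.append((box.label, hot_spot))
        return alarms
```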
Compared with the prior art, the invention has the beneficial effect that image registration and deep learning are applied together: the infrared thermal imaging picture and the visible light picture are combined, a pixel mapping relation is established between them, the key areas and points of attention are located, and the maximum temperature value of each target device is obtained while the inspection robot performs electric power inspection, achieving specific analysis of specific equipment and raising the level of intelligent inspection.
The beneficial effects of the invention include:
1. Temperature analysis at the individual-equipment level: the specific temperature of each device is analyzed according to the different safe operating temperature thresholds of each power device in the power grid operation defect grade standard, which raises the intelligence of monitoring and effectively avoids identifying faults using only the highest temperature in the whole infrared image, thereby improving the sensitivity of fault detection; at the same time, refining temperature analysis to the individual level highlights the problem equipment, significantly reduces the manual processing workload, and improves working efficiency;
2. Strong anti-interference capability: the rich texture information of the visible light image enhances robustness when detecting equipment such as disconnecting switches, while the infrared image provides per-pixel temperature information; combining the two data sources ensures the reliability and accuracy of equipment fault early warning;
3. Real-time detection: the YOLOv4 model is improved to achieve a better detection effect with only 25.8% of the original model's parameters, yielding a lightweight model suitable for mobile-terminal deployment and real-time detection and meeting the goals of few parameters and high accuracy.
While the applicant has described and illustrated the embodiments of the present invention in detail with reference to the drawings, it should be understood by those skilled in the art that the above embodiments are only preferred embodiments of the present invention, and the detailed description is only for the purpose of helping the reader to better understand the spirit of the present invention, and not to limit the scope of the present invention, but any improvements or modifications based on the spirit of the present invention should fall within the scope of the present invention.

Claims (10)

1. A power equipment state detection method based on multi-source images is characterized in that,
The method comprises the following steps:
Step 1, acquiring a thermal imaging picture and a visible light imaging picture of the power equipment;
Step 2, establishing an improved single-stage detector model based on a convolutional neural network; taking a visible light imaging picture as input data of the improved single-stage detector model, and outputting a pixel region of the power equipment;
In step 2, the model is modified on the basis of the convolutional-neural-network-based single-stage detector model YOLOv4, comprising:
Step 2.1, the backbone structure CSPDarkNet53 of the single-stage detector model YOLOv4 is modified, and the residual network (ResNet) structure in the convolution layers corresponding to the preselected feature maps in the backbone structure CSPDarkNet53 is replaced by dense modules from DenseNet; a dense module comprises dense blocks and transition layers connected alternately, the dense block of the current layer takes the feature information output by every dense block before it as input, the feature information output by the current dense block is the input of every dense block after it, and the feature information s_n output by the n-th dense block satisfies the following relation:
s_n = H_n([s_0, s_1, s_2, ..., s_{n-1}])

wherein s_0, s_1, s_2, ..., s_{n-1} represent the feature information of layers 0, 1, 2, ..., n-1 respectively, and H_n is the combined operation of batch normalization, activation function, and convolution applied by the n-th dense block to its input data;
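(For illustration only, not part of the claims.) A minimal PyTorch sketch of the dense-connection relation above; the channel counts and 3×3 kernel choice are illustrative assumptions, not the patent's configuration.

```python
# Hedged sketch of a dense block implementing s_n = H_n([s_0, ..., s_{n-1}]).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for n in range(num_layers):
            # H_n: batch normalization -> activation -> convolution, applied
            # to the concatenation of all preceding feature maps.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + n * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + n * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]  # s_0
        for layer in self.layers:
            s_n = layer(torch.cat(features, dim=1))  # H_n([s_0, ..., s_{n-1}])
            features.append(s_n)
        return torch.cat(features, dim=1)
```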
Step 2.2, clustering the number and sizes of the candidate frames of the pixel region of the power equipment for the single-stage detector model YOLOv4 based on the k-means++ algorithm, and selecting the candidate frame sizes according to the following relation:

avg_IoU_k = (1/n) · Σ_{i=1}^{n} max_{1≤j≤k} IoU(X_i, Cen_j)

wherein n represents the total number of candidate frames, X_i represents the i-th candidate frame (i = 1, 2, ..., n), k represents the number of cluster centers, Cen_j represents the j-th cluster center, namely the size of the j-th candidate frame (j = 1, 2, ..., k), and avg_IoU_k, ranging from 0 to 1, represents the matching degree between the anchor frames and the candidate frames when the number of cluster centers is k;

The point corresponding to the maximum element of the set A = {Δavg_IoU_k | avg_IoU_k ≥ 70%, k ∈ [K_1, K_2]} is taken as the clustering result, and the k sizes corresponding to the clustering result are used as the candidate frame sizes of the pixel region of the power equipment for the single-stage detector model YOLOv4, wherein Δavg_IoU_k = avg_IoU_k − avg_IoU_{k−1} represents the increment of avg_IoU_k, and the number of cluster centers k takes all integers from K_1 to K_2;
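(For illustration only, not part of the claims.) A hedged Python sketch of step 2.2, using scikit-learn's k-means++ initialization and a position-free width-height IoU; the K_1, K_2 defaults and helper names are assumptions, and the 70% floor follows the claim.

```python
# Hedged sketch: k-means++ anchor clustering with the Delta avg_IoU rule.
import numpy as np
from sklearn.cluster import KMeans

def wh_iou(boxes: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """IoU between (n, 2) box sizes and (k, 2) cluster centers, ignoring position."""
    inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centers[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
             (centers[:, 0] * centers[:, 1])[None, :] - inter)
    return inter / union

def select_anchors(boxes: np.ndarray, k1: int = 3, k2: int = 12):
    avg_iou, centers = {}, {}
    for k in range(k1, k2 + 1):
        km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(boxes)
        centers[k] = km.cluster_centers_
        avg_iou[k] = wh_iou(boxes, centers[k]).max(axis=1).mean()
    # Keep k with avg_IoU_k >= 70%, then maximize the increment Delta avg_IoU_k.
    # (Assumes at least one k in (K1, K2] clears the 70% floor.)
    candidates = [k for k in range(k1 + 1, k2 + 1) if avg_iou[k] >= 0.70]
    best_k = max(candidates, key=lambda k: avg_iou[k] - avg_iou[k - 1])
    return centers[best_k], best_k
```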
Step 2.3, improving the neck structure PANet of the single-stage detector model YOLOv4, and fusing the feature information s_n into the network detection layers through lateral connections and downsampling;
Step 2.4, performing sparse training on the single-stage detector model YOLOv4 as modified in steps 2.1 to 2.3 with a loss function Loss carrying a sparse regularization penalty, the loss function of the sparse training satisfying the following relation:

Loss = Loss_YOLOv4 + λ · Σ g(γ)

wherein Loss_YOLOv4 represents the loss function of normal training, g(γ) represents the regularization penalty function of the scale factor γ, and λ represents the balance factor;
Step 2.5, determining the pruning proportion over the value range of the scale factor γ according to the model accuracy, and pruning the corresponding channels; the pruning proportion ranges from 1% to 99%;
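(For illustration only, not part of the claims.) A minimal PyTorch sketch of the sparse-training loss, assuming the common network-slimming choice g(γ) = |γ| applied to batch-normalization scale factors; the claim itself leaves g unspecified.

```python
# Hedged sketch of the sparse-training loss of step 2.4; g(gamma) = |gamma|
# is an assumption borrowed from network slimming, not stated in the claim.
import torch.nn as nn

def sparse_loss(model: nn.Module, yolo_loss, lam: float = 1e-4):
    penalty = sum(m.weight.abs().sum()              # g(gamma) = |gamma|
                  for m in model.modules()
                  if isinstance(m, nn.BatchNorm2d))
    return yolo_loss + lam * penalty                # Loss = Loss_YOLOv4 + lambda * sum g(gamma)
```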
Step 3, respectively extracting the edge contours of the power equipment from the thermal imaging picture and the visible light imaging picture, and extracting stable feature point pairs between the thermal imaging picture and the visible light imaging picture according to the edge contours;
Step 4, registering the thermal imaging picture and the visible light imaging picture by affine transformation according to the stable feature point pairs, namely establishing a mapping relation between the pixels of the thermal imaging picture and the pixels of the visible light imaging picture;
Step 5, positioning the pixel area of the power equipment in the visible light imaging picture onto the temperature area of the power equipment in the thermal imaging picture according to the mapping relation, and extracting the target temperature region from the temperature area;
Step 6, traversing the equipment temperature in the target temperature area, and taking the maximum value of the equipment temperature in the target temperature area as the hot spot temperature of the power equipment;
Step 7, comparing the hot spot temperature with a set hot spot temperature limit, and if the hot spot temperature is greater than or equal to the limit, determining that the state of the power equipment is abnormal and raising a temperature alarm.
2. The power equipment state detection method based on multi-source images according to claim 1, wherein,
In step 1, when the thermal imaging picture and the visible light imaging picture of the power equipment are acquired, the positions, shooting directions, and shooting angles of the picture capture devices are kept consistent.
3. The power equipment state detection method based on multi-source images according to claim 1, wherein,
In step 2.1, the preselected feature map sizes in the backbone structure CSPDarkNet53 are 19×19, 38×38, and 76×76 respectively.
4. The power equipment state detection method based on multi-source images according to claim 1, wherein,
In step 2, the improved single-stage detector model outputs the position coordinates (x_0, y_0) of the power equipment and the width w and height h of the pixel region of the power equipment; the position coordinates (x_0, y_0) are the center point of the pixel region, which is the rectangular area whose diagonal connects the points (x_0 − w/2, y_0 − h/2) and (x_0 + w/2, y_0 + h/2).
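(For illustration only, not part of the claims.) A tiny helper matching this region definition, converting the center-plus-size output into the two diagonal corners:

```python
# Claim-4-style box conversion: center (x0, y0) plus width w and height h
# to the two diagonal corners of the rectangular pixel region.
def region_corners(x0: float, y0: float, w: float, h: float):
    return (x0 - w / 2, y0 - h / 2), (x0 + w / 2, y0 + h / 2)
```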
5. The power equipment state detection method based on multi-source images according to claim 1, wherein,
Step 3 comprises the following steps:
Step 3.1, respectively extracting edge contours of power equipment in the thermal imaging picture and the visible light imaging picture by using an edge detection method; the edge detection method adopts a Sobel operator;
Step 3.2, extracting thermal imaging feature points and thermal imaging local feature descriptors from the edge contour of the thermal imaging picture using the Speeded-Up Robust Features (SURF) algorithm to form a thermal imaging feature point set, and extracting visible light imaging feature points and visible light imaging local feature descriptors from the edge contour of the visible light imaging picture to form a visible light imaging feature point set; wherein each local feature descriptor is a 64-dimensional feature vector;
Step 3.3, matching the thermal imaging local feature descriptors and the visible light imaging local feature descriptors using a k-dimensional tree (k-d tree) algorithm and a k-nearest neighbor algorithm to obtain a plurality of feature point pairs;
Step 3.4, executing the random sample consensus (RANSAC) algorithm to filter the feature point pairs and screen out the stable feature point pairs.
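(For illustration only, not part of the claims.) An OpenCV sketch of steps 3.1 to 3.4; note that SURF ships with opencv-contrib (cv2.xfeatures2d) and may be unavailable in stock builds, and the Lowe ratio-test threshold of 0.7 is an assumption, not from the claim.

```python
# Hedged OpenCV sketch of the registration front end (steps 3.1-3.4).
import cv2
import numpy as np

def stable_point_pairs(thermal_gray: np.ndarray, visible_gray: np.ndarray):
    # Step 3.1: Sobel edge maps (gradient magnitude, scaled to 8-bit).
    def edges(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Step 3.2: SURF keypoints with 64-dimensional descriptors on the edge maps.
    surf = cv2.xfeatures2d.SURF_create(extended=False)  # 64-D descriptors
    kp_t, des_t = surf.detectAndCompute(edges(thermal_gray), None)
    kp_v, des_v = surf.detectAndCompute(edges(visible_gray), None)

    # Step 3.3: k-d tree (FLANN) + k-nearest-neighbour matching with ratio test.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = [m for m, n in flann.knnMatch(des_t, des_v, k=2)
               if m.distance < 0.7 * n.distance]

    # Step 3.4: RANSAC filters the pairs down to the stable ones.
    src = np.float32([kp_t[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_v[m.trainIdx].pt for m in matches])
    _, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    keep = inliers.ravel().astype(bool)
    return src[keep], dst[keep]
```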
6. The power equipment state detection method based on multi-source images according to claim 5, wherein,
In step 3.4, each stable feature point pair comprises pixel position coordinates (x, y) in the thermal imaging picture and pixel position coordinates (x', y') in the visible light imaging picture.
7. The power equipment state detection method based on multi-source images according to claim 5, wherein,
Step 4 comprises:
Step 4.1, calculating the confidence of each stable feature point pair and selecting the 3 stable feature point pairs with the highest confidence;
Step 4.2, registering the thermal imaging picture and the visible light imaging picture by affine transformation using the 3 selected stable feature point pairs, the affine transformation between the thermal imaging picture and the visible light imaging picture satisfying the following relation:

(x', y')ᵀ = a · R(θ) · (x, y)ᵀ + (t_x, t_y)ᵀ, with R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]

wherein a represents the object scaling caused by the difference between the thermal imaging and visible light image acquisition devices; θ represents the object rotation angle caused by that difference; t_x and t_y represent the horizontal and vertical translations of the object in the thermal imaging picture compared with the visible light imaging picture due to the difference in acquisition devices; x and y represent the pixel position coordinates in the thermal imaging picture; and x' and y' represent the pixel position coordinates in the visible light imaging picture.
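(For illustration only, not part of the claims.) A numpy sketch applying the transform above to a set of thermal-image points; solving for (a, θ, t_x, t_y) from the 3 selected point pairs could be done by least squares, which is omitted here.

```python
# Hedged sketch of the claim-7 transform: (x', y') = a * R(theta) @ (x, y) + t.
import numpy as np

def apply_affine(points_xy: np.ndarray, a: float, theta: float,
                 tx: float, ty: float) -> np.ndarray:
    """Map (N, 2) thermal-picture coordinates to visible-picture coordinates."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points_xy @ (a * R).T + np.array([tx, ty])
```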
8. The power equipment state detection method based on multi-source images according to claim 7, wherein,
In step 5,
Step 5.1, positioning the pixel area of the power equipment in the visible light imaging picture onto the temperature area of the power equipment in the thermal imaging picture according to the mapping relation, namely converting the pixel area of the power equipment in the visible light imaging picture into the temperature area T_{m×n} of the power equipment in the thermal imaging picture, which satisfies the following relation:

T_{m×n} = [T_{cd}], 1 ≤ c ≤ m, 1 ≤ d ≤ n

wherein T_{cd} represents the temperature corresponding to the pixel in row c and column d of the pixel region;

Step 5.2, extracting the target temperature region T_{p×q} of the power equipment in the thermal imaging picture from the temperature region T_{m×n} of the power equipment in the thermal imaging picture, wherein the target temperature region T_{p×q} satisfies the following relation:

T_{p×q} = [T_{ij}], T_{ij} ∈ T_{m×n}, 1 ≤ i ≤ p ≤ m, 1 ≤ j ≤ q ≤ n.
9. The power equipment state detection method based on multi-source images according to claim 1, wherein,
In step 7, the improved single-stage detector model also outputs power equipment types, and determines different hot spot temperature limit values according to different power equipment types.
10. A power equipment state detection system based on multi-source images, implemented using the power equipment state detection method based on multi-source images according to any one of claims 1 to 9, characterized in that,
The system comprises: a picture acquisition module, a picture area processing module, a picture registration module, a hot spot temperature detection module, and an equipment state early warning module;
The picture acquisition module is used for acquiring the thermal imaging picture and the visible light imaging picture of the power equipment and inputting them into the picture area processing module and the picture registration module respectively;
The picture area processing module comprises a pixel area processing unit and a temperature area processing unit; the pixel area processing unit is used for taking a visible light imaging picture as input data and outputting a pixel area of the power equipment based on the improved single-stage detector model; the temperature region processing unit is used for extracting a target temperature region of the power equipment in the thermal imaging picture from a pixel region of the power equipment in the visible light imaging picture according to the registration result provided by the picture registration module;
The picture registration module is used for respectively extracting the edge contours of the power equipment from the thermal imaging picture and the visible light imaging picture; extracting stable feature point pairs between the thermal imaging picture and the visible light imaging picture according to the edge contours; and registering the thermal imaging picture and the visible light imaging picture by affine transformation according to the stable feature point pairs; the registration result output by the picture registration module is the input data of the temperature region processing unit;
The hot spot temperature detection module is used for traversing the equipment temperature in the target temperature area, and taking the maximum value of the equipment temperature in the target temperature area as the hot spot temperature of the power equipment; the hot spot temperature output by the hot spot temperature detection module is input data of the equipment state early warning module;
The equipment state early warning module is used for comparing the hot spot temperature with the set hot spot temperature limit and, if the hot spot temperature is greater than or equal to the limit, determining that the state of the power equipment is abnormal and raising a temperature alarm.
CN202111199043.3A 2021-10-14 2021-10-14 Power equipment state detection method and system based on multi-source image Active CN113920097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111199043.3A CN113920097B (en) 2021-10-14 2021-10-14 Power equipment state detection method and system based on multi-source image


Publications (2)

Publication Number Publication Date
CN113920097A CN113920097A (en) 2022-01-11
CN113920097B true CN113920097B (en) 2024-06-14

Family

ID=79240500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111199043.3A Active CN113920097B (en) 2021-10-14 2021-10-14 Power equipment state detection method and system based on multi-source image

Country Status (1)

Country Link
CN (1) CN113920097B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127445A (en) * 2019-12-26 2020-05-08 智洋创新科技股份有限公司 Distribution network line high-temperature area detection method and system based on deep learning
CN114494186B (en) * 2022-01-25 2022-11-08 国网吉林省电力有限公司电力科学研究院 Fault detection method for high-voltage power transmission and transformation line electrical equipment
CN114581760B (en) * 2022-05-06 2022-07-29 北京蒙帕信创科技有限公司 Equipment fault detection method and system for machine room inspection
CN117152397B (en) * 2023-10-26 2024-01-26 慧医谷中医药科技(天津)股份有限公司 Three-dimensional face imaging method and system based on thermal imaging projection
CN117351049B (en) * 2023-12-04 2024-02-13 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288043A (en) * 2020-12-23 2021-01-29 飞础科智慧科技(上海)有限公司 Kiln surface defect detection method, system and medium
CN112380952A (en) * 2020-11-10 2021-02-19 广西大学 Power equipment infrared image real-time detection and identification method based on artificial intelligence

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11037312B2 (en) * 2019-06-29 2021-06-15 Intel Corporation Technologies for thermal enhanced semantic segmentation of two-dimensional images
US20210090736A1 (en) * 2019-09-24 2021-03-25 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for anomaly detection for a medical procedure
CN112486197B (en) * 2020-12-05 2022-10-21 青岛民航凯亚***集成有限公司 Fusion positioning tracking control method based on self-adaptive power selection of multi-source image
CN112733950A (en) * 2021-01-18 2021-04-30 湖北工业大学 Power equipment fault diagnosis method based on combination of image fusion and target detection




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant