CN116385958A - Edge intelligent detection method for power grid inspection and monitoring - Google Patents

Info

Publication number
CN116385958A
CN116385958A (application CN202310211087.6A)
Authority
CN
China
Prior art keywords
data
model
network
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310211087.6A
Other languages
Chinese (zh)
Inventor
常荣
王勇
方明
朱钱鑫
杨莉
李申章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Original Assignee
Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd filed Critical Yuxi Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority to CN202310211087.6A
Publication of CN116385958A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J 13/00 Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuit breaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
    • H02J 13/00002 Circuit arrangements for providing remote indication of network conditions characterised by monitoring
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Power Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of power grid inspection software and hardware, and in particular to an edge intelligent detection method for power grid inspection and monitoring. The method comprises the following steps: continuously acquiring image data through a camera and transmitting it to a processing end; performing image recognition on the image data at the processing end and extracting the main information data; distinguishing and segmenting target objects based on the vector formed by the main information data; and performing intelligent risk-identification detection. The design can solve cloud-edge-end coordination problems through an edge intelligent terminal and achieve rapid response with real-time discovery and localization; it can improve the power grid's data acquisition, analysis, and application capabilities; and it can satisfy the application scenarios of unmanned aerial vehicle (UAV) inspection, fixed-site video monitoring at substations, and operation-site video monitoring. Model miniaturization and computational acceleration address the poor detection of key hidden dangers caused by the UAV's limited computing power, as well as the limited endurance incurred by repeated flights to confirm hidden dangers.

Description

Edge intelligent detection method for power grid inspection and monitoring
Technical Field
The invention relates to the technical field of power grid inspection software and hardware, and in particular to an edge intelligent detection method for power grid inspection and monitoring.
Background
At present, unmanned aerial vehicle (UAV) technology is increasingly practical in the field of power grid inspection, but with the aim of expanding inspection precision and coverage, current UAV inspection still has many shortcomings. During UAV inspection, some targets are too small: when the images shot by the UAV are taken from far away, pedestrians appear tiny in the distant view and are easily missed. Moreover, an aerial video frame may contain a large number of detection objects, with dozens or even hundreds of targets appearing simultaneously, often occluded or overlapping, which causes no small difficulty.
With regard to UAV inspection scenarios, many power supply and power grid companies are equipped with UAVs, but determining whether key components of a transmission tower are faulty generally requires cloud-based deep-neural-network image recognition: the massive image data returned by the UAV are analyzed offline and run through model calculations. This is unfavorable for field operators who need to quickly locate hidden-danger positions and handle them in time. Meanwhile, defect detection and identification for transmission towers mainly relies on deep-learning target detection and classification algorithms, whose computational load is large; the UAV terminal's processor can hardly achieve real-time detection, and detection of small hidden dangers at key parts is poor. In addition, flight endurance has always been the bottleneck limiting UAV technology: unloaded endurance is generally between 25 and 35 minutes, which is unsuitable for long-distance line patrol tasks.
Power terminals of power grid enterprises generate massive edge data, yet rapid response and real-time discovery and localization of hidden risks cannot currently be achieved on these data; the industry still faces difficulties in edge data transmission and computation, in improving edge data perception, and in risk identification, discovery, and response. Therefore, the UAV inspection scenario requires lightweight edge computing to support intelligent terminal equipment.
To improve the further recognition of, and response to, edge data in the UAV inspection scenario, lighter-weight edge intelligent terminal equipment needs to be studied. In view of the above, we propose an edge intelligent detection method for power grid inspection and monitoring.
Disclosure of Invention
The invention aims to provide an edge intelligent detection method for power grid inspection and monitoring, which is used for solving the problems in the background technology.
In order to solve the above technical problems, one of the purposes of the present invention is to provide an edge intelligent detection method for power grid inspection and monitoring, comprising the following steps:
Step 1, continuously acquiring image data through a camera and transmitting it to a processing end;
Step 2, performing image recognition and extraction on the image data at the processing end, and extracting the main information data in the image;
Step 3, distinguishing and segmenting the target object based on the vector formed by the main information data: the two-dimensional image is reduced to a one-dimensional vector, and the resulting target images are segmented, classified, and identified by a support vector machine to obtain power target recognition. The segmentation, classification, and identification include an algorithm lightweighting process comprising efficient network structure processing, model pruning, weight quantization, and knowledge distillation;
the efficient network structure processing comprises: in the model training stage, using depthwise separable convolution to lighten the network architecture. Depthwise separable convolution applies a different convolution kernel to each input channel, i.e., one kernel corresponds to one input channel, so the number of feature-map channels produced equals the number of input channels;
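As a rough illustration of why depthwise separable convolution lightens the network, the parameter counts of a standard convolution and a depthwise-separable replacement (depthwise step followed by a 1x1 pointwise step) can be compared. The layer sizes below are hypothetical, not taken from the patent:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Illustrative sketch only; kernel and channel sizes are made up.

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel spanning all input channels, per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k kernel per input channel (the feature-map
    # channel count stays equal to the input channel count, as in the
    # text), followed by a 1x1 pointwise convolution to mix channels.
    depthwise = k * k * c_in
    pointwise = 1 * 1 * c_in * c_out
    return depthwise + pointwise

std = standard_conv_params(3, 128, 256)        # 294912
sep = depthwise_separable_params(3, 128, 256)  # 1152 + 32768 = 33920
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3x3 kernel the separable form needs roughly an order of magnitude fewer parameters, which is the lightweighting effect the step above relies on.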
the model pruning process comprises: pruning the channel dimension of the network during training. After batch normalization (BN) normalizes the parameter distribution, the trainable BN scale factor γ is constrained by L1 regularization so that it is driven toward 0; channels whose output becomes 0 are then eliminated, reducing the training parameters, and sparse training achieves the goal of pruning the network;
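The channel-selection idea above can be sketched in a few lines: an L1 penalty on the BN scale factors drives some of them toward zero during sparse training, and channels whose surviving scale is essentially zero are removed. The gamma values and threshold below are invented for illustration:

```python
# Sketch of channel pruning via BN scale factors (gamma): an L1 penalty
# added to the loss pushes gammas toward zero; near-zero channels
# contribute ~0 output and can be cut. Values here are illustrative.

def l1_penalty(gammas, lam=0.01):
    # Term added to the training loss: lam * sum(|gamma|).
    return lam * sum(abs(g) for g in gammas)

def prune_channels(gammas, threshold=1e-3):
    # Keep indices whose scale factor survived the sparsity constraint.
    return [i for i, g in enumerate(gammas) if abs(g) > threshold]

gammas = [0.81, 0.0002, 0.45, 0.0, 0.12, 0.0007]
kept = prune_channels(gammas)
print(kept)  # [0, 2, 4]
```

Here half the channels would be eliminated, shrinking both the layer and every layer that consumes its output.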
The weight quantization process comprises: after training, the model is obtained in .pt file format, with weights and related parameters of float type. To improve recognition efficiency, it must be converted to ONNX format and then to the .bin format supported by the terminal; during conversion to the .bin model file, the value ranges of the network weights and activation values are mapped to [-128, 127] via int8 integer quantization, which reduces the computation and improves network recognition efficiency;
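A minimal sketch of the int8 mapping described above, using symmetric quantization into the signed 8-bit range. The scale choice (max absolute weight over 127) and the example weights are illustrative assumptions, not the patent's exact conversion procedure:

```python
# Symmetric int8 weight quantization sketch: map float weights into
# [-128, 127]. Scale and rounding scheme are illustrative assumptions.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.52, -1.30, 0.07, 0.99]
q, scale = quantize_int8(w)
print(q)  # [51, -127, 7, 97]
w_back = dequantize(q, scale)
print([round(v, 3) for v in w_back])
```

Integer arithmetic on the quantized values is what reduces the computation; dequantization shows the small reconstruction error that the accuracy budget must absorb.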
the knowledge distillation process comprises: training a smaller network using a large pre-trained network. Once a cumbersome, heavy network model has been trained, a second training stage can transfer its knowledge to a smaller model that is more suitable for deployment;
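One common way to realize the transfer described above is to soften the teacher's outputs with a temperature and train the student to match them; the sketch below shows that loss term. The logits and temperature are invented for illustration and are not the patent's configuration:

```python
import math

# Knowledge-distillation sketch: a large pre-trained "teacher" network's
# logits, softened by a temperature t, serve as targets for the smaller
# student. Logit values and t below are illustrative assumptions.

def softmax_t(logits, t=1.0):
    exps = [math.exp(z / t) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # Distillation loss term: KL(teacher || student) on softened outputs.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [6.0, 2.0, 1.0]
student_logits = [4.5, 2.5, 1.5]
soft_teacher = softmax_t(teacher_logits, t=4.0)  # softer than t=1
soft_student = softmax_t(student_logits, t=4.0)
kl = kl_divergence(soft_teacher, soft_student)
print(round(kl, 4))
```

Minimizing this divergence (usually mixed with the ordinary cross-entropy on hard labels) is what moves the teacher's "dark knowledge" about class similarities into the deployable model.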
further, the identification of the different target images also involves a target image dataset and a recognition model; preparation of the target image dataset comprises converting the image information into data and attaching a corresponding target information label to each image;
meanwhile, use of the recognition model includes: reducing the model bias of complex models, improving the accuracy of statistical estimation with big data, and solving large-scale optimization problems with scalable gradient-descent algorithms. An algorithm that extracts local features of the target image dataset is integrated into the neural network to obtain the local-correlation features of the target image and form the data file for training; deep learning training is then performed with a convolutional neural network comprising convolutional, pooling, and fully connected layers, where convolutional layers paired with pooling layers form several convolution groups that extract features layer by layer, and classification is finally completed by several fully connected layers;
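The spatial bookkeeping of such a conv-plus-pool stack follows the standard output-size formula (size - kernel + 2*padding) / stride + 1; the short walk-through below uses hypothetical layer parameters chosen only to show the arithmetic, not the patent's actual architecture:

```python
# Shape walk-through for a stack of convolution groups (conv + pool)
# feeding fully connected layers. All layer parameters are hypothetical.

def conv_out(size, kernel, stride=1, pad=0):
    return (size - kernel + 2 * pad) // stride + 1

def pool_out(size, kernel, stride):
    return (size - kernel) // stride + 1

h = 224
h = conv_out(h, kernel=3, pad=1)     # 3x3 conv, padding 1 -> 224
h = pool_out(h, kernel=2, stride=2)  # 2x2 pool, stride 2  -> 112
h = conv_out(h, kernel=3, pad=1)     # -> 112
h = pool_out(h, kernel=2, stride=2)  # -> 56
print(h)  # spatial side entering the fully connected layers
```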
Step 4, performing intelligent risk-identification detection based on the identified power target; the detection includes one or more of insulator self-explosion and breakage detection, damper detection, line foreign-matter monitoring, high-voltage tower foreign-matter detection, and insulator-tower connection detection.
As a further improvement of the present technical solution, in step 3, the segmentation, classification, and identification of different target images specifically comprise:
using an SVM algorithm to test data points, judging positive or negative by the dot product with the normal vector, identifying PCA features of the transmission tower, identifying other targets, and classifying them; identifying other targets comprises:
collecting various target pictures from the inspection video by category, expanding them through rotation, noise addition, and mirroring, and using the expanded target pictures as a real target sample library;
preprocessing the pictures in the real target sample library with a generative adversarial network (GAN), fusing defect-target pictures with various complex backgrounds, and expanding the defect-target dataset to obtain a target sample expansion library; the data are then divided into a training set and a test set;
labeling the selected training set with the labeling tool LabelImg, and storing the target-picture information after labeling to obtain sample data;
improving the target detection network YOLOv5 and iteratively training the improved network with the obtained sample data to obtain the optimal detection-network weights and a reference network for the test set;
and processing the selected test set with the obtained reference network to obtain the target defect detection result.
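The SVM decision step above (judging positive or negative by the dot product with the normal vector) can be sketched as follows; the hyperplane parameters and test points are invented for illustration, and the class meanings are only stand-ins:

```python
# SVM decision sketch: once a separating hyperplane (w, b) is learned,
# a test point's class follows the sign of w . x + b. The values of
# w, b, and the test points below are hypothetical.

def svm_predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1   # +1 / -1: the two target classes

w, b = [0.8, -0.3], -0.1
print(svm_predict(w, b, [1.0, 0.5]))   # 0.8 - 0.15 - 0.1 = 0.55 -> 1
print(svm_predict(w, b, [0.1, 1.2]))   # 0.08 - 0.36 - 0.1 = -0.38 -> -1
```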
As a further improvement of the present technical solution, in step 3, to address the weak, unobvious features of defect targets, an improved YOLOv5s network is proposed for target detection, specifically comprising:
adding an SE (Squeeze-and-Excitation) attention module at the head of the neck network, so that the network pays more attention to target features and its feature extraction capability is improved. Assuming the matrix obtained by the convolution operation has dimensions [H, W, C], the SE module is formulated as follows:
z_c = F_sq(u_c) = (1/(H×W)) Σ_{i=1..H} Σ_{j=1..W} u_c(i, j) (1)
s = F_ex(z, W) = σ(W_2 δ(W_1 z)) (2)
F_scale(u_c, s_c) = s_c · u_c (3)
in formula (1), z_c denotes the result of global average pooling, which converts the two-dimensional feature information into a real number that, to some extent, carries a global receptive field; F_sq denotes the global-information embedding (Squeeze) operation, and u_c denotes one channel of u;
in formula (2), F_ex denotes the adaptive recalibration (Excitation) operation; z is the output of the Squeeze step; σ is the Sigmoid function and δ the ReLU function; W_1 and W_2 are linear layers, W_1 being the dimension-reduction layer whose ReLU-activated output feeds the dimension-increasing layer W_2; after the Sigmoid activation, s lies in the range 0 to 1 and represents the weight of each feature channel;
in formula (3), u_c denotes one channel of u and s_c the weight of that channel; F_scale denotes the rescaling operation, equivalent to multiplying each channel's values by its weight;
in addition, for resource-limited embedded platforms, a lightweight GSConv convolution structure is introduced to replace the original convolution, reducing the model's computation while maintaining accuracy.
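The SE computation of formulas (1) to (3) can be sketched end to end in NumPy: squeeze by global average pooling, excite through two linear layers (ReLU then Sigmoid), and rescale each channel. The weights below are random stand-ins, not trained parameters:

```python
import numpy as np

# NumPy sketch of an SE block: squeeze (global average pooling), excite
# (two linear layers with ReLU then Sigmoid), then per-channel rescale.
# The feature map and weights are random stand-ins for illustration.

def se_block(u, w1, w2):
    h, w, c = u.shape
    z = u.mean(axis=(0, 1))                 # squeeze: one real per channel
    relu = np.maximum(0.0, w1 @ z)          # dimension-reduction layer
    s = 1.0 / (1.0 + np.exp(-(w2 @ relu)))  # excitation weights in (0, 1)
    return u * s                            # scale each channel by s_c

rng = np.random.default_rng(1)
u = rng.normal(size=(8, 8, 4))   # H=8, W=8, C=4 feature map
w1 = rng.normal(size=(2, 4))     # reduce C=4 -> 2
w2 = rng.normal(size=(4, 2))     # restore 2 -> C=4
out = se_block(u, w1, w2)
print(out.shape)  # (8, 8, 4)
```

Because the output is the input with each channel multiplied by one learned weight, the block reweights channels without changing the feature-map shape.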
As a further improvement of the technical scheme, in the use of the recognition model in step 3, the deep learning training includes bottom-up unsupervised learning, in which the parameters of each layer are trained layer by layer with uncalibrated (or calibrated) data for feature learning, and top-down supervised learning, in which the network is fine-tuned through labeled-data training and top-down error propagation, further adjusting the parameters of the whole multi-layer model on the basis of the layer-wise parameters obtained;
fine-tuning the parameters of the entire multi-layer model further comprises: first training the first layer with uncalibrated data, learning that layer's parameters; owing to the limited model capacity and the sparsity constraint, the resulting model learns the structure of the data, yielding features with greater representational power than the raw input. After layer n-1 is learned, its output serves as the input to layer n, which is trained in turn, so that the parameters of each layer are obtained one by one.
As a further improvement of the present technical solution, in step 3, use of the recognition model further includes compressing the recognition model, which can be done through compact model design, model clipping, or kernel sparsification;
model clipping here is structured pruning (convolution-kernel pruning, channel pruning, and layer-level pruning), so the resulting model can run simply by changing the number of convolution kernels and feature channels in the network. The method is as follows:
at each layer's output, the feature map is converted by global pooling into a vector whose length c is the number of filters; for n images this yields an n × c matrix. Each filter's responses are divided into m bins, the probability of each bin is counted, and the entropy is then calculated; the importance of each filter is judged by its entropy, and unimportant filters are cut;
The entropy of the j-th feature map is calculated as follows:
H_j = -Σ_{i=1..m} p_i log(p_i) (4)
in formula (4), H_j denotes the entropy of the j-th feature map, m the number of bins, and p_i the probability of the i-th bin;
after one layer is cut, part of the performance is restored through a few iterations; after all layers are cut, the overall performance is restored through more iterations.
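The entropy ranking of formula (4) can be sketched as below: pool each filter's feature map to one value per image, histogram those values into m bins, and compute the entropy; low entropy marks a filter whose output varies little and is a pruning candidate. The pooled responses here are random stand-ins:

```python
import numpy as np

# Entropy-based filter ranking sketch for formula (4): histogram each
# filter's global-pooled responses into m bins and compute
# H_j = -sum_i p_i * log(p_i). Data below are random stand-ins.

def filter_entropy(values, m=10):
    counts, _ = np.histogram(values, bins=m)
    p = counts / counts.sum()
    p = p[p > 0]                       # skip empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
n_images, n_filters = 200, 3
pooled = rng.normal(size=(n_images, n_filters))  # n x c response matrix
pooled[:, 2] = 0.0                     # a "dead" filter: constant output

entropies = [filter_entropy(pooled[:, j]) for j in range(n_filters)]
prune_order = np.argsort(entropies)    # lowest entropy = least informative
print([round(h, 3) for h in entropies], prune_order[0])
```

The constant-output filter lands in a single bin, gets entropy 0, and is ranked first for cutting.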
As a further improvement of the technical scheme, in step 4, insulator self-explosion and breakage detection comprises the following steps:
S1.1, collecting insulator pictures from the inspection video, expanding them through rotation, noise addition, and mirroring, and using the expanded insulator pictures as a real insulator sample library;
S1.2, preprocessing the pictures in the real insulator sample library with a generative adversarial network (GAN), fusing defective-insulator pictures with various complex backgrounds, and expanding the defective-insulator dataset to obtain an insulator sample expansion library; the data are divided into a training set and a test set;
S1.3, labeling the training set selected in step S1.2 with the labeling tool LabelImg, and storing the insulator-picture information after labeling to obtain sample data;
S1.4, improving the target detection network YOLOv5 and iteratively training the improved network with the obtained sample data to obtain the optimal detection-network weights and a reference network for the test set;
S1.5, processing the test set with the obtained reference network to obtain the insulator defect detection result.
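The sample-library expansion used in step S1.1 (rotation, mirroring, noise addition) can be sketched with NumPy; the 4x4 "image" and noise level are stand-ins for real inspection frames:

```python
import numpy as np

# Sample-library expansion sketch: rotation, mirroring, and additive
# noise applied to one image, as in step S1.1. The tiny 4x4 image and
# the noise magnitude are illustrative stand-ins.

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)

rotated = np.rot90(img)                         # 90-degree rotation
mirrored = np.fliplr(img)                       # horizontal mirror
noisy = img + rng.normal(0.0, 0.5, img.shape)   # Gaussian noise

augmented = [img, rotated, mirrored, noisy]
print(len(augmented), rotated.shape)
```

Each source picture thus yields several training samples with the same label, which is what grows the "real sample library" before the GAN-based background fusion.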
As a further improvement of the present technical solution, in step 4, the damper detection comprises the following steps:
S2.1, collecting video data containing dampers from the inspection video and extracting pictures frame by frame to generate a damper dataset, supplemented with damper images from online sources;
S2.2, preprocessing the collected damper image data and expanding it to generate similar images;
S2.3, labeling the dampers in the collected dataset to obtain the coordinates of candidate boxes containing the target object;
S2.4, inputting the labeled, preprocessed dataset into a MobileNetV3 network for processing and extracting a three-channel (R, G, B) feature map;
S2.5, inputting the feature map into a YOLOv5 module for training, and loading the trained neural-network model parameters onto the edge terminal;
S2.6, the edge terminal detecting dampers on the transmission line.
As a further improvement of the present technical solution, in step 4, line foreign-matter monitoring comprises the following steps:
S3.1, collecting video data containing plastic bags from the inspection video and extracting pictures frame by frame to generate a dataset, then augmenting it with plastic-bag images from online sources;
S3.2, preprocessing the collected data and expanding it to generate similar images;
S3.3, labeling the plastic bags in the collected dataset to obtain the coordinates of candidate boxes containing the target object;
S3.4, inputting the labeled, preprocessed dataset into a MobileNetV3 network for processing and extracting three-channel feature maps;
S3.5, inputting the feature maps into a YOLOv5 module for training;
S3.6, loading the trained optimal neural-network model parameters onto the edge terminal;
S3.7, the edge terminal detecting plastic bags on the transmission line.
As a further improvement of the present technical solution, in step 4, high-voltage tower foreign-matter detection comprises the following steps:
S4.1, extracting tower-image candidate regions by the selective search method, based on a deep learning algorithm;
S4.2, based on the U-Net network model, adjusting and optimizing the samples and network parameters through pre-training and retraining;
after the foreign matter on the high-voltage tower is segmented with U-Net, three common evaluation indexes, namely Overall Accuracy (OA), F1-Score, and mean Intersection over Union (MIoU), are computed on the basis of the confusion matrix;
the confusion matrix is often used to visually evaluate the performance of supervised learning algorithms and underlies various accuracy indexes;
OA, F1-Score, and MIoU can all be calculated from the confusion matrix. OA, computed as in formula (5) below, reflects the proportion of correctly classified samples among all samples:
OA = (TP + TN) / (TP + TN + FP + FN) (5)
in formula (5), TP denotes true positives, TN true negatives, FP false positives, and FN false negatives;
F1-Score, computed as in formula (6) below, reflects the ability to identify and distinguish positive and negative samples; it is the harmonic mean of the model's precision and recall, with a value in [0, 1]:
F1 = 2 · Precision · Recall / (Precision + Recall) (6)
in formula (6),
Precision = TP / (TP + FP)
is the precision rate, which evaluates the accuracy of the predicted positive-example pixels: among the samples the model predicts as positive, the proportion that are actually positive;
Recall = TP / (TP + FN)
is the recall rate, which, taking the true samples as the basis, denotes the proportion of correctly predicted positives among all true positives;
MIoU, as in formula (7) below, calculates for each class the ratio of the intersection to the union of the predicted result and the ground truth, reflecting the model's segmentation quality, and then averages over the classes:
MIoU = (1 / (k + 1)) Σ_{i=0..k} TP / (TP + FP + FN) (7)
in formula (7),
IoU = TP / (TP + FP + FN)
is called the intersection-over-union ratio and represents, for a given class, the ratio of the intersection to the union of the model's prediction and the ground truth.
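As a worked check of formulas (5) to (7), the indexes can be computed from binary confusion-matrix counts; the counts below are illustrative, not measured results:

```python
# Worked example of formulas (5)-(7): OA, precision/recall/F1, and the
# per-class IoU from binary confusion-matrix counts. Counts are made up.

def metrics(tp, tn, fp, fn):
    oa = (tp + tn) / (tp + tn + fp + fn)                 # formula (5)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # formula (6)
    iou = tp / (tp + fp + fn)                            # IoU in formula (7)
    return oa, precision, recall, f1, iou

oa, p, r, f1, iou = metrics(tp=80, tn=90, fp=10, fn=20)
print(round(oa, 3), round(p, 3), round(r, 3), round(f1, 3), round(iou, 3))
# 0.85 0.889 0.8 0.842 0.727
```

For the multi-class MIoU of formula (7), the per-class IoU computed this way is averaged over the k+1 classes.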
As a further improvement of the technical scheme, in step 4, insulator-tower connection detection comprises the following steps:
s5.1, establishing a plurality of aerial foreign matter image libraries according to the foreign matter type differences;
s5.2, respectively manufacturing data sets according to the aerial foreign matter image library;
s5.3, constructing and training a foreign body model of the power transmission line;
s5.4, constructing and training a damper foreign body model;
s5.5, constructing and training a grading ring clamp foreign matter model;
s5.6, constructing and training a tower foreign body model;
S5.7, fine-tuning the established aerial foreign-matter image-library models by model fine-tuning (finetune);
s5.8, solidifying the fine-tuned aerial foreign matter image library model;
s5.9, inputting the image to be detected into a solidified aerial foreign matter image library model for detection;
and S5.10, after forward propagation through the network, obtaining the coordinates and confidence of the target rectangular box for the detection result of the corresponding aerial foreign-matter image library.
The second object of the present invention is to provide an edge intelligent detection platform device, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the above edge intelligent detection method for power grid inspection and monitoring.
It is a further object of the present invention to provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-mentioned edge intelligent detection method for grid inspection and monitoring.
Compared with the prior art, the invention has the beneficial effects that:
with the edge intelligent detection method for power grid inspection and monitoring, based on the massive edge data generated by power terminals, the edge intelligent terminal can solve cloud-edge-end coordination problems while achieving rapid response and real-time discovery and localization over those data. Applied to scenarios such as UAV inspection, operation sites, and fixed-site video monitoring at substations, the method can improve the power grid's data acquisition, analysis, and application capabilities, realize real-time identification of potential and known risks at operation sites, respond quickly, and avoid potential risks in advance;
in this edge intelligent detection method, hardware-architecture innovations such as pipeline design and storage-mode design, together with research on lighter-weight edge computing intelligent terminal equipment, serve the three application scenarios of UAV inspection, substation fixed-site video monitoring, and operation-site video monitoring. Model and algorithm innovations, namely lightweight model design, matrix decomposition, sparse representation, and quantized computation, achieve model miniaturization and computational acceleration, and a lightweight AI algorithm reduces the computing-power limitation. This solves the problems of locating hidden-danger positions in real time and handling them promptly on site, and of poor detection of key hidden dangers caused by insufficient UAV computing power; the lightweight AI component of the ultra-lightweight edge computing intelligent terminal also solves the endurance limitation caused by repeated flights to confirm hidden dangers.
Drawings
FIG. 1 is an exemplary overall process flow diagram of the present invention;
FIG. 2 is a schematic diagram illustrating an exemplary structure for detecting small objects in image object data according to the present invention;
FIG. 3 is a graph of the results of SVM training and testing of an exemplary power transmission tower SVM recognition;
fig. 4 is a schematic diagram of an exemplary power transmission line identification structure according to the present invention;
fig. 5 is a sample view of an exemplary insulator breakage detection image in accordance with the present invention;
FIG. 6 is a network diagram of feature extraction performed on a target by an exemplary modified Yolov5s network in accordance with the present invention;
FIG. 7 is a diagram of a GSconv network structure in insulator self-explosion and breakage detection according to an exemplary embodiment of the present invention;
FIG. 8 is a sample image of an exemplary damper detection image of the present invention;
FIG. 9 is a sample view of an exemplary line foreign matter monitoring image in accordance with the present invention;
FIG. 10 is a sample image of an exemplary high voltage tower foreign object detection image of the present invention;
FIG. 11 is a diagram of an exemplary Unet network architecture in accordance with the present invention;
FIG. 12 is a training error diagram of an exemplary Unet model versus tower foreign object in accordance with the present invention;
FIG. 13 is a diagram of an exemplary network training result according to the present invention;
FIG. 14 is a schematic diagram of an exemplary SE module for use in detecting insulator self-explosion and breakage in accordance with the present invention;
Fig. 15 is a block diagram of an exemplary electronic computer platform assembly according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1 to 10, the present embodiment provides an edge intelligent detection method for power grid inspection and monitoring, including:
step 1, continuously acquiring acquired image data through a camera and transmitting the acquired image data to a processing end;
step 2, the processing end performs image recognition and extraction on the image data, and extracts main information data in the image;
step 3, distinguishing and dividing the target object based on the vector formed by the main information data, reducing the two-dimensional image into a one-dimensional vector, and dividing, classifying and identifying the obtained different target images through a support vector machine to obtain the target identification of the power transmission tower;
The method for dividing, classifying and identifying the different target images specifically comprises: checking the data points with an SVM algorithm, judging positive and negative from the sign of the dot product with the normal vector of the separating hyperplane, identifying the PCA features of the power transmission tower, identifying the other targets, and classifying them.
In this embodiment, the detection analysis for the small target is:
the unmanned aerial vehicle has advantages such as a good aerial view and a wide monitoring range, and is widely applied to practical target detection tasks. However, because the unmanned aerial vehicle is far from the ground, small targets are frequently detected poorly in such tasks, with high false detection and missed detection rates. Aiming at these problems, an improved unmanned aerial vehicle small-object recognition method is provided. Based on the Yolov5 convolutional neural network, first, an unmanned aerial vehicle aerial photography data set is established and a dimension clustering method is used to design suitable anchor boxes; second, generalized intersection-over-union (GIoU) is applied to the coordinate loss function of the network to replace the original sum-of-squares loss; finally, the 4-times downsampled feature map of the Yolov5 network is spliced with the upsampled 8-times downsampled feature map to establish a new 4-times downsampling target detection layer, as shown in fig. 2.
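The dimension-clustering step above is commonly implemented as k-means over box widths and heights with an IoU-based distance. The following is a minimal sketch on synthetic box sizes; the data, helper names, and cluster count are illustrative assumptions, not taken from the patent:

```python
import random

def iou_wh(box, centroid):
    # IoU of two boxes aligned at a common corner: overlap area / union area
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    # Dimension clustering: k-means over (width, height) pairs using
    # distance d = 1 - IoU, so anchors match the data set's box shapes
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            i = max(range(k), key=lambda j: iou_wh(b, centroids[j]))
            clusters[i].append(b)
        for i, c in enumerate(clusters):
            if c:  # recompute each centroid as the mean box of its cluster
                centroids[i] = (sum(b[0] for b in c) / len(c),
                                sum(b[1] for b in c) / len(c))
    return sorted(centroids)

# Illustrative (w, h) boxes: a small-object cluster and a large one
boxes = [(10, 10), (12, 11), (9, 12), (100, 90), (95, 100), (110, 85)]
anchors = kmeans_anchors(boxes, k=2)
```

With two well-separated size groups, the returned anchors settle near each group's mean width and height, which is why such anchors suit a data set dominated by small aerial targets.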
Further, for the power line image analysis, there are:
According to the processing and detection processes, in the aerial transmission line image, accurate transmission line image information is difficult to obtain only by a straight line detection method, different straight lines on the image are distinguished by combining other image recognition modes, the transmission tower part is selected for recognition, and then the lines on the original image, which represent the transmission line and which belong to other image parts such as the background, are recognized according to the connection relation between the transmission line and the transmission tower.
When the image extraction and identification are carried out on the power transmission tower and the power transmission line, the method mainly follows the algorithm flow below. First, the main information data in the image is extracted by principal component analysis (PCA), the transmission tower is distinguished from other scenes by the vectors composed of the main information data, and the two-dimensional image is reduced to a one-dimensional vector for classification and identification. The resulting different images are then partitioned by a support vector machine (SVM) to identify and classify the different objects.
In order to identify the power transmission image, the positive and negative training set and the testing set of the power transmission tower are constructed through image data such as aerial images, general detection images and network photographs, so that the difference between the power transmission tower and other targets on the image characteristics can be induced through the algorithm, and the identification operation of the power transmission line is performed.
Further, SVM identification for power transmission towers is:
SVM is also a commonly used, efficient image recognition and classification algorithm. The method divides the entire data space into two parts by finding a boundary between different categories of data points. By calculating the position of each point in the data space, the classification and identification of the data can be accomplished.
When the SVM checks a data point, the data point is dot-multiplied with the normal vector and the sign of the result is judged. Therefore, the SVM operates quickly when judging data.
The PCA features of the transmission tower are identified here using an SVM, and fig. 3 shows the SVM training and testing results. In the figure, '+1' (plus signs) marks the characteristic points of the power transmission tower, and '-1' (asterisks) marks the characteristic points of the other images. The small circles represent support vector points, and the large circles represent the classification of the characteristic points of power transmission tower images and other targets (including trees and houses) separated and identified by the SVM in the aerial image. It can be seen that the SVM distinguishes the power transmission tower from the other targets well; the characteristic points of aerial targets in particular are clearly distributed on the two sides of the separating line. This is likely because, compared with images acquired during ground detection, the aerial image has clear, complete image features and different viewing angles, which illustrates the advantage of aerial detection. Counting the recognition accuracy of the algorithm over more aerial and ground image training and testing shows that as the number of principal-component features increases, the recognition result becomes more and more accurate, while the running speed becomes slower and slower; a balance should therefore be found between accuracy and performance. With 7 characteristic components selected for classification and identification, the accuracy can reach 90%. To further improve recognition accuracy, further training can be performed with the large number of aerial images collected after the unmanned aerial vehicle detection system is put into operation.
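The PCA-then-linear-classification flow above can be sketched as follows. This is a toy example on synthetic "flattened image" vectors; the class-mean separator stands in for the trained SVM's normal vector, and nothing here reflects the patent's actual feature data:

```python
import numpy as np

def pca_project(X, k):
    # Center the data and project onto the top-k principal components
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: directions
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# Toy 64-dim "flattened image" vectors for two classes (synthetic stand-ins)
towers = rng.normal(loc=1.0, scale=0.5, size=(40, 64))
others = rng.normal(loc=-1.0, scale=0.5, size=(40, 64))
X = np.vstack([towers, others])
y = np.array([1] * 40 + [-1] * 40)

Z = pca_project(X, 7)          # keep 7 characteristic components, as in the text

# Linear decision rule sign(w . z + b); this class-mean separator is a
# stand-in for the trained SVM's normal vector, not an actual SVM fit
mu_pos = Z[y == 1].mean(axis=0)
mu_neg = Z[y == -1].mean(axis=0)
w = mu_pos - mu_neg
b = -0.5 * (mu_pos + mu_neg) @ w
pred = np.sign(Z @ w + b)
accuracy = (pred == y).mean()
```

The point of the sketch is the shape of the pipeline: a high-dimensional image vector is reduced to a handful of components, after which a single dot product and a sign check classify it, which is why the text notes the SVM's high judging speed.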
Specifically, the discrimination for the power line is as follows:
after the power transmission tower is identified through the process, the straight line is connected with the power transmission tower by combining the result of Hough straight line detection, and if the connection is successful, the straight line is an image of a power line. If the connection fails, the line is determined to be the interference target and removed. The power line discrimination result is shown in fig. 4.
And step 4, performing risk identification intelligent detection based on the identified power target, wherein the risk identification intelligent detection comprises one or more of insulator self-explosion and breakage detection, vibration damper detection, line foreign matter monitoring, high-voltage tower foreign matter detection and insulator and tower connection detection.
Preferably, the identification of the different target images further comprises a target image dataset and an identification model; the preparation of the target image dataset comprises the steps of datamation of image information and attaching a corresponding target information label to each image;
meanwhile, the use of the recognition model includes: reducing the model deviation of a complex model, improving the accuracy of statistical estimation by using big data, and solving large-scale optimization problems with a scalable gradient descent algorithm. An algorithm for extracting local features of the target image data set is integrated into the neural network to obtain the associated features of local data in the target image and form a data file for training, and deep learning training is performed with a convolutional neural network. The convolutional network comprises convolution layers, pooling layers and fully-connected layers; the convolution layers cooperate with the pooling layers to form several convolution groups that extract features layer by layer, and classification is finally completed through several fully-connected layers.
Preferably, the deep learning training comprises bottom-up unsupervised learning, in which feature learning is performed with uncalibrated data, or the parameters of each layer are trained hierarchically with calibrated data, and top-down supervised learning, in which errors are propagated from the top down through labeled training data to fine-tune the network; based on the obtained parameters of each layer, the parameters of the whole multi-layer model are further fine-tuned.
Preferably, fine tuning parameters of the entire multi-layer model further comprises the steps of: firstly, training a first layer by using calibration-free data, and firstly, learning parameters of the first layer during training, wherein the obtained model can learn the structure of the data due to the limitation of the model capacity and the sparsity constraint, so that the characteristics with the representation capacity more than the input characteristics are obtained; after learning to obtain the n-1 layer, the n-1 layer output is used as the n layer input, and the n layer is trained, so that the parameters of each layer are obtained respectively.
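The greedy layer-wise procedure above (train layer 1 on raw data, then feed layer n-1's output to layer n) can be sketched with small numpy autoencoder-style layers; the architecture, activation, and training hyperparameters are illustrative assumptions:

```python
import numpy as np

def train_layer(X, hidden, epochs=200, lr=0.1, seed=0):
    # One unsupervised layer: learn W so that a linear decoder V can
    # reconstruct X from the code H = tanh(X W) (autoencoder-style)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    V = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        H = np.tanh(X @ W)              # encoder output
        err = H @ V - X                 # reconstruction error
        gV = H.T @ err / n              # gradient w.r.t. decoder weights
        gH = err @ V.T * (1 - H ** 2)   # backprop through tanh
        gW = X.T @ gH / n               # gradient w.r.t. encoder weights
        V -= lr * gV
        W -= lr * gW
    return W, np.tanh(X @ W)

rng = np.random.default_rng(1)
X0 = rng.normal(size=(100, 16))
# Greedy stacking: layer n is trained on the output of layer n-1
W1, H1 = train_layer(X0, hidden=8)
W2, H2 = train_layer(H1, hidden=4, seed=1)
```

After such layer-by-layer pretraining, the collected weights (here W1, W2) would serve as the initialization that supervised fine-tuning then adjusts end to end.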
Preferably, the use of the recognition model further includes model compression of the recognition model, which can be performed using design of a fine model, model clipping, or sparsification of a kernel; model clipping, i.e. finding an effective judging means to judge the importance of parameters, clipping unimportant connections or filters to reduce redundancy of models, and dividing the modes into regular and irregular.
In this embodiment, there are, for insulator self-explosion and breakage detection:
s1.1, collecting inspection videos, obtaining insulator pictures through the inspection videos, as shown in FIG. 5, expanding the pictures through rotation, noise adding and mirror image operation, and taking the expanded insulator pictures as a real insulator sample library;
s1.2, preprocessing pictures in the obtained real insulator sample library by using a generative adversarial network (GAN), fusing defective insulator pictures with various complex backgrounds, and expanding the defective insulator data set to obtain an insulator sample expansion library; dividing the data into a training set and a testing set, selecting 20% of the data as the testing set and 80% as the training set;
s1.3, marking the training set selected in step S1.2 by using the marking tool labelImg, and storing the information of the insulator pictures after marking is completed to obtain sample data;
s1.4, improving the existing target detection network yolov5, and performing iterative training on the improved target detection network yolov5 by using the sample data obtained in the step S1.3 to obtain optimal target detection network weight data and a reference network of a test set;
s1.5, processing the test set obtained in the step S1.2 by using the reference network of the test set obtained in the step S1.4 to obtain an insulator defect detection result.
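The sample-library expansion of step S1.1 by rotation, noise addition and mirroring might look like the following numpy sketch; the noise scale and the 0-255 clipping range are assumed values, not specified in the text:

```python
import numpy as np

def expand_sample(img, seed=0):
    # Expand one picture into rotated, mirrored and noise-added variants,
    # mirroring step S1.1; noise scale and clip range are assumptions
    rng = np.random.default_rng(seed)
    variants = [np.rot90(img, k) for k in (1, 2, 3)]   # 90/180/270 degrees
    variants.append(np.fliplr(img))                    # horizontal mirror
    noisy = img + rng.normal(scale=5.0, size=img.shape)
    variants.append(np.clip(noisy, 0, 255))            # additive noise
    return variants

img = np.arange(64, dtype=float).reshape(8, 8)         # stand-in "picture"
variants = expand_sample(img)
```

Each source picture thus yields five extra samples before the GAN-based fusion of step S1.2 further enlarges the defect set.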
Aiming at the problem of unobvious characteristics of the defect insulator, the detection of the target by improving the Yolov5s network is proposed: by adding an SE attention mechanism to the neck network header, the network is enabled to pay more attention to target features, and the feature extraction capability of the network is improved, as shown in figure 6.
Assuming the feature map u obtained by the convolution operation has dimensions [H, W, C], the SE module is formulated as follows:

z_c = F_sq(u_c) = (1/(H×W)) Σ_{i=1..H} Σ_{j=1..W} u_c(i, j)  (1)

s = F_ex(z, W) = σ[W_2 δ[W_1 z]]  (2)

F_scale(u_c, s_c) = s_c · u_c  (3)

in formula (1), z_c represents the result of global average pooling, which converts the two-dimensional feature information into a real number that to some extent has a global receptive field; F_sq represents the global-information embedding (Squeeze) operation, and u_c represents one channel of u;

in formula (2), F_ex represents the adaptive recalibration (Excitation) operation, z is the vector obtained in the squeeze step, σ is the Sigmoid function, δ is the ReLU function, and W_1, W_2 are linear-layer weights: W_1 is the dimension-reduction layer, whose output is activated by the ReLU function and used as the input of the dimension-increase layer W_2; after activation by the Sigmoid function, s ranges from 0 to 1 and represents the weight of each feature channel;

in formula (3), u_c represents a channel of u, s_c represents the weight of that channel, and F_scale represents the channel-wise scaling operation, which is equivalent to multiplying the values of each channel by its weight;
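The squeeze, excitation and scale operations of formulas (1) to (3) can be checked numerically with a small numpy sketch; the channel count, spatial size, and reduction ratio r below are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(U, W1, W2):
    # U: feature map of shape (C, H, W); W1 reduces C -> C/r, W2 restores C
    z = U.mean(axis=(1, 2))                   # (1) squeeze: global avg pooling
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0))   # (2) excitation: ReLU then Sigmoid
    return U * s[:, None, None], s            # (3) scale each channel by s_c

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels and reduction ratio
U = rng.normal(size=(C, 4, 4))
W1 = rng.normal(scale=0.5, size=(C // r, C))  # dimension-reduction layer
W2 = rng.normal(scale=0.5, size=(C, C // r))  # dimension-increase layer
V, s = se_block(U, W1, W2)
```

The Sigmoid guarantees each channel weight s_c lies strictly between 0 and 1, so the block reweights channels rather than zeroing or amplifying them unboundedly.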
For an embedded platform with limited resources, as shown in fig. 7, a lightweight convolution GSconv is introduced to replace the original convolution, so that the calculation amount of a model is reduced and the precision can be kept.
Further, for the damper detection:
the method for detecting the damper based on the deep learning algorithm comprises the following steps:
s2.1, collecting video data containing a damper, and taking pictures frame by frame to generate a data set, as shown in FIG. 8; meanwhile, searching related images through a network, and adding the images into a data set;
s2.2, preprocessing the collected data, and expanding the data to generate similar images;
s2.3, marking the damper in the acquired data set to obtain coordinates of a candidate frame containing the target object;
s2.4, marking the preprocessed data set, inputting the marked data set into a MobileNet V3 network, and extracting feature graphs of three dimensions after network processing;
s2.5, inputting the feature map into a YoloV5 module for training; inputting the trained optimal neural network model parameters into an edge terminal;
s2.6, detecting the damper on the power transmission line by the edge terminal.
Further, there are monitoring for line foreign matter:
the plastic bag detection method based on the deep learning algorithm comprises the following steps:
S3.1, collecting video data containing plastic bags, and taking pictures frame by frame to generate a data set, as shown in FIG. 9; meanwhile, searching related images through a network, and adding the images into a data set;
s3.2, preprocessing the collected data, and expanding the data to generate similar images;
s3.3, marking the plastic bags in the collected data set to obtain coordinates of candidate frames containing the target object;
s3.4, marking the preprocessed data set, inputting the marked data set into a MobileNet V3 network, and extracting feature graphs of three dimensions after network processing;
s3.5, inputting the feature map into a YoloV5 module for training; inputting the trained optimal neural network model parameters into an edge terminal;
s3.6, detecting the plastic bag on the power transmission line by the edge terminal.
Further, the detection of foreign matter for the high-voltage tower is as follows:
when shooting high-voltage tower images, the shooting environment is complex: the detection targets overlap the background heavily, and there are many interference factors such as grass, trees and houses. When an image is acquired, the shooting distance of the unmanned aerial vehicle, the small proportion of the bird-nest target in the image and dim light in the background can all interfere with image detection.
S4.1, extracting a pole tower image candidate region by a selective search method based on a deep learning algorithm;
S4.2, based on the CaffeNet network model, the sample and the network parameters are adjusted and optimized through pre-training and retraining, and finally the existence of the bird nest in the image is intelligently identified and accurately positioned.
Wherein the images used are all from unmanned aerial vehicle shooting, as shown in fig. 10. The algorithm realizes real-time acquisition, real-time monitoring and real-time identification of images. The method can greatly lighten the inspection burden of workers and improve the maintenance efficiency of the power transmission line.
Further, the insulator and the tower are connected and detected:
the method for detecting the aerial foreign matter image in real time based on the deep learning comprises the following steps:
s5.1, establishing 4 aerial foreign matter image libraries according to the foreign matter type differences;
s5.2, respectively manufacturing data sets according to 4 aerial foreign matter image libraries;
s5.3, constructing and training a foreign body model of the power transmission line;
s5.4, constructing and training a damper foreign body model;
s5.5, constructing and training a grading ring clamp foreign matter model;
s5.6, constructing and training a tower foreign body model;
s5.7, fine tuning the established 4 aerial foreign matter image library models by adopting finetune;
s5.8, solidifying the fine-tuned 4 aerial foreign matter image library models;
s5.9, inputting the image to be detected into the model after 4 solidified foreign matters are detected, and obtaining coordinates and confidence coefficients of the target rectangular frames of 4 detection results after forward propagation of the network.
Example 2
As shown in fig. 11-12, in the algorithm process of embodiment 1, the present embodiment introduces a lightweight intelligent algorithm, which specifically includes the following steps:
step 1, continuously acquiring acquired image data through a camera and transmitting the acquired image data to a processing end;
step 2, the processing end performs image recognition and extraction on the image data, and extracts main information in the image;
step 3, distinguishing and dividing the target object based on the vector formed by the main information, reducing the two-dimensional image into a one-dimensional vector, and dividing, classifying and identifying the obtained different target images through a support vector machine to obtain the target identification of the power transmission tower;
the method for dividing, classifying and identifying the different target images specifically comprises the following steps: using SVM algorithm to test the data points, judging positive and negative with normal vector point multiplication, identifying PCA characteristics of the power transmission tower, identifying other targets, and classifying;
in terms of accuracy and performance, a balanced operating point should be found: as the number of principal-component features increases, the recognition result becomes more and more accurate, but the running speed becomes slower and slower;
in order to extract the main information in an image, PCA is adopted to perform dimension-reduction processing on the image and remove information irrelevant to the original image, retaining 7 characteristic components, which are then classified with the SVM; with 7 characteristic components selected for classification and identification, the accuracy can reach 90%. To further improve recognition accuracy, further training can be performed with the large number of aerial images collected after the unmanned aerial vehicle detection system is put into operation;
based on Hough straight line detection in a target, a straight line identification result is obtained, a straight line of the identification result is connected with a power transmission tower, and if the straight line can be successfully connected with the power transmission tower, the straight line is taken as an image of a power line; if the connection fails, the line is determined as an interference target to be removed, and power line image data is obtained;
and 4, performing risk identification intelligent detection based on the identified power target, wherein the risk identification intelligent detection comprises one or more of insulator self-explosion and breakage detection, vibration damper detection, line foreign matter monitoring, high-voltage tower foreign matter detection and insulator and tower connection detection.
Preferably, the identification of the different target images further comprises a target image dataset and an identification model, as in embodiment 1.
Preferably, the deep learning training is the same as embodiment 1.
Preferably, the parameter step of trimming the entire multi-layer model is the same as in example 1.
Preferably, the use of the recognition model further includes model compression of the recognition model, which can be performed using design of a fine model, model clipping, or sparsification of a kernel; the model clipping, i.e. searching an effective judging means to judge the importance of the parameters, clipping the non-important connection or filter to reduce the redundancy of the model, and dividing the model into a regular mode and an irregular mode.
Specifically, unstructured pruning (weight pruning, vector pruning and kernel pruning) causes irregularity in the model structure, so these methods need special hardware design to support sparse operation; on the other hand, the pruning is finer-grained and the accuracy after pruning is higher;
structured pruning (convolution-kernel pruning, channel pruning and hierarchical pruning) yields a model that can be run simply by changing the number of convolution kernels and feature channels in the network, without special algorithm design. When the network model performs convolution and fully-connected calculations, many redundant parameters exist and some neuron activation values tend to 0; when such neurons are removed, the expression capacity of the original model is not affected, but the recognition speed of the model is accelerated. The output of each layer of the model is converted by global pooling into a vector of length c (the number of filters), so that for n images a matrix of size n × c is obtained; the responses of each filter are divided into m bins, the probability of each bin is counted, and the entropy value of the filter is calculated; the entropy value is used to judge the importance of the filter, and unimportant filters are cut. The entropy value of the j-th feature map is calculated as follows:

H_j = -Σ_{i=1..m} p_i log(p_i)  (4)

in formula (4), H_j represents the entropy value of the j-th feature map, m represents the number of bins, and p_i represents the probability of the i-th bin;
after one layer is cut, part of the performance is restored through a few iterations; after all layers are cut, the overall performance is restored through more iterations. Through this clipping, the number of parameters is greatly reduced, and the recognition efficiency of the model can be improved.
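The entropy-based filter importance described above can be sketched in numpy: pool each filter's responses over n images, histogram them into m bins, and rank filters by entropy. The bin count, activation data, and "dead filter" setup are illustrative:

```python
import numpy as np

def filter_entropy(pooled, m=10):
    # pooled: one globally averaged response per image for a single filter.
    # Histogram into m bins, then H_j = -sum_i p_i * log(p_i)  (formula style)
    hist, _ = np.histogram(pooled, bins=m)
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 * log 0 := 0
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
n, c = 200, 6                          # n images, c filters in the layer
acts = rng.normal(size=(n, c))         # stand-in globally pooled activations
acts[:, 0] = 0.0                       # a "dead" filter with constant output
entropies = np.array([filter_entropy(acts[:, j]) for j in range(c)])
# Cut the least informative filter (lowest entropy), keep the rest
keep = np.argsort(entropies)[1:]
```

A filter whose output is nearly constant lands in a single histogram bin, its entropy collapses to zero, and it is the first candidate for clipping, after which a few fine-tuning iterations would recover the lost accuracy.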
Preferably, the insulator self-explosion and breakage detection comprises the following steps:
s1.1, collecting insulator pictures obtained from a patrol video, expanding the pictures by rotating, noise adding and mirror image operation, and taking the expanded insulator pictures as a real insulator sample library;
s1.2, preprocessing pictures in the obtained real insulator sample library by using a generative adversarial network (GAN), fusing defective insulator pictures with various complex backgrounds, and expanding the defective insulator data set to obtain an insulator sample expansion library; dividing the data into a training set and a test set;
s1.3, marking the training set selected in step S1.2 by using the marking tool labelImg, and storing the information of the insulator pictures after marking is completed to obtain sample data;
s1.4, improving a target detection network yolov5, and performing iterative training on the improved target detection network yolov5 by using the obtained sample data to obtain optimal target detection network weight data and a reference network of a test set;
Wherein, aiming at the problem that the features of defective insulators are not obvious, an improved Yolov5s network is proposed to detect the target: by adding an SE attention mechanism to the neck network head, the network pays more attention to target features, and the feature extraction capability of the network is improved; for an embedded platform with limited resources, a lightweight convolution GSconv is introduced to replace the original convolution, reducing the calculation amount of the model while maintaining precision;
s1.5, processing the test set by using the reference network of the obtained test set to obtain an insulator defect detection result.
Preferably, the damper detection includes the steps of:
s2.1, collecting video data containing a damper in the inspection video, and taking pictures frame by frame to generate a damper data set; supplementing the vibration damper image data from a network source;
s2.2, preprocessing the collected damper image data, and expanding the data to generate a similar image;
s2.3, marking the damper in the collected data set to obtain coordinates of a candidate frame containing the target object;
s2.4, labeling the preprocessed data set, inputting the labeled data set into a MobileNet V3 network for processing, and extracting feature graphs with three dimensions; the three dimensions are R, G, B dimensions in the image, respectively;
S2.5, inputting the feature map into a YoloV5 module for training; inputting the trained neural network model parameters into an edge terminal;
s2.6, detecting the damper on the power transmission line by the edge terminal.
Preferably, the line foreign matter monitoring comprises the steps of:
s3.1, collecting video data containing plastic bags in the inspection video, and taking pictures frame by frame to generate a data set; then adding the data set through the plastic bag image of the network source;
s3.2, preprocessing the collected data, and expanding the data to generate similar images;
s3.3, marking the plastic bags in the collected data set to obtain coordinates of candidate frames containing the target object;
s3.4, labeling the preprocessed data set, inputting the labeled data set into a MobileNet V3 network for processing, and extracting feature graphs with three dimensions;
s3.5, inputting the feature map into a YoloV5 module for training;
s3.6, inputting the trained optimal neural network model parameters into the edge terminal;
s3.7, detecting the plastic bags on the power transmission line by the edge terminal.
Preferably, the detection of the high-pressure lever tower foreign matter comprises the steps of:
s4.1, extracting a pole tower image candidate region by a selective search method based on a deep learning algorithm;
S4.2, based on the Unet network model, adjusting and optimizing the sample and the network parameters through pre-training and retraining.
The network structure diagram of the Unet is shown in fig. 11, in which each color-filled box represents a multi-channel feature map; the number at the top of a box gives the channel count, and the number at the bottom left gives the image size; blank boxes correspond to copies of feature maps. The arrows represent different operations: one kind of arrow represents a 3 × 3 convolution operation, and since the stride is 1 and the padding policy is 'valid', the size of the feature map decreases by 2 after each such operation; another kind of arrow represents the copy-and-crop operation on a certain layer's feature map. Because the feature map acquired by each convolution layer in the Unet network is connected to the corresponding up-sampling layer, and the encoder feature map at a level is larger than the up-sampled map it is concatenated with, some cropping is needed to make use of the shallow features. At the last layer, a convolution operation with kernel size 1 × 1 maps each 64-dimensional feature vector to the output layer. Finally, the presence of high-voltage-tower foreign matter in the image is intelligently identified and accurately located.
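The stated size arithmetic (each 3 × 3 'valid' convolution shrinks the feature map by 2, each 2 × 2 pooling halves it) can be traced through the contracting path. The 572-pixel input and encoder depth of 4 follow the classic Unet layout and are assumptions here, not figures from the patent:

```python
def contracting_path(size, depth=4):
    # Each level: two 3x3 'valid' convolutions (size -2 each, stride 1),
    # then a 2x2 max pool (size halves); the bottleneck adds two more convs
    sizes = []
    for _ in range(depth):
        size = size - 2 - 2
        sizes.append(size)
        size //= 2
    return sizes, size - 2 - 2

sizes, bottleneck = contracting_path(572)
```

The per-level sizes (568, 280, 136, 64, bottleneck 28) explain why the copy-and-crop step must crop: the stored encoder maps are larger than the decoder maps they are concatenated with.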
As shown in fig. 12, after the high-voltage-tower foreign matter is segmented using the Unet, three general evaluation indexes based on the confusion matrix are used: overall classification accuracy (Overall Accuracy, OA), F1 score (F1-Score) and mean intersection-over-union (Mean Intersection over Union, MIoU);
the confusion matrix is shown in table 1, and the confusion matrix shown in the table is often used for visually evaluating the performance of the supervised learning algorithm and is also the basis of various precision evaluation indexes.
TABLE 1 confusion matrix
                     Predicted positive      Predicted negative
Actual positive      TP (true positive)      FN (false negative)
Actual negative      FP (false positive)     TN (true negative)
The overall classification accuracy OA, the F1 Score F1-Score and the average intersection ratio MIoU index can be calculated through the confusion matrix, and the OA calculation is shown in the following formula (5), and reflects the proportion of the number of correctly classified samples to the number of all samples:
OA = (TP + TN)/(TP + TN + FP + FN) (5)
in the formula (5), TP represents true positive, TN represents true negative, FP represents false positive, and FN represents false negative;
F1-Score is calculated as shown in formula (6) below; it reflects the ability to identify and distinguish positive and negative samples, is the harmonic mean of the model's precision and recall, and takes values in [0, 1]:
F1-Score = 2 × Precision × Recall/(Precision + Recall) (6)
In formula (6),
Precision = TP/(TP + FP)
is called the precision rate; among the samples the model predicts as positive examples, it represents the proportion of pixels that are actually positive, and thus evaluates the accuracy of the predicted positive-example pixels;
Recall = TP/(TP + FN)
is called the recall rate; taking the real samples as the basis of judgment, it represents the proportion of correctly predicted positive examples among all actual positive-example samples;
MIoU calculates, for each class, the ratio of the intersection to the union of the predicted result and the ground truth, and then averages over all classes, reflecting the segmentation quality of the model, as shown in formula (7) below:
MIoU = (1/k) × Σ_{i=1}^{k} IoU_i, IoU_i = TP_i/(TP_i + FP_i + FN_i) (7)
In formula (7), IoU_i
is called the intersection-over-union and represents, for class i, the ratio of the intersection to the union of the model's predicted result and the ground truth; k is the number of classes.
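The three indexes above follow directly from a confusion matrix. The sketch below illustrates the computation; the function name, the helper structure and the example matrix are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Compute OA, per-class F1, and MIoU from a confusion matrix cm,
    where cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    oa = np.trace(cm) / cm.sum()                         # formula (5)

    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp                             # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp                             # belongs to the class, but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # formula (6), per class

    iou = tp / (tp + fp + fn)                            # per-class intersection-over-union
    miou = iou.mean()                                    # formula (7)
    return oa, f1, miou

# binary example: rows = actual {positive, negative}, cols = predicted
cm = [[50, 10],   # TP = 50, FN = 10
      [5, 35]]    # FP = 5,  TN = 35
oa, f1, miou = metrics_from_confusion(cm)
```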
Preferably, the insulator-tower connection detection includes the steps of:
s5.1, establishing a plurality of aerial foreign matter image libraries according to the foreign matter type differences;
s5.2, respectively manufacturing data sets according to the aerial foreign matter image library;
s5.3, constructing and training a foreign body model of the power transmission line;
s5.4, constructing and training a damper foreign body model;
s5.5, constructing and training a grading ring clamp foreign matter model;
s5.6, constructing and training a tower foreign body model;
s5.7, fine tuning the established aerial foreign matter image library models by model fine-tuning (finetune);
s5.8, solidifying the fine-tuned aerial foreign matter image library model;
s5.9, inputting the image to be detected into a solidified aerial foreign matter image library model for detection;
And S5.10, after forward transmission through a network, obtaining coordinates and confidence of a target rectangular frame of a detection result of the corresponding aerial foreign matter image library.
Example 3
As shown in fig. 13 to 14, based on the algorithms of embodiment 1 and embodiment 2, this embodiment provides an exemplary insulator self-explosion and breakage detection method in which the data set is produced from aerial data.
The data set is produced from the aerial data. Since most current artificial intelligence belongs to supervised learning, data in the field of target recognition generally must be annotated before it can be used, so data annotation occupies an important position in the artificial-intelligence industry chain.
To obtain a well-performing model with good generalization ability, the data set generally needs sufficient samples. In practical applications, problems of sample quantity and sample quality are common; simple operations on the input image such as translation, scaling, colour change, cropping and Gaussian blur do not affect the category of the image and can alleviate these problems well. A manually enhanced sample data set makes the training of the model more effective.
Common data enhancement techniques are:
(1) Flipping: data flipping includes horizontal flipping, vertical flipping and the like of the image, and is a common data enhancement technique;
(2) And (3) rotation: performing multi-angle rotation operation on the data;
(3) Scaling: adjusting the size of the image;
(4) Translation: image translation adds a specified offset to the coordinates of all pixels of the image; the translated image preserves the complete image information;
(5) Adding noise: noise interference is added to the image, so that the diversity of the image can be enhanced;
(6) Color transformation: and changing the pixel value of the image, and adjusting the contrast and brightness of the image.
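The enhancement operations listed above can be sketched with basic NumPy array manipulation; everything below (function name, shift amounts, noise level) is an illustrative assumption, not code from the patent:

```python
import numpy as np

def augment(img, rng):
    """Generate simple label-preserving variants of an HxWxC uint8 image:
    flips, 90-degree rotation, translation, and additive Gaussian noise."""
    h, w = img.shape[:2]
    out = {
        "hflip": img[:, ::-1],    # horizontal flip
        "vflip": img[::-1, :],    # vertical flip
        "rot90": np.rot90(img),   # 90-degree rotation
    }
    # translation: shift down by dy and right by dx, padding with zeros,
    # so all pixel coordinates receive the specified offset
    dy, dx = 5, 8
    shifted = np.zeros_like(img)
    shifted[dy:, dx:] = img[:h - dy, :w - dx]
    out["translate"] = shifted
    # additive Gaussian noise, clipped back to the valid pixel range
    noise = rng.normal(0, 10, img.shape)
    out["noise"] = np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
variants = augment(img, rng)
```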
Using the above data enhancement techniques, the 1500 manually annotated pictures are augmented by image flipping, 90-degree rotation, translation, noise addition and other operations.
The augmented data set was input into the lightweight YOLOv5s-SE for training; the parameters of the environment configuration used in the training process are shown in Table 2.
Table 2 environmental configuration related parameters
7500 images are input into the network for training, with the ratio of the training set to the test set set to 8:2. The initial learning rate is set to 0.001 and is reduced to one tenth of its value every 50 epochs; the training momentum is 0.9, the training batch size (batch_size) is 16, and 500 epochs are trained. Transfer learning is adopted by starting from the yolov5s.pt network model pre-trained on the COCO data set, which accelerates the training process and improves convergence. In fig. 13, panel (a) shows the training detection error of the network, panel (b) the target training classification error, panel (c) the target test detection error, and panel (d) the test classification error.
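The step-decay schedule just described (initial rate 0.001, divided by 10 every 50 epochs) can be written as a one-line function; the function name and the epoch samples are illustrative assumptions:

```python
def step_decay_lr(epoch: int, base_lr: float = 1e-3,
                  drop: float = 0.1, every: int = 50) -> float:
    """Learning rate for a given epoch under the step-decay schedule:
    starts at base_lr and is multiplied by `drop` every `every` epochs."""
    return base_lr * drop ** (epoch // every)

# epochs 0-49 train at 1e-3, epochs 50-99 at 1e-4, and so on over 500 epochs
schedule = [step_decay_lr(e) for e in (0, 49, 50, 100, 499)]
```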
The average precision and frame rate of the models are shown in Table 3; it can be seen that the lightweight YOLOv5s-SE achieves the best combination of average precision and frame rate.
Table 3 comparison of map and fps values for the model
The detection results of the lightweight YOLOv5s-SE are shown in FIG. 14.
As shown in fig. 15, the present embodiment further provides an edge intelligent detection platform device, which includes a processor, a memory, and a computer program stored in the memory and running on the processor.
The processor comprises one or more processing cores and is connected to the memory through a bus; the memory is used to store program instructions, and when the processor executes the program instructions in the memory it implements the steps of the edge intelligent detection method for power grid inspection and monitoring.
Alternatively, the memory may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition, the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the intelligent edge detection method for power grid inspection and monitoring when being executed by a processor.
Optionally, the present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the edge intelligent detection method of the aspects described above for power grid inspection and monitoring.
It will be appreciated by those of ordinary skill in the art that the processes for implementing all or part of the steps of the above embodiments may be implemented by hardware, or may be implemented by a program for instructing the relevant hardware, and the program may be stored in a computer readable storage medium, where the above storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. An intelligent edge detection method for power grid inspection and monitoring is characterized by comprising the following steps:
step 1, continuously acquiring acquired image data through a camera and transmitting the acquired image data to a processing end;
step 2, the processing end performs image recognition and extraction on the image data, and extracts main information data in the image;
step 3, distinguishing and dividing the target object based on the vector formed by the main information data, reducing the two-dimensional image into a one-dimensional vector, and dividing, classifying and identifying the obtained different target images through a support vector machine to obtain electric power target identification; the process of dividing, classifying and identifying comprises an algorithm light-weight process;
further, the identification of the different target images also includes a target image dataset and an identification model; the preparation of the target image dataset comprises the steps of datamation of image information and attaching a corresponding target information label to each image;
meanwhile, the use of the recognition model includes: reducing model deviation of a complex model, improving the accuracy of statistical estimation by using big data, and solving a large-scale optimization problem by using an expandable gradient descent algorithm; the method comprises the steps that an algorithm for extracting local features of a target image data set is utilized to be integrated into a neural network, the features of relevance of local data in a target image are obtained, a data file for training is formed, deep learning training is conducted by using a convolutional neural network, the convolutional network comprises a convolutional layer, a pooling layer and a full-connection layer, the convolutional layer is matched with the pooling layer to form a plurality of convolutional groups, the features are extracted layer by layer, and finally classification is completed through a plurality of full-connection layers;
Step 4, performing risk identification intelligent detection based on the identified power target; the risk identification intelligent detection comprises one or more of insulator self-explosion and damage detection, damper detection, line foreign matter monitoring, high-voltage tower foreign matter detection and insulator and tower connection detection.
2. The method for intelligent detection of edges for inspection and monitoring of electrical network according to claim 1, wherein in the step 3, the process of dividing, classifying and identifying different target images specifically comprises:
using SVM algorithm to test the data points, judging positive and negative with normal vector point multiplication, identifying PCA characteristics of the power transmission tower, identifying other targets, and classifying; wherein identifying other targets includes:
collecting various target pictures obtained from the inspection video according to the category, expanding the target pictures by rotating, adding noise and mirroring, and taking the expanded target pictures as a real target sample library;
preprocessing pictures in an obtained real target sample library by using a generated countermeasure network gan, fusing a defect target picture with various complex backgrounds, expanding a defect target data set to obtain a target sample expansion library, and dividing the data into a training set and a testing set;
Labeling the selected training set by using a labeling tool labellmg, and storing information of the target picture after labeling is completed to obtain sample data;
improving the target detection network yolov5, and performing iterative training on the improved target detection network yolov5 by using the obtained sample data to obtain optimal target detection network weight data and a reference network of a test set;
and processing the selected test set by using the reference network of the obtained test set to obtain a target defect detection result.
3. The edge intelligent detection method for power grid inspection and monitoring according to claim 2, wherein in the step 3, aiming at the problem that the features of defect targets are not obvious, an improvement to the YOLOv5s network is proposed for detecting the target, specifically:
an SE-module attention mechanism is added to the head of the neck network, so that the network pays more attention to target features and its feature extraction capability is improved; assuming that the matrix dimension obtained by the convolution operation is [H, W, C], the formulas of the SE module are as follows:
z_c = F_sq(u_c) = (1/(H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j) (1)
s = F_ex(z, W) = σ[W_2 δ(W_1 z)] (2)
F_scale(u_c, s_c) = s_c · u_c (3)
In formula (1), z_c represents the result of global average pooling, which to some extent converts the two-dimensional feature information of a channel into a real number with a global receptive field; F_sq represents the squeeze operation that embeds the global information, and u_c represents one channel of u;
In formula (2), F_ex represents the adaptive recalibration operation, and z is the global information embedded in the previous step; σ is the Sigmoid function and δ is the ReLU function; W_1 and W_2 denote linear layers, where W_1 is the dimension-reduction layer whose output, after activation by the ReLU function, serves as the input of the dimension-expansion layer W_2; the Sigmoid activation then yields s in the range 0-1, which represents the weight of each feature channel;
In formula (3), u_c represents a channel of u and s_c the weight of that channel; the F_scale operation is equivalent to multiplying the value of each channel by its weight;
In addition, for embedded platforms with limited resources, a lightweight GSConv convolution structure is introduced to replace the original convolution, which reduces the computation of the model while maintaining accuracy.
4. The method for intelligent detection of edges for inspection and monitoring of electric network according to claim 3, wherein in the step 3, in the use of the identification model, the deep learning training includes bottom-up unsupervised learning, in which the parameters of each layer are trained layer by layer with uncalibrated (or calibrated) data to learn features, and top-down supervised learning, in which labelled data are used for training, errors are transmitted from the top down and the network is fine-tuned, further adjusting the parameters of the whole multi-layer model on the basis of the layer parameters already obtained;
wherein fine tuning the parameters of the entire multi-layer model further comprises the steps of: firstly, the first layer is trained with uncalibrated data, and the parameters of the first layer are learned first; owing to the capacity limitation and the sparsity constraint of the model, the resulting model can learn the structure of the data and thus obtain features with greater representational power than the input; after layer n-1 has been learned, its output is used as the input of layer n, and layer n is trained, so that the parameters of each layer are obtained in turn.
5. The method according to claim 4, wherein in the step 3, the identification model is further subjected to model compression, and the model compression can be performed by designing a compact model, by model pruning, or by sparsifying the convolution kernels;
the model pruning adopts structured pruning: convolution-kernel pruning, channel pruning and layer pruning; the resulting model can be run simply by changing the number of convolution kernels and feature channels in the network; the method comprises the following steps:
the output of each layer of the model is converted by global pooling into a vector whose length equals the number of filters c, so that for n images an n × c matrix is obtained; the responses of each filter are divided into m bins, the probability of each bin is counted, and the entropy value is then calculated; the entropy value is used to judge the importance of each filter, and the unimportant filters are then pruned;
The entropy value of the j-th feature map is calculated as follows:
H_j = −Σ_{i=1}^{m} p_i log(p_i) (4)
In formula (4), H_j represents the entropy value of the j-th feature map, m represents the number of bins, and p_i represents the probability of the i-th bin;
after one layer is pruned, part of the performance is recovered through a few iterations; after all layers have been pruned, the overall performance is recovered through more iterations.
6. The method for intelligent detection of edges for inspection and monitoring of electric network according to claim 1, wherein in the step 4, the insulator self-explosion and breakage detection comprises the following steps:
s1.1, collecting insulator pictures obtained from a patrol video, expanding the pictures by rotating, noise adding and mirror image operation, and taking the expanded insulator pictures as a real insulator sample library;
s1.2, preprocessing pictures in an obtained real insulator sample library by using a generated countermeasure network gan, fusing defective insulator pictures with various complex backgrounds, expanding a defective insulator data set to obtain an insulator sample expansion library, and dividing the data into a training set and a test set;
s1.3, marking the training set selected in the step S1.2 by using a marking tool labellmg, and storing information of the insulator picture after marking is completed to obtain sample data;
S1.4, improving a target detection network yolov5, and performing iterative training on the improved target detection network yolov5 by using the obtained sample data to obtain optimal target detection network weight data and a reference network of a test set;
s1.5, processing the test set by using the reference network of the obtained test set to obtain an insulator defect detection result.
7. The method for intelligent detection of edges for inspection and monitoring of electrical network according to claim 1, wherein in the step 4, the damper detection comprises the steps of:
s2.1, collecting video data containing a damper in the inspection video, and taking pictures frame by frame to generate a damper data set; supplementing the vibration damper image data from a network source;
s2.2, preprocessing the collected damper image data, and expanding the data to generate a similar image;
s2.3, marking the damper in the collected data set to obtain coordinates of a candidate frame containing the target object;
s2.4, labeling the preprocessed data set, inputting the labeled data set into a MobileNet V3 network for processing, and extracting a R, G, B-dimension three-dimension feature map;
s2.5, inputting the feature map into a YoloV5 module for training; inputting the trained neural network model parameters into an edge terminal;
S2.6, detecting the damper on the power transmission line by the edge terminal.
8. The method for intelligent detection of edges for inspection and monitoring of electrical network according to claim 1, wherein in the step 4, the line foreign matter monitoring comprises the steps of:
s3.1, collecting video data containing plastic bags in the inspection video, and taking pictures frame by frame to generate a data set; then adding the data set through the plastic bag image of the network source;
s3.2, preprocessing the collected data, and expanding the data to generate similar images;
s3.3, marking the plastic bags in the collected data set to obtain coordinates of candidate frames containing the target object;
s3.4, labeling the preprocessed data set, inputting the labeled data set into a MobileNet V3 network for processing, and extracting feature graphs with three dimensions;
s3.5, inputting the feature map into a YoloV5 module for training;
s3.6, inputting the trained optimal neural network model parameters into the edge terminal;
s3.7, detecting the plastic bags on the power transmission line by the edge terminal.
9. The method for intelligent detection of edges for inspection and monitoring of electric network according to claim 1, wherein in the step 4, the detection of the foreign matters in the high voltage tower comprises the steps of:
S4.1, extracting a pole tower image candidate region by a selective search method based on a deep learning algorithm;
s4.2, based on the Unet network model, adjusting and optimizing the sample and the network parameters through pre-training and retraining;
after the foreign bodies on the high-voltage tower are segmented by using Unet, they are evaluated on the basis of the confusion matrix using the overall classification accuracy, the F1 score and the mean intersection-over-union;
the overall classification accuracy OA, the F1 score F1-Score and the mean intersection-over-union MIoU can all be calculated from the confusion matrix; OA is calculated as shown in formula (5) below and reflects the proportion of correctly classified samples among all samples:
OA = (TP + TN)/(TP + TN + FP + FN) (5)
in the formula (5), TP represents true positive, TN represents true negative, FP represents false positive, and FN represents false negative;
F1-Score is calculated as shown in formula (6) below; it reflects the ability to identify and distinguish positive and negative samples, is the harmonic mean of the model's precision and recall, and takes values in [0, 1]:
F1-Score = 2 × Precision × Recall/(Precision + Recall) (6)
In formula (6),
Precision = TP/(TP + FP)
is called the precision rate; among the samples the model predicts as positive examples, it represents the proportion of pixels that are actually positive, and thus evaluates the accuracy of the predicted positive-example pixels;
Recall = TP/(TP + FN)
is called the recall rate; taking the real samples as the basis of judgment, it represents the proportion of correctly predicted positive examples among all actual positive-example samples;
MIoU calculates, for each class, the ratio of the intersection to the union of the predicted result and the ground truth, and then averages over all classes, reflecting the segmentation quality of the model, as shown in formula (7) below:
MIoU = (1/k) × Σ_{i=1}^{k} IoU_i, IoU_i = TP_i/(TP_i + FP_i + FN_i) (7)
In formula (7), IoU_i
is called the intersection-over-union and represents, for class i, the ratio of the intersection to the union of the model's predicted result and the ground truth; k is the number of classes.
10. The method for intelligent detection of edges for inspection and monitoring of electric network according to claim 1, wherein in the step 4, the insulator-tower connection detection comprises the following steps:
s5.1, establishing a plurality of aerial foreign matter image libraries according to the foreign matter type differences;
s5.2, respectively manufacturing data sets according to the aerial foreign matter image library;
s5.3, constructing and training a foreign body model of the power transmission line;
s5.4, constructing and training a damper foreign body model;
s5.5, constructing and training a grading ring clamp foreign matter model;
s5.6, constructing and training a tower foreign body model;
s5.7, fine tuning the established aerial foreign matter image library models by model fine-tuning (finetune);
s5.8, solidifying the fine-tuned aerial foreign matter image library model;
s5.9, inputting the image to be detected into a solidified aerial foreign matter image library model for detection;
and S5.10, after forward transmission through a network, obtaining coordinates and confidence of a target rectangular frame of a detection result of the corresponding aerial foreign matter image library.
CN202310211087.6A 2023-03-07 2023-03-07 Edge intelligent detection method for power grid inspection and monitoring Pending CN116385958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310211087.6A CN116385958A (en) 2023-03-07 2023-03-07 Edge intelligent detection method for power grid inspection and monitoring


Publications (1)

Publication Number Publication Date
CN116385958A true CN116385958A (en) 2023-07-04

Family

ID=86977837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310211087.6A Pending CN116385958A (en) 2023-03-07 2023-03-07 Edge intelligent detection method for power grid inspection and monitoring

Country Status (1)

Country Link
CN (1) CN116385958A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958998A (en) * 2023-09-20 2023-10-27 四川泓宝润业工程技术有限公司 Digital instrument reading identification method based on deep learning
CN116958998B (en) * 2023-09-20 2023-12-26 四川泓宝润业工程技术有限公司 Digital instrument reading identification method based on deep learning
CN117456389A (en) * 2023-11-07 2024-01-26 西安电子科技大学 Improved unmanned aerial vehicle aerial image dense and small target identification method, system, equipment and medium based on YOLOv5s
CN117406778A (en) * 2023-11-16 2024-01-16 广东工贸职业技术学院 Unmanned plane laser radar ground-imitating flight method based on geospatial data
CN117541555A (en) * 2023-11-16 2024-02-09 广州市公路实业发展有限公司 Road pavement disease detection method and system
CN117406778B (en) * 2023-11-16 2024-03-12 广东工贸职业技术学院 Unmanned plane laser radar ground-imitating flight method based on geospatial data

Similar Documents

Publication Publication Date Title
CN107609601B (en) Ship target identification method based on multilayer convolutional neural network
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN111723654B (en) High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN116385958A (en) Edge intelligent detection method for power grid inspection and monitoring
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN110084165A (en) The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN109919223B (en) Target detection method and device based on deep neural network
CN113408584A (en) RGB-D multi-modal feature fusion 3D target detection method
CN115861619A (en) Airborne LiDAR (light detection and ranging) urban point cloud semantic segmentation method and system of recursive residual double-attention kernel point convolution network
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN113011308A (en) Pedestrian detection method introducing attention mechanism
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN110321867B (en) Shielded target detection method based on component constraint network
CN116503399A (en) Insulator pollution flashover detection method based on YOLO-AFPS
CN115393690A (en) Light neural network air-to-ground observation multi-target identification method
CN112084897A (en) Rapid traffic large-scene vehicle target detection method of GS-SSD
CN115100497A (en) Robot-based method, device, equipment and medium for routing inspection of abnormal objects in channel
CN112837281B (en) Pin defect identification method, device and equipment based on cascade convolution neural network
CN117710841A (en) Small target detection method and device for aerial image of unmanned aerial vehicle
CN116630828B (en) Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN113327253A (en) Weak and small target detection method based on satellite-borne infrared remote sensing image
CN112418262A (en) Vehicle re-identification method, client and system
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN116740516A (en) Target detection method and system based on multi-scale fusion feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination