CN115147397A - X-ray image defect detection method for strain clamp of power transmission line

X-ray image defect detection method for strain clamp of power transmission line

Info

Publication number
CN115147397A
CN115147397A, CN202210875820.XA, CN202210875820A
Authority
CN
China
Prior art keywords
strain clamp
training
size
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210875820.XA
Other languages
Chinese (zh)
Inventor
邱志斌
李俊轩
卢祖文
周志彪
张润
吴子建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Original Assignee
Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University filed Critical Nanchang University
Priority to CN202210875820.XA
Publication of CN115147397A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 - Image enhancement or restoration
            • G06T 5/70 - Denoising; Smoothing
          • G06T 7/00 - Image analysis
            • G06T 7/0002 - Inspection of images, e.g. flaw detection
              • G06T 7/0004 - Industrial image inspection
                • G06T 7/0008 - Industrial image inspection checking presence/absence
          • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 - Special algorithmic details
              • G06T 2207/20081 - Training; Learning
              • G06T 2207/20084 - Artificial neural networks [ANN]
            • G06T 2207/30 - Subject of image; Context of image processing
              • G06T 2207/30108 - Industrial image inspection
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 - Computing arrangements based on biological models
            • G06N 3/02 - Neural networks
              • G06N 3/08 - Learning methods
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 - Arrangements for image or video recognition or understanding
            • G06V 10/70 - using pattern recognition or machine learning
              • G06V 10/764 - using classification, e.g. of video objects
              • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                • G06V 10/7715 - Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
                • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
                • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
                  • G06V 10/806 - Fusion of extracted features
              • G06V 10/82 - using neural networks
          • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
            • G06V 2201/07 - Target detection
    • H - ELECTRICITY
      • H02 - GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
        • H02G - INSTALLATION OF ELECTRIC CABLES OR LINES, OR OF COMBINED OPTICAL AND ELECTRIC CABLES OR LINES
          • H02G 7/00 - Overhead installations of electric lines or cables
            • H02G 7/02 - Devices for adjusting or maintaining mechanical tension, e.g. take-up device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention discloses a method for detecting defects in X-ray images of power transmission line strain clamps. The method comprises: constructing an X-ray image data set of power transmission line strain clamps, standardizing the data set and converting all pictures to RGB format; denoising the images, enhancing their contrast, and labeling the defect types in the data set with LabelImg; improving the original YOLOv3 model by replacing the effective feature layers of the original DarkNet-53 backbone with the feature extraction outputs of an EfficientNet network, which serve as the three inputs of the feature fusion network FPN; and training and testing the improved network on the strain clamp X-ray image data set following the idea of transfer learning. The method detects strain clamp defects with the improved YOLOv3 algorithm and provides a technical reference for the field of intelligent inspection of power equipment.

Description

X-ray image defect detection method for strain clamp of power transmission line
Technical Field
The invention belongs to the technical field of power transmission lines, and particularly relates to a method for detecting X-ray image defects of a strain clamp of a power transmission line.
Background
The strain clamp is an important electric power fitting in overhead transmission lines; to ensure the safe and stable operation of the power system, power operation and maintenance departments must overhaul strain clamps regularly. When staff carry out maintenance during a power outage, the internal structure of the power equipment is fairly complicated and the maintenance process is tedious, which not only costs considerable time but also consumes a large amount of manpower and material resources. Stricter standards and more efficient methods are therefore required to ensure the reliability and stability of power transmission equipment and thus the safe operation of the power system. As an important component of power fittings, strain clamps are in service in large numbers, yet there is a lack of suitable equipment and methods for inspecting their working condition, so once a strain clamp is installed on a line it is impossible to know whether it has developed defects in service.
Disclosure of Invention
To address the shortcoming that existing strain clamps cannot be inspected online in real time, the invention aims to provide an X-ray image defect detection method for power transmission line strain clamps, which detects the operating condition of strain clamps online in real time and provides a technical reference for intelligent inspection of power equipment.
To achieve this purpose, the invention adopts the following scheme. A method for detecting X-ray image defects of a power transmission line strain clamp comprises the following steps:
S1, constructing and preprocessing a strain clamp X-ray image data set: constructing a strain clamp X-ray image data set containing four defect types, namely groove under-pressure, flash, strand scattering and bending; standardizing the data set and converting all pictures to RGB format; filtering and denoising the noisy strain clamp X-ray images with the 3-D block-matching (BM3D) filtering algorithm; and enhancing the contrast of the denoised images with a homomorphic filtering algorithm;
S2, constructing an improved YOLOv3 strain clamp X-ray image defect detection model: replacing the effective feature layers of the original YOLOv3 backbone DarkNet-53 with the outputs of the three feature extraction layers Block3, Block5 and Block7 of the EfficientNet network, which serve as the three inputs of the feature fusion network FPN;
the specific replacement is as follows: the 52 × 52 × 256 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block3 of the EfficientNet network as output P1, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 52 × 52 and a 1 × 1 convolution adjusts the number of channels to 40, so that feature layer P1 has size 52 × 52 × 40; the 26 × 26 × 512 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block5 as output P2, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 26 × 26 and a 1 × 1 convolution adjusts the number of channels to 112, so that feature layer P2 has size 26 × 26 × 112; finally, the 13 × 13 × 1024 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block7 as output P3, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 13 × 13 and a 1 × 1 convolution adjusts the number of channels to 320, so that feature layer P3 has size 13 × 13 × 320;
the three feature layers P1, P2 and P3, of sizes 52 × 52 × 40, 26 × 26 × 112 and 13 × 13 × 320, are thus obtained and fed into the feature fusion network FPN for feature fusion;
S3, training the improved YOLOv3 model with the preprocessed images: dividing the data into a training set, a validation set and a test set, with 80% of the images used for training; inputting the training and validation sets into the improved YOLOv3 target detection algorithm and, following the idea of transfer learning, loading the weight W1 obtained by training the EfficientNet network on the Pascal VOC open data set into the improved YOLOv3 to assist training; during assisted training, first freezing the model backbone for 100 rounds of iterative training, then unfreezing it once the loss has essentially converged and training for another 100 rounds, for 200 rounds over the two stages; in the frozen stage, using mixup data augmentation, in which the defects of two different images are randomly mixed into a new training image;
S4, detecting defects in the test set images with the trained improved YOLOv3 model: 200 sets of weights are obtained from the transfer-learning-assisted training; the weight W2 with the minimum training loss is selected and loaded into the improved YOLOv3 model to detect the test set images.
Further, the parameters of the BM3D filtering algorithm and the homomorphic filtering algorithm used in step S1 must be set appropriately. The noise-reduction strength σ of the BM3D algorithm is very sensitive and needs careful tuning; the larger the similarity threshold Th, the stronger the filtering; the larger the maximum number of matched blocks Max_matched, the sparser the transform group and the stronger the filtering; the block size trades off matching quality against speed. In the homomorphic filtering algorithm, the sharpening coefficient C determines the brightness of the output image: the exposure is increased (the image is brightened) when C < 1 and reduced (the image is darkened) when C > 1.
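For illustration, the following is a minimal homomorphic filtering sketch in the spirit of the parameters discussed above. The Gaussian high-frequency-emphasis transfer function and the parameter names d0, gamma_l and gamma_h are assumptions (the patent only names the sharpening coefficient C), and the coefficient c here controls the filter slope rather than exactly reproducing the brightness behaviour described in the patent.

```python
import numpy as np
import cv2

def homomorphic_filter(gray, c=0.5, d0=30, gamma_l=0.5, gamma_h=2.0):
    """Illustrative homomorphic filtering for contrast enhancement of a grayscale image."""
    rows, cols = gray.shape
    # Work in the log domain so illumination (low freq.) and reflectance
    # (high freq.) become additive components.
    log_img = np.log1p(gray.astype(np.float64))
    fft = np.fft.fftshift(np.fft.fft2(log_img))

    # Gaussian-shaped high-frequency emphasis filter H(u, v).
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    d2 = U ** 2 + V ** 2
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / (d0 ** 2))) + gamma_l

    filtered = np.fft.ifft2(np.fft.ifftshift(fft * h))
    enhanced = np.expm1(np.real(filtered))
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage (hypothetical file name):
# enhanced = homomorphic_filter(cv2.imread("clamp.png", cv2.IMREAD_GRAYSCALE), c=0.5)
```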
Further, in step S2 the three feature layers output by the original YOLOv3 network have 256, 512 and 1024 channels. After the effective feature layers of the original YOLOv3 backbone DarkNet-53 are replaced by the outputs of the three feature extraction layers Block3, Block5 and Block7 of the EfficientNet network, such a large change in channel count leads to poor fitting during model training, so 1 × 1 convolutions are used to adjust the channel counts of the feature layers fed to the FPN to 40, 112 and 320.
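A minimal PyTorch sketch of the 1 × 1 channel adjustment described above; the module name and the batch normalization and activation placed after the convolution are assumptions, not the patent's exact layout.

```python
import torch
import torch.nn as nn

class ChannelAdjust(nn.Module):
    """Remap a feature map's channel count with a 1x1 convolution (illustrative)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

# Example: map an arbitrary 64-channel feature map to the 40 channels expected by FPN branch P1.
p1 = ChannelAdjust(in_channels=64, out_channels=40)(torch.randn(1, 64, 52, 52))  # -> (1, 40, 52, 52)
```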
Further, the validation set divided in step S3 is used to verify the training effect of the model: if the validation loss is still decreasing, training continues; otherwise training is stopped early. The weight finally obtained has the minimum loss value, performs best over the training rounds, and is the most suitable for detecting defects in the strain clamp X-ray images.
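A minimal sketch of the validation-based stopping rule implied here and in claim 7 (stop once the validation loss has not improved for several rounds); the patience value, class name and the `validate` helper are assumptions.

```python
class EarlyStopping:
    """Stop training when the validation loss has not improved for `patience` rounds."""
    def __init__(self, patience: int = 10, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_rounds = 0

    def step(self, val_loss: float) -> bool:
        """Return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_rounds = 0
        else:
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience

# stopper = EarlyStopping(patience=10)
# for epoch in range(200):
#     val_loss = validate(model)      # hypothetical validation routine
#     if stopper.step(val_loss):
#         break
```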
The invention has the following beneficial effects:
With the X-ray image defect detection method for power transmission line strain clamps provided by the invention, defects can be accurately and quickly identified and located simply by inputting an X-ray image of the strain clamp, and the method can serve as a reference for intelligent inspection of power equipment in other application fields of the power industry.
Drawings
FIG. 1 is a diagram of the steps of the method of the present invention.
FIG. 2 is a flow chart of the method of the present invention.
FIG. 3 is a plan view of a strain clamp and defects identified in X-ray images: (a) plan view of the strain clamp; (b) groove under-pressure; (c) flash; (d) bending; (e) strand scattering.
FIG. 4 shows the prediction results for the defects using the improved YOLOv3 network.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
The flow of the invention is shown in FIG. 2. The embodiment is described in detail below; the detection of strain clamp X-ray image defects comprises the following steps:
the acquired X-ray defect images of the strain clamps are analyzed, and the defects can be classified into 4 types of defects such as groove under-voltage, flash, strand scattering and bending according to the DL/T5285-2013 hydraulic crimping process specification of overhead conductors and ground wires of power transmission and transformation engineering, as shown in fig. 3, 1601 pictures are provided, each picture can have a plurality of defects, the total number of the defects is 7137, and the defects and the respective numbers thereof are shown in table 1:
TABLE 1
Defect type          Groove under-pressure   Flash   Strand scattering   Bending
Number of pictures   860                     812     523                 220
Groove under-pressure and strand scattering are caused by incomplete crimping; such defects leave the two structural parts of the strain clamp insufficiently clamped, so the clamp risks falling off under stress and must be re-crimped (pressure-compensated) in time. When the stress on a bent strain clamp is uneven, the bent steel anchor is most likely to break once the tension increases during operation, so it should be corrected or replaced in time. Flash refers to cracks on the surface of the steel anchor and should be ground smooth during crimping so that the stranded wire can pass through the aluminum pipe. The acquired strain clamp X-ray images are divided into a training set, a validation set and a test set, with 80% of the pictures used for training; the training set pictures are input into the improved network for training, and the validation set pictures are used to verify the training effect of the model in each round of training.
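A minimal sketch of the data set split described above (80% for training; splitting the remainder evenly between validation and test here is an assumption, as are the directory name, file extension and random seed).

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, seed: int = 0):
    """Shuffle the strain clamp X-ray images and split them into train/val/test sets."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (images[:n_train],
            images[n_train:n_train + n_val],
            images[n_train + n_val:])

train_set, val_set, test_set = split_dataset("xray_images")  # e.g. 1601 pictures in total
```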
(II) The acquired power transmission line strain clamp X-ray images are preprocessed. First, the data set is standardized and all pictures are converted to RGB format. Then the noisy and blurred images are denoised with the 3-D block-matching filtering algorithm BM3D, which filters using the global information of the image, dividing it into many small regions block by block; noise is removed by searching for similar blocks in the strain clamp X-ray image and averaging them, with the inter-block similarity threshold Th = 400, the maximum number of matched blocks Max_matched = 16, block size 8 × 8, pixel step Step = 3 and noise-reduction strength σ = 25. Finally, to recover more of the detailed features of the strain clamp X-ray images, homomorphic filtering is applied for image enhancement with the sharpening coefficient set to C = 0.5; this enhances dark-region features, makes the data set images clearer and improves the contrast between target and background.
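A minimal sketch of this denoising step under the assumption that the third-party `bm3d` Python package (or, as a fallback, OpenCV's non-local means denoiser) is available. The patent's block-matching parameters (Th, Max_matched, block size, step) live inside the BM3D implementation's profile and are not exposed here; only the noise-reduction strength σ = 25 is passed through, and the file name is hypothetical.

```python
import cv2
import numpy as np

try:
    import bm3d  # pip install bm3d; assumed available, API may differ by version
except ImportError:
    bm3d = None

def denoise_clamp_image(gray: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """Denoise a strain clamp X-ray image with BM3D (sigma matches the sigma = 25 above)."""
    if bm3d is None:
        # Fallback: OpenCV non-local means, a related patch-based denoiser.
        return cv2.fastNlMeansDenoising(gray, None, 10)
    out = bm3d.bm3d(gray.astype(np.float32) / 255.0, sigma_psd=sigma / 255.0)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# gray = cv2.imread("clamp.png", cv2.IMREAD_GRAYSCALE)
# clean = denoise_clamp_image(gray, sigma=25.0)
# enhanced = homomorphic_filter(clean, c=0.5)   # see the homomorphic_filter sketch above
```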
(III) The four defect types, groove under-pressure, flash, strand scattering and bending, are labeled with the LabelImg image annotation tool; every defect of each type in every image must be labeled.
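LabelImg saves annotations in Pascal VOC XML format; below is a minimal sketch for reading the labeled defect boxes back. The English class names and the annotation path are assumptions for the four defect types.

```python
import xml.etree.ElementTree as ET

DEFECT_CLASSES = ["groove_under_pressure", "flash", "strand_scattering", "bending"]  # assumed label names

def read_voc_annotation(xml_path: str):
    """Parse one LabelImg (Pascal VOC) annotation file into (class, box) pairs."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

# e.g. read_voc_annotation("annotations/clamp_0001.xml")
```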
(IV) The improved YOLOv3 target detection model is constructed as follows:
Step 1: Build the basic units required by the YOLOv3 network, in particular the Res module: the trunk part (Bone) is built by serially connecting an ordinary convolution (Conv) with batch normalization (BN), the residual part (Short) consists of a 1 × 1 convolution, and the Res module is constructed from the trunk part Bone and the residual part Short.
Step 2: The size of the input strain clamp X-ray picture is set to 416 × 416 × 3.
Step 3: The picture is resized without distortion, and after the channel number is adjusted by an ordinary 1 × 1 convolution the size becomes 416 × 416 × 32.
Step 4: Features are extracted by a Res module; the size becomes 208 × 208 × 64.
Step 5: Features are further extracted by two Res modules; the size becomes 104 × 104 × 128.
Step 6: The output of feature extraction layer Block3 of the EfficientNet network is taken as output P1 of the backbone feature extraction network; a 3 × 3 ordinary convolution with stride 2 adjusts the spatial size to 52 × 52 and a 1 × 1 convolution adjusts the number of channels to 40, giving feature layer P1 a size of 52 × 52 × 40.
Step 7: The output of feature extraction layer Block5 is taken as output P2; a 3 × 3 ordinary convolution with stride 2 adjusts the spatial size to 26 × 26 and a 1 × 1 convolution adjusts the number of channels to 112, giving feature layer P2 a size of 26 × 26 × 112.
Step 8: The output of feature extraction layer Block7 is taken as output P3; a 3 × 3 ordinary convolution with stride 2 adjusts the spatial size to 13 × 13 and a 1 × 1 convolution adjusts the number of channels to 320, giving feature layer P3 a size of 13 × 13 × 320.
Step 9: The three feature layers P1, P2 and P3, of sizes 52 × 52 × 40, 26 × 26 × 112 and 13 × 13 × 320, are fed into the feature fusion network FPN for feature fusion.
Step 10: The fused feature layers are fed into YOLO Head networks of three sizes, 52 × 52 × 27, 26 × 26 × 27 and 13 × 13 × 27 (27 = 3 anchors × (4 box coordinates + 1 confidence + 4 classes)), for classification and prediction, so that small, medium and large defects in the strain clamp X-ray images are detected, as illustrated in the sketch below.
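As a rough PyTorch sketch of steps 6 to 10, assuming the EfficientNet Block3/Block5/Block7 feature maps are already available (e.g. from a features-only backbone), each branch applies a 3 × 3 stride-2 convolution followed by a 1 × 1 convolution so the FPN receives 52 × 52 × 40, 26 × 26 × 112 and 13 × 13 × 320 inputs. The input channel counts, module names and activation choice below are illustrative assumptions, not the patent's exact implementation.

```python
import torch
import torch.nn as nn

class EffNetToFPNAdapter(nn.Module):
    """Adapt one EfficientNet feature map for the FPN: a 3x3 stride-2 convolution to halve
    the spatial size, then a 1x1 convolution to set the channel count (40 / 112 / 320)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.SiLU(inplace=True),           # Swish activation, as used in EfficientNet
        )
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.down(x))

# Hypothetical wiring of the Block3 / Block5 / Block7 outputs (input channel counts assumed):
p1_adapter = EffNetToFPNAdapter(in_ch=40,  out_ch=40)    # -> 52 x 52 x 40
p2_adapter = EffNetToFPNAdapter(in_ch=112, out_ch=112)   # -> 26 x 26 x 112
p3_adapter = EffNetToFPNAdapter(in_ch=320, out_ch=320)   # -> 13 x 13 x 320
# After FPN fusion, each YOLO Head outputs 27 = 3 x (4 + 1 + 4) channels per scale.
```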
(V) The training set of the preprocessed power transmission line strain clamp X-ray images is input into the constructed improved YOLOv3 target detection model for training.
(VI) Drawing on the idea of transfer learning, the pre-training weights of the backbone feature extraction network are used as initialization parameters, the YOLOv3 training parameters are set, and 200 rounds are trained in total. Training is divided into two stages. In the first stage the model backbone is frozen and trained for 100 iterations; freezing the backbone speeds up training and prevents the pretrained weights from being damaged. Mixup data augmentation is used in the frozen stage: defects from two different images are randomly mixed to form a new training image. During this period the structure of the feature extraction network does not change and the network only fine-tunes its parameters while learning features, so the GPU memory footprint is small. In the second stage the backbone is unfrozen and training continues for another 100 iterations; this occupies more GPU memory, so the parameters (such as the batch size) need to be reduced appropriately. The initial learning rate is preferably set in the range 0.001 to 0.1, and as the loss approaches convergence the learning rate should be reduced so that the loss converges slowly enough not to miss any possible minimum. A sketch of this two-stage schedule follows.
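A minimal PyTorch-style sketch of the two-stage schedule (frozen backbone, then unfrozen), using the batch sizes and learning rates given in the embodiment and claim 6. The attribute name `model.backbone` and the `train_one_epoch` helper are assumptions, and for detection the mixup blend also requires merging the two images' box labels, which is omitted here.

```python
import torch
from torch.utils.data import DataLoader

def set_backbone_frozen(model: torch.nn.Module, frozen: bool) -> None:
    """Freeze or unfreeze the backbone (the attribute name `backbone` is an assumption)."""
    for p in model.backbone.parameters():
        p.requires_grad = not frozen

def mixup_images(images: torch.Tensor, alpha: float = 0.5):
    """Blend a shuffled copy of the batch into itself; box labels must be merged accordingly."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    return lam * images + (1.0 - lam) * images[perm], perm, lam

def train_two_stage(model, train_dataset, train_one_epoch):
    """Stage 1: frozen backbone, batch 8, lr 1e-3, 100 epochs (with mixup).
    Stage 2: unfrozen backbone, batch 4, lr 1e-4, 100 more epochs."""
    for frozen, batch, lr, epochs in [(True, 8, 1e-3, 100), (False, 4, 1e-4, 100)]:
        set_backbone_frozen(model, frozen)
        loader = DataLoader(train_dataset, batch_size=batch, shuffle=True)
        opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
        for _ in range(epochs):
            train_one_epoch(model, loader, opt, use_mixup=frozen)  # hypothetical helper
```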
(VII) After training, the weight with the minimum loss value is taken and its effect is evaluated on the test set images; the mAP of the detection results is 90.03%, and the detection effect is shown in FIG. 4.
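A minimal sketch of this test step: load the selected weight and run the trained model over the test images. The weight file name and the assumption that a plain state dict was saved are illustrative.

```python
import torch

def detect_test_set(model: torch.nn.Module, test_images, weight_path: str = "w2_min_loss.pth"):
    """Load the selected weight (W2) and run defect detection on the test set images."""
    model.load_state_dict(torch.load(weight_path, map_location="cpu"))
    model.eval()
    results = []
    with torch.no_grad():
        for img in test_images:                 # each img: tensor of shape (3, 416, 416)
            results.append(model(img.unsqueeze(0)))
    return results
```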
Considering situations that may arise in the actual operation of this embodiment: the data set split should be kept as consistent as possible with this patent; when labeling the various defects, full consideration should be given to whether the machine can distinguish the defect features from other features; after the three output layers are replaced, 1 × 1 convolutions are applied to reduce the number of channels; and the network training parameters should be set as small as possible and the number of training rounds as large as possible, preferably with training stopped automatically.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (7)

1. A power transmission line strain clamp X-ray image defect detection method, characterized by comprising the following steps:
S1, constructing and preprocessing a strain clamp X-ray image data set: constructing a strain clamp X-ray image data set containing four defect types, namely groove under-pressure, flash, strand scattering and bending; standardizing the data set and converting all pictures to RGB format; filtering and denoising the noisy strain clamp X-ray images with the 3-D block-matching (BM3D) filtering algorithm; and enhancing the contrast of the denoised images with a homomorphic filtering algorithm;
S2, constructing an improved YOLOv3 strain clamp X-ray image defect detection model: replacing the effective feature layers of the original YOLOv3 backbone DarkNet-53 with the outputs of the three feature extraction layers Block3, Block5 and Block7 of the EfficientNet network, which serve as the three inputs of the feature fusion network FPN;
the specific replacement is as follows: the 52 × 52 × 256 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block3 of the EfficientNet network as output P1, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 52 × 52 and a 1 × 1 convolution adjusts the number of channels to 40, so that feature layer P1 has size 52 × 52 × 40; the 26 × 26 × 512 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block5 as output P2, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 26 × 26 and a 1 × 1 convolution adjusts the number of channels to 112, so that feature layer P2 has size 26 × 26 × 112; finally, the 13 × 13 × 1024 feature layer of the original YOLOv3 network is replaced by the output of feature extraction layer Block7 as output P3, an ordinary 3 × 3 convolution with stride 2 adjusts the spatial size to 13 × 13 and a 1 × 1 convolution adjusts the number of channels to 320, so that feature layer P3 has size 13 × 13 × 320;
the three feature layers P1, P2 and P3, of sizes 52 × 52 × 40, 26 × 26 × 112 and 13 × 13 × 320, are finally obtained and fed into the feature fusion network FPN for feature fusion;
S3, training the improved YOLOv3 model with the preprocessed images: dividing the data into a training set, a validation set and a test set, with 80% of the images used for training; inputting the training and validation sets into the improved YOLOv3 target detection algorithm and, following the idea of transfer learning, loading the weight W1 obtained by training the EfficientNet network on the Pascal VOC open data set into the improved YOLOv3 to assist training; during assisted training, first freezing the model backbone for 100 rounds of iterative training, then unfreezing it once the loss has essentially converged and training for another 100 rounds, for 200 rounds over the two stages; in the frozen stage, using mixup data augmentation, in which the defects of two different images are randomly mixed into a new training image;
S4, detecting defects in the test set images with the trained improved YOLOv3 model: 200 sets of weights are obtained from the transfer-learning-assisted training; the weight W2 with the minimum training loss is selected and loaded into the improved YOLOv3 model to detect the test set images.
2. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
the defects in step S1 are defined as follows: groove under-pressure: the aluminum pipe and the groove of the strain clamp are not fully crimped and a gap is left between them; such a defect leaves the two structural parts of the strain clamp insufficiently clamped, so the clamp risks falling off under stress; strand scattering: the stranded wire and the aluminum pipe are not fully crimped; bending: uneven stress on the steel anchor and the sleeve of the strain clamp causes deformation; flash: cracks are present on the surface of the steel anchor.
3. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
in the preprocessing of the strain clamp X-ray images in step S1, BM3D filters using the global information of the image, dividing it into many small regions block by block, and removes picture noise by searching for similar blocks in the strain clamp X-ray image and averaging them; the inter-block similarity threshold is Th = 400, the maximum number of matched blocks is Max_matched = 16, the block size is 8 × 8, the pixel step is Step = 3 and the noise-reduction strength is σ = 25; the sharpening coefficient of the homomorphic filtering algorithm is set to C = 0.5 to enhance dark-region features.
4. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
in the improved YOLOv3 algorithm of step S2, the construction of the basic building unit comprises: a Block module is constructed from a depthwise separable convolution (DepthwiseConv) and an inverted residual structure with an attention mechanism, and is divided into a trunk part and a residual-edge part; the trunk part first uses a 1 × 1 convolution for dimension expansion, followed by normalization and the Swish activation function, then extracts cross-feature-point features with a 3 × 3 depthwise separable convolution, adds a channel attention mechanism after feature extraction, reduces the dimension with a 1 × 1 convolution and applies normalization after the reduction; the residual-edge part is not processed and is connected directly to the trunk part as the output.
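For illustration, a minimal PyTorch sketch of an inverted-residual Block of the kind described in this claim (1 × 1 expansion, BN + Swish, 3 × 3 depthwise convolution, channel attention, 1 × 1 projection, plus a residual edge); the expansion ratio, reduction ratio and class names are assumptions, not the EfficientNet authors' exact implementation.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention (squeeze-and-excitation style), as used inside EfficientNet blocks."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.SiLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class MBConvBlock(nn.Module):
    """Inverted-residual Block: 1x1 expand -> BN + Swish -> 3x3 depthwise conv ->
    channel attention -> 1x1 project -> BN, with a residual edge."""
    def __init__(self, ch: int, expand: int = 6):
        super().__init__()
        mid = ch * expand
        self.trunk = nn.Sequential(
            nn.Conv2d(ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),  # depthwise convolution
            nn.BatchNorm2d(mid), nn.SiLU(inplace=True),
            SqueezeExcite(mid),
            nn.Conv2d(mid, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.trunk(x)   # residual edge connected directly to the trunk output

# y = MBConvBlock(ch=40)(torch.randn(1, 40, 52, 52))
```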
5. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
in step S2 the EfficientNet network consists, in sequence, of one stem structure, Block i modules (i = 1, 2, ..., 7) and one output layer, and the outputs of the feature extraction layers Block3, Block5 and Block7 replace the effective feature layers of the original YOLOv3 backbone DarkNet-53 as outputs and serve as the three inputs of the feature fusion network FPN.
6. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
the parameters of the staged training in step S3 are set as follows: in the frozen training stage, Batch_size is set to 8, the learning rate to 0.001, and mixup data augmentation is used for N = 85 rounds; in the unfrozen training stage, Batch_size is set to 4 and the learning rate to 0.0001.
7. The power transmission line strain clamp X-ray image defect detection method based on the improved YOLOv3 algorithm as claimed in claim 1, wherein
in step S4, 200 rounds of training are performed in total; if the validation loss (val_loss) stops decreasing for multiple rounds during training, the training can be stopped early, indicating that the model has converged; the weight W2 produced by the final round of training is loaded into the YOLOv3 model to detect the images in the test set.
CN202210875820.XA 2022-07-25 2022-07-25 X-ray image defect detection method for strain clamp of power transmission line Pending CN115147397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210875820.XA CN115147397A (en) 2022-07-25 2022-07-25 X-ray image defect detection method for strain clamp of power transmission line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210875820.XA CN115147397A (en) 2022-07-25 2022-07-25 X-ray image defect detection method for strain clamp of power transmission line

Publications (1)

Publication Number Publication Date
CN115147397A true CN115147397A (en) 2022-10-04

Family

ID=83413817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210875820.XA Pending CN115147397A (en) 2022-07-25 2022-07-25 X-ray image defect detection method for strain clamp of power transmission line

Country Status (1)

Country Link
CN (1) CN115147397A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091506A (en) * 2023-04-12 2023-05-09 湖北工业大学 Machine vision defect quality inspection method based on YOLOV5
CN117152577A (en) * 2023-10-30 2023-12-01 北京机械工业自动化研究所有限公司 Casting defect detection method, casting defect detection device, electronic equipment and storage medium
CN117152577B (en) * 2023-10-30 2024-02-02 北京机械工业自动化研究所有限公司 Casting defect detection method, casting defect detection device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN115147397A (en) X-ray image defect detection method for strain clamp of power transmission line
CN111260616A (en) Insulator crack detection method based on Canny operator two-dimensional threshold segmentation optimization
CN111008961B (en) Transmission line equipment defect detection method and system, equipment and medium thereof
CN108492291B (en) CNN segmentation-based solar photovoltaic silicon wafer defect detection system and method
CN113469953B (en) Transmission line insulator defect detection method based on improved YOLOv4 algorithm
CN111275679A (en) Solar cell defect detection system and method based on image
CN109472788B (en) Method for detecting flaw on surface of airplane rivet
CN114723750B (en) Transmission line strain clamp defect detection method based on improved YOLOX algorithm
CN112784853B (en) Terminal connection state detection method and device
CN110726725A (en) Transmission line hardware corrosion detection method and device
CN111709931B (en) Automatic acquisition method and system for strain clamp defect detection and identification report
CN111311487B (en) Rapid splicing method and system for photovoltaic module images
CN110378860B (en) Method, device, computer equipment and storage medium for repairing video
CN109919917B (en) Image processing-based foreign matter detection method for overhead transmission line
CN113487563B (en) EL image-based self-adaptive detection method for hidden cracks of photovoltaic module
CN111830051A (en) Transformer substation equipment oil leakage detection method and detection system based on deep learning
CN117422696A (en) Belt wear state detection method based on improved YOLOv8-Efficient Net
CN115731166A (en) High-voltage cable connector polishing defect detection method based on deep learning
CN107886493B (en) Method for detecting conductor stranding defects of power transmission line
CN115587966A (en) Method and system for detecting whether parts are missing or not under condition of uneven illumination
CN115619796A (en) Method and device for obtaining photovoltaic module template and nonvolatile storage medium
CN115131337B (en) Transmission line strain clamp defect detection method based on improved CENTERNET algorithm
CN116051530A (en) Semi-supervised photovoltaic cell surface anomaly detection method based on image restoration
Khan et al. Shadow removal from digital images using multi-channel binarization and shadow matting
CN115760640A (en) Coal mine low-illumination image enhancement method based on noise-containing Retinex model

Legal Events

Code   Event
PB01   Publication
SE01   Entry into force of request for substantive examination