CN112907614A - Yolov5-segnet insulator string contour extraction method based on depth feature fusion - Google Patents

Yolov5-segnet insulator string contour extraction method based on depth feature fusion

Info

Publication number
CN112907614A
Authority
CN
China
Prior art keywords
insulator
yolov5
network
segnet
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110326002.XA
Other languages
Chinese (zh)
Inventor
Yu Huanan (于华楠)
Wan Xinyi (万鑫怡)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Electric Power University
Original Assignee
Northeast Dianli University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Dianli University filed Critical Northeast Dianli University
Priority to CN202110326002.XA priority Critical patent/CN112907614A/en
Publication of CN112907614A publication Critical patent/CN112907614A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/181 — Image analysis; segmentation; edge detection involving edge growing or edge linking
    • G06F 18/23213 — Pattern recognition; non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. k-means clustering
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/0004 — Image analysis; industrial image inspection
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06V 10/44 — Image or video feature extraction; local feature extraction by analysis of parts of the pattern (edges, contours, loops, corners, strokes, intersections); connectivity analysis
    • G06T 2207/10004 — Image acquisition modality; still image; photographic image
    • G06T 2207/20081 — Special algorithmic details; training; learning
    • G06T 2207/20221 — Special algorithmic details; image combination; image fusion; image merging
    • G06T 2207/30108 — Subject of image; industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a yolov5-segnet insulator string contour extraction method based on depth feature fusion, which comprises the following steps: 1. preprocessing an aerial insulator image set to obtain a preprocessed insulator image set T; 2. randomly selecting m images from the preprocessed aerial insulator image set for expansion processing to obtain an expanded aerial insulator image set T1, and taking the remaining images in T as a test set T2; 3. performing mask marking on n aerial insulator images in the expanded aerial insulator image set T1, taking the mask-marked aerial insulator image set as a training set T11 for insulator detection, and taking the remaining aerial insulator images as an insulator verification set T22; 4. constructing a depth-feature-fused yolov5-segnet deep learning network. The invention overcomes the defect of heavy environmental interference suffered by traditional networks when detecting large, long insulator strings, and extracts the contour of the aerial insulator, thereby effectively eliminating interference for insulator defect detection.

Description

Yolov5-segnet insulator string contour extraction method based on depth feature fusion
Technical Field
The invention relates to the field of deep learning image processing, in particular to a yolov5-segnet insulator string contour extraction method based on depth feature fusion.
Background
Insulators are important components in power transmission lines for fixing the current-carrying conductors and preventing current from flowing back to the ground. However, long-term environmental erosion easily leaves an insulator with hidden dangers of breakage or falling, and the serious consequences can cause huge economic losses and casualties; regular detection and timely maintenance of insulators are therefore particularly important. With the continuous development of the economy and of technology and the proposal of the smart grid concept, power grid inspection has also evolved from traditional manual patrol to power inspection by helicopters and unmanned aerial vehicles. Against this technical background, an unmanned aerial vehicle or helicopter carrying a multi-mode camera acquires visible, infrared or ultraviolet images of the insulators in a power transmission line, and deep-learning-based image processing technology analyzes the features of these images, so that intelligent detection of the power transmission system and its equipment is realized and the inspection efficiency and accuracy are greatly improved.
At present, Chen, Yan, et al., in "Research on convolutional neural network detection and self-explosion identification of aerial insulators" (Journal of Electronic Measurement and Instrumentation, 2017), propose an improved Fast R-CNN algorithm for aerial insulator detection and self-explosion fault identification; another work, "An effective insulator self-explosion defect positioning method" (Journal of Electronics & Information Technology, 2020), proposes an insulator self-explosion fault detection method based on deep learning. In such deep-learning-based insulator target detection, the detection result marks the insulator region with a rectangular frame, so the background and the insulators cannot be accurately distinguished and densely distributed insulators are difficult to separate, which makes it hard to further improve fault detection accuracy.
Therefore, related researchers have proposed domain-specific oriented deep learning detection algorithms, in which angle rotation parameters are introduced into the axis-aligned rectangular frame to form an oriented detection frame that can detect glass insulators at arbitrary angles in aerial images. However, the rectangular-frame form of the output is not fundamentally changed, so the method lacks universality for large, long insulator strings and strings with curvature and still admits a large amount of environmental interference. It is therefore of great significance to study a high-precision contour extraction algorithm for detecting insulator strings under a complex background.
Disclosure of Invention
The invention aims to provide a yolov5-segnet insulator string contour extraction method based on depth feature fusion to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme:
the yolov5-segnet insulator string contour extraction method based on depth feature fusion comprises the following steps:
S1, preprocessing the aerial insulator image set to obtain a preprocessed insulator image set T;
S2, randomly selecting m images from the preprocessed aerial insulator image set for expansion processing to obtain an expanded aerial insulator image set T1, and taking the remaining images in T as a test set T2;
S3, performing mask marking on n aerial insulator images in the expanded aerial insulator image set T1, taking the mask-marked aerial insulator image set as a training set T11 for insulator detection, and taking the remaining aerial insulator images as a verification set T22 for insulator detection;
S4, constructing a yolov5-segnet deep learning network with depth feature fusion;
S5, constructing a reference network for the test set T2: among the μ iteration results of step S4.4.3, the parameter model with the best detection precision is selected as the reference network for the test set T2; the test set T2 is input into the reference network for testing to obtain a test result data set T3, and the contour extraction function in OpenCV is used to detect and draw the contours of the pictures in the T3 data set, obtaining the insulator contour map.
The method for constructing the depth-feature-fused yolov5-segnet deep learning network comprises the following steps:
S4.1, improving the pooling layer of the yolov5 target detection network by replacing the original max pooling with dynamic k-max pooling to obtain an improved yolov5 target detection network;
S4.2, building a semantic segmentation network: target features are extracted from the yolov5 output feature maps and then passed through convolution and upsampling operations. After feature extraction, the 13×13 output feature map undergoes 3×3 convolution plus 2× upsampling twice, the 26×26 output feature map undergoes 3×3 convolution plus 2× upsampling once, and the 52×52 output feature map undergoes 3×3 convolution twice, so that the three feature maps reach a common size; the size-unified feature maps are concatenated (tensor splicing, concat), input into a spatial attention module to refine the features, and then passed through two further 3×3 convolutions, which together form the semantic segmentation module;
the improved yolov5 target detection module is combined with the semantic segmentation module to form the depth-feature-fused yolov5-segnet network;
S4.3, starting to train the network: the current iteration number is defined as μ and initialized to μ = 1, and the maximum iteration number is μmax; the parameters of each layer in the depth-feature-fused yolov5-segnet network are randomly initialized for the μ-th time, thereby obtaining the μ-th-iteration depth-feature-fused yolov5-segnet network;
S4.4, inputting the training set T11 into the μ-th-iteration depth-feature-fused yolov5-segnet network for training.
As a further aspect of the invention, inputting the training set T11 into the μ-th-iteration depth-feature-fused yolov5-segnet network for training comprises the following steps:
S4.4.1, in the first stage, training the improved yolov5 target detection module, the training set T11 being processed along two parallel paths: on one hand, contour detection is performed on all pictures in the training set T11 to obtain the marked contour points (xi, yi), i = 1, 2, 3, 4, …, n; the contour regions are then matted out and the minimum bounding box of all the contour maps is calculated, the upper-left and lower-right corner coordinates of the minimum bounding box being (xmin, ymax) and (xmax, ymin); loss calculation and precision evaluation are then performed on the iterative target detection output. On the other hand, the first 52 convolution layers of the improved yolov5 backbone CSPDarkNet-53 perform convolution operations on the training set T11, each convolution layer being followed by a batch normalization layer and a LeakyReLU activation layer;
S4.4.2, in the second stage, training the semantic segmentation module: on the basis of the first-stage training result, detections are screened by confidence, and the boxes with confidence greater than 0.5 are retained and matted out to form a training data set N1; because the pictures in N1 differ in size, all pictures in N1 are clustered with the k-means clustering method to obtain the optimal mask size, and the N1 images are then resized to the optimal mask size to form a training set N2, which is input into the semantic segmentation module for training;
the clustering process is as follows:
A. setting the feature-map center points, each center point corresponding to one cluster center;
B. the distance d between each sample in n1 and the center point is found using equation (1):
d = \sqrt{(b(x_i) - c(x_i))^2 + (b(y_i) - c(y_i))^2}    (1)
In formula (1), b(x_i) and b(y_i) are the abscissa and ordinate of the i-th sample, and c(x_i) and c(y_i) are the abscissa and ordinate of the i-th center point;
C. calculating the mean value of the distance from each sample to the central point by using the formula (2)
\bar{d} = \frac{1}{n_1}\sum_{i=1}^{n_1} d_i    (2)
D. Iteratively updating the clustering center until the clustering center is not changed any more;
S4.4.3, performing μ iterations; after each iteration, the network is validated with the verification set T22, and the validation result and network parameters of each pass are retained.
Compared with the prior art, the invention has the following beneficial effects: the constructed semantic segmentation module is combined with the yolov5 target detection module to form a new depth-feature-fused yolov5-segnet deep learning network, which overcomes the defect of heavy environmental interference suffered by traditional networks when detecting large, long insulator strings, extracts the contour of the aerial insulator, and effectively eliminates interference for insulator defect detection.
Drawings
FIG. 1 is a training flow chart of the yolov5-segnet network algorithm for depth feature fusion in the invention.
FIG. 2 is a flow chart of semantic segmentation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
Referring to fig. 1 and 2, the method for extracting the outline of the yolov5-segnet insulator string based on depth feature fusion comprises the following steps:
S1, preprocessing the aerial insulator image set: checking whether the computing capability of the computer meets the computational requirements of the original image size and whether problems such as shake blur exist, and performing size conversion, de-shaking and de-noising to obtain 1000 preprocessed insulator images;
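As an illustration of this preprocessing step, the following minimal Python/OpenCV sketch resizes an aerial image and applies light denoising; the target size, the filter parameters and the helper name preprocess_image are illustrative assumptions, not values specified by the invention.

import cv2

def preprocess_image(path, target_size=(640, 640)):
    # Hypothetical helper: resize an aerial image to a size the available
    # computer can handle and suppress noise and shake artefacts.
    img = cv2.imread(path)
    img = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)
    # Non-local-means denoising; the filter strengths are illustrative.
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
    return img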
S2, randomly selecting 200 of the 1000 preprocessed aerial insulator images as a test set T2, and expanding the remaining 800 aerial insulator images: on the basis of the processing in step S1, the preprocessed aerial insulator images are mirror-flipped, rotated and so on to simulate the turbulence jitter and the different shooting angles that an unmanned aerial vehicle or helicopter encounters during aerial photography, so that the data are expanded into a set T1 of 1600 aerial insulator images;
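The mirror flipping and rotation could be realized, for example, with the hedged OpenCV sketch below; the particular rotation angle and the helper name augment are assumptions for illustration only.

import cv2

def augment(img, angle=15):
    # Hypothetical helper: produce a mirrored copy and a rotated copy of one
    # pre-processed image to mimic viewpoint changes and turbulence jitter.
    h, w = img.shape[:2]
    flipped = cv2.flip(img, 1)                      # horizontal mirror
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, rot, (w, h))      # small-angle rotation
    return [flipped, rotated]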
S3, selecting 1280 aerial insulator images from the expanded image set T1 and manually mask-marking them with a labelling tool to generate json files, thereby obtaining the marked aerial insulator image set, which is used as the training set T11 for insulator detection; the remaining 320 aerial insulator images of T1 are used as the verification set T22;
S4, constructing a yolov5-segnet deep learning network with depth feature fusion;
S4.1, improving the pooling layer of the yolov5 target detection network by replacing the original max pooling with dynamic k-max pooling to obtain an improved yolov5 target detection network;
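The patent does not give a formula for the dynamic k-max pooling that replaces the original max pooling; the PyTorch sketch below shows one possible reading, in which k scales with the spatial size of the incoming feature map. The class name and the ratio parameter are assumptions.

import torch
import torch.nn as nn

class DynamicKMaxPool2d(nn.Module):
    # One possible reading of "dynamic k-max pooling": keep the k strongest
    # activations per channel, with k chosen as a fraction of the current
    # spatial size so that it adapts to the input (the fraction is assumed).
    def __init__(self, ratio=0.25):
        super().__init__()
        self.ratio = ratio

    def forward(self, x):                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        k = max(1, int(self.ratio * h * w))   # k grows with the feature map
        flat = x.flatten(2)                   # (B, C, H*W)
        return flat.topk(k, dim=2).values     # (B, C, k) strongest responses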
S4.2, building a semantic segmentation network: target features are extracted from the yolov5 output feature maps and then passed through convolution and upsampling operations. After feature extraction, the 13×13 output feature map undergoes 3×3 convolution plus 2× upsampling twice, the 26×26 output feature map undergoes 3×3 convolution plus 2× upsampling once, and the 52×52 output feature map undergoes 3×3 convolution twice, so that the three feature maps reach a common size; the size-unified feature maps are concatenated (tensor splicing, concat), input into a spatial attention module to refine the features, and then passed through two further 3×3 convolutions, which together form the semantic segmentation module.
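A hedged PyTorch sketch of such a fusion head is shown below: the three yolov5 output maps are brought to a common 52×52 resolution, concatenated, refined by a CBAM-style spatial attention gate and reduced to mask logits. The channel widths, the attention variant and all class and function names are assumptions; only the overall conv/upsample/concat/attention structure follows the description above.

import torch
import torch.nn as nn

def conv3x3(cin, cout):
    # 3x3 convolution + batch norm + LeakyReLU, as used throughout the head.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout),
                         nn.LeakyReLU(0.1))

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention (an assumed variant): channel-wise average
    # and max maps are fused by a 7x7 convolution into a per-pixel gate.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate

class SegHead(nn.Module):
    # Fuses the 13x13, 26x26 and 52x52 yolov5 output maps at 52x52;
    # channel widths (1024/512/256) and 'mid' are illustrative assumptions.
    def __init__(self, c13=1024, c26=512, c52=256, mid=128, n_classes=2):
        super().__init__()
        self.p13 = nn.Sequential(conv3x3(c13, mid), nn.Upsample(scale_factor=2),
                                 conv3x3(mid, mid), nn.Upsample(scale_factor=2))
        self.p26 = nn.Sequential(conv3x3(c26, mid), nn.Upsample(scale_factor=2))
        self.p52 = nn.Sequential(conv3x3(c52, mid), conv3x3(mid, mid))
        self.attn = SpatialAttention()
        self.out = nn.Sequential(conv3x3(3 * mid, mid),
                                 nn.Conv2d(mid, n_classes, 3, padding=1))

    def forward(self, f13, f26, f52):
        x = torch.cat([self.p13(f13), self.p26(f26), self.p52(f52)], dim=1)
        x = self.attn(x)                     # refine the fused features
        return self.out(x)                   # insulator mask logits at 52x52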
The improved yolov5 target detection module is combined with the semantic segmentation module to form the depth-feature-fused yolov5-segnet network.
S4.3, starting to train the network: the current iteration number is defined as μ and initialized to μ = 1, the maximum iteration number is set to μmax, and the parameters of each layer in the depth-feature-fused yolov5-segnet network are randomly initialized for the μ-th time, thereby obtaining the μ-th-iteration depth-feature-fused yolov5-segnet network;
S4.4, inputting the training set T11 into the μ-th-iteration depth-feature-fused yolov5-segnet network and training in two stages;
S4.4.1, in the first stage, training the improved yolov5 target detection module, the training set T11 being processed along two parallel paths: on one hand, contour detection is performed on all pictures in the training set T11 to obtain the marked contour points (xi, yi), i = 1, 2, 3, 4, …, n; the contour regions are then matted out and the minimum bounding box of all the contour maps is calculated, the upper-left and lower-right corner coordinates of the minimum bounding box being (xmin, ymax) and (xmax, ymin); loss calculation and precision evaluation are then performed on the iterative target detection output. On the other hand, the first 52 convolution layers of the improved yolov5 backbone CSPDarkNet-53 perform convolution operations on the training set T11, with convolution kernel sizes of 3×3 and 1×1, each convolution layer being followed by a batch normalization layer and a LeakyReLU activation layer to accelerate training;
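The contour detection and minimum-bounding-box computation on the labelled pictures could be performed, for instance, with the following OpenCV/NumPy sketch (an OpenCV ≥ 4 return signature is assumed, and the helper name min_bounding_box is illustrative):

import cv2
import numpy as np

def min_bounding_box(mask):
    # Hypothetical helper: detect the labelled contours in a binary mask and
    # return the smallest axis-aligned box enclosing all of them.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pts = np.vstack([c.reshape(-1, 2) for c in contours])   # all (x, y) points
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return int(xmin), int(ymin), int(xmax), int(ymax)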
S4.4.2, in the second stage, training the semantic segmentation module: on the basis of the first-stage training result, detections are screened by confidence, and the boxes with confidence greater than 0.5 are retained and matted out to form a training data set N1; because the pictures in N1 differ in size, all pictures in N1 are clustered with the k-means clustering method to obtain the optimal mask size (a sketch of this size-clustering step is given after the clustering steps below), and the N1 images are then resized to the optimal mask size to form a training set N2, which is input into the semantic segmentation module for training;
the clustering process is as follows:
A. setting the feature-map center points, each center point corresponding to one cluster center;
B. the distance d between each sample in N1 and the center point is found using equation (1):
d = \sqrt{(b(x_i) - c(x_i))^2 + (b(y_i) - c(y_i))^2}    (1)
In formula (1), b(x_i) and b(y_i) are the abscissa and ordinate of the i-th sample, and c(x_i) and c(y_i) are the abscissa and ordinate of the i-th center point;
C. calculating the mean value of the distance from each sample to the central point by using the formula (2)
\bar{d} = \frac{1}{n_1}\sum_{i=1}^{n_1} d_i    (2)
D. Iteratively updating the clustering center until the clustering center is not changed any more;
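As referenced in step S4.4.2, the size-clustering step can be sketched as follows; this version delegates the iteration to scikit-learn's KMeans rather than implementing steps A–D by hand, and the single-cluster default k=1 and the helper name optimal_mask_size are assumptions:

import numpy as np
from sklearn.cluster import KMeans

def optimal_mask_size(crop_shapes, k=1):
    # Hypothetical helper: cluster the (width, height) of the confidence-
    # filtered crops and return a cluster center as the common resize target.
    sizes = np.asarray(crop_shapes, dtype=float)          # rows of (w, h)
    center = KMeans(n_clusters=k, n_init=10).fit(sizes).cluster_centers_[0]
    return tuple(int(round(v)) for v in center)

# Every retained crop is then resized to this size before being fed to the
# semantic segmentation module, e.g. cv2.resize(crop, optimal_mask_size(shapes)).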
S4.4.3, performing μ iterations; after each iteration, the network is validated with the verification set T22, and the validation result and network parameters of each pass are retained.
S5, constructing the reference network for the test set T2: the parameter model with the best detection precision among the μ iteration results of S4.4.3 is selected as the reference network for the test set T2, and the test set is input into the reference network for testing to obtain the test results.
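The final contour extraction and drawing on the test results (the OpenCV contour functions mentioned in step S5 of the disclosure) might look like the sketch below; the colour, line width and function name are illustrative choices:

import cv2

def draw_insulator_contours(image_bgr, mask):
    # Hypothetical helper: extract the contours of the predicted binary mask
    # and draw them on the original aerial image.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    outlined = image_bgr.copy()
    cv2.drawContours(outlined, contours, -1, (0, 0, 255), 2)   # red outlines
    return outlined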
While the preferred embodiments of this patent have been described in detail, this patent is not limited to the embodiments described above, and variations can be made within the knowledge of those skilled in the art without departing from the spirit of the patent.

Claims (2)

1. The yolov5-segnet insulator string contour extraction method based on depth feature fusion comprises the following steps:
S1, preprocessing the aerial insulator image set to obtain a preprocessed insulator image set;
S2, randomly selecting m images from the preprocessed aerial insulator image set for expansion processing to obtain an expanded aerial insulator image set T1, wherein the remaining images in T are used as a test set T2;
S3, performing mask marking on n aerial insulator images in the expanded aerial insulator image set T1, taking the mask-marked aerial insulator image set as a training set T11 for insulator detection, and taking the remaining aerial insulator images as a verification set T22 for insulator detection;
S4, constructing a depth-feature-fused yolov5-segnet deep learning network;
S5, constructing a reference network for the test set T2: among the μ iteration results of step S4.4.3, the parameter model with the best detection precision is selected as the reference network for the test set T2; the test set T2 is input into the reference network for testing to obtain a test result data set T3, and the contour extraction function in OpenCV is used to detect and draw the contours of the pictures in the T3 data set, obtaining the insulator contour map,
the method being characterized in that constructing the depth-feature-fused yolov5-segnet deep learning network comprises the following steps: S4.1, improving the pooling layer of the yolov5 target detection network by replacing the original max pooling with dynamic k-max pooling to obtain an improved yolov5 target detection network;
S4.2, building a semantic segmentation network: target features are extracted from the yolov5 output feature maps and then passed through convolution and upsampling operations; after feature extraction, the 13×13 output feature map undergoes 3×3 convolution plus 2× upsampling twice, the 26×26 output feature map undergoes 3×3 convolution plus 2× upsampling once, and the 52×52 output feature map undergoes 3×3 convolution twice, so that the three feature maps reach a common size; the size-unified feature maps are concatenated (tensor splicing, concat), input into a spatial attention module to refine the features, and then passed through two further 3×3 convolutions, which together form the semantic segmentation module;
the improved yolov5 target detection module is combined with the semantic segmentation module to form the depth-feature-fused yolov5-segnet network;
S4.3, starting to train the network: the current iteration number is defined as μ and initialized to μ = 1; the maximum iteration number is μmax; the parameters of each layer in the depth-feature-fused yolov5-segnet network are randomly initialized for the μ-th time, thereby obtaining the μ-th-iteration depth-feature-fused yolov5-segnet network;
S4.4, inputting the training set T11 into the μ-th-iteration depth-feature-fused yolov5-segnet network for training.
2. The yolov5-segnet insulator string contour extraction method based on depth feature fusion as claimed in claim 1, characterized in that inputting the training set T11 into the μ-th-iteration depth-feature-fused yolov5-segnet network for training comprises the following steps:
S4.4.1, in the first stage, training the improved yolov5 target detection module, the training set T11 being processed along two parallel paths: on one hand, contour detection is performed on all pictures in the training set T11 to obtain the marked contour points (xi, yi), i = 1, 2, 3, 4, …, n; the contour regions are then matted out and the minimum bounding box of all the contour maps is calculated, the upper-left and lower-right corner coordinates of the minimum bounding box being (xmin, ymax) and (xmax, ymin); loss calculation and precision evaluation are then performed on the iterative target detection output. On the other hand, the first 52 convolution layers of the improved yolov5 backbone CSPDarkNet-53 perform convolution operations on the training set T11, each convolution layer being followed by a batch normalization layer and a LeakyReLU activation layer;
S4.4.2, in the second stage, training the semantic segmentation module: on the basis of the first-stage training result, detections are screened by confidence, and the boxes with confidence greater than 0.5 are retained and matted out to form a training data set N1; because the pictures in N1 differ in size, all pictures in N1 are clustered with the k-means clustering method to obtain the optimal mask size, and the N1 images are then resized to the optimal mask size to form a training set N2, which is input into the semantic segmentation module for training;
the clustering process is as follows:
A. setting the feature-map center points, each center point corresponding to one cluster center;
B. calculating the distance d between each sample in N1 and the center point with formula (1):
d = \sqrt{(b(x_i) - c(x_i))^2 + (b(y_i) - c(y_i))^2}    (1)
In formula (1), b(x_i) and b(y_i) are the abscissa and ordinate of the i-th sample, and c(x_i) and c(y_i) are the abscissa and ordinate of the i-th center point;
C. calculating the mean value of the distance from each sample to the central point by using the formula (2)
\bar{d} = \frac{1}{n_1}\sum_{i=1}^{n_1} d_i    (2)
D. Iteratively updating the clustering center until the clustering center is not changed any more;
S4.4.3, performing μ iterations; after each iteration, the network is validated with the verification set T22, and the validation result and network parameters of each pass are retained.
CN202110326002.XA 2021-03-26 2021-03-26 Yolov5-segnet insulator string contour extraction method based on depth feature fusion Pending CN112907614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110326002.XA CN112907614A (en) 2021-03-26 2021-03-26 Yolov5-segnet insulator string contour extraction method based on depth feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110326002.XA CN112907614A (en) 2021-03-26 2021-03-26 Yolov5-segnet insulator string contour extraction method based on depth feature fusion

Publications (1)

Publication Number Publication Date
CN112907614A true CN112907614A (en) 2021-06-04

Family

ID=76108816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110326002.XA Pending CN112907614A (en) 2021-03-26 2021-03-26 Yolov5-segnet insulator string contour extraction method based on depth feature fusion

Country Status (1)

Country Link
CN (1) CN112907614A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469953A (en) * 2021-06-10 2021-10-01 南昌大学 Transmission line insulator defect detection method based on improved YOLOv4 algorithm
CN113506290A (en) * 2021-07-29 2021-10-15 广东电网有限责任公司 Method and device for detecting defects of line insulator
CN114332083A (en) * 2022-03-09 2022-04-12 齐鲁工业大学 PFNet-based industrial product camouflage flaw identification method
CN114359286A (en) * 2022-03-21 2022-04-15 湖南应超智能计算研究院有限责任公司 Insulator defect identification method, device and medium based on artificial intelligence
CN114529574A (en) * 2022-02-23 2022-05-24 平安科技(深圳)有限公司 Image matting method and device based on image segmentation, computer equipment and medium
CN115082318A (en) * 2022-07-13 2022-09-20 东北电力大学 Electrical equipment infrared image super-resolution reconstruction method


Similar Documents

Publication Publication Date Title
CN112907614A (en) Yolov5-segnet insulator string contour extraction method based on depth feature fusion
Lei et al. Intelligent fault detection of high voltage line based on the Faster R-CNN
CN109816725B (en) Monocular camera object pose estimation method and device based on deep learning
CN109948425B (en) Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching
WO2020102988A1 (en) Feature fusion and dense connection based infrared plane target detection method
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN109615611A (en) A kind of insulator self-destruction defect inspection method based on inspection image
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN107423760A (en) Deep learning object detection method based on pre-segmentation and regression
CN106504233A (en) Unmanned aerial vehicle inspection image electric power widget recognition method and system based on Faster R-CNN
CN109598794A (en) The construction method of three-dimension GIS dynamic model
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN112529005B (en) Target detection method based on semantic feature consistency supervision pyramid network
CN110472652A (en) Small-sample classification method based on semantic guidance
CN111461006B (en) Optical remote sensing image tower position detection method based on deep migration learning
CN111652835A (en) Method for detecting insulator loss of power transmission line based on deep learning and clustering
CN112819008B (en) Method, device, medium and electronic equipment for optimizing instance detection network
CN109584206B (en) Method for synthesizing training sample of neural network in part surface flaw detection
CN110223310A (en) A kind of line-structured light center line and cabinet edge detection method based on deep learning
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN110909623A (en) Three-dimensional target detection method and three-dimensional target detector
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN109816634A (en) Detection method, model training method, device and equipment
CN115049833A (en) Point cloud component segmentation method based on local feature enhancement and similarity measurement

Legal Events

Date Code Title Description
PB01 Publication