CN112419316A - Cross-device visible light texture defect detection method and device - Google Patents

Cross-device visible light texture defect detection method and device

Info

Publication number
CN112419316A
CN112419316A
Authority
CN
China
Prior art keywords
visible light
light image
network model
defect
texture
Prior art date
Legal status
Pending
Application number
CN202011472397.6A
Other languages
Chinese (zh)
Inventor
王彦波
沈桂竹
毛航银
戴波
姚一杨
于豪
王雪
Current Assignee
Northwestern Polytechnical University
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Northwestern Polytechnical University
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University, State Grid Zhejiang Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd filed Critical Northwestern Polytechnical University
Priority to CN202011472397.6A
Publication of CN112419316A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention provides a cross-device visible light texture defect detection method and apparatus. A visible light image of a device to be detected is captured, and a first network model and a second network model are used to identify the actual texture defect region and the defect type in the image. While maintaining detection accuracy, the invention enables visible light texture defect detection across different devices, offering greater flexibility of use and reducing the cost of defect detection.

Description

Cross-device visible light texture defect detection method and device
Technical Field
The invention relates to the technical field of computer vision and machine learning, in particular to a cross-device visible light texture defect detection method and device.
Background
In modern manufacturing, defect detection technology helps people find equipment problems promptly, so that the equipment can be repaired as quickly as possible and its long-term stable, safe, and economical operation is ensured.
An important current technical route is defect detection by computer vision methods. However, existing schemes are often designed to detect particular defect types on a particular class of equipment, or merely to judge whether a specific device is defective, and are difficult to adapt to different devices.
Disclosure of Invention
In view of the above, to solve the above problems, the present invention provides a cross-device visible light texture defect detection method and apparatus, and the technical scheme is as follows:
a cross-device visible light texture defect detection method, the method comprising:
acquiring a visible light image of a device to be detected;
inputting the visible light image into a first network model obtained by pre-training, wherein the first network model comprises a defect area positioning branch and a defect feature extraction branch;
determining a texture defect region of the visible light image through the defect region positioning branch, and extracting abstract texture features of the visible light image through the defect feature extraction branch;
performing feature pooling on abstract texture features of the visible light image according to the texture defect area of the visible light image;
and inputting the abstract texture features of the visible light image after the features are pooled into a second network model obtained by pre-training so as to determine an actual texture defect area and a defect type through the second network model.
Preferably, before the feature pooling is performed on the abstract texture features of the visible light image according to the texture defect area of the visible light image, the method further includes:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
Preferably, the training process of the first network model includes:
acquiring a first visible light image sample;
performing preprocessing operation on the first visible light image sample to obtain a second visible light image sample;
calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN (Feature Pyramid Network) structures and an RPN (Region Proposal Network), and the second basic network model comprises a plurality of small-receptive-field blocks;
and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain the defect region positioning branch and the defect feature extraction branch.
Preferably, the training process of the second network model includes:
acquiring a third visible light image sample;
performing a preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample;
inputting the fourth visible light image sample into the first network model;
identifying a texture defect region of the fourth visible light image sample through the defect region locating branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extracting branch;
performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample;
and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
Preferably, the preprocessing operation comprises one or more of image denoising, data augmentation, and scale transformation.
Preferably, the defect types include: oil leakage, rust, dirt, cracks and ice coating.
An apparatus for cross-device visible texture defect detection, the apparatus comprising:
the image acquisition module is used for acquiring a visible light image of the equipment to be detected;
the first image recognition module is used for inputting the visible light image into a first network model obtained by pre-training, and the first network model comprises a defect area positioning branch and a defect feature extraction branch; determining a texture defect region of the visible light image through the defect region positioning branch, and extracting abstract texture features of the visible light image through the defect feature extraction branch;
the image processing module is used for performing feature pooling on abstract texture features of the visible light image according to the texture defect area of the visible light image;
and the second image identification module is used for inputting the abstract texture features of the visible light image after the features are pooled into a second network model obtained by pre-training so as to determine the actual texture defect area and the defect type through the second network model.
Preferably, the image processing module is further configured to:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
Preferably, the process of training the first network model by the first image recognition module includes:
acquiring a first visible light image sample; performing a preprocessing operation on the first visible light image sample to obtain a second visible light image sample; calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN (Feature Pyramid Network) structures and an RPN (Region Proposal Network), and the second basic network model comprises a plurality of small-receptive-field blocks; and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain the defect region positioning branch and the defect feature extraction branch.
Preferably, the process of training the second network model by the second image recognition module includes:
acquiring a third visible light image sample; performing a preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample; inputting the fourth visible light image sample into the first network model; identifying a texture defect region of the fourth visible light image sample through the defect region locating branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extracting branch; performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample; and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a cross-device visible light texture defect detection method and device, which can identify an actual texture defect area and a defect type in a visible light image by utilizing a first network model and a second network model through shooting the visible light image by a device to be detected. Under the condition of ensuring the detection accuracy, the invention designs the visible light texture defect detection which can be carried out across equipment, has higher use flexibility and reduces the cost of defect detection.
Drawings
In order to illustrate the embodiments of the present invention or prior-art technical solutions more clearly, the drawings used in their description are briefly introduced below. It is obvious that the following drawings depict only embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a cross-device visible light texture defect detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of equipment corrosion defects provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a contamination defect of a device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating ice coating defects of an apparatus according to an embodiment of the present invention;
FIG. 5 is an example of a model architecture provided by an embodiment of the present invention;
FIG. 6 is an example of another model architecture provided by embodiments of the present invention;
FIG. 7 is a schematic diagram of feature pooling provided by an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a cross-device visible light texture defect detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Defect detection technology has very important practical significance in industry. In modern manufacturing, an enterprise often operates a large number of manufacturing devices, some of which are expensive; once problems occur during operation, they can cause huge economic losses and even production safety issues. Against this background, an effective defect detection technology helps people find equipment problems promptly, so that the equipment can be repaired as quickly as possible and its long-term stable, safe, and economical operation is guaranteed.
Current defect detection schemes follow several technical routes. Some researchers use the voltage and current values of equipment in operation as the basis for defect detection: when the voltage and current do not exceed their rated values, the equipment is considered to be operating normally; otherwise, a fault has occurred. Other researchers focus on the operating temperature of the equipment and judge whether it is running normally by analyzing that temperature. Beyond these methods, another important technical route adopts computer vision: an image is first obtained by means of a camera or similar device and then processed to obtain the final detection result.
However, many current vision-based defect detection solutions are designed to detect particular defect types on a particular class of equipment, or simply to determine whether a device is currently defective, and are difficult to apply to different devices. In view of this situation, the present invention considers the set of defects common to different devices, analyzes these defects, and proposes a scheme that can complete cross-device defect detection.
Taking the field of power equipment as an example, contamination defects may occur on equipment such as transformers, capacitors, instrument transformers, and insulators. Because this equipment is exposed to the natural environment all year round and affected by weather factors such as wind, sand, rain, and snow, contamination patches of various shapes form on it; their presence may degrade the electrical and mechanical performance of the equipment, reducing operating efficiency and even damaging the equipment. Corrosion defects are also common: the equipment often contains metal components that are exposed to the air for long periods, and weather can corrode them. Corrosion defects often affect the mechanical performance of the equipment; if corrosion occurs on a supporting structure, its failure can cause even more serious accidents. Other common defects in the field of electrical equipment include ice coating, oil leakage, and cracks.
Therefore, the common defects on different devices are analyzed, and a defect detection scheme is designed according to their characteristics, so that cross-device visible light texture defect detection can be completed based on detecting these common defects, breaking through the limitation that a different scheme must be designed for each type of equipment. Taking a visible light image as input, the invention obtains the approximate location and the type of the defect in the image through two network models.
Referring to a method flowchart of a cross-device visible light texture defect detection method shown in fig. 1, the cross-device visible light texture defect detection method includes the following steps:
and S10, acquiring a visible light image of the device to be detected.
In the embodiment of the invention, a visible light camera is used to collect the visible light image of the device to be detected. The visible light image can be preprocessed, for example denoised, to improve subsequent detection accuracy. Image denoising can use a filtering method such as Gaussian filtering or bilateral filtering.
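As a concrete illustration of the Gaussian denoising mentioned above (this sketch is not part of the patent; the function names and default sigma are assumptions), a separable Gaussian filter can be written in plain NumPy:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # normalized 1-D Gaussian kernel of length 2*radius + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    # separable Gaussian filtering: convolve rows, then columns
    r = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, r)
    padded = np.pad(img, r, mode="edge")  # replicate edges so borders are not darkened
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, rows)
```

Because the kernel is normalized and the borders are edge-padded, a constant image passes through unchanged, which is a quick sanity check for a denoising filter.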
It should be noted that the devices to be detected include all devices that may have texture defects with identification, and the defect types of the texture defects include, but are not limited to, oil leakage, rust, dirt, cracks, and ice coating. Fig. 2, fig. 3 and fig. 4 are schematic diagrams of equipment corrosion defects, contamination defects and ice coating defects provided by an embodiment of the present invention, respectively.
And S20, inputting the visible light image into a first network model obtained by pre-training, wherein the first network model comprises a defect area positioning branch and a defect feature extracting branch.
To address the bottleneck that different schemes are currently required for detecting defects on different equipment, the embodiment of the invention considers the common defect types that can appear on different devices and, using deep-learning image processing, proposes a cross-device visible light texture defect detection technique based on a dual-branch network (the first network model).
The training process of the first network model comprises the following steps:
acquiring a first visible light image sample; performing a preprocessing operation on the first visible light image sample to obtain a second visible light image sample; calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN (Feature Pyramid Network) structures and an RPN (Region Proposal Network), and the second basic network model comprises a plurality of small-receptive-field blocks; and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain a defect region positioning branch and a defect feature extraction branch.
In the embodiment of the present invention, after the first visible light image sample is obtained, image preprocessing including one or more of image denoising, data augmentation, and scale conversion may be performed on the first visible light image sample. Specifically, the method comprises the following steps:
Image denoising can use a filtering method such as Gaussian filtering or bilateral filtering. Data augmentation increases the scale of the training data and thus yields a better training result; related techniques include image rotation, flipping, cropping, and stitching, and augmentation greatly reduces the overfitting caused by insufficient data. Scale transformation converts input images to a similar size, so that the model can perform feature extraction and subsequent decision operations at the same scale.
During data augmentation, note that when an image is transformed, the coordinates of the defect region corresponding to the image must undergo the same transformation, so that the two remain consistent.
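To make the image-and-annotation consistency requirement concrete, here is a hedged NumPy sketch (the helper name and the (x1, y1, x2, y2) box convention are assumptions, not the patent's own code) of a horizontal flip that transforms the defect boxes along with the image:

```python
import numpy as np

def hflip_with_boxes(img, boxes):
    """Horizontally flip an image and its defect-region boxes together.

    img:   array of shape (H, W) or (H, W, C)
    boxes: array of shape (N, 4), each row (x1, y1, x2, y2) in pixels
    """
    h, w = img.shape[:2]
    flipped = img[:, ::-1].copy()
    out = boxes.copy()
    out[:, 0] = w - boxes[:, 2]  # new left edge mirrors the old right edge
    out[:, 2] = w - boxes[:, 0]  # new right edge mirrors the old left edge
    return flipped, out
```

Rotation and cropping would need analogous coordinate updates; flipping is shown because its box transform is the simplest to verify by eye.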
Further, after image preprocessing, the obtained second visible light image sample is input into the first basic network model corresponding to the defect region positioning branch. For the first basic network model, different network models can be selected according to the field of the detected equipment and the image resolution, such as VGG (Visual Geometry Group network), ResNet (Residual Network), SqueezeNet, DenseNet, or SENet (Squeeze-and-Excitation Network). Different types of defect region localization can also be selected, such as an RPN (Region Proposal Network) structure or direct convolutional regression.
Due to the particularity of visible light texture defects, defects such as oil leakage, contamination, cracks, and ice coating have no fixed or similar pattern in the shape of the defect region, while traditional localization networks rely heavily on shape features when completing localization, so traditional models perform poorly here. Therefore, in the defect region localization branch of the present invention, according to prior knowledge of the device size and the computed receptive field sizes at different levels of the localization model, a Feature Pyramid Network (FPN) structure is adopted to complete localization regression of the defect region at different levels.
Referring to the model architecture example shown in fig. 5, the first basic network model includes a plurality of convolution blocks; each convolution block may be composed of layers such as a convolution layer, a BN (Batch Normalization) layer, a pooling layer, and a ReLU (Rectified Linear Unit) activation layer, and the specific configuration can be set flexibly according to the defect set of the detected equipment and the image resolution. On this basis, a plurality of FPN levels are needed in the first basic network model to complete the localization of the defect region on different scales. With continued reference to fig. 5, the RPN processes the output of all the FPN levels, making predictions and regressions on the features the FPN outputs at different scales.
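To illustrate the FPN idea in isolation (a NumPy sketch under an assumed (C, H, W) layout; the patent's actual implementation is not specified at this level of detail), one top-down step upsamples the coarser feature map and fuses it with the same-resolution lateral map by element-wise addition:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor 2x upsampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(top_down, lateral):
    # one top-down FPN step: bring the coarser level up to the lateral
    # map's resolution, then fuse by element-wise addition
    up = upsample2x(top_down)
    assert up.shape == lateral.shape
    return up + lateral
```

Repeating this step down the pyramid produces the multi-scale maps on which the RPN then predicts and regresses defect regions.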
In addition, after image preprocessing, the obtained second visible light image sample is input into the second basic network model corresponding to the defect feature extraction branch. The design of the second basic network model may refer to common network models such as VGG, ResNet, and ShuffleNet, but to extract texture features, the number of layers and the convolution kernel sizes in these models need to be modified.
Due to the particularity of visible light texture defects, the selected second basic network model needs small receptive fields in order to capture texture features rather than shape features. See the model architecture example shown in fig. 6. The second basic network model comprises a plurality of small-receptive-field blocks, and each block may be composed of a convolution layer, a BN layer, a pooling layer, a ReLU activation layer, and the like. Because the defect feature extraction branch only needs texture features on a small scale and not global shape features on a large scale, the network structure of the second basic network model only needs to satisfy the small-receptive-field property, and a large number of 3 × 3 convolution kernels are replaced by 1 × 1 convolution kernels.
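The small-receptive-field point can be illustrated as follows (a NumPy sketch under an assumed (C, H, W) layout, not the patent's code): a 1 × 1 convolution mixes channels at each pixel independently, so its receptive field is a single pixel, which is why it favors texture over shape:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear map across channels.

    x: feature map of shape (C_in, H, W)
    w: weights of shape (C_out, C_in)
    """
    c_in, h, width = x.shape
    flat = x.reshape(c_in, h * width)  # each column is one pixel's channel vector
    return (w @ flat).reshape(w.shape[0], h, width)
```

With identity weights the map is unchanged, confirming that no spatial mixing occurs; a 3 × 3 kernel, by contrast, would blend each pixel with its neighbors.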
It should be noted that, for the training process of the first basic network model and the second basic network model, reference may be made to a deep learning image processing technique, which is not described herein again.
And S30, determining the texture defect area of the visible light image through the defect area positioning branch, and extracting the abstract texture feature of the visible light image through the defect feature extraction branch.
In some other embodiments, in order to remove the redundant area output by the defective area location branch, the embodiment of the present invention further includes the following steps:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
Specifically, because the results given by the RPN contain mutually overlapping redundant information, the embodiment of the present invention performs a Non-Maximum Suppression (NMS) operation on the texture defect regions output by the defect region localization branch, removing redundant localization results and retaining the more accurate ones; the NMS threshold is generally set between 0.3 and 0.5.
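A standard greedy NMS of the kind described above can be sketched in NumPy (an illustrative implementation using the 0.3 to 0.5 threshold range from the text; the (x1, y1, x2, y2) box convention is an assumption):

```python
import numpy as np

def iou(box, boxes):
    # intersection-over-union of one box against an array of boxes
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + areas - inter)

def nms(boxes, scores, thresh=0.4):
    # greedily keep the highest-scoring box, drop all remaining boxes that
    # overlap it by more than `thresh`, and repeat on the survivors
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thresh]
    return keep
```

With two nearly coincident boxes and one distant box, only the higher-scoring member of the overlapping pair and the distant box survive.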
And S40, performing feature pooling on the abstract texture features of the visible light image according to the texture defect area of the visible light image.
In the embodiment of the present invention, the feature pooling may be performed using an ROI (Region of Interest) Pooling structure or an SPP (Spatial Pyramid Pooling) structure, as needed.
See fig. 7 for a schematic view of feature pooling. A represents the abstract texture features of the visible light image obtained by defect feature extraction branching, which exist in the form of a feature map, B represents the area occupied by the texture defect area of the visible light image obtained by defect area positioning branching in the feature map, and C represents the feature map after feature pooling.
Specifically, feature pooling comprises two stages: cropping and pooling. The purpose of cropping is to make the pooled feature map contain only the features of the texture defect region, eliminating interference from the background; pooling is needed because the localization results for texture defect regions have different sizes, and without it the cropped feature maps would differ in size and could not be processed further. Therefore, the pooling operation in the embodiment of the present invention pools the cropped feature maps to a uniform size, for example roi_nums × channels × k × k, where k is the side length of the pooled feature map.
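The crop-then-pool procedure can be sketched as follows (a NumPy illustration under assumed (C, H, W) layout, integer feature-map coordinates, and regions at least k pixels per side; real ROI Pooling and SPP layers also handle fractional coordinates and gradients):

```python
import numpy as np

def roi_max_pool(feat, roi, k):
    """Crop a region from a feature map and max-pool it to a fixed k x k grid.

    feat: feature map of shape (C, H, W)
    roi:  (x1, y1, x2, y2) integer box in feature-map coordinates,
          assumed at least k pixels on each side
    """
    x1, y1, x2, y2 = roi
    crop = feat[:, y1:y2, x1:x2]  # crop stage: keep only defect-region features
    c = feat.shape[0]
    ys = np.linspace(0, crop.shape[1], k + 1).astype(int)
    xs = np.linspace(0, crop.shape[2], k + 1).astype(int)
    out = np.empty((c, k, k))
    for i in range(k):
        for j in range(k):
            cell = crop[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[:, i, j] = cell.max(axis=(1, 2))  # pool stage: one value per grid cell
    return out
```

Whatever the ROI size, the output is always k × k per channel, which is exactly what allows the second network model to accept defect regions of varying extent.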
And S50, inputting the abstract texture features of the visible light image after the features are pooled into a second network model obtained by pre-training, so as to determine the actual texture defect area and the defect type through the second network model.
In the embodiment of the invention, the second network model is a network structure with localization regression and classification functions; taking the pooled abstract texture features as input, it processes them to obtain the final region localization and type classification results for the texture defects.
The training process of the second network model comprises the following steps:
acquiring a third visible light image sample; performing preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample; inputting the fourth visible light image sample into the first network model; determining a texture defect area of the fourth visible light image sample through the defect area positioning branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extraction branch; performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample; and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
In the embodiment of the present invention, the preprocessing process of the third visible light image sample may refer to the preprocessing process of the first visible light image sample, and details are not described herein again.
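The preprocessing operation referred to here is specified later (claim 5) as one or more of image denoising, data augmentation, and scale transformation. A minimal NumPy sketch of such operations follows; the particular implementations (mean-filter denoising, horizontal-flip augmentation, nearest-neighbour rescaling) are illustrative assumptions, not the patent's prescribed methods:

```python
import numpy as np

def denoise(img, k=3):
    """Mean-filter denoising: average each pixel over a k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def augment(img):
    """Data augmentation: here, a simple horizontal flip."""
    return img[:, ::-1]

def rescale(img, out_h, out_w):
    """Nearest-neighbour scale transformation to (out_h, out_w)."""
    ys = np.arange(out_h) * img.shape[0] // out_h
    xs = np.arange(out_w) * img.shape[1] // out_w
    return img[np.ix_(ys, xs)]
```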
In addition, the processes of determining the texture defect region of the fourth visible light image sample through the defect region positioning branch, extracting the abstract texture features of the fourth visible light image sample through the defect feature extraction branch, and performing feature pooling on those abstract texture features according to the texture defect region of the fourth visible light image sample may all refer to the corresponding processing of the visible light image described above, and are not repeated here.
The third basic network model comprises a plurality of convolution blocks and a fully connected block; the number of convolution blocks is set according to the size of the feature-pooled feature map, so that the features of the entire defect region are comprehensively perceived and more accurate region localization and type classification can be made. Each convolution block may be composed of a convolution layer, a BN layer, a ReLU activation layer, and the like. Taking the feature-pooled feature map as input, the model serves two main functions: first, the convolution operations completed by the convolution blocks further integrate the overall texture features after feature pooling; second, the fully connected block aggregates the final convolved features, and softmax converts the output of the fully connected block from an unnormalized score sequence into prediction probabilities for the corresponding classes, thereby producing the final region localization result and type classification result for the texture defects. The region localization result has dimension num_classes × 4, and the type classification result has dimension num_classes × 1.
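The fully connected block and softmax stage described above can be illustrated with a minimal NumPy sketch. The weights here are random placeholders rather than trained parameters, and `detection_head` and its signature are assumptions; the sketch only demonstrates the output shapes (num_classes × 4 box regression and num_classes × 1 class probabilities per region) and the softmax score-to-probability conversion:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def detection_head(pooled, num_classes, rng):
    """Map feature-pooled maps (ROI_nums, channels, k, k) to a
    (ROI_nums, num_classes, 4) box-regression output and a
    (ROI_nums, num_classes) class-probability output."""
    roi_nums = pooled.shape[0]
    flat = pooled.reshape(roi_nums, -1)            # input to fully connected block
    d = flat.shape[1]
    w_box = rng.standard_normal((d, num_classes * 4)) * 0.01  # placeholder weights
    w_cls = rng.standard_normal((d, num_classes)) * 0.01
    boxes = (flat @ w_box).reshape(roi_nums, num_classes, 4)  # num_classes x 4
    probs = softmax(flat @ w_cls)                  # num_classes x 1 per region
    return boxes, probs
```

Each row of `probs` sums to 1, i.e. softmax has turned the raw fully-connected scores into a probability distribution over defect types.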
It should be noted that, in practical applications, the first network model and the second network model in the embodiment of the present invention may be integrated into a single model that includes the defect region positioning branch and the defect feature extraction branch. The feature pooling operation may be implemented by constructing a feature pooling module: the outputs of the defect region positioning branch and the defect feature extraction branch serve as the inputs of the feature pooling module, and the output of the feature pooling module serves as the input of the second network model.
According to the cross-device visible light texture defect detection method provided by the embodiment of the invention, a visible light image of the device to be detected is captured, and the actual texture defect region and defect type in the image are identified using the first network model and the second network model. While ensuring detection accuracy, the invention enables visible light texture defect detection to be performed across devices, offering greater flexibility of use and reducing the cost of defect detection.
Fig. 8 is a schematic structural diagram of the cross-device visible light texture defect detection apparatus. Based on the cross-device visible light texture defect detection method provided by the above embodiment, an embodiment of the present invention correspondingly provides a cross-device visible light texture defect detection apparatus for executing the above method. The apparatus comprises:
the image acquisition module 10 is used for acquiring a visible light image of the equipment to be detected;
the first image detection module 20 is configured to input the visible light image into a first network model obtained through pre-training, where the first network model includes a defect region locating branch and a defect feature extracting branch; determining a texture defect area of the visible light image through the defect area positioning branch, and extracting abstract texture features of the visible light image through the defect feature extraction branch;
the image processing module 30 is configured to perform feature pooling on abstract texture features of the visible light image according to the texture defect region of the visible light image;
the second image detection module 40 is configured to input the abstract texture features after the features of the visible light image are pooled into a second network model obtained through pre-training, so as to determine an actual texture defect area and a defect type through the second network model.
Optionally, the image processing module 30 is further configured to:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
Optionally, the process of training the first network model by the first image detection module 20 includes:
acquiring a first visible light image sample; performing a preprocessing operation on the first visible light image sample to obtain a second visible light image sample; calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN feature pyramid networks, and an RPN region proposal network, and the second basic network model comprises a plurality of small receptive field blocks; and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain the defect region positioning branch and the defect feature extraction branch.
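The RPN region proposal network named above scores candidate regions derived from anchors laid over the feature maps. As an illustrative aside, a single-level anchor grid can be generated as follows; the stride and anchor size are assumptions, and a real RPN would use several scales and aspect ratios per FPN level:

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride, size):
    """Generate one square anchor (x1, y1, x2, y2) per feature-map cell,
    as a region proposal network (RPN) would, at a single FPN level."""
    cy, cx = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    cx = (cx + 0.5) * stride   # cell centre in image coordinates
    cy = (cy + 0.5) * stride
    half = size / 2
    return np.stack([cx - half, cy - half, cx + half, cy + half],
                    axis=-1).reshape(-1, 4)
```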
Optionally, the process of training the second network model by the second image detection module 40 includes:
acquiring a third visible light image sample; performing preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample; inputting the fourth visible light image sample into the first network model; determining a texture defect area of the fourth visible light image sample through the defect area positioning branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extraction branch; performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample; and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
The cross-device visible light texture defect detection apparatus provided by the embodiment of the invention enables, while ensuring detection accuracy, visible light texture defect detection to be performed across devices, offering greater flexibility of use and reducing the cost of defect detection.
The cross-device visible light texture defect detection method and apparatus provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the above embodiments is intended only to help in understanding the method of the invention and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for the same or similar parts among the embodiments, reference may be made to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is brief, and for relevant points reference may be made to the description of the method.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A cross-device visible light texture defect detection method, the method comprising:
acquiring a visible light image of a device to be detected;
inputting the visible light image into a first network model obtained by pre-training, wherein the first network model comprises a defect area positioning branch and a defect feature extraction branch;
determining a texture defect region of the visible light image through the defect region positioning branch, and extracting abstract texture features of the visible light image through the defect feature extraction branch;
performing feature pooling on abstract texture features of the visible light image according to the texture defect area of the visible light image;
and inputting the abstract texture features of the visible light image after the features are pooled into a second network model obtained by pre-training so as to determine an actual texture defect area and a defect type through the second network model.
2. The method of claim 1, wherein prior to feature pooling abstract texture features of the visible light image according to the texture defect region of the visible light image, the method further comprises:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
3. The method of claim 1, wherein the training process of the first network model comprises:
acquiring a first visible light image sample;
performing preprocessing operation on the first visible light image sample to obtain a second visible light image sample;
calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN feature pyramid networks, and an RPN region proposal network, and the second basic network model comprises a plurality of small receptive field blocks;
and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain the defect region positioning branch and the defect feature extraction branch.
4. The method of claim 1, wherein the training process of the second network model comprises:
acquiring a third visible light image sample;
performing a preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample;
inputting the fourth visible light image sample into the first network model;
determining a texture defect area of the fourth visible light image sample through the defect area positioning branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extraction branch;
performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample;
and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
5. The method according to claim 3 or 4, characterized in that the preprocessing operation comprises one or more of: image denoising, data augmentation, and scale transformation.
6. The method of claim 1, wherein the defect types comprise: oil leakage, rust, dirt, cracks and ice coating.
7. An apparatus for cross-device visible light texture defect detection, the apparatus comprising:
the image acquisition module is used for acquiring a visible light image of the equipment to be detected;
the first image detection module is used for inputting the visible light image into a first network model obtained by pre-training, and the first network model comprises a defect area positioning branch and a defect feature extraction branch; determining a texture defect region of the visible light image through the defect region positioning branch, and extracting abstract texture features of the visible light image through the defect feature extraction branch;
the image processing module is used for performing feature pooling on abstract texture features of the visible light image according to the texture defect area of the visible light image;
and the second image detection module is used for inputting the abstract texture features of the visible light image after the features are pooled into a second network model obtained by pre-training so as to determine the actual texture defect area and the defect type through the second network model.
8. The apparatus of claim 7, wherein the image processing module is further configured to:
and carrying out non-maximum suppression on the texture defect area of the visible light image.
9. The apparatus of claim 7, wherein the process of the first image detection module training the first network model comprises:
acquiring a first visible light image sample; performing a preprocessing operation on the first visible light image sample to obtain a second visible light image sample; calling a first basic network model corresponding to the defect region positioning branch and a second basic network model corresponding to the defect feature extraction branch, wherein the first basic network model comprises a plurality of convolution blocks, a plurality of FPN feature pyramid networks, and an RPN region proposal network, and the second basic network model comprises a plurality of small receptive field blocks; and simultaneously inputting the second visible light image sample into the first basic network model and the second basic network model for model training to obtain the defect region positioning branch and the defect feature extraction branch.
10. The apparatus of claim 7, wherein the process of the second image detection module training the second network model comprises:
acquiring a third visible light image sample; performing a preprocessing operation on the third visible light image sample to obtain a fourth visible light image sample; inputting the fourth visible light image sample into the first network model; determining a texture defect area of the fourth visible light image sample through the defect area positioning branch, and extracting abstract texture features of the fourth visible light image sample through the defect feature extraction branch; performing feature pooling on abstract texture features of the fourth visible light image sample according to the texture defect area of the fourth visible light image sample; and calling a third basic network model corresponding to the second network model, and inputting the abstract texture features after the feature pooling of the fourth visible light image sample, and the actual texture defect area labels and the actual defect type labels corresponding to the abstract texture features into the third basic network model to obtain the second network model.
CN202011472397.6A 2020-12-14 2020-12-14 Cross-device visible light texture defect detection method and device Pending CN112419316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011472397.6A CN112419316A (en) 2020-12-14 2020-12-14 Cross-device visible light texture defect detection method and device

Publications (1)

Publication Number Publication Date
CN112419316A true CN112419316A (en) 2021-02-26

Family

ID=74776194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011472397.6A Pending CN112419316A (en) 2020-12-14 2020-12-14 Cross-device visible light texture defect detection method and device

Country Status (1)

Country Link
CN (1) CN112419316A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049507A (en) * 2021-11-19 2022-02-15 国网湖南省电力有限公司 Distribution network line insulator defect identification method, equipment and medium based on twin network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829893A (en) * 2019-01-03 2019-05-31 武汉精测电子集团股份有限公司 A kind of defect object detection method based on attention mechanism
CN110097053A (en) * 2019-04-24 2019-08-06 上海电力学院 A kind of power equipment appearance defect inspection method based on improvement Faster-RCNN
CN110570410A (en) * 2019-09-05 2019-12-13 河北工业大学 Detection method for automatically identifying and detecting weld defects
CN110827251A (en) * 2019-10-30 2020-02-21 江苏方天电力技术有限公司 Power transmission line locking pin defect detection method based on aerial image
CN111339882A (en) * 2020-02-19 2020-06-26 山东大学 Power transmission line hidden danger detection method based on example segmentation
CN111444939A (en) * 2020-02-19 2020-07-24 山东大学 Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN111524135A (en) * 2020-05-11 2020-08-11 安徽继远软件有限公司 Image enhancement-based method and system for detecting defects of small hardware fittings of power transmission line
CN111797890A (en) * 2020-05-18 2020-10-20 中国电力科学研究院有限公司 Method and system for detecting defects of power transmission line equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU CHAOFAN: "Algorithm Design and Implementation of an Industrial Appearance Defect Detection Platform Based on Stereo Light Sources", Master's Electronic Journals, 15 January 2020 (2020-01-15), pages 13 - 19 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination