CN117152139A - Patch inductance defect detection method based on example segmentation technology - Google Patents
Patch inductance defect detection method based on example segmentation technology
- Publication number: CN117152139A
- Application number: CN202311414253.9A
- Authority: CN (China)
- Prior art keywords: inductance, patch, bounding box, defect, detection model
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004—Industrial image inspection
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06T7/11—Region-based segmentation
- G06T7/12—Edge-based segmentation
- G06V10/764—Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82—Recognition using pattern recognition or machine learning, using neural networks
- G06T2207/20081—Training; Learning
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention belongs to the technical field of computer vision detection and specifically discloses a patch inductance defect detection method based on instance segmentation technology, comprising the following steps: collecting various patch inductance images, then preprocessing and labeling them to obtain a patch inductance image data set; constructing a network detection model based on instance segmentation technology and training it with the training set; verifying the trained network detection model to obtain the final network detection model; and preprocessing patch inductance images acquired in real-time production and inputting them into the final network detection model to detect patch inductance defects in real time. By detecting patch inductance defects with computer vision, the invention effectively improves detection efficiency and classifies the defective patch inductors it detects; it also improves detection stability and accuracy, enables real-time detection during production, and helps raise the yield of the produced products.
Description
Technical Field
The invention belongs to the technical field of computer vision detection, and particularly relates to a patch inductance defect detection method based on an example segmentation technology.
Background
Chip inductors are key components in electronic devices, and their performance directly affects the stability, reliability, and lifetime of the device. In electronic equipment manufacturing, the quality requirements on chip inductors are therefore high, and strict inspection is required. Defect detection on chip inductors allows defective products to be found and removed in time, improving the overall yield. The traditional inspection method relies mainly on manual visual inspection: operators observe chip inductors one by one on the production line to identify and mark defective products. Manual inspection is inefficient and is easily affected by factors such as worker fatigue and experience, making the results unstable and prone to missed and false detections. Eddy current, magnetic flux leakage, and infrared detection are also used at present; however, eddy current and magnetic flux leakage detection tend to produce false detections on rough surfaces, while infrared detection is more limited and is usually only suitable for small-scale offline inspection, so none of these methods meets the real-time monitoring requirements of current industrial production. Instance segmentation networks perform well on standard data sets, and the widely used YOLOv7 employs deeper network structures, finer-grained feature pyramids, and more effective loss functions, giving it higher accuracy on object detection tasks. However, because chip inductors exhibit diverse defect types and relatively small targets, they cannot be reliably detected by applying such instance segmentation networks directly.
Disclosure of Invention
The invention aims to solve the technical problem of providing a patch inductance defect detection method based on an example segmentation technology, which can effectively improve the detection efficiency and improve the detection stability and accuracy.
The technical scheme adopted by the invention is as follows: a patch inductance defect detection method based on an example segmentation technology comprises the following steps:
step 1: shooting and collecting various patch inductance images through a vision camera, preprocessing the patch inductance images, marking the patch inductance images with patch inductance defects after preprocessing, obtaining a patch inductance image data set, and dividing the patch inductance image data set into a training set and a test set;
step 2: constructing a network detection model based on an instance segmentation technology, wherein the network detection model comprises a backbone network and a head network;
the CA-ELAN module is added into the backbone network layer;
adding, in each of the final three prediction heads in the head network, a residual network layer containing a 3×3 convolution kernel and a 1×1 convolution kernel, with batch normalization added as a pre-operation before the LeakyReLU activation function; this improves training speed and raises the prediction accuracy of the model while preserving its prediction speed;
step 3: inputting the training set in the step 1 into a network detection model, training the network detection model, continuously calculating a total loss function in training, and continuously updating detection model parameters through small-batch gradient descent to obtain a trained network detection model;
step 4: inputting the test set in the step 1 into the trained network detection model for verification, and obtaining a final network detection model after verification;
step 5: preprocessing patch inductance images acquired in real-time production, and inputting the preprocessed patch inductance images into a final network detection model for visual real-time detection of the patch inductance defects;
step 6: and (3) updating the patch inductance image data set of the network detection model at regular intervals according to the detection result in the step (5), and retraining the network detection model with the updated patch inductance image data set.
Preferably, the preprocessing of the patch inductance image in step 1 includes: and sequentially scaling, denoising and enhancing the patch inductance image.
Preferably, the patch inductance image data set obtained in step 1 further adopts mosaic data enhancement, that is, 4 patch inductance images are randomly extracted from the patch inductance image data set to be combined into a new patch inductance image, and the new patch inductance image is added into the patch inductance image data set.
Preferably, the patch inductance defect in step 1 includes a magnetic ring dark crack defect, an electrode copper exposure defect, an electrode line exposure defect and a magnetic ring breakage defect.
Preferably, the CA-ELAN module is formed by introducing a CA module into the E-ELAN module of the backbone network layer; the CA module acquires the width and height of the patch inductance image and encodes the accurate position, so that the network captures multi-scale context information of the patch inductance image; the E-ELAN module then performs feature fusion to obtain patch inductance defect image features that carry context information and fuse the multi-scale information;
the CA module is a coordinate attention mechanism module and is a module for enhancing the perception of the position information by the neural network; in the traditional attention mechanism, attention weight is calculated only through channel information of the features, but position information is not considered, however, in target detection and image segmentation of a patch inductance image, the position information is very important for correctly understanding and processing image content, therefore, by introducing a CA module, a network detection model can automatically learn which feature channels are more important for a target detection task, the network detection model can weight different channels according to the importance of the feature channels, so that the network detection model pays more attention to key features contributing to target detection, influences of noise and useless features are reduced, namely, by introducing the CA module, the model is helped to automatically select the important feature channels, the target expression capability is improved, the performance of target detection is improved, and the model can more effectively utilize the feature information, thereby improving the precision and effect of the target detection task, namely, the model can be positioned to the features useful for defect detection, and the useless features are restrained.
Preferably, the CA module comprises the following steps:
step a, inputting a feature map X of size C×H×W, and performing global average pooling with pooling kernels of sizes (1, W) and (H, 1) to obtain the horizontal-direction and vertical-direction feature vectors encoding the global averages along the channel dimension:

z_c^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_c(h, i), \quad z_c^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, w)

wherein H denotes the height, W the width, and C the number of channels; z_c^h(h) and z_c^w(w) denote the global averages of the input feature map X on the c-th channel along each direction, and x_c(i, j) denotes the feature information of the c-th channel at coordinates (i, j);

step b, applying a convolution with a kernel of size 1×1, batch normalization, and the LeakyReLU activation function to the concatenated horizontal and vertical feature vectors obtained in step a for feature mapping:

f = \delta(\mathrm{BN}(F_1([z^h, z^w])))

wherein f denotes the attention response of the input feature map X, F_1 denotes a 1×1 convolution kernel, and δ the LeakyReLU activation;

step c, decomposing the mapped feature f of step b into two independent features f^h and f^w along the horizontal and vertical directions according to the original W and H, and performing feature conversion on each with its own 1×1 convolution kernel and the LeakyReLU activation function:

g^h = \delta(F_h(f^h)), \quad g^w = \delta(F_w(f^w))

wherein g^h and g^w are the attention weights of the input patch inductance defect feature map in the horizontal and vertical directions respectively, and F_h, F_w are convolution kernels of size 1×1;

step d, obtaining the final feature vector:

y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)

which is the finally output chip inductor defect feature map.
Preferably, the total loss function in step 3 includes a category loss function, a positioning loss function, and a target confidence loss function.
Preferably, a width-height loss is introduced into the positioning loss function so that the difference between the width and height of the real bounding box and those of the prediction bounding box is minimized, specifically:

L_{CIoU} = 1 - IOU + \frac{\rho^2(b, b_{gt})}{c^2} + \frac{\rho^2(w_p, w_{gt})}{C_w^2} + \frac{\rho^2(h_p, h_{gt})}{C_h^2}

wherein b is the prediction box and b_gt the real bounding box; IOU is the degree of coverage between the prediction bounding box and the real bounding box; ρ²(b, b_gt) is the square of the Euclidean distance between the centre points of the prediction bounding box and the actual bounding box; c² is the square of the diagonal length of the smallest closed rectangle containing the prediction bounding box and the actual bounding box; ρ²(w_p, w_gt) is the square of the distance between the width of the prediction bounding box and the width of the actual bounding box; ρ²(h_p, h_gt) is the square of the distance between their heights; C_w² and C_h² are the squares of the width and height of the smallest closed rectangle containing both boxes; w_gt and h_gt are the width and height of the actual bounding box, and w_p and h_p are the width and height of the prediction bounding box.
The invention has the following beneficial effects: detecting patch inductance defects with computer vision effectively improves detection efficiency and allows the detected defective patch inductors to be classified; it also improves detection stability and accuracy, enables real-time detection during the production process, and helps raise the yield of the produced products.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a backbone network in a network detection model of the present invention;
FIG. 3 is a block diagram of a header network in the network detection model of the present invention;
FIG. 4 is a block diagram of a CA-ELAN module in a network detection model according to the present invention;
fig. 5 is a block diagram of a CA module in the CA-ELAN module of the present invention.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and specific examples.
As shown in figs. 1 to 5, the patch inductance defect detection method based on instance segmentation technology provided in this embodiment includes the following steps:
step 1: shooting and collecting various patch inductance images with a vision camera; the collected images include defect-free patch inductance images and defective patch inductance images. The rules defining a defective patch inductance image were formulated by consulting the factory's production standard documents and through detailed discussion with front-line workers at the factory; the defective images comprise patch inductance images with magnetic ring dark crack defects, electrode copper exposure defects, electrode line exposure defects, and magnetic ring breakage defects;
constructing a patch inductance image data set: collecting enough defect-free patch inductance images and enough defective patch inductance images of each kind; preprocessing the collected defective images, i.e. first scaling the images to 640×640 pixels and then performing denoising and image enhancement operations in turn; and labeling the preprocessed defective chip inductor images with the position, type, and defect class of the chip inductor to obtain the patch inductance image data set. Meanwhile, mosaic data enhancement is applied to the preprocessed images: 4 images are randomly selected from the patch inductance images and combined into a new image, which is then labeled and added to the data set as new training data. Combining images in this way provides the model with more context information during the training stage and thus improves its generalization ability;
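The mosaic combination described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a simple nearest-neighbour resize and a fixed 2×2 layout, and omits the remapping of defect labels onto the combined image.

```python
import numpy as np

def mosaic(images, out_size=640):
    """Combine 4 images into one 2x2 mosaic with side length out_size."""
    assert len(images) == 4
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    slots = [(0, 0), (0, half), (half, 0), (half, half)]  # (row, col) of each quadrant
    for img, (y, x) in zip(images, slots):
        # naive nearest-neighbour resize of img to the quadrant size
        h, w = img.shape[:2]
        ys = np.arange(half) * h // half
        xs = np.arange(half) * w // half
        canvas[y:y + half, x:x + half] = img[ys][:, xs]
    return canvas
```

In a real pipeline the bounding-box annotations of the four source images would be scaled and offset into the quadrant coordinates at the same time.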
dividing the constructed patch inductance image data set into a training set and a testing set according to the proportion of 8:2;
step 2: constructing a network detection model based on an instance segmentation technology, wherein the network detection model comprises a backbone network and a head network; inputting images in a patch inductance image data set into a backbone network, introducing a CA module into an E-ELAN module in the backbone network, acquiring the width and the height of the patch inductance image through the CA module, and coding the accurate position to enable the network to capture the context information of the patch inductance image with multi-scale information, wherein the method comprises the following specific steps:
step a, inputting a feature map X of size C×H×W, and performing global average pooling with pooling kernels of sizes (1, W) and (H, 1) to obtain the horizontal-direction and vertical-direction feature vectors encoding the global averages along the channel dimension:

z_c^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_c(h, i), \quad z_c^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, w)

wherein H denotes the height, W the width, and C the number of channels; z_c^h(h) and z_c^w(w) denote the global averages of the input feature map X on the c-th channel along each direction, and x_c(i, j) denotes the feature information of the c-th channel at coordinates (i, j);

step b, applying a convolution with a kernel of size 1×1, batch normalization, and the LeakyReLU activation function to the concatenated horizontal and vertical feature vectors obtained in step a for feature mapping:

f = \delta(\mathrm{BN}(F_1([z^h, z^w])))

wherein f denotes the attention response of the input feature map X, F_1 denotes a 1×1 convolution kernel, and δ the LeakyReLU activation;

step c, decomposing the mapped feature f of step b into two independent features f^h and f^w along the horizontal and vertical directions according to the original W and H, and performing feature conversion on each with its own 1×1 convolution kernel and the LeakyReLU activation function:

g^h = \delta(F_h(f^h)), \quad g^w = \delta(F_w(f^w))

wherein g^h and g^w are the attention weights of the input patch inductance defect feature map in the horizontal and vertical directions respectively, and F_h, F_w are convolution kernels of size 1×1;

step d, obtaining the final feature vector:

y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)

which is the finally output chip inductor defect feature map;
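The coordinate-attention steps a to d can be sketched numerically as follows. This is an illustrative NumPy sketch, not the patented implementation: the weight matrices `w1`, `wh`, `ww` stand in for the 1×1 convolutions, batch normalization is omitted, and sigmoid gates are used for the final attention weights as in the original coordinate-attention formulation.

```python
import numpy as np

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w1, wh, ww):
    """x: feature map of shape (C, H, W).
    w1: (Cr, C) shared 1x1 conv; wh, ww: (C, Cr) direction-wise 1x1 convs."""
    C, H, W = x.shape
    zh = x.mean(axis=2)                 # step a: pool along width  -> (C, H)
    zw = x.mean(axis=1)                 # step a: pool along height -> (C, W)
    # step b: shared 1x1 conv + activation on the concatenated vectors
    f = leaky_relu(w1 @ np.concatenate([zh, zw], axis=1))   # (Cr, H+W)
    # step c: split back into the two directions and gate each one
    fh, fw = f[:, :H], f[:, H:]
    gh = sigmoid(wh @ fh)               # horizontal attention weights (C, H)
    gw = sigmoid(ww @ fw)               # vertical attention weights   (C, W)
    # step d: reweight every position by both directional gates
    return x * gh[:, :, None] * gw[:, None, :]
```

Because both gates lie in (0, 1), the output feature map is an element-wise attenuated copy of the input, emphasising rows and columns the gates rate highly.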
Then, carrying out feature fusion on the output patch inductance defect feature map through an E-ELAN module to obtain patch inductance defect image features with context information and fused with multi-scale information;
The backbone network outputs three feature maps of different sizes. After being processed by the RepVGG module, each feature map passes through one residual network layer containing a 3×3 convolution kernel and a 1×1 convolution kernel, with batch normalization added as a pre-operation before the LeakyReLU activation function, and then through one convolution layer, finally outputting feature maps of sizes 20×20×27, 40×40×27, and 80×80×27 respectively;
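The parallel 3×3 and 1×1 convolution branches mentioned above can, RepVGG-style, be collapsed into a single 3×3 convolution at inference time. Below is a minimal sketch of that kernel merge only; bias and batch-norm folding are omitted, and the kernel layout is assumed to be out-channels × in-channels × k × k.

```python
import numpy as np

def merge_1x1_into_3x3(k3, k1):
    """Pad a 1x1 kernel to 3x3 and add it to the 3x3 kernel, so the two
    parallel conv branches collapse into one equivalent 3x3 conv."""
    k1_padded = np.zeros_like(k3)
    k1_padded[:, :, 1, 1] = k1[:, :, 0, 0]   # 1x1 weight sits at the kernel centre
    return k3 + k1_padded
```

Merging the branches this way keeps the multi-branch representational benefit during training while paying only the cost of a single convolution at deployment, which matters for the real-time detection goal.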
step 3: inputting the training set in the step 1 into a network detection model, training the network detection model, and continuously calculating a total loss function in training, wherein the total loss function comprises a category loss function, a positioning loss function and a target confidence loss function;
the total loss function:

L(t_p, t_{gt}) = \sum_{k=0}^{K} \alpha_k \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{kij}^{obj} \left( \lambda_1 L_{CIoU} + \lambda_2 L_{obj} + \lambda_3 L_{cls} \right)

wherein t_p is the prediction vector and t_gt the ground-truth vector; K is the number of output feature maps, S² the number of grids, and B the number of anchor boxes on each grid; L_CIoU is the positioning loss function, L_obj the target confidence loss function, and L_cls the class loss function; λ_1, λ_2, λ_3 are the weights of the corresponding terms; 1_{kij}^{obj} indicates whether the j-th anchor box of the i-th grid on the k-th output feature map is a positive sample (1 if positive, 0 otherwise); α_k balances the weight of the output feature map of each scale, corresponding in turn to the 80×80×27, 40×40×27, and 20×20×27 output feature maps;
the category loss function:

L_{cls} = -\sum_{i=1}^{C} w_i \left[ c_{gt} \log(\sigma(c_p)) + (1 - c_{gt}) \log(1 - \sigma(c_p)) \right]

wherein c_p is the predicted class score of the prediction box, c_gt the true category of the target box, C the number of categories, σ the sigmoid function, and w_i the weight of the i-th category;
the positioning loss function introduces a width-height loss, specifically:

L_{CIoU} = 1 - IOU + \frac{\rho^2(b, b_{gt})}{c^2} + \frac{\rho^2(w_p, w_{gt})}{C_w^2} + \frac{\rho^2(h_p, h_{gt})}{C_h^2}

wherein b is the prediction box and b_gt the real bounding box; IOU is the degree of coverage between the prediction bounding box and the real bounding box; ρ²(b, b_gt) is the square of the Euclidean distance between the centre points of the prediction bounding box and the actual bounding box; c² is the square of the diagonal length of the smallest closed rectangle containing the prediction bounding box and the actual bounding box; ρ²(w_p, w_gt) is the square of the distance between the width of the prediction bounding box and the width of the actual bounding box; ρ²(h_p, h_gt) is the square of the distance between their heights; C_w² and C_h² are the squares of the width and height of the smallest closed rectangle containing both boxes; w_gt and h_gt are the width and height of the actual bounding box, and w_p and h_p are the width and height of the prediction bounding box;
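The positioning loss with the width-height terms can be computed as follows for a single box pair. This is a minimal sketch under stated assumptions: boxes are in corner format (x1, y1, x2, y2), no gradient machinery is included, and degenerate zero-area boxes are not handled.

```python
def eiou_loss(box_p, box_gt):
    """Localisation loss: 1 - IoU + centre term + width term + height term."""
    # intersection over union
    xi1, yi1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    xi2, yi2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    iou = inter / (area_p + area_g - inter)
    # squared centre distance over squared diagonal of the enclosing box
    cxp, cyp = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cxg, cyg = (box_gt[0] + box_gt[2]) / 2, (box_gt[1] + box_gt[3]) / 2
    cw = max(box_p[2], box_gt[2]) - min(box_p[0], box_gt[0])
    ch = max(box_p[3], box_gt[3]) - min(box_p[1], box_gt[1])
    rho2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2
    # width-height terms over the enclosing box's squared width and height
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_gt[2] - box_gt[0], box_gt[3] - box_gt[1]
    return (1 - iou + rho2 / (cw ** 2 + ch ** 2)
            + (wp - wg) ** 2 / cw ** 2 + (hp - hg) ** 2 / ch ** 2)
```

A perfectly matched pair yields a loss of zero, while the width and height terms keep penalizing shape mismatch even when the centres already coincide.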
the target confidence loss function:

L_{obj} = -w_{obj} \left[ p_{iou} \log(\sigma(p_o)) + (1 - p_{iou}) \log(1 - \sigma(p_o)) \right]

wherein p_o is the target confidence score in the prediction box, p_iou the IOU value between the prediction box and its corresponding target box, and w_obj the weight of positive samples;
after the total loss function is calculated, the detection model is optimized by mini-batch gradient descent, i.e. the parameters are updated with a small portion of the patch inductance images each time; this updates the network weights, accelerates network convergence, and reduces oscillation during the convergence process, thereby improving the precision of the detection model and yielding the trained network detection model;
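The mini-batch update rule itself can be illustrated on a toy least-squares problem; this sketch only demonstrates the parameter-update loop (shuffle, slice into batches, step along the batch gradient), not the patent's training code.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch=8, epochs=50, seed=0):
    """Fit w for y ~ X @ w by mini-batch gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)          # reshuffle the data each epoch
        for s in range(0, n, batch):
            b = idx[s:s + batch]          # indices of the current mini-batch
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad                # step against the batch gradient
    return w
```

Updating on small batches rather than the full set keeps each step cheap while the per-batch gradient noise is averaged out over many steps, which is the convergence-smoothing behaviour the description refers to.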
step 4: inputting the test set in the step 1 into the trained network detection model for verification, and obtaining a final network detection model after verification;
step 5: preprocessing the patch inductance images acquired in real-time production and inputting them into the final network detection model; the model predicts possible defects on the patch inductance in real time and outputs a prediction result comprising the position and size of each bounding box and the corresponding defect type. To improve detection accuracy, the detection results can be post-processed: first, a non-maximum suppression algorithm screens out the most reliable bounding boxes according to their confidence and eliminates bounding boxes with high overlap; then the processed results are drawn on the original image with the drawing-frame component of the OpenCV framework, which draws a bounding box around each patch inductance defect area in the original image and marks the corresponding defect type beside the box;
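The non-maximum suppression step can be sketched as follows, assuming corner-format boxes (x1, y1, x2, y2) and an IoU threshold of 0.5; the subsequent OpenCV drawing step is omitted.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of x1, y1, x2, y2; returns kept indices, best first."""
    order = np.argsort(scores)[::-1]      # process boxes from highest confidence down
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the current box against all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]   # drop boxes overlapping the kept one
    return keep
```

The surviving boxes would then be drawn on the original image, e.g. with OpenCV's `cv2.rectangle` and `cv2.putText`, labelling each defect region with its class.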
step 6: after the network detection model is used for a period of time, new defects may be generated due to changes of production equipment, raw materials or processes, and in order to ensure that the network detection model can still maintain high recognition performance when facing the new defect characteristics, the patch inductance image sample set needs to be updated in time, and the network detection model needs to be trained and fine-tuned again.
Claims (8)
1. The patch inductance defect detection method based on the example segmentation technology is characterized by comprising the following steps of:
step 1: shooting and collecting various patch inductance images through a vision camera, preprocessing the patch inductance images, marking the patch inductance images with patch inductance defects after preprocessing, obtaining a patch inductance image data set, and dividing the patch inductance image data set into a training set and a test set;
step 2: constructing a network detection model based on an instance segmentation technology, wherein the network detection model comprises a backbone network and a head network;
the CA-ELAN module is added into the backbone network layer;
adding, in each of the final three prediction heads in the head network, a residual network layer containing a 3×3 convolution kernel and a 1×1 convolution kernel, with batch normalization added as a pre-operation before the LeakyReLU activation function;
step 3: inputting the training set in the step 1 into a network detection model, training the network detection model, continuously calculating a total loss function in training, and continuously updating detection model parameters through small-batch gradient descent to obtain a trained network detection model;
step 4: inputting the test set in the step 1 into the trained network detection model for verification, and obtaining a final network detection model after verification;
step 5: preprocessing a patch inductance image acquired in real-time production, and inputting the patch inductance image into a final network detection model for visual real-time detection of patch inductance defects;
step 6: and (3) updating the patch inductance image data set of the network detection model at regular intervals according to the detection result in the step (5), and retraining the network detection model with the updated patch inductance image data set.
2. The method for detecting a chip inductance defect based on an instance segmentation technique according to claim 1, wherein the preprocessing of the chip inductance image in step 1 includes: and sequentially scaling, denoising and enhancing the patch inductance image.
3. The method for detecting chip inductor defects based on the instance segmentation technique according to claim 1 or 2, wherein the chip inductor image dataset obtained in step 1 is further augmented by mosaic data augmentation: four chip inductor images are randomly drawn from the dataset and combined into one new chip inductor image, and the new image is added to the dataset.
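The mosaic step of claim 3 can be sketched as follows: four randomly chosen images are resized to a common quadrant size and tiled 2×2 into one new image. This is a simplified, hypothetical version; a real mosaic augmentation would also remap the defect bounding-box labels into the new image, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mosaic(images, quad=32):
    # pick 4 distinct images and tile them into a 2x2 grid
    picks = rng.choice(len(images), size=4, replace=False)
    tiles = []
    for k in picks:
        img = images[k]
        rows = np.arange(quad) * img.shape[0] // quad  # nearest-neighbour resize
        cols = np.arange(quad) * img.shape[1] // quad
        tiles.append(img[rows][:, cols])
    top = np.hstack(tiles[:2])
    bottom = np.hstack(tiles[2:])
    return np.vstack([top, bottom])

# stand-in dataset of grayscale images with varying heights
dataset = [rng.random((48 + 8 * i, 64)) for i in range(6)]
new_img = mosaic(dataset)
print(new_img.shape)  # (64, 64)
```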
4. The method for detecting chip inductor defects based on the instance segmentation technique according to claim 3, wherein the chip inductor defects in step 1 include magnetic-ring hidden-crack defects, electrode exposed-copper defects, electrode exposed-wire defects, and magnetic-ring chipping defects.
5. The method for detecting chip inductor defects based on the instance segmentation technique according to claim 1, wherein in the CA-ELAN module, a CA (coordinate attention) module is introduced into the E-ELAN module of the backbone network layer; the CA module obtains the width and height of the chip inductor image in the defect feature map of the backbone network layer and encodes precise positions, so that the network captures multi-scale contextual information of the chip inductor image; the E-ELAN module then performs feature fusion to obtain chip inductor defect image features that fuse the multi-scale information with the contextual information.
6. The method for detecting chip inductor defects based on the instance segmentation technique according to claim 5, wherein the CA module comprises the following steps:
step a, for an input feature map X of size C×H×W, performing global average pooling with pooling kernels of sizes (1, W) and (H, 1) to obtain a horizontal-direction feature vector and a vertical-direction feature vector encoding the global averages along the channel dimension:

$$z_c^h(h)=\frac{1}{W}\sum_{0\le i<W}x_c(h,i),\qquad z_c^w(w)=\frac{1}{H}\sum_{0\le j<H}x_c(j,w),$$

wherein H denotes the height, W the width, and C the number of channels; $z_c$ denotes the global average of the input feature map X on the c-th channel, and $x_c(i,j)$ denotes the feature value of the c-th channel at coordinates (i, j);

step b, concatenating the horizontal-direction and vertical-direction feature vectors obtained in step a, and performing feature mapping through a convolution with a 1×1 kernel, batch normalization, and the LeakyReLU activation function:

$$f=\delta\!\left(F_1\!\left(\left[z^h,z^w\right]\right)\right),$$

wherein f denotes the attention response of the input feature map X on each channel, $F_1$ denotes a 1×1 convolution kernel, and δ denotes the LeakyReLU activation;

step c, splitting the mapped feature f of step b back into two separate features $f^h$ and $f^w$ along the horizontal and vertical directions according to the original W and H, and performing feature conversion on each with its own 1×1 convolution kernel and the LeakyReLU activation function:

$$g^h=\delta\!\left(F_h\!\left(f^h\right)\right),$$

$$g^w=\delta\!\left(F_w\!\left(f^w\right)\right),$$

wherein $g^h$ and $g^w$ are the attention weights of the input chip inductor defect feature map in the horizontal and vertical directions, and $F_h$, $F_w$ are 1×1 convolution kernels;

step d, obtaining the final feature vector:

$$y_c(i,j)=x_c(i,j)\times g_c^h(i)\times g_c^w(j),$$

and outputting the final chip inductor defect feature map Y.
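Steps a–d of the CA module can be sketched in NumPy as below. The 1×1 convolutions are random matrices here (in the patent they are learned), batch normalization is omitted for brevity, and the step-c activation is a sigmoid as in the standard coordinate-attention design so the weights stay in (0, 1) — the claim text names LeakyReLU there, so treat that choice as an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 6, 5
X = rng.random((C, H, W))  # stand-in defect feature map

leaky = lambda t: np.where(t > 0, t, 0.01 * t)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# step a: directional global average pooling
z_h = X.mean(axis=2)             # (C, H), pooled over width with a (1, W) kernel
z_w = X.mean(axis=1)             # (C, W), pooled over height with a (H, 1) kernel

# step b: concatenate, shared 1x1 conv (a C x C matrix here), LeakyReLU
F1 = rng.normal(size=(C, C))
f = leaky(F1 @ np.concatenate([z_h, z_w], axis=1))   # (C, H + W)

# step c: split back along H and W, per-direction 1x1 convs + gating
Fh, Fw = rng.normal(size=(C, C)), rng.normal(size=(C, C))
g_h = sigmoid(Fh @ f[:, :H])     # (C, H) vertical attention weights
g_w = sigmoid(Fw @ f[:, H:])     # (C, W) horizontal attention weights

# step d: reweight the input feature map
Y = X * g_h[:, :, None] * g_w[:, None, :]
print(Y.shape)  # (4, 6, 5)
```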
7. The method according to claim 1, wherein the total loss function in step 3 comprises a classification loss function, a localization loss function, and a target confidence loss function.
8. The method for detecting chip inductor defects based on the instance segmentation technique according to claim 7, wherein the localization loss function introduces a width-height loss, specifically:

$$L_{loc}=1-IOU+\frac{\rho^2\!\left(b,b^{gt}\right)}{c^2}+\frac{\rho^2\!\left(w_p,w^{gt}\right)}{C_w^2}+\frac{\rho^2\!\left(h_p,h^{gt}\right)}{C_h^2},$$

wherein b is the predicted bounding box and b^{gt} the ground-truth bounding box; IOU is the degree of overlap between the predicted and ground-truth bounding boxes; ρ²(b, b^{gt}) is the squared Euclidean distance between the center points of the predicted and ground-truth bounding boxes; c² is the square of the diagonal length of the smallest enclosing rectangle containing both boxes; ρ²(w_p, w^{gt}) is the squared Euclidean distance between the predicted and ground-truth widths; ρ²(h_p, h^{gt}) is the squared Euclidean distance between the predicted and ground-truth heights; C_w² and C_h² are the squares of the width and height of the smallest enclosing rectangle containing both boxes; w^{gt} and h^{gt} are the width and height of the ground-truth bounding box, and w_p and h_p the width and height of the predicted bounding box.
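The terms of the width-height localization loss of claim 8 can be computed directly for axis-aligned boxes. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) with positive area and a non-degenerate enclosing rectangle:

```python
def loc_loss(pred, gt):
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # intersection over union
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # squared centre distance and smallest enclosing rectangle
    rho2 = ((px1 + px2 - gx1 - gx2) / 2) ** 2 + ((py1 + py2 - gy1 - gy2) / 2) ** 2
    Cw = max(px2, gx2) - min(px1, gx1)
    Ch = max(py2, gy2) - min(py1, gy1)
    c2 = Cw ** 2 + Ch ** 2
    # squared width / height differences
    wp, hp = px2 - px1, py2 - py1
    wg, hg = gx2 - gx1, gy2 - gy1
    return 1 - iou + rho2 / c2 + (wp - wg) ** 2 / Cw ** 2 + (hp - hg) ** 2 / Ch ** 2

print(loc_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

The loss is 0 only when the boxes coincide; each mismatched term (overlap, centre, width, height) adds its own penalty, which is what distinguishes this formulation from plain IoU loss.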
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311414253.9A CN117152139A (en) | 2023-10-30 | 2023-10-30 | Patch inductance defect detection method based on example segmentation technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117152139A true CN117152139A (en) | 2023-12-01 |
Family
ID=88904666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311414253.9A Pending CN117152139A (en) | 2023-10-30 | 2023-10-30 | Patch inductance defect detection method based on example segmentation technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152139A (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111223088A (en) * | 2020-01-16 | 2020-06-02 | 东南大学 | Casting surface defect identification method based on deep convolutional neural network |
CN112733924A (en) * | 2021-01-04 | 2021-04-30 | 哈尔滨工业大学 | Multi-patch component detection method |
CN113674247A (en) * | 2021-08-23 | 2021-11-19 | 河北工业大学 | X-ray weld defect detection method based on convolutional neural network |
CN114548231A (en) * | 2022-01-26 | 2022-05-27 | 广东工业大学 | Patch resistor micro-cavity and welding spot feature extraction method based on multilayer convolution network |
CN115457026A (en) * | 2022-10-11 | 2022-12-09 | 陕西科技大学 | Paper defect detection method based on improved YOLOv5 |
CN115511812A (en) * | 2022-09-19 | 2022-12-23 | 华侨大学 | Industrial product surface defect detection method based on deep learning |
CN115546144A (en) * | 2022-09-30 | 2022-12-30 | 湖南科技大学 | PCB surface defect detection method based on improved Yolov5 algorithm |
CN115908382A (en) * | 2022-12-20 | 2023-04-04 | 东华大学 | Fabric defect detection method based on HCS-YOLOV5 |
CN116309451A (en) * | 2023-03-20 | 2023-06-23 | 佛山科学技术学院 | Chip inductor surface defect detection method and system based on token fusion |
CN116385401A (en) * | 2023-04-06 | 2023-07-04 | 浙江理工大学桐乡研究院有限公司 | High-precision visual detection method for textile defects |
CN116399888A (en) * | 2023-04-20 | 2023-07-07 | 广东工业大学 | Method and device for detecting welding spot hollows based on chip resistor |
CN116416613A (en) * | 2023-04-13 | 2023-07-11 | 广西壮族自治区农业科学院 | Citrus fruit identification method and system based on improved YOLO v7 |
CN116468716A (en) * | 2023-04-26 | 2023-07-21 | 山东省计算中心(国家超级计算济南中心) | YOLOv 7-ECD-based steel surface defect detection method |
CN116630263A (en) * | 2023-05-18 | 2023-08-22 | 西安工程大学 | Weld X-ray image defect detection and identification method based on deep neural network |
WO2023173598A1 (en) * | 2022-03-15 | 2023-09-21 | 中国华能集团清洁能源技术研究院有限公司 | Fan blade defect detection method and system based on improved ssd model |
CN116843636A (en) * | 2023-06-26 | 2023-10-03 | 三峡大学 | Insulator defect detection method based on improved YOLOv7 algorithm in foggy weather scene |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |