CN116993660A - PCB defect detection method based on improved EfficientDet - Google Patents

PCB defect detection method based on improved EfficientDet

Info

Publication number
CN116993660A
CN116993660A (application CN202310591740.6A)
Authority
CN
China
Prior art keywords
image
pcb
model
efficientdet
diou
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310591740.6A
Other languages
Chinese (zh)
Inventor
张恩浦
刘晓洋
张青春
谭良晨
宁建峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202310591740.6A priority Critical patent/CN116993660A/en
Publication of CN116993660A publication Critical patent/CN116993660A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30141Printed circuit board [PCB]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a PCB defect detection method based on an improved EfficientDet, which comprises the following steps: (1) capture an image of the PCB board using high-light reflection; (2) preprocess the acquired image, including denoising, enhancement and similar operations, to improve image quality and accuracy; (3) separate the welding-spot image and the circuit image by threshold segmentation; (4) label the separated welding-spot and circuit images and, after data augmentation, construct welding-spot and circuit image datasets; (5) improve the BiFPN feature-fusion network; (6) add DIoU-NMS to obtain the improved model; (7) train the improved model; (8) input the PCB image to be detected into the model to obtain the detection result. By using the improved EfficientDet model, the invention improves the accuracy and efficiency of PCB defect detection, realizes automatic detection, and saves considerable labor cost.

Description

PCB defect detection method based on improved EfficientDet
Technical Field
The invention relates to the technical field of target detection, and in particular to a deep-learning PCB defect detection method based on an improved EfficientDet.
Background
With the popularization and development of electronic products, the quality and reliability of the PCB board as a core component of the electronic products are increasingly emphasized. However, various defects, such as poor soldering, broken lines, short circuits, etc., often occur on the PCB due to various factors in the manufacturing process, such as materials, processes, equipment, etc., and these defects may cause performance degradation, life shortening, and even failure of the electronic product. Therefore, research and application of the PCB defect detection technology are becoming more and more important.
The development of PCB defect detection technology can be traced back to the 1980s, when visual inspection and manual testing were the main methods. These methods are simple and easy to operate, but their efficiency and accuracy are low, and they cannot meet the demands of mass production. With the development of computer technology and image processing technology, automated detection has become the mainstream.
Current mainstream deep-learning methods for PCB defect detection include those based on convolutional neural networks (CNN) and those based on recurrent neural networks (RNN). CNN-based methods are mainly used to detect surface defects such as short circuits, open circuits and poor soldering, while RNN-based methods are mainly used to detect internal defects such as damage to components like capacitors and resistors. Building a PCB defect detection model faces several problems: (1) Data scarcity: deep learning requires large amounts of training data, but PCB defect datasets are relatively small, which limits the generalization ability of the model. (2) Many defect types: PCB defects are diverse, and different types require different detection methods, so multiple models may need to be designed. (3) Limited detection accuracy: because of the complexity and diversity of PCB defects, current detection accuracy still needs improvement, requiring further optimization of algorithms and models.
The Google research team proposed the EfficientDet target detection model in 2019. It builds on the EfficientNet model, an efficient image classification model that, by optimizing network structure and parameters, achieves better performance than other models under the same computing resources.
The goal of EfficientDet is to achieve better target detection performance while maintaining efficiency. It adopts a new network structure called BiFPN, which improves detection performance without increasing the amount of computation. In addition, EfficientDet uses the Focal Loss function to better address the class-imbalance problem.
EfficientDet achieves excellent performance across multiple target detection datasets; for example, its mAP on the COCO dataset can exceed 50, while its computation speed is faster than that of other target detection models. EfficientDet has therefore become one of the research hotspots in the field of target detection.
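To illustrate how Focal Loss down-weights easy examples, here is a minimal sketch of its binary form FL(p_t) = -α_t (1 - p_t)^γ log(p_t); the function name and the default parameters (α = 0.25, γ = 2) follow the original Focal Loss formulation and are not taken from this patent.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive class, y the label (0 or 1).
    The (1 - p_t)**gamma factor shrinks the loss of well-classified examples,
    which is how the class-imbalance problem is mitigated.
    """
    p_t = p if y == 1 else 1.0 - p              # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

A confidently correct prediction (p = 0.9, y = 1) contributes far less loss than a badly wrong one (p = 0.1, y = 1), so abundant easy background examples no longer dominate training; with γ = 0 the expression reduces to α-weighted cross-entropy.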
Disclosure of Invention
The invention aims at overcoming the defects of the prior art and provides a PCB defect detection method based on improved Efficientdet.
The technical scheme for realizing the aim of the invention is as follows:
1. a PCB defect detection method based on improved EfficientDet comprises the following steps:
s1: and shooting an image of the PCB board through high light reflection.
S2: preprocessing the acquired image, including image denoising, image enhancement and other operations, so as to improve the quality and accuracy of the image.
S3: and respectively dividing the welding spot image and the circuit image by a threshold dividing mode.
S4: and marking the separated welding spots and circuit images, and respectively constructing welding spot images and circuit image data sets after data enhancement.
S5: improving the BiFPN feature fusion network.
S6: adding DIoU-NMS to obtain improved model.
S7: an improved model is trained.
S8: and inputting the PCB image to be detected into a model to obtain a detection result.
S1: shooting an image of the PCB board through high light reflection: comprising the following steps:
s1.1: preparation device: a high resolution camera, a reflection platform, a light source, etc. are required to be prepared. The reflection platform adopts a combination of a black background plate and a white transparent glass plate so as to reflect an image of the PCB.
S1.2: adjusting a light source: according to the size and shape of the shooting object, the angle and brightness of the light source are adjusted so that the light source can reflect the image on the PCB.
S1.3: placing a PCB: and the PCB to be shot is horizontally placed on the reflection platform, so that the light can fully reflect the image on the PCB.
S1.4: shooting an image: and shooting an image of the PCB by a camera, and performing proper exposure and focusing adjustment so as to obtain a clear and accurate image.
S1.5: post-treatment: and importing the shot image into a computer, and performing post-processing such as cutting, rotation, repair and the like to obtain a final PCB image.
S2: preprocessing the acquired image, including operations such as image denoising, image enhancement and the like, so as to improve the quality and accuracy of the image: comprising the following steps:
s2.1: and the images are normalized, so that the sizes of the pictures are unified, and the follow-up processing is convenient.
S2.2: the brightness and contrast of the image is adjusted as needed to better display the image details.
S2.3: and removing noise around the image by using Gaussian filtering, and improving the definition of the image.
S3: respectively dividing a welding spot image and a circuit image by a threshold dividing mode: comprising the following steps:
s3.1: and reading the preprocessed picture, separating the picture into three RGB channels, and obtaining a welding spot image in a B-G+R mode.
S3.2: and converting the read picture into a gray level picture, searching the outline by an outline searching function, and drawing the outline function to obtain a circuit image.
S3.3: and performing closed operation processing on the obtained welding spot image and the circuit image to remove noise around the image.
S4: marking the separated welding spots and circuit images, and respectively constructing a welding spot image and a circuit image data set after data enhancement: comprising the following steps:
s4.1: and (3) marking the image obtained in the step (S3.3), and marking a defect area in the image. The welding spots are classified according to three categories of normal, multi-tin and hole lack, and the circuits are classified according to three categories of normal, open circuit and short circuit.
S4.2: and carrying out data enhancement on the marked image, including rotation, overturning, scaling and other operations, so as to increase the diversity and the number of the data sets.
S4.3: a solder joint image dataset and a circuit image dataset were constructed separately, with 85% of the dataset as the training set and 15% as the test set.
S5: improving a BiFPN feature fusion network: comprising the following steps:
s5.1: the original five-layer structure of the BiFPN is adjusted to four layers, so that the detection capability of the BiFPN on small target objects is improved.
S5.2: the BiFPN carries out rapid normalization fusion after up-sampling and down-sampling on the feature map to obtain richer feature expression, and an obtained output feature formula is as follows:
wherein omega i Is a learnable weight and may be a scalar, vector or multidimensional tensor. The weight omega is more than or equal to 0 through the ReLU function. Epsilon is a constant and is 0.0001, which is to prevent unstable training. I i Is the i-th layer input image. The rapid normalization fusion enables each node to have respective weight, so that information can be reasonably distributed through the weight when features are fused.
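The fast normalized fusion just described can be sketched in a few lines; this assumes same-shaped input feature maps and is a simplified stand-in for the per-node fusion inside BiFPN.

```python
import numpy as np

def fast_normalized_fusion(inputs, weights, eps=1e-4):
    """Fast normalized fusion: O = sum_i (w_i / (eps + sum_j w_j)) * I_i.

    inputs  : list of same-shaped feature maps (NumPy arrays)
    weights : one learnable scalar per input; a ReLU keeps each w_i >= 0
    eps     : small constant (0.0001 in the text) against unstable training
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU clamp
    w = w / (eps + w.sum())                                # normalize weights
    return sum(wi * x for wi, x in zip(w, inputs))
```

With equal weights the fusion is close to a plain average; a negative raw weight is clamped to zero, so its feature map contributes nothing.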
S6: adding DIoU-NMS to obtain an improved model: comprising the following steps:
s6.1: the use of a DIoU-NMS in place of the conventional NMS combines the distance and IoU (Intersection over Union) factors, allowing a more accurate assessment of the quality of the test results.
S6.2: After performing target detection on the feature map obtained in step S5, sort all detection results by confidence from high to low, select the detection result with the highest confidence as the reference frame, and compute the distance and overlap between the remaining detection results and the reference frame.
S6.3: and screening and optimizing the defects with high scores. The DIoU calculation formula is as follows:
wherein b and b gt Representing the center points of the other target frame and the reference frame, respectively, ρ is the euclidean distance between the two target frames, and c is the diagonal length between the two target frames. DIoU also considers the distance between the center points of two target boxes while calculating the overlap area. The distance between the centers of two object frames is used to more accurately determine whether the two frames belong to the same object. The DIoU-NMS formula is as follows:
wherein S is i Is confidence, M is the highest scoring prediction box, B i Is the other target box, epsilon is the self-set threshold, ioU is the intersection area of the two target boxes divided by their union area. When the overlapping area of the two target frames is unchanged, if the distance between the center points of the two target frames is relatively large, and the target frame M and the other target frame B having the highest scores i The difference between IoU and DIoU of (a) is less than a threshold, the DIoU-NMS will prefer to consider this to be two objects, block B i Confidence S of (1) i Will remain unchanged. If the IoU and DioU difference of the two blocks is greater than a threshold. In this case, the DIoU-NMS would tend to be considered the same object, S i The value becomes 0 and is filtered out.
S6.4: an improved model is obtained.
S7: training an improved model: comprising the following steps:
s7.1: and (3) performing defect detection on the test set image of the S43 by using the improved model obtained in the S6.3.
S7.2: and evaluating the precision of the model and the parameters of the network model.
S7.3: and adjusting network model parameters according to the model evaluation result.
S7.4: repeating steps S7.1-S7.3 until the model achieves the best performance.
S8: and inputting the PCB image to be detected into a model to obtain a detection result.
S8.1: and (5) obtaining a weight file of the optimal model obtained in the step S7.
S8.2: and loading the trained model into a memory, and initializing.
S8.3: shooting a PCB image to be detected in an S1 high light reflection mode, and preprocessing the shot image through an S2 step.
S8.4: the preprocessed image is input into a model, the model classifies and positions the target according to the learned characteristics in the training data, and a detection result is output.
According to the technical scheme, because BiFPN fuses high-level features of the backbone network that have passed through many convolutions, their resolution is low and the ability to detect small objects in the image is weak. The improved BiFPN therefore slims down the original structure and improves the detection of small target objects in the image.
According to the technical scheme, when two objects are very close together, the IoU between their candidate frames is large, so conventional NMS regards them as the same object, leaves only one detection frame, and causes missed detections. Replacing NMS with DIoU-NMS is therefore proposed, which improves the accuracy of model detection.
The invention can improve the accuracy and efficiency of PCB defect detection in actual industrial production.
Drawings
FIG. 1 is a schematic flow chart of a method of an embodiment;
FIG. 2 is a schematic diagram of the structure of BiFPN after modification in the embodiment;
FIG. 3 is a graph showing the performance of the improved model and other models of the same type in the example;
FIG. 4 is a schematic diagram of the detection result in the embodiment;
FIG. 5 is a schematic diagram of the detection result in the embodiment;
fig. 6 is a schematic diagram of the detection result in the embodiment.
Detailed Description
The present invention will now be further illustrated, but not limited, by the following figures and examples.
Examples:
1. referring to fig. 1, a PCB defect detection method based on an improved EfficientDet includes the steps of:
s1: and shooting an image of the PCB board through high light reflection.
S2: preprocessing the acquired image, including image denoising, image enhancement and other operations, so as to improve the quality and accuracy of the image.
S3: and respectively dividing the welding spot image and the circuit image by a threshold dividing mode.
S4: and marking the separated welding spots and circuit images, and respectively constructing welding spot images and circuit image data sets after data enhancement.
S5: improving the BiFPN feature fusion network.
S6: adding DIoU-NMS to obtain improved model.
S7: an improved model is trained.
S8: and inputting the PCB image to be detected into a model to obtain a detection result.
S1: shooting an image of the PCB board through high light reflection: comprising the following steps:
s1.1: preparation device: a high resolution camera, a reflection platform, a light source, etc. are required to be prepared. The reflection platform adopts a combination of a black background plate and a white transparent glass plate so as to reflect an image of the PCB.
S1.2: adjusting a light source: according to the size and shape of the shooting object, the angle and brightness of the light source are adjusted so that the light source can reflect the image on the PCB.
S1.3: placing a PCB: and the PCB to be shot is horizontally placed on the reflection platform, so that the light can fully reflect the image on the PCB.
S1.4: shooting an image: and shooting an image of the PCB by a camera, and performing proper exposure and focusing adjustment so as to obtain a clear and accurate image.
S1.5: post-treatment: and importing the shot image into a computer, and performing post-processing such as cutting, rotation, repair and the like to obtain a final PCB image.
S2: preprocessing the acquired image, including operations such as image denoising, image enhancement and the like, so as to improve the quality and accuracy of the image: comprising the following steps:
s2.1: and the images are normalized, so that the sizes of the pictures are unified, and the follow-up processing is convenient.
S2.2: the brightness and contrast of the image is adjusted as needed to better display the image details.
S2.3: and removing noise around the image by using Gaussian filtering, and improving the definition of the image.
S3: respectively dividing a welding spot image and a circuit image by a threshold dividing mode: comprising the following steps:
s3.1: and reading the preprocessed picture, separating the picture into three RGB channels, and obtaining a welding spot image in a B-G+R mode.
S3.2: and converting the read picture into a gray level picture, searching the outline by an outline searching function, and drawing the outline function to obtain a circuit image.
S3.3: and performing closed operation processing on the obtained welding spot image and the circuit image to remove noise around the image.
S4: marking the separated welding spots and circuit images, and respectively constructing a welding spot image and a circuit image data set after data enhancement: comprising the following steps:
s4.1: and (3) marking the image obtained in the step (S3.3), and marking a defect area in the image. The welding spots are classified according to three categories of normal, multi-tin and hole lack, and the circuits are classified according to three categories of normal, open circuit and short circuit.
S4.2: and carrying out data enhancement on the marked image, including rotation, overturning, scaling and other operations, so as to increase the diversity and the number of the data sets.
S4.3: a solder joint image dataset and a circuit image dataset were constructed separately, with 85% of the dataset as the training set and 15% as the test set.
S5: improving a BiFPN feature fusion network: comprising the following steps:
s5.1: the original five-layer structure of the BiFPN is adjusted to four layers, so that the detection capability of the BiFPN on small target objects is improved.
S5.2: the BiFPN carries out rapid normalization fusion after up-sampling and down-sampling on the feature map to obtain richer feature expression, and an obtained output feature formula is as follows:
wherein omega i Is a learnable weight and may be a scalar, vector or multidimensional tensor. The weight omega is more than or equal to 0 through the ReLU function. Epsilon is a constant and is 0.0001, which is a value to prevent trainingUnstable. I i Is the i-th layer input image. The rapid normalization fusion enables each node to have respective weight, so that information can be reasonably distributed through the weight when features are fused.
S6: adding DIoU-NMS to obtain an improved model: comprising the following steps:
s6.1: the use of a DIoU-NMS in place of the conventional NMS combines the distance and IoU (Intersection over Union) factors, allowing a more accurate assessment of the quality of the test results.
S6.2: After performing target detection on the feature map obtained in step S5, sort all detection results by confidence from high to low, select the detection result with the highest confidence as the reference frame, and compute the distance and overlap between the remaining detection results and the reference frame.
S6.3: and screening and optimizing the defects with high scores. The DIoU calculation formula is as follows:
wherein b andrepresenting the center points of the other target frame and the reference frame, respectively, ρ is the euclidean distance between the two target frames, and c is the diagonal length between the two target frames. DIoU also considers the distance between the center points of two target boxes while calculating the overlap area. The distance between the centers of two object frames is used to more accurately determine whether the two frames belong to the same object. The DIoU-NMS formula is as follows:
wherein S is i Is confidence, M is the highest scoring prediction box, B i Is the other target box, epsilon is the self-set threshold, ioU is the intersection area of the two target boxes divided by their union area. When two target frames are overlappedIf the product is unchanged, the distance between the center points of the two target frames is relatively large, and the target frame M and the other target frame B with the highest scores are divided i The difference between IoU and DIoU of (a) is less than a threshold, the DIoU-NMS will prefer to consider this to be two objects, block B i Confidence S of (1) i Will remain unchanged. If the IoU and DioU difference of the two blocks is greater than a threshold. In this case, the DIoU-NMS would tend to be considered the same object, S i The value becomes 0 and is filtered out.
S6.4: an improved model is obtained.
S7: training an improved model: comprising the following steps:
s7.1: and (3) performing defect detection on the test set image of the S4.3 by using the improved model obtained in the S6.3.
S7.2: and evaluating the precision of the model and the parameters of the network model.
S7.3: and adjusting network model parameters according to the model evaluation result.
S7.4: repeating steps S7.1-S7.3 until the model achieves the best performance.
S8: and inputting the PCB image to be detected into a model to obtain a detection result.
S8.1: and (5) obtaining a weight file of the optimal model obtained in the step S7.
S8.2: and loading the trained model into a memory, and initializing.
S8.3: shooting a PCB image to be detected in an S1 high light reflection mode, and preprocessing the shot image through an S2 step.
S8.4: the preprocessed image is input into a model, the model classifies and positions the target according to the learned characteristics in the training data, and a detection result is output.
Referring to FIG. 3, the proposed PCB defect detection method based on the improved EfficientDet is compared with three target detection algorithms — YOLOv3, YOLOv4 and Faster-RCNN — as well as with the original EfficientDet algorithm, where P_A denotes average precision, P denotes precision, R denotes recall, and FPS denotes the number of images processed per second.
As a result, the mean average precision (mAP) of Faster-RCNN, YOLOv3 and YOLOv4 was 95%, 88% and 73%, respectively. The mAP of EfficientDet was 95%, and the mAP of the improved EfficientDet was 96%. In the image test, the improved EfficientDet also had the shortest average prediction time, 0.23 s per image, followed by YOLOv3, YOLOv4 and Faster-RCNN. The performance comparison shows that the improved EfficientDet outperforms the other algorithms and is suitable for rapid detection and classification of PCB defects. It can clearly be seen that both the accuracy and the detection speed of the PCB defect detection method based on the improved EfficientDet are better than before the improvement.
The foregoing descriptions of specific exemplary embodiments of the present invention are presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the invention and its practical application to thereby enable one skilled in the art to make and utilize the invention in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (9)

1. The PCB defect detection method based on the improved EfficientDet is characterized by comprising the following steps:
S1: shooting an image of the PCB board by means of high light reflection;
S2: preprocessing the acquired image, including image denoising and image enhancement, to improve image quality and accuracy;
S3: separating the solder-joint image and the circuit image by threshold segmentation;
S4: labeling the separated solder-joint and circuit images, and constructing a solder-joint image dataset and a circuit image dataset after data enhancement;
S5: improving the BiFPN feature fusion network;
S6: adding DIoU-NMS to obtain the improved model;
S7: training the improved model;
S8: inputting the PCB image to be detected into the trained model to obtain a detection result.
2. The PCB defect detection method based on EfficientDet of claim 1, wherein the step S1 specifically includes the steps of:
S1.1: preparing the equipment: a high-resolution camera, a reflection platform and a light source, the reflection platform combining a black background plate with a transparent white glass plate so as to reflect the image of the PCB;
S1.2: adjusting the light source: setting its angle and brightness according to the size and shape of the object being photographed so that the image on the PCB is reflected clearly;
S1.3: placing the PCB: laying the board to be photographed flat on the reflection platform so that the light fully reflects the image on the PCB;
S1.4: shooting the image: capturing the PCB image with the camera, with suitable exposure and focus adjustments to obtain a clear, accurate image;
S1.5: post-processing: importing the captured image into a computer and post-processing it to obtain the final PCB image.
3. The PCB defect detection method based on EfficientDet of claim 2, wherein the step S2 specifically includes the steps of:
S2.1: normalizing the image so that all pictures have a uniform size;
S2.2: adjusting the brightness and contrast of the image as required;
S2.3: removing noise from the image with Gaussian filtering to improve its sharpness.
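Steps S2.1–S2.3 can be sketched in NumPy as follows. This is a minimal illustration for a grayscale image; the output size, the linear brightness/contrast parameters and the Gaussian kernel settings are illustrative assumptions, not values fixed by the claims:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def preprocess(img, out_size=(512, 512), alpha=1.2, beta=10.0):
    # S2.1: nearest-neighbour resize to a fixed, uniform size
    h, w = img.shape
    rows = (np.arange(out_size[0]) * h // out_size[0]).astype(int)
    cols = (np.arange(out_size[1]) * w // out_size[1]).astype(int)
    img = img[np.ix_(rows, cols)].astype(np.float64)
    # S2.2: linear brightness/contrast adjustment, clipped to valid range
    img = np.clip(alpha * img + beta, 0, 255)
    # S2.3: separable Gaussian filtering to suppress noise
    k = gaussian_kernel()
    img = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return img
```

In practice a library routine such as OpenCV's Gaussian blur would replace the hand-rolled separable convolution; the sketch only makes the three sub-steps explicit.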
4. The PCB defect detection method based on EfficientDet of claim 3, wherein the step S3 specifically includes the steps of:
S3.1: reading the preprocessed picture, splitting it into the three RGB channels, and obtaining the solder-joint image through the channel combination B − G + R;
S3.2: converting the picture into a gray-scale image, finding contours with a contour-search function, and drawing them with a contour-drawing function to obtain the circuit image;
S3.3: applying a morphological closing operation to the solder-joint image and the circuit image to remove noise around them.
5. The PCB defect detection method based on EfficientDet of claim 4, wherein the step S4 specifically includes the steps of:
S4.1: labeling the images obtained in step S3.3, i.e. marking the defect regions; classifying solder joints into three categories: normal, excess solder and missing hole; and classifying circuits into three categories: normal, open circuit and short circuit;
S4.2: applying data enhancement to the labeled images, including rotation, flipping, scaling and similar operations, to increase the diversity and size of the datasets;
S4.3: constructing a solder-joint image dataset and a circuit image dataset separately, with 85% of each dataset used as the training set and 15% as the test set.
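The geometric enhancement of step S4.2 and the 85/15 split of step S4.3 can be sketched as follows (the random seed and the particular set of eight rotation/flip variants are illustrative assumptions; the claims do not fix them):

```python
import numpy as np

def augment(img):
    # S4.2: yield the 8 rotation/flip variants of an image
    for k in range(4):
        r = np.rot90(img, k)
        yield r                   # rotation by 90*k degrees
        yield np.flip(r, axis=1)  # the same rotation plus a horizontal flip

def split_dataset(items, train_frac=0.85, seed=0):
    # S4.3: shuffle and split into training and test sets
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    cut = int(len(items) * train_frac)
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]
```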
6. The PCB defect detection method based on EfficientDet of claim 5, wherein the step S5 specifically comprises the steps of:
s5.1: the original five-layer structure of the BiFPN is adjusted to four layers, so that the detection capability of the BiFPN on small target objects is improved;
S5.2: after up-sampling and down-sampling the feature maps, the BiFPN performs fast normalized fusion to obtain richer feature representations; the output feature is given by:

O = Σ_i ( ω_i / (ε + Σ_j ω_j) ) · I_i

where ω_i is a learnable weight that may be a scalar, a vector or a multi-dimensional tensor; a ReLU function keeps each weight ω_i ≥ 0; ε is a constant set to 0.0001 to prevent training instability; and I_i is the input feature of the i-th layer. Fast normalized fusion gives each node its own weight, so information can be distributed reasonably among the features during fusion.
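A minimal NumPy sketch of fast normalized fusion as described in step S5.2 (in the real network the weights are trainable parameters; here they are plain arguments for illustration):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    # O = sum_i (w_i / (eps + sum_j w_j)) * I_i, with ReLU-clipped weights
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps w_i >= 0
    return sum(wi * f for wi, f in zip(w, features)) / (eps + w.sum())
```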
7. The PCB defect detection method based on EfficientDet of claim 6, wherein the step S6 specifically includes the steps of:
S6.1: the DIoU-NMS is used for replacing the traditional NMS, and the DIoU-NMS combines two factors of distance and IoU (Intersection over Union), so that the quality of a detection result can be estimated more accurately;
s6.2, after target detection is carried out on the feature map obtained in the step S5, sequencing all detection results according to the confidence level from high to low, selecting the detection result with the highest confidence level as a reference frame, and calculating the distance and the overlapping degree between the rest detection results and the reference frame;
S6.3: screening and retaining the high-scoring detections; the DIoU is calculated as:

DIoU = IoU − ρ²(b, b^gt) / c²

where b and b^gt denote the centre points of the other target box and the reference box, respectively, ρ(·) is the Euclidean distance between the two centre points, and c is the diagonal length of the smallest box enclosing the two target boxes. While measuring the overlapping area, DIoU also takes the distance between the centre points of the two boxes into account, which helps determine more accurately whether the two boxes belong to the same object. The DIoU-NMS rule is:

S_i = { S_i, if IoU(M, B_i) − DIoU(M, B_i) < ε
      { 0,   if IoU(M, B_i) − DIoU(M, B_i) ≥ ε

where S_i is the confidence, M is the highest-scoring prediction box, B_i is another target box, ε is a self-set threshold, and IoU is the intersection area of the two boxes divided by their union area. With the overlapping area unchanged, if the centre points of the two boxes are relatively far apart, the difference between the IoU and DIoU of M and B_i falls below the threshold, and DIoU-NMS tends to treat them as two objects, so the confidence S_i of box B_i remains unchanged; if the difference between the IoU and DIoU of the two boxes exceeds the threshold, DIoU-NMS tends to treat them as the same object, S_i is set to 0, and the box is filtered out;
s6.4: an improved model is obtained.
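A minimal NumPy sketch of the DIoU-NMS procedure of steps S6.1–S6.3 (the (x1, y1, x2, y2) box format, the default threshold and the function names are illustrative assumptions):

```python
import numpy as np

def diou(a, b):
    # DIoU = IoU - rho^2(b, b_gt) / c^2, boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    # rho^2: squared distance between the two box centres
    rho2 = ((a[0] + a[2] - b[0] - b[2]) / 2) ** 2 + \
           ((a[1] + a[3] - b[1] - b[3]) / 2) ** 2
    # c^2: squared diagonal of the smallest enclosing box
    c2 = (max(a[2], b[2]) - min(a[0], b[0])) ** 2 + \
         (max(a[3], b[3]) - min(a[1], b[1])) ** 2
    return iou - rho2 / c2

def diou_nms(boxes, scores, thresh=0.5):
    # Sort detections by confidence, keep the best, and drop boxes whose
    # DIoU with the kept reference box reaches the threshold (S6.2-S6.3)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:]
                          if diou(boxes[i], boxes[j]) < thresh], dtype=int)
    return keep
```

Because DIoU subtracts the centre-distance penalty from IoU, two well-separated boxes with moderate overlap score lower than the threshold and both survive, which is the behaviour the claim describes.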
8. The PCB defect detection method based on EfficientDet of claim 7, wherein the step S7 specifically includes the steps of:
s7.1: performing defect detection on the test set image of the S4.3 by using the improved model obtained in the S6.4;
s7.2: evaluating the precision of the model and the network model parameters;
s7.3: according to the model evaluation result, adjusting network model parameters;
s7.4: repeating steps S7.1-S7.3 until the model achieves the best performance.
S8: the PCB defect detection method based on EfficientDet of claim 8, wherein the step S8 specifically includes the steps of:
S8.1: obtaining the weight file of the optimal model from step S7;
S8.2: loading the trained model into memory and initializing it;
S8.3: capturing the PCB image to be detected by the high-light-reflection method of step S1, and preprocessing the captured image as in step S2;
S8.4: inputting the preprocessed image into the model, which classifies and localizes targets according to the features learned from the training data and outputs the detection result.
CN202310591740.6A 2023-05-24 2023-05-24 PCB defect detection method based on improved EfficientDet Pending CN116993660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310591740.6A CN116993660A (en) 2023-05-24 2023-05-24 PCB defect detection method based on improved EfficientDet


Publications (1)

Publication Number Publication Date
CN116993660A true CN116993660A (en) 2023-11-03

Family

ID=88530871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310591740.6A Pending CN116993660A (en) 2023-05-24 2023-05-24 PCB defect detection method based on improved EfficientDet

Country Status (1)

Country Link
CN (1) CN116993660A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541389A (en) * 2020-09-29 2021-03-23 西安交通大学 Power transmission line fault detection method based on EfficientDet network
WO2021238826A1 (en) * 2020-05-26 2021-12-02 苏宁易购集团股份有限公司 Method and apparatus for training instance segmentation model, and instance segmentation method
CN114820486A (en) * 2022-04-15 2022-07-29 陕西科技大学 Improved YOLOv5 s-based printed circuit board defect detection method


Similar Documents

Publication Publication Date Title
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
KR102229594B1 (en) Display screen quality detection method, device, electronic device and storage medium
CN112884064B (en) Target detection and identification method based on neural network
CN111583229B (en) Road surface fault detection method based on convolutional neural network
CN109919934B (en) Liquid crystal panel defect detection method based on multi-source domain deep transfer learning
CN113239930B (en) Glass paper defect identification method, system, device and storage medium
CN111814850A (en) Defect detection model training method, defect detection method and related device
CN113077453A (en) Circuit board component defect detection method based on deep learning
CN112561910A (en) Industrial surface defect detection method based on multi-scale feature fusion
CN109034184B (en) Grading ring detection and identification method based on deep learning
CN114663346A (en) Strip steel surface defect detection method based on improved YOLOv5 network
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN115797314B (en) Method, system, equipment and storage medium for detecting surface defects of parts
CN113763364B (en) Image defect detection method based on convolutional neural network
CN117670820B (en) Plastic film production defect detection method and system
CN114429445A (en) PCB defect detection and identification method based on MAIRNet
CN113205511B (en) Electronic component batch information detection method and system based on deep neural network
CN116342536A (en) Aluminum strip surface defect detection method, system and equipment based on lightweight model
JP7059889B2 (en) Learning device, image generator, learning method, and learning program
CN113962980A (en) Glass container flaw detection method and system based on improved YOLOV5X
CN112750113B (en) Glass bottle defect detection method and device based on deep learning and linear detection
CN114387230A (en) PCB defect detection method based on re-verification detection
CN117593264A (en) Improved detection method for inner wall of cylinder hole of automobile engine by combining YOLOv5 with knowledge distillation
CN116993660A (en) PCB defect detection method based on improved EfficientDet
CN113592859B (en) Deep learning-based classification method for defects of display panel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination