CN114723706A - Welding spot detection and positioning method based on machine vision - Google Patents

Welding spot detection and positioning method based on machine vision Download PDF

Info

Publication number
CN114723706A
CN114723706A (Application CN202210349689.3A)
Authority
CN
China
Prior art keywords
welding
welding spot
data
spot
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210349689.3A
Other languages
Chinese (zh)
Inventor
沈卓南
支浩仕
胡承凯
王斌
项雷雷
黄金来
徐宏
张桦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202210349689.3A priority Critical patent/CN114723706A/en
Publication of CN114723706A publication Critical patent/CN114723706A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30141 Printed circuit board [PCB]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a welding spot detection and positioning method based on machine vision, comprising the following steps. Step 1: coarse positioning of welding spots based on prior knowledge, planning an optimal welding path and providing a direction of travel for the vision system and the mechanical arm. Step 2: fine positioning of welding spots based on machine vision, discriminating welding spot types, accurately guiding the mechanical arm to the welding spot positions, and performing targeted automatic welding. Step 3: welding spot defect detection based on online deep reinforcement learning, automatically detecting welding spot defects and judging their types, and providing a basis and guidance for secondary repair welding at the same station. The method plans the welding path automatically, using a path planning algorithm to optimize the path of the camera and mechanical arm and so improve production efficiency; a deep neural network fusing multi-layer features benefits detection of scenes with many small welding spot targets; and online deep reinforcement learning improves learning efficiency on massive data, reducing learning complexity.

Description

Welding spot detection and positioning method based on machine vision
Technical Field
The invention relates to the field of machine vision, in particular to a welding spot detection and positioning algorithm based on machine vision.
Background
The electronics manufacturing industry keeps growing and is one of the most important strategic industries in the world today. In the information age, electronic products are applied not only in small devices such as calculators, mobile phones and notebook computers, but also widely in large industrial equipment, automobiles, military systems and aviation equipment. Electronics manufacturing has become an important measure of a nation's economic development, technological progress and comprehensive national strength. In recent years, China's electronic information manufacturing industry has grown by more than 20% per year and has become a pillar industry of the national economy.
Surface Mount Technology (SMT), a basic technology of the electronics manufacturing and assembly industry, has advanced rapidly in recent years, and the scale of SMT and its whole industry chain in China has risen to first in the world. To date, with over 15000 automated SMT production lines, China has become the largest and most important SMT market in the world. However, in the electronics manufacturing and assembly industry, besides the standard components that can be mounted automatically, there are many non-standard components whose particular structure and appearance have so far prevented fully automatic welding.
After SMT automated mounting, non-standard components on a large number of Printed Circuit Boards (PCBs) must still be soldered manually. The traditional manual welding mode has low production efficiency and long cycle times; large numbers of semi-finished mounted boards easily pile up, delaying product delivery. Labor intensity is high and quality cannot be guaranteed. In particular, quality inspection based on manual visual checking depends on individual subjective experience, and inspectors tire easily and are strongly affected by mood, so detection efficiency is low.
In view of the above problems, the main solution on the existing market is Automated Optical Inspection (AOI) equipment for detecting the common defects encountered in welding production. Domestic AOI equipment currently comes in online and offline forms. Online AOI equipment realizes remote supervision through data transmission, one person overseeing several production lines, which saves human resources; but because the data transmission is delayed and no expert intervenes in real time, detection accuracy is low. Offline AOI equipment requires manual operation and control, but it can feed back welding inspection results in real time and helps professionals find defect targets efficiently. The prices of both kinds of equipment run over one hundred thousand, so the machine-vision-based defect detection and positioning method studied in this project has a strong economic outlook.
For years, electronics manufacturing and assembly enterprises in China have made digital production line construction a key point of enterprise informatization, with initial success; in particular, fully automatic SMT mounting and welding of standard devices has changed the traditional production mode and strongly supported PCB production. However, because non-standard components vary widely in shape and variety and customer requirements are highly customized, the following two problems remain in building a digital, networked and intelligent fully automatic electronic assembly line for PCB non-standard components:
(1) Welding requirements are highly customized. Non-standard components differ in shape and size, and the types and layout positions of the non-standard components adopted by different PCBs also vary greatly. A welding production line designed around the traditional, highly mechanized process cannot meet the demand for innovative, personalized and diversified products, and this contradiction grows ever sharper.
(2) Online real-time welding quality inspection. Existing welding quality inspection equipment is usually separated from the welding mechanical arm process and needs manual auxiliary operation. Completing welding defect judgment by manual visual inspection cannot meet an intelligent welding production line's requirements for high automation, self-learning and self-evolution.
Disclosure of Invention
The invention provides a welding spot detection and positioning method based on machine vision, aiming to meet the flexible welding production requirements of PCB non-standard components and to realize intelligent, fully automatic welding in the true sense.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
step 1, adopting welding spot coarse positioning based on prior knowledge, planning a welding optimal path and providing a running direction for a vision system and a mechanical arm.
Step 2, fine positioning of welding spots based on machine vision, discrimination of welding spot types, accurately guiding the mechanical arm to the welding spot positions, and performing targeted automatic welding.
Step 3, welding spot defect detection based on online deep reinforcement learning, automatically detecting welding spot defects and judging their types, and providing a basis and guidance for secondary repair welding at the same station.
In step 1, the knowledge-based welding spot coarse positioning and optimal welding path planning are implemented as follows:
1-1. First, a non-standard component knowledge base is established; the knowledge base comprises the names, information and welding means of all kinds of non-standard components. The PCB file is read to obtain the required welding component and welding spot information, and the knowledge base is used to identify the welding spots of all non-standard components on the PCB. A user-defined PCB coordinate system is established and all non-standard component welding spots are marked, so that each welding spot obtains unique coordinate information, completing coarse positioning of the welding spots.
1-2. To minimize the total working time, multiple welding paths are planned and the optimal path for the vision system camera movement is searched. The welding spots on a PCB are densely distributed; to prevent other welding spots from interfering with the welding spots of the target non-standard component, the unique Field of View (FOV) of the target welding spot must be determined. The field of view is the largest image area one camera can capture in a single shot. After the PCB is loaded and fixed, the camera first moves to the full-board MARK point (the mark printed on the board in copper) as the initial point of the camera's point-seeking path on the PCB.
1-3. Move to the target field-of-view areas in the planned path order. The problem of visiting welding spots in sequence is modeled as a standard travelling salesman problem. An optimal path is obtained with a Hopfield neural network based on the welding spot coordinate information, and the welding sequence of the welding spots is planned automatically, as shown in FIG. 2.
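For illustration, the sketch below orders a handful of hypothetical pad coordinates under the same travelling-salesman formulation. In place of the Hopfield network used by the method, it substitutes a nearest-neighbour tour refined by 2-opt, which is easier to show compactly; the coordinates are invented for the example.

```python
# Sketch: order welding spots by solving the visiting problem as a TSP.
# The method itself uses a Hopfield neural network; this stand-in uses a
# nearest-neighbour tour refined by 2-opt. Pad coordinates are invented.
import math

def tour_length(order, pads):
    # closed tour: return to the starting pad at the end
    return sum(math.dist(pads[order[i]], pads[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(pads, start=0):
    unvisited = set(range(len(pads))) - {start}
    order = [start]
    while unvisited:
        last = pads[order[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, pads[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(order, pads):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                cand = order[:i] + order[i:j][::-1] + order[j:]  # reverse one segment
                if tour_length(cand, pads) < tour_length(order, pads):
                    order, improved = cand, True
    return order

pads = [(12.0, 8.5), (40.2, 9.1), (41.0, 30.4), (11.5, 31.0), (25.3, 20.2)]
order = two_opt(nearest_neighbour(pads), pads)
print("welding order:", order, "tour length:", round(tour_length(order, pads), 2))
```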
In step 2, the machine-vision-based fine positioning of welding spots and discrimination of welding spot shape are realized as follows:
the fine positioning of the target is carried out by using the YOLOv5 as a target detection model, and the target detection model is improved in applicability on the basis of YOLOv 4. In the Neck structure of YOLOv5, a CSP2 structure designed by referring to CSPNet is adopted to enhance the capability of network feature fusion. The algorithm also becomes one of the most excellent target detection algorithms so far.
The welding spot fine positioning based on the machine vision comprises the following steps:
data set production, network model training, and filtering and output of recognition results.
The data set production comprises the following steps:
2-1-1. Data acquisition. All types of welding spot data required for detection were obtained by photographing physical components during field investigation of an enterprise PCB production line, as shown in fig. 3.
2-1-2. Data preprocessing. The image sizes commonly used by YOLOv5 at the network input are 416 × 416, 608 × 608, etc. Because the photographed component images have very high resolution, feeding a complete image into the neural network for training is impractical, so the images are cut into low-resolution sub-images, subdivided by welding spot type, and finally given manual data labeling in a unified pass (see the sketch after step 2-1-5). In short, each image is divided into 416 × 416 tiles before manual labeling.
2-1-3. Data labeling. Neural network training needs a large amount of image data; a portion of the images is randomly selected and labeled manually with the labeling tool LabelImg, marking the connector welding spot targets, as shown in fig. 4.
2-1-4. Data enhancement. In deep learning on images, data enhancement is applied to the raw data images to enrich the training set, extract image features better, improve model generalization, and prevent overfitting, as shown in fig. 5.
2-1-5. Data storage. After labeling, an xml file is generated from the result; the stored key information comprises the target category name and the four boundary coordinates xmin, xmax, ymin and ymax of the target frame. The labeled data are stored in the VOC data format: one image corresponds to one label file, the image storage format is img, and the label file storage format is xml.
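By way of a minimal sketch, the preprocessing and storage steps above can be imitated as follows: a high-resolution board image is tiled into 416 × 416 sub-images and one VOC-style xml label file is written per tile. The blank input image, the single sample box and the class name are illustrative stand-ins; real boxes would come from LabelImg.

```python
# Sketch: tile a board image into 416x416 crops and write one Pascal-VOC
# style .xml per crop. The blank image, sample box and class name are
# illustrative; real boxes come from manual LabelImg annotation.
import xml.etree.ElementTree as ET
from pathlib import Path
from PIL import Image

TILE = 416

def voc_xml(filename, w, h, boxes):
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, val in (("width", w), ("height", h), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin),
                         ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.ElementTree(root)

img = Image.new("RGB", (4863, 2874))  # stand-in for a real board photograph
out = Path("tiles")
out.mkdir(exist_ok=True)
for top in range(0, img.height - TILE + 1, TILE):
    for left in range(0, img.width - TILE + 1, TILE):
        crop = img.crop((left, top, left + TILE, top + TILE))
        stem = f"tile_{left}_{top}"
        crop.save(out / f"{stem}.png")
        boxes = [("solder_joint", 100, 120, 140, 160)]  # dummy annotation
        voc_xml(f"{stem}.png", TILE, TILE, boxes).write(out / f"{stem}.xml")
```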
The process of training the network model comprises the following steps:
2-2-1. Network input and data enhancement. YOLOv5 contains 5 downsampling steps, for a total stride of 2^5 = 32; since the input image size is 416 × 416, YOLOv5 divides the input image into a 13 × 13 grid (416/32 = 13).
To ensure the trained model generalizes sufficiently, enough training data must be guaranteed, and data enhancement is applied to the limited data. The data enhancement methods used in the method comprise flip transformation, random cropping, color jittering, translation transformation, scale transformation, contrast transformation, noise perturbation and rotation transformation.
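As a sketch of such an augmentation pipeline, the listed operations could be composed with torchvision as below. The parameter values are illustrative, and in a detection setting the geometric transforms would also have to be applied to the label boxes.

```python
# Sketch: the listed augmentations via torchvision; values are illustrative.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),               # flip transformation
    transforms.RandomResizedCrop(416, scale=(0.8, 1.0)),  # random crop + scale
    transforms.ColorJitter(brightness=0.2, contrast=0.2,  # color jitter + contrast
                           saturation=0.2),
    transforms.RandomAffine(degrees=15,                   # rotation + translation
                            translate=(0.1, 0.1)),
    transforms.ToTensor(),
    transforms.Lambda(                                    # noise perturbation
        lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),
])
# Note: for detection training, box labels must be transformed along with
# the pixels for every geometric operation above.
```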
2-2-2. Network structure. The YOLOv5 network structure can be divided into four parts: the input end, Backbone, Neck and Prediction. The Backbone part uses the Focus structure and CSP structure to extract features.
2-2-3. Network output. For an input image, YOLOv5 maps it to output tensors at 3 scales, representing the probability of various objects being present at various locations in the image. For a 416 × 416 input image, 3 prior frames are set for each grid cell of the feature map at each scale, for a total of 13 × 13 × 3 + 26 × 26 × 3 + 52 × 52 × 3 = 10647 predictions. Each prediction is a 4+1+1 = 6-dimensional vector comprising the frame coordinates (4 values), the frame confidence (1 value) and the object class probability (only one object class is set in the method).
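The prediction count can be verified in a few lines, assuming the usual YOLOv5 strides of 32, 16 and 8 for the three scales:

```python
# Sketch: checking the 10647-prediction count for a 416 x 416 input,
# assuming strides 32, 16, 8 and 3 prior boxes per grid cell.
grids = [416 // s for s in (32, 16, 8)]  # -> [13, 26, 52]
total = sum(g * g * 3 for g in grids)
print(grids, total)                      # [13, 26, 52] 10647
```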
2-2-4. Loss function. YOLOv5 uses the BCEWithLogitsLoss function to compute the loss of the object score, the class probability score uses the binary cross-entropy loss (BCEcls loss), and the bounding box uses the GIoU loss.
The loss function is calculated as follows:
GIoU = IoU - |C \ (A ∪ B)| / |C|, and the bounding box loss is L_GIoU = 1 - GIoU
the meaning of the above equation is to find a minimum closed shape C for two arbitrary boxes A, B, let C contain A, B, then calculate the ratio of the area of C that does not cover A and B to the total area of C, and then subtract this ratio from IoU of A and B.
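A minimal sketch of that computation for axis-aligned boxes given as (x1, y1, x2, y2), with the regression loss taken as 1 - GIoU:

```python
# Sketch: GIoU for axis-aligned boxes (x1, y1, x2, y2); loss = 1 - GIoU.
def giou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    # smallest enclosing box C of A and B
    c_area = ((max(a[2], b[2]) - min(a[0], b[0]))
              * (max(a[3], b[3]) - min(a[1], b[1])))
    return iou - (c_area - union) / c_area

a, b = (0, 0, 2, 2), (1, 1, 3, 3)
print("GIoU:", giou(a, b), "loss:", 1 - giou(a, b))
```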
The recognition result of the network model after training is shown in fig. 7.
The steps of filtering and outputting the network model result are as follows:
2-3-1. Output coordinates and categories. Each prediction frame carries a confidence; a frame whose confidence exceeds the preset value of 0.3 is treated as a suspected target. When the intersection-over-union of two prediction frames exceeds a threshold, they are considered the same target; since one target generally attracts several prediction frames, the frame with the highest confidence among them is taken as the final result, and its coordinate and category information is output.
2-3-2. Cluster a threshold distribution with K-means. Welding spots are typically of fairly regular size, so an output prediction box that is far too large or too small is almost always invalid. The method therefore applies K-means clustering to the welding spot sizes in the training set and uses the result as a threshold on the output welding spot size (see the sketch below). Experimental results show this thresholding effectively improves recognition precision.
The effect of the threshold value on recognition accuracy is shown in fig. 8.
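A compact sketch of this post-processing is given below; the 0.5 IoU threshold and the pixel size band stand in for the clustered values, which the text does not specify.

```python
# Sketch: confidence filter, greedy NMS, then a size filter whose band would
# come from K-means over training-set pad sizes. Thresholds are illustrative.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def filter_detections(dets, conf_thr=0.3, iou_thr=0.5, size_band=(10, 60)):
    """dets: list of (x1, y1, x2, y2, confidence) in pixels."""
    dets = sorted((d for d in dets if d[4] >= conf_thr),
                  key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if any(iou(d[:4], k[:4]) > iou_thr for k in kept):
            continue  # same target as a kept, higher-confidence frame
        w, h = d[2] - d[0], d[3] - d[1]
        if size_band[0] <= w <= size_band[1] and size_band[0] <= h <= size_band[1]:
            kept.append(d)  # size consistent with clustered pad sizes
    return kept

# The size band could be derived with, for example:
#   from sklearn.cluster import KMeans
#   centres = KMeans(n_clusters=3, n_init=10).fit(wh_array).cluster_centers_
```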
The machine-vision-based defect detection and positioning roadmap is shown in FIG. 1.
Compared with the prior art, the invention has the following advantages and effects:
1. The welding path is planned automatically, and a path planning algorithm optimizes the welding path of the camera and mechanical arm, improving production efficiency;
2. For image feature extraction, a deep neural network fusing multi-layer features is used, which benefits detection of scenes with many small welding spot targets;
3. The training process is optimized; for a single-class target, the weight of the coordinate loss is increased, improving positioning precision;
4. Threshold filtering of the results screens out interfering targets and improves recognition precision;
5. Online deep reinforcement learning improves learning efficiency on massive data; the online learning model is updated continuously, and each training step uses only the current sample, reducing learning complexity.
Drawings
FIG. 1 is a self-learning based automatic welding and defect detection technology roadmap.
Fig. 2 is a schematic diagram of automatic welding path planning.
FIG. 3 is a raw stitched image captured by AOI automated optical inspection equipment, 4863 × 2874 pixels in size.
FIG. 4 illustrates the type of connector pad target that needs to be identified.
FIG. 5 shows samples of partial data enhancement.
Fig. 6 is a network configuration diagram of YOLOv 5.
Fig. 7 shows the recognition result of the method at 416 × 416 resolution.
Fig. 8 is a comparison of recognition results of the method at different threshold values.
Detailed description of the invention
The invention provides a self-learning-based automatic welding and defect detection method and system, aiming at meeting the flexible welding production requirements of PCB non-standard components and realizing intelligent full-automatic welding in the true sense.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
step 1, adopting knowledge-based welding spot coarse positioning, planning a welding optimal path, and providing a running direction for a vision system and a mechanical arm.
Step 2, fine positioning of welding spots based on machine vision, discrimination of welding spot types, accurately guiding the mechanical arm to the welding spot positions, and performing targeted automatic welding.
Step 3, welding spot defect detection based on online deep reinforcement learning, automatically detecting welding spot defects and judging their types, and providing a basis and guidance for secondary repair welding at the same station.
Step 1 covers the knowledge-based welding spot coarse positioning and optimal welding path planning.
In this embodiment, the specific implementation steps are as follows:
1) First, a non-standard component knowledge base is established; the base comprises the names, information and welding means of all kinds of non-standard components. The PCB file is read to obtain the required welding component and welding spot information, and the knowledge base is used to identify the welding spots of all non-standard components on the PCB. A user-defined PCB coordinate system is established and all non-standard component welding spots are marked automatically, so that each welding spot obtains unique coordinate information, completing coarse positioning of the welding spots.
2) To minimize the total working time, multiple welding paths are planned and the optimal path for the vision system camera movement is searched. The welding spots on a PCB are densely distributed; to prevent other welding spots from interfering with the welding spots of the target non-standard component, the unique Field of View (FOV) of the target welding spot must be determined. The field of view is the largest image area one camera can capture in a single shot. After the PCB is loaded and fixed, the camera first moves to the full-board MARK point (the mark printed on the board in copper) as the initial point of the camera's point-seeking path on the PCB.
3) Move to the target field-of-view areas in the planned path order. The problem of visiting welding spots in sequence is modeled as a standard travelling salesman problem. An optimal path is obtained with a Hopfield neural network based on the welding spot coordinate information, and the welding sequence of the welding spots is planned automatically, as shown in FIG. 2.
In step 2, the welding spot fine positioning and welding spot shape discrimination are carried out based on machine vision.
In this embodiment, YOLOv5 is used as the target detection model for fine positioning of the target. The model makes several applicability improvements on the basis of YOLOv4, including multi-scale detection and multi-label classification; in the Neck structure of YOLOv5, a CSP2 structure designed with reference to CSPNet is adopted to strengthen the network's feature fusion capability. YOLOv5 remains one of the strongest target detection algorithms to date.
The welding spot fine positioning based on the machine vision comprises the following steps:
data set production, network model training, and filtering and output of recognition results.
In this embodiment, the data set production comprises the following steps:
1) Data acquisition. The data used by the invention come from raw PCB images captured by AOI automated optical inspection equipment; for each PCB, the camera captures several local fields of view, which are stitched into a complete image by an image stitching method, as shown in fig. 3.
2) Data preprocessing. YOLOv5 unifies images to 416 × 416 at the network input; to ensure the images are not distorted in the process, they are divided into 416 × 416 tiles and then labeled manually.
3) Data labeling. Neural network training needs a large amount of image data; a portion of the images is randomly selected and labeled manually with the labeling tool LabelImg, marking the connector welding spot targets, as shown in FIG. 4.
4) Data storage. After labeling, an xml file is generated from the result; the stored key information comprises the target category name and the coordinates xmin, xmax, ymin and ymax of the four end points of the target frame. The labeled data are stored in the VOC data format: one image corresponds to one label file, the image storage format is img, and the label file storage format is xml.
In this embodiment, the network model training process includes the following steps:
1) Network input and data enhancement. YOLOv5 contains 5 downsampling steps, so the network input size must be a multiple of 2^5 = 32; YOLOv5 divides the input image into a 13 × 13 grid, so the required input size is 32 × 13 = 416.
To ensure the trained model generalizes sufficiently, enough training data must be guaranteed, and data enhancement is applied to the limited data. The data enhancement methods used here comprise flip transformation, random cropping, color jittering, translation transformation, scale transformation, contrast transformation, noise perturbation and rotation transformation.
2) Network structure. The YOLOv5 network structure can be divided into four parts: the input end, Backbone, Neck and Prediction. The Backbone part uses the Focus structure and CSP structure to extract features.
3) Network output. For one input image, YOLOv5 maps it to output tensors at 3 scales, representing the probability of various objects being present at various locations in the image. For a 416 × 416 input image, 3 prior frames are set for each grid cell of the feature map at each scale, for a total of 13 × 13 × 3 + 26 × 26 × 3 + 52 × 52 × 3 = 10647 predictions. Each prediction is a 4+1+1 = 6-dimensional vector comprising the frame coordinates (4 values), the frame confidence (1 value) and the object class probability (only one object class is set in the method).
4) Loss function. YOLOv5 uses the BCEWithLogitsLoss function to compute the loss of the object score, the class probability score uses the binary cross-entropy loss (BCEcls loss), and the bounding box uses the GIoU loss.
The loss function is calculated as follows:
GIoU = IoU - |C \ (A ∪ B)| / |C|, and the bounding box loss is L_GIoU = 1 - GIoU
the meaning of the above equation is to find the smallest closed shape C for two arbitrary boxes A, B, allowing C to include A, B, then calculate the ratio of the area of C not covering A and B to the total area of C, and subtract this ratio from IoU for A and B.
The effect of this example can be further illustrated by the following experiment:
the experimental environment and conditions of the present invention are as follows:
CPU: Intel Core i7-8700K, six cores, 3.70 GHz
GPU: NVIDIA GeForce GTX 1080 Ti, 11 GB
Memory: 32 GB
Software environment
Operating system: Ubuntu 18.04 LTS
The image data used for the experiment and the images used for training come from the same AOI automated optical inspection equipment. To compare the recognition effect of the model on pictures of different resolutions, an original image of 4863 × 2874 pixels was divided into three resolutions, 416 × 416, 832 × 832 and 1024 × 1024, yielding 126, 48 and 35 pictures respectively. The welding spot targets in the three picture sets were then labeled manually, and the labels were taken as the ground truth. The model recognition results are compared against the ground truth to calculate the recognition accuracy of the model.
The experimental results were measured with five metrics, defined as follows:
Model performance is evaluated in two respects, recognition accuracy and recognition efficiency. In the invention, the intersection-over-union mIoU, precision P, recall R and F1 score evaluate recognition accuracy, and the frame rate fps evaluates recognition efficiency.
Here T_P is the number of true positives, i.e. samples predicted as 1 whose true value is also 1; F_P is the number of false positives, i.e. samples predicted as 1 whose true value is 0; and F_N is the number of false negatives, i.e. samples predicted as 0 whose true value is 1. The intersection-over-union IoU is the overlap rate between the prediction frame DT (detection result) generated by the model and the original labeled frame GT (ground truth), i.e. the ratio of their intersection to their union; the optimum is complete overlap, a ratio of 1. The frame rate fps evaluates the processing speed of the algorithm: n is the total number of images processed, T is the total time consumed, and the result is the number of images the algorithm processes per second, in frames per second (f/s).
Precision: P = T_P / (T_P + F_P)
Recall: R = T_P / (T_P + F_N)
F1 score: F1 = 2 × P × R / (P + R)
Intersection-over-union: IoU = area(DT ∩ GT) / area(DT ∪ GT)
Frame rate: fps = n / T
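These five quantities reduce to a few lines; the counts and timing below are placeholders, not the experimental values:

```python
# Sketch: the evaluation quantities defined above; the counts are placeholders.
def metrics(tp, fp, fn, n_images, total_seconds):
    p = tp / (tp + fp)              # precision
    r = tp / (tp + fn)              # recall
    f1 = 2 * p * r / (p + r)        # F1 score
    fps = n_images / total_seconds  # frames per second
    return p, r, f1, fps

p, r, f1, fps = metrics(tp=90, fp=10, fn=10, n_images=126, total_seconds=9.0)
print(f"P={p:.1%} R={r:.1%} F1={f1:.1%} fps={fps:.1f}")
```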
As can be seen from Table 1, for the 416 × 416 test pictures the precision is 97.348%, the recall 90.813% and the F1 score 93.967%. When testing at 832 × 832, precision, recall and F1 score all drop: precision by 1.72%, recall by 1.249% and F1 score by 1.47%. Compared with a 416 × 416 test picture, an 832 × 832 test picture has four times the area and roughly four times as many targets to detect, so the frame rate drops as well.
TABLE 1: precision, recall, F1 score and frame rate of the model at the 416 × 416, 832 × 832 and 1024 × 1024 test resolutions.
The model identification results are shown in fig. 7.
In this embodiment, the network model result filtering and outputting steps are as follows:
and (6) coordinates and categories are obtained. And each prediction frame has a confidence coefficient, the preset confidence coefficient is higher than 0.3 and is a suspected target, when the intersection ratio of the two prediction frames is larger than a threshold value, the two prediction frames are considered to be the same target, a plurality of prediction frames generally exist for the same target, and the frame with the highest confidence coefficient is selected from the prediction frames as a final result. Outputting the coordinate information and the category information thereof.
2) Cluster a threshold distribution with K-means. Welding spots are typically of fairly regular size, so an output prediction box that is far too large or too small is almost always invalid. The method therefore applies K-means clustering to the welding spot sizes in the training set and uses the result as a threshold on the output welding spot size. Experimental results show this thresholding effectively improves recognition precision.
The effect of the threshold value on recognition accuracy is shown in fig. 8.

Claims (6)

1. A welding spot detection and positioning method based on machine vision is characterized by comprising the following steps:
step 1, adopting welding spot coarse positioning based on prior knowledge, planning a welding optimal path and providing a running direction for a vision system and a mechanical arm;
step 2, fine positioning of welding spots based on machine vision, judgment of welding spot types, accurate guiding of a mechanical arm to find welding spot positions and targeted automatic welding implementation;
and step 3, detecting welding spot defects and judging their types automatically by adopting welding spot defect detection based on online deep reinforcement learning, and providing a basis and guidance for secondary repair welding at the same station.
2. The welding spot detecting and positioning method based on machine vision according to claim 1, characterized in that in step 1, knowledge-based welding spot rough positioning and optimal welding path planning are implemented as follows:
1-1, firstly, establishing a non-standard component knowledge base, wherein the knowledge base comprises names, information and welding means of all kinds of non-standard components; after reading the PCB file, obtaining the information of the required welding components and welding spots, and identifying the welding spots of all non-standard components in the PCB by using a knowledge base; establishing a self-defined PCB coordinate system, and marking all non-standard component welding spots to enable each welding spot to obtain unique coordinate information so as to complete coarse positioning of the welding spots;
1-2, in order to minimize the total working time, planning multiple welding paths, and searching the optimal path for the movement of the vision system camera; welding spots on the PCB are densely distributed, and in order to prevent other welding spots from interfering with the welding spots of the target non-standard component, the unique field of view of the target welding spot needs to be determined; after the PCB is loaded and fixed, the camera first moves to the MARK point of the whole board, which serves as the initial point of the camera's point-finding path on the PCB;
1-3, moving to the target field-of-view areas according to the planned path sequence; modeling the problem of sequential welding spot access as a standard travelling salesman problem; and obtaining an optimal path by using a Hopfield neural network according to the welding spot coordinate information, automatically planning the welding sequence of the welding spots.
3. The welding spot detection and positioning method based on machine vision according to claim 1 or 2, characterized in that the welding spot fine positioning and the welding spot shape discrimination based on machine vision in step 2 are realized as follows:
and carrying out fine positioning of the target by using YOLOv5 as the target detection model, wherein the Neck structure of YOLOv5 adopts a CSP2 structure, and the welding spot fine positioning specifically comprises data set production, network model training, and filtering and output of recognition results.
4. The method of claim 3, wherein the data set generation comprises the following steps:
2-1-1, acquiring data, namely acquiring all types of welding spot data required by project detection in a mode of shooting and acquiring a component object picture by inspecting a PCB production line of an enterprise on the spot;
2-1-2. data preprocessing, wherein the image sizes commonly used by YOLOv5 at the network input are 416 × 416 or 608 × 608; because the resolution of the photographed component object image is very high, the object image is cut into sub-pictures of small resolution, which are then subdivided according to welding spot type and finally given unified manual data annotation; in summary, the object image is divided into 416 × 416 tiles and then labeled manually;
2-1-3, marking data, wherein a large amount of image data is needed for neural network training, randomly selecting partial images, and manually marking by using a marking tool LabelImg to mark welding spot targets of the connector assembly;
2-1-4, data enhancement;
2-1-5, storing data, and generating an xml file according to the result after marking, wherein the stored key information comprises a target category name, and coordinates xmin, xmax, ymin and ymax of four end points of a target frame; the marked data is stored according to a VOC data format, one image corresponds to one label file, the image storage format is img, and the label file storage format is xml.
5. The welding spot detecting and positioning method based on machine vision as claimed in claim 3, characterized in that the network model training process comprises the following steps:
2-2-1, network input and data enhancement; YOLOv5 contains 5 downsampling steps, for a total stride of 2^5 = 32; since the input image size is 416 × 416, YOLOv5 divides the input image into a 13 × 13 grid (416/32 = 13);
2-2-2. network architecture; the YOLOv5 network structure is divided into four parts of an input end, a Backbone, a Neck and a Prediction; wherein the Backbone part uses Focus structure and CSP structure to extract features;
2-2-3, network output; for an input image, YOLOv5 maps it to output tensors at 3 scales, representing the probability of various objects existing at various positions of the image; for a 416 × 416 input image, 3 prior frames are set in each grid cell of the feature map at each scale, for a total of 13 × 13 × 3 + 26 × 26 × 3 + 52 × 52 × 3 = 10647 predictions; each prediction is a 4+1+1 = 6-dimensional vector containing the coordinates of the frame, the confidence of the frame, and the object class probability;
2-2-4. loss function; YOLOv5 adopts the BCEWithLogitsLoss function to calculate the loss of the object score, the class probability score adopts the cross-entropy loss function, and the bounding box adopts the GIoU loss;
the loss function is calculated as follows:
GIoU = IoU - |C \ (A ∪ B)| / |C|, with L_GIoU = 1 - GIoU;
the meaning of the above equation is to find the smallest closed shape C for two arbitrary boxes A, B, allowing C to include A, B, then calculate the ratio of the area of C not covering A and B to the total area of C, and subtract this ratio from the IoU of A and B.
6. The welding spot detecting and positioning method based on machine vision as claimed in claim 3, characterized in that the network model result is filtered and output as follows:
2-3-1, outputting coordinates and categories; each prediction frame has a confidence, and a frame whose confidence exceeds the preset value of 0.3 is a suspected target; when the intersection ratio of two prediction frames is larger than a threshold value, the two prediction frames are considered to be the same target; a plurality of prediction frames generally exist for the same target, and the frame with the highest confidence among them is selected as the final result; outputting its coordinate information and category information;
clustering threshold distribution by using K-means 2-3-2; k-means clustering is used for the sizes of the welding spots in the training set, and the result is used as a threshold value for outputting the size of the welding spots.
CN202210349689.3A 2022-04-02 2022-04-02 Welding spot detection and positioning method based on machine vision Pending CN114723706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210349689.3A CN114723706A (en) 2022-04-02 2022-04-02 Welding spot detection and positioning method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210349689.3A CN114723706A (en) 2022-04-02 2022-04-02 Welding spot detection and positioning method based on machine vision

Publications (1)

Publication Number Publication Date
CN114723706A 2022-07-08

Family

ID=82242740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210349689.3A Pending CN114723706A (en) 2022-04-02 2022-04-02 Welding spot detection and positioning method based on machine vision

Country Status (1)

Country Link
CN (1) CN114723706A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115219520A (en) * 2022-07-19 2022-10-21 南京航空航天大学 Aviation connector welding spot quality detection system and method based on deep learning
CN115219520B (en) * 2022-07-19 2023-08-29 南京航空航天大学 Aviation connector welding spot quality detection system and method based on deep learning
CN115014205A (en) * 2022-08-04 2022-09-06 江苏塔帝思智能科技有限公司 Visual detection method and detection system for tower tray and automatic welding guiding system thereof
CN115816466A (en) * 2023-02-02 2023-03-21 中国科学技术大学 Method for improving control stability of visual observation robot
CN117078620A (en) * 2023-08-14 2023-11-17 正泰集团研发中心(上海)有限公司 PCB welding spot defect detection method and device, electronic equipment and storage medium
CN117078620B (en) * 2023-08-14 2024-02-23 正泰集团研发中心(上海)有限公司 PCB welding spot defect detection method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110636715B (en) Self-learning-based automatic welding and defect detection method
CN114723706A (en) Welding spot detection and positioning method based on machine vision
CN111179251A (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN111899241B (en) Quantitative on-line detection method and system for defects of PCB (printed Circuit Board) patches in front of furnace
Benedek et al. Solder paste scooping detection by multilevel visual inspection of printed circuit boards
Li et al. Automatic industry PCB board DIP process defect detection with deep ensemble method
CN112304952B (en) Image recognition device, image recognition method and computer program product thereof
CN113409250A (en) Solder joint detection method based on convolutional neural network
CN112621765B (en) Automatic equipment assembly control method and device based on manipulator
Chen et al. A comprehensive review of deep learning-based PCB defect detection
CN115170497A (en) PCBA online detection platform based on AI visual detection technology
Huang et al. Deep learning object detection applied to defect recognition of memory modules
CN116563230A (en) Weld defect identification method and system
CN113705564B (en) Pointer type instrument identification reading method
CN112560902A (en) Book identification method and system based on spine visual information
CN110148133B (en) Circuit board fragment image identification method based on feature points and structural relationship thereof
JP4814116B2 (en) Mounting board appearance inspection method
CN114782431B (en) Printed circuit board defect detection model training method and defect detection method
CN113362388A (en) Deep learning model for target positioning and attitude estimation
Fabrice et al. SMD Detection and Classification Using YOLO Network Based on Robust Data Preprocessing and Augmentation Techniques
Novoselov et al. Automated module for product identification by their visual characteristics
Noroozi et al. Towards Optimal Defect Detection in Assembled Printed Circuit Boards Under Adverse Conditions
CN114820613B (en) Incoming material measuring and positioning method for SMT (surface mount technology) patch processing
CN117670887B (en) Tin soldering height and defect detection method based on machine vision
Aliev et al. A low computational approach for price tag recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination