CN113554054A - Deep learning-based semiconductor chip gold wire defect classification method and system - Google Patents

Deep learning-based semiconductor chip gold wire defect classification method and system Download PDF

Info

Publication number
CN113554054A
Authority
CN
China
Prior art keywords
gold wire
data set
image
semiconductor chip
gold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110626530.7A
Other languages
Chinese (zh)
Inventor
周洪宇
李浩天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yimu Shanghai Technology Co ltd
Original Assignee
Yimu Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yimu Shanghai Technology Co ltd filed Critical Yimu Shanghai Technology Co ltd
Priority to CN202110626530.7A priority Critical patent/CN113554054A/en
Publication of CN113554054A publication Critical patent/CN113554054A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep learning-based method and system for classifying gold wire defects on semiconductor chips, comprising the following steps: photographing chips with a light field camera to obtain central view images and depth information, each central view image containing two complete chips; segmenting the central view image to obtain single-chip grayscale images; marking the contour of each gold wire in the single-chip grayscale image; classifying defects in the contour-marked grayscale images in combination with the depth information to obtain a data set; and classifying gold wire defects in semiconductor chip images using the data set. The method achieves high accuracy on the test sets and can effectively distinguish the three gold wire defect types from intact gold wires.

Description

Deep learning-based semiconductor chip gold wire defect classification method and system
Technical Field
The invention relates to the field of image processing and semiconductors, in particular to a method and a system for classifying defects of gold wires of a semiconductor chip based on deep learning.
Background
With the rapid development of semiconductor technology and the wide application of integrated circuit (IC) chips, the packaging process of semiconductor chips faces severe challenges, and packaging requirements are becoming ever stricter as electronic products continue to miniaturize. Within a semiconductor chip, the die and the chip leads are interconnected, and a highly conductive metal wire is required to make this connection.
On the assembly line, wire bonding begins once components such as the die and the leads are in place. The bonding environment generally differs between chips; some chips are bonded under thermocompression, others under ultrasonic conditions.
In the actual bonding process, gold wire defects arise from the formation of an insulating layer at the interface, defects in the metallization layer, contamination of the lead or die surface, improper contact stress between materials, unreasonable process parameters, poorly chosen ultrasonic power, and the like.
The process requires a gold wire diameter below 75 microns, a wire loop height of 150 microns, and a bonding pad pitch of 40-100 microns; as IC chip technology develops, wire bonding applications with lower loop heights and smaller pitches are appearing on the market. Such high-precision processing requirements, together with the various problems existing processes may face, mean that the finished chips must be inspected to determine whether they meet the relevant requirements and specifications. Inspecting the morphology of the semiconductor gold wires has therefore become an important step before a chip is used.
To detect gold wire defects, the wire's appearance must first be imaged and observed. Two-dimensional methods include manual inspection and automated optical inspection (AOI), while existing microscopic three-dimensional measurement methods include white light interferometry, confocal microscopy, and super depth of field microscopy.
Early chip inspection relied on imaging with a microscope followed by manual visual checking. The number of gold wires on a production line is huge, the observation efficiency of the human eye is low, and the results are unstable; moreover, a microscope can only perform two-dimensional inspection from a single viewing angle, so the three-dimensional morphology of the gold wires cannot be fully captured.
Automated optical inspection (AOI) under traditional algorithms applies conventional optical principles and uses machines in place of manual labor, saving manpower and material resources. It uses an artificial light source; the light is refracted by the AOI optics onto a CCD element, and the optical data are processed with the corresponding algorithms to obtain the information to be inspected. However, AOI is mainly applied to printed circuit board (PCB) inspection. The three-dimensional shape of a semiconductor IC chip's gold wires differs from the essentially two-dimensional image of an ordinary PCB; although the method's automation reduces the manpower and material cost of inspection, the traditional algorithms that accompany it are suited only to two-dimensional image processing, whereas gold wire defects involve complex three-dimensional trajectories and shapes, so the method is somewhat immature for this kind of defect inspection.
White light interferometry exploits the interference effect of waves, obtaining position information from the superposition of two reflected wave trains. Interference is a physical property unique to waves: when two wave trains have the same frequency and a fixed phase difference, they reinforce or cancel in certain regions to form a stable interference pattern. Using the displacement superposition principle, laser light reflected by a reference plane and by the target plane forms coherent interference that is recorded by a detector; the reference plane is moved continuously, the interference intensity is maximal when the two planes are at the same height in the world coordinate system, and the depth of each layer is judged from the analysis of the reflected intensity. However, each scan yields information for only one plane, while a gold wire spans a considerable height range relative to the reference plane; to obtain the complete three-dimensional shape of a gold wire, multiple reference planes must be determined and the actual three-dimensional shape and features must be continuously updated through repeated scanning.
Confocal laser scanning microscopy (LSCM) is also a common inspection method at present, and a confocal microscope is more accurate than a conventional optical microscope. It uses a laser as the light source and contains two pinholes in the light path: one behind the light source, commonly called the illumination pinhole, through which the laser light must pass before it can be focused, and one in front of the detector. Light from the focal point passes through the pinhole in front of the detector and transfers information to the CCD plane, where a signal conversion algorithm turns it into a digital image signal. The confocal arrangement avoids interference from stray light, so the imaging is sharper. However, this image acquisition and conversion works only at a single height; to obtain the overall shape of the sample, continuous optical sections of many layers are needed, so the confocal microscope must scan mechanically layer by layer along the Z axis.
Depth of field is the depth of the scene being photographed: in photographic imaging, the portion between the nearest and the farthest object that can be sharply imaged is the depth of field. A super depth of field microscope acquires images continuously layer by layer along the Z axis and then renders and stacks them to obtain the three-dimensional topography of the object. With a zoom lens, the super depth of field microscope continuously self-adjusts its focal length to conveniently acquire sharper images carrying more light information.
The first two inspection methods are mostly used to judge and process two-dimensional pictures and fall short when facing defects in three-dimensional space such as gold wire defects. Although the latter three-dimensional imaging microscopes can obtain three-dimensional defect information of a chip, they can only reconstruct it by scanning and processing repeatedly along the Z axis to obtain focused images of the object at different heights. All three offer high resolution and accuracy, but the cost is too high: images at different depths must be captured and processed many times, unavoidable vibration errors are introduced, and the complex synthesis involved in scanning keeps inspection efficiency low, so they cannot be used at scale in the current chip inspection field.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method and a system for classifying the gold wire defects of a semiconductor chip based on deep learning.
The invention provides a deep learning-based semiconductor chip gold wire defect classification method, which comprises the following steps:
a data acquisition step: shooting chips by using a light field camera to obtain central view images and depth information, wherein each central view image comprises two complete chips;
a pretreatment step: dividing the central visual angle image to obtain a gray scale image of the single chip;
gold wire segmentation step: respectively marking the outlines of gold wires of the gray scale image of the single chip;
a data set construction step: classifying the defects of the gray level image marked with the outline by combining the depth information to obtain a data set;
and (3) classification step: and classifying the gold wire defects of the semiconductor chip diagram by using the data set.
Preferably, the data set constructing step further includes augmenting the data set to obtain an augmented data set, and the neural network training step trains the neural network by using the augmented data set.
Preferably, the pre-treating step comprises:
s2.1, performing binarization operation on the obtained central visual angle diagram;
s2.2, performing image enhancement on the binarized central view angle image by adopting morphological filtering;
and S2.3, carrying out edge detection on the enhanced central view angle image.
Preferably, the gold wire dividing step includes: respectively naming N gold wires on the chip as L1-LN from left to right and from top to bottom, marking a gray scale image of the single chip in a polygon or point form by adopting segmentation software, respectively marking the gold wires on the chip with outlines in batch, and segmenting through a segmentation model.
Preferably, the method further comprises the following steps:
training a neural network: training a neural network according to the data set;
and in the classification step, gold wire defect classification is carried out on the semiconductor chip diagram by using the trained neural network.
The invention provides a semiconductor chip gold wire defect classification system based on deep learning, which comprises:
a data acquisition module: shooting chips by using a light field camera to obtain central view images and depth information, wherein each central view image comprises two complete chips;
a preprocessing module: dividing the central visual angle image to obtain a gray scale image of the single chip;
gold wire segmentation module: respectively marking the outlines of gold wires of the gray scale image of the single chip;
a data set construction module: classifying the defects of the gray level image marked with the outline by combining the depth information to obtain a data set;
a classification module: and classifying the gold wire defects of the semiconductor chip diagram by using the data set.
Preferably, the data set constructing module further amplifies the data set to obtain an amplified data set, and the neural network training module trains the neural network by using the amplified data set.
Preferably, the preprocessing module comprises:
a module S2.1, performing binarization operation on the obtained central visual angle diagram;
a module S2.2, adopting morphological filtering to carry out image enhancement on the binarized central view angle image;
and the module S2.3 is used for carrying out edge detection on the enhanced central visual angle diagram.
Preferably, the gold wire dividing module includes: respectively naming N gold wires on the chip as L1-LN from left to right and from top to bottom, marking a gray scale image of the single chip in a polygon or point form by adopting segmentation software, respectively marking the gold wires on the chip with outlines in batch, and segmenting through a segmentation model.
Preferably, the method further comprises the following steps:
a neural network training module: training a neural network according to the data set;
and the classification module is used for classifying the gold thread defects of the semiconductor chip diagram by using the trained neural network.
Compared with the prior art, the invention has the following beneficial effects:
the method has high accuracy in test concentration, and can effectively judge three defects and perfect class characteristics of the gold wires.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a central view of the present embodiment;
FIG. 3 is a single-chip grayscale diagram;
FIG. 4 is a depth view of gold wires;
FIG. 5 is a schematic view of a gold wire with a missing weld;
FIG. 6 is a schematic view of a broken gold wire;
FIG. 7 is a schematic view of an off-set gold wire;
FIG. 8 is a diagram illustrating a single residual block;
FIG. 9 is a diagram of an Identity Block structure;
FIG. 10 is a Conv Block diagram;
FIG. 11 is a Resnet-50 network architecture;
FIG. 12 is a graph showing the variation of each index during the L1 training process;
FIG. 13 is a graph showing the variation of each index during the L2 training process;
FIG. 14 is a graph showing the variation of each index during the L3 training process;
FIG. 15 is a graph showing the variation of each index during the L4 training process;
FIG. 16 shows the "Defect-bias" test results in the L1 test set;
FIG. 17 shows the results of the "Defect-skip weld" test set for L1;
FIG. 18 shows the "Defect-Break" test results in the L1 test set;
fig. 19 shows the results of the "good" test set for L1.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
As shown in fig. 1, the method for classifying a semiconductor chip gold wire defect based on deep learning provided by the present invention comprises:
a data acquisition step: the method comprises the steps of shooting chips by using a light field camera to obtain central view images and depth information, wherein each central view image comprises two complete chips.
A pretreatment step: and dividing the central visual angle image to obtain a single-chip gray scale image.
Gold wire segmentation step: and respectively marking the gold lines of the gray scale image of the single chip with outlines.
A data set construction step: and classifying the defects of the gray level image marked with the outline by combining the depth information to obtain a data set.
And (3) classification step: and classifying the gold wire defects of the semiconductor chip diagram by using the data set.
1. Data acquisition
Chip pictures are acquired, the different gold wires in the chip image captured by the light field camera are segmented, and a multi-class data set is built from the segmented gold wires. The raw image captured by the light field camera cannot be used directly; the raw data must be preprocessed to obtain the depth information of each gold wire in a form usable as network input. The light field camera's accompanying software library is first used to obtain the following information: a central view image and depth information.
To preserve the full information of the light field image, it is not compressed during acquisition: it is stored in uncompressed BMP format, which also keeps loading fast.
Each central view image contains two complete IC chips and carries the position and two-dimensional shape information of the chip gold wires in the image. Each picture has a resolution of 800 × 540 and a size of 440 KB; fig. 2 shows such a central view.
Because the chip gold wire defects are three-dimensional and hard to judge accurately from two-dimensional pictures, the light field camera's excellent ability to capture ray direction is used to obtain the depth information corresponding to each chip picture, which is stored in a binary txt file.
2. Data pre-processing
A central view image captured directly by the camera contains the gold wire information of at least two chips and carries no depth information, so it cannot be read directly by the training network.
The two complete chips on each central view lie within rectangular frames; given the clear gray-level difference between the chip pictures and their surrounding background, a traditional image segmentation method, threshold-based segmentation, is used to separate out the image of each chip.
Threshold segmentation of the chip image is performed with the OpenCV computer vision library. Its basic principle is to divide image pixels into several classes according to the gray value differences at each point of the grayscale image; it works well when the segmentation target and the background occupy clearly separated gray-level ranges.
The specific algorithm steps are as follows:
(1) Binarize the obtained central view image: a fixed-threshold segmentation method is used with a threshold parameter of 64. All pixels with gray level greater than or equal to 64 are treated as foreground and their gray value is set to 255, while pixels with gray value below 64 are set to 0. Applied in batch to all central view images obtained by the light field camera, this leaves only the two gray values 0 and 255 and enhances contrast.
(2) Enhance the binarized image with morphological filtering, eroding and dilating the bright regions. Opening is applied first with a 10 × 10 structuring element to remove small blob-like objects and delete regions that cannot contain the structuring element, making the contours smoother and more completely connected. Closing is then applied with a 3 × 3 structuring element to remove small black patches and fill holes smaller than the structuring element, giving the objects smoother contours. The mathematical formulas for the dilation and erosion operations are shown below.
Dilation: A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }
Erosion: A ⊖ B = { z | (B)_z ⊆ A }
(3) Detect the chip edges with the Canny edge detection algorithm. Because edge detection is strongly affected by noise, the original image is first Gaussian-smoothed to reduce noise; the image gradient is then computed to obtain candidate edges, the maximum and minimum thresholds are specified, and the Canny function is called to detect the edges. The chip outline is found with the cv2.findContours() function of the OpenCV-Python interface to extract the chip region, which is fitted with a rectangle; rectangles whose area is too large or too small are removed (areas larger than 108000 or smaller than 105000 are discarded). In this way a single-chip grayscale image can be obtained by extracting and segmenting the central view captured directly by the light field camera, as shown in fig. 3 (a code sketch of this pipeline follows).
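As a reference, the preprocessing pipeline above can be sketched with OpenCV-Python roughly as follows. The fixed threshold of 64, the 10 × 10 and 3 × 3 structuring elements, and the 105000-108000 area window follow the description above; the Gaussian kernel size, the Canny thresholds, and the OpenCV 4.x findContours signature are assumptions.

```python
import cv2
import numpy as np

def split_chips(center_view_path):
    """Sketch of the chip-splitting pipeline described above (see assumptions in the text)."""
    gray = cv2.imread(center_view_path, cv2.IMREAD_GRAYSCALE)

    # (1) Fixed-threshold binarization: pixels with gray level >= 64 become 255, others 0
    _, binary = cv2.threshold(gray, 63, 255, cv2.THRESH_BINARY)

    # (2) Morphological filtering: opening with a 10x10 element, then closing with a 3x3 element
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((10, 10), np.uint8))
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

    # (3) Gaussian smoothing + Canny edge detection, then contour extraction
    smoothed = cv2.GaussianBlur(closed, (5, 5), 0)      # kernel size is an illustrative assumption
    edges = cv2.Canny(smoothed, 50, 150)                # min/max thresholds are assumptions
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    chips = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)              # rectangle fitting of the chip region
        if 105000 <= w * h <= 108000:                   # discard over- or undersized rectangles
            chips.append(gray[y:y + h, x:x + w])
    return chips
```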
Because the neural network constrains the pixel size of its input pictures and performs convolution on the input array internally, the height and width of input images should be multiples of a power of two. All acquired central view images are therefore processed in batch, giving all the single-chip images used to build the data set samples: 363 images in total, each with a resolution of 216 × 328, stored as BMP files of 70.2 KB.
Observation of the single-chip grayscale images shows that each complete chip has nine gold wires connecting a bonding pad to a solder ball on a pin; they are named L1 to L9 from left to right and top to bottom. Because the solder balls at the two ends of different gold wires sit at different positions on the chip, intact and defective wires at each of the positions L1 to L9 must be judged by position-specific criteria, so the nine gold wires must be segmented and nine corresponding data sets must be built and fed into the training network to train nine position-specific models. The gold wires are segmented with LabelMe, an offline graphical annotation tool that labels the images with polygons or points; the nine gold wires on each chip are contour-labeled in batch, with each contour drawn as close as possible to the wire's edge in the grayscale image.
After annotation with LabelMe, a JSON file is generated for each chip, recording the coordinate points of the contours of the nine gold wires in that chip picture. The labeled contour of each gold wire is used as a data set, and neural network training yields a model capable of segmenting the gold wires.
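A minimal sketch of reading these annotations back might look like the following; LabelMe stores one polygon per labeled object under "shapes", and the label naming convention ("L1" ... "L9") is an assumption about how the wires are named.

```python
import json
import numpy as np

def load_wire_contours(json_path):
    """Read the polygon contours labeled with LabelMe, one entry per gold wire."""
    with open(json_path, "r", encoding="utf-8") as f:
        annotation = json.load(f)
    contours = {}
    for shape in annotation["shapes"]:          # LabelMe stores one polygon per labeled object
        label = shape["label"]                  # assumed to be "L1" ... "L9"
        points = np.array(shape["points"], dtype=np.float32)  # (x, y) contour vertices
        contours[label] = points
    return contours
```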
The information captured by the light field camera together with the central view image also includes depth information. To ensure data accuracy, save storage space, and allow fast loading and saving, the depth information produced by the light field software library is a binary file with a txt extension; it cannot be opened and viewed directly and must be read in a specific way.
The binary file storing the depth information holds a sequence of float values. They are read directly into a two-dimensional array with the fread function in C++, and the values range from 0 to 255, representing the depth of the picture. From this two-dimensional array a depth map of each gold wire on each chip can be obtained; each picture is 216 pixels wide and 328 pixels high with a bit depth of 8, stored in BMP format. Fig. 4 shows the depth map of one gold wire on one chip.
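The patent reads the float array with fread in C++; an equivalent reading step in Python (assuming row-major ordering and the 328 × 216 size stated above) could look like this:

```python
import numpy as np

def read_depth_map(bin_path, height=328, width=216):
    """Read the raw float depth data saved by the light field software library.
    Row-major ordering and the 328x216 size are assumptions based on the text above."""
    depth = np.fromfile(bin_path, dtype=np.float32)
    depth = depth[: height * width].reshape(height, width)
    # Values lie in 0..255, so they can be stored directly as an 8-bit BMP depth image
    return depth.astype(np.uint8)
```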
3. Defect classification basis and classification dataset generation
Because the chip is tiny and the gold wire bonding process requires high precision, the common types of gold wire defects in a chip are as follows (in the schematic diagrams, the dotted line represents the normal fluctuation range of a good gold wire and the solid line depicts the three defect types):
(1) Missing weld: no gold wire connects the two solder balls between the chip die and the lead, as shown in fig. 5.
(2) Broken wire: the gold wire extending from one end (the chip die or the lead) is not welded to the lead or die at the corresponding position at its other end, or the wire is broken in the middle; a gold wire exists between the corresponding lead and die but is not fully connected, as shown in fig. 6.
(3) Off-set wire: the gold wire between the chip die and the pin extends in a skewed or twisted direction, or its arc is higher or lower than the normal range; judging this defect requires the depth information, as shown in fig. 7.
Missing welds and broken wires can be judged and classified with traditional two-dimensional inspection, whereas some off-set wires are easily misjudged as normal gold wires under two-dimensional microscope observation. The three common defects differ markedly in their depth arrays: for a missing weld the entire depth image contains no non-zero data; for a broken wire there is a run of pixels with depth 0 along the wire's extension direction; and for an off-set wire the depth varies along the wire with a trend different from that of a normal gold wire. The depth information can therefore be used to judge the gold wire defect type comprehensively and avoid the misjudgments caused by a single two-dimensional viewing angle.
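These depth cues could be illustrated with a rough heuristic such as the sketch below. This is only an illustration of the cues, not the trained classifier used in this method, and the zero-run threshold is an arbitrary illustrative value.

```python
import numpy as np

def rough_depth_check(depth, zero_run_threshold=10):
    """Illustrative heuristic for the depth cues described above (not the trained network)."""
    if not depth.any():
        return "defect-missing weld"           # no non-zero depth data anywhere
    column_has_wire = depth.max(axis=0) > 0    # project along the assumed extension direction
    longest_gap = 0
    gap = 0
    for has_wire in column_has_wire:
        gap = 0 if has_wire else gap + 1
        longest_gap = max(longest_gap, gap)
    if longest_gap >= zero_run_threshold:
        return "defect-broken wire"            # a stretch of zero depth along the wire
    return "needs network classification"      # off-set wires need the learned depth trend
```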
The nine gold wires segmented from each chip are stored in 9 folders according to positions L1 to L9, with 341 depth images collected for each position. A certain proportion of the data in each class at each of the nine positions is randomly drawn as the neural network's test set; the remaining gold wires are used as the training data fed into the neural network, and 10% of the data set fed to the network is then randomly selected as the validation set.
The gold wires at positions L1 to L9 exhibit different defects, and the class counts of the raw data set differ. At positions L1 to L4 there are three gold wire defect types: defect-missing weld, defect-broken wire, and defect-off wire, denoted by the numbers 1, 2, and 3 respectively; at positions L5 to L9 there are two defect types, defect-missing weld and defect-off wire, denoted by 1 and 2 respectively, as shown in tables 3-1 and 3-2 (together with the intact class these form the four-class and three-class data sets), so 9 data sets need to be created.
Table 3-1: Four-class raw data sets for gold wire positions L1 to L4
(table data provided as an image in the original publication)
Table 3-2: Three-class raw data sets for gold wire positions L5 to L9
(table data provided as an image in the original publication)
The amount of collected data is small and the class distribution is severely uneven: as shown in tables 3-1 and 3-2, in the L1 to L4 data the counts of the 'defect-missing weld', 'intact', and 'defect-broken wire' classes differ considerably from one another and from the 'defect-off wire' class. Because the training data are few and the classes are highly unbalanced, directly labeling the original data set and feeding it into the network would prevent the network from capturing the features of some classes: the accuracy of the resulting model would be extremely low, classes with few samples could not be predicted correctly, and the classes occupying a large share of the samples would have inflated AP values. Therefore, to make the data set as diverse as possible and give the resulting model stronger generalization ability, the pictures in the training set must be expanded and augmented.
The 'defect-missing weld' class is somewhat special: since the data use depth information and, in the missing weld case, there is no relevant content in the picture, the array read from such a picture is all zeros. These pictures are therefore simply duplicated to increase their number, without applying any transformation.
The gold wires in the remaining three classes require data augmentation. Augmentation methods for picture information include image translation, image flipping, image rotation, image scaling, and their combinations; applying combined transformations in different orders yields different results, which largely resolves the problem of a class having too little data.
The mathematical matrices of the various image transformations are, in standard homogeneous form (x, y pixel coordinates; tx, ty translation offsets; θ rotation angle; sx, sy scale factors; W image width):
Image translation matrix:
[x', y', 1]ᵀ = [[1, 0, tx], [0, 1, ty], [0, 0, 1]] · [x, y, 1]ᵀ
Image (horizontal) flipping matrix:
[x', y', 1]ᵀ = [[-1, 0, W - 1], [0, 1, 0], [0, 0, 1]] · [x, y, 1]ᵀ
Image rotation matrix:
[x', y', 1]ᵀ = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]] · [x, y, 1]ᵀ
Image scaling matrix:
[x', y', 1]ᵀ = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]] · [x, y, 1]ᵀ
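A hedged OpenCV sketch of the single and combined transformations is given below; the translation offsets, rotation angle, and scale factor are illustrative values, not the ones used in the patent.

```python
import cv2
import numpy as np

def augment(img, tx=10, ty=5, angle=15, scale=1.1, flip=True):
    """Apply the translation / flip / rotation / scaling transforms described above.
    Parameter values here are illustrative only."""
    h, w = img.shape[:2]
    out = img
    if flip:
        out = cv2.flip(out, 1)                                    # horizontal flip
    m_translate = np.float32([[1, 0, tx], [0, 1, ty]])            # translation matrix
    out = cv2.warpAffine(out, m_translate, (w, h))
    m_rotate = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  # rotation + scaling
    out = cv2.warpAffine(out, m_rotate, (w, h))
    return out
```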
by carrying out single transformation or combined transformation on images in training sets of 'intact', 'defect-missing welding' and 'defect-broken wire' in gold wire data at positions L1-L4, the proportion difference of each of the four classifications is small, and the model has high generalization and accuracy. The data volume of each class in the training set after the expansion is shown in tables 3-3 and 3-4 below.
Table 3-3: Four-class training data sets for gold wire positions L1 to L4
(table data provided as an image in the original publication)
Table 3-4: Three-class training data sets for gold wire positions L5 to L9
(table data provided as an image in the original publication)
After data augmentation, the classes within the 9 training data sets L1 to L9 are relatively balanced. The 9 training sets are given the corresponding labels 0, 1, 2, and 3, and the picture path, name, and label are written to a txt file, producing the training data sets to be fed into the neural network for training. The test sets are random samples drawn from each class of the original data set; the number of gold wires of each class in each test set is shown in tables 3-5 and 3-6.
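A small sketch of writing such an annotation txt file might look like the following; the exact "path name label" column layout is an assumption.

```python
import os

def write_annotation_txt(image_dir, label, txt_path):
    """Append one 'path name label' line per BMP image of a given class."""
    with open(txt_path, "a", encoding="utf-8") as f:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith(".bmp"):
                f.write(f"{os.path.join(image_dir, name)} {name} {label}\n")
```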
Table 3-5: Number of gold wires of each class in the L1 to L4 test sets
(table data provided as an image in the original publication)
Table 3-6: Number of gold wires of each class in the L5 to L9 test sets
(table data provided as an image in the original publication)
4. Neural network building framework
In this embodiment, the neural network is trained by using the data set, and then the trained neural network is used for classification, but the invention is not limited thereto. The software and hardware devices of the computing platform used in this experiment and their versions are shown in table 4-2 below:
Table 4-2: Software, hardware, and versions of the computing platform
(table data provided as an image in the original publication)
5. Resnet network structure design
Based on the processed depth information, a deep learning classification network taking single-channel depth information as input is built. In the development of classification networks, representative architectures such as AlexNet, VGG, and GoogLeNet have appeared. The ResNet network designs a plain network and a residual network on the basis of the VGG network.
The receptive field of CNN is calculated by the formula:
F(i)=(F(i+1)-1)×Stride+Ksize
wherein F(i) is the receptive field of the i-th layer;
Stride is the stride of the i-th layer;
Ksize is the size of the convolution or pooling kernel.
VGG networks all use a 3 × 3 convolution kernel and a 2 × 2 max pooling kernel to improve performance by continually deepening the network structure. The method adopts the accumulation of a plurality of small convolution kernels to replace a large convolution kernel, and reduces parameters needing to be trained under the condition of ensuring the receptive field.
In this network, assume that the convolution stride is 1 and the padding is 1, and that max pooling uses a kernel size of 2 and a stride of 2. It can then be calculated that a 3 × 3 convolution does not change the size of the feature matrix:
outsize=(insize-Fsize+2P)/S+1=(insize-3+2)/1+1=insize
stacking two 3 × 3 convolution kernels replaces the 5 × 5 convolution kernel, and stacking three 3 × 3 convolution kernels replaces the 7 × 7 convolution kernel, with the before and after replacement fields being the same.
Feature map: F = 1
Conv3×3 (3): F = (1-1)×1+3 = 3
Conv3×3 (2): F = (3-1)×1+3 = 5 (the receptive field of a 5 × 5 convolution kernel)
Conv3×3 (1): F = (5-1)×1+3 = 7 (the receptive field of a 7 × 7 convolution kernel)
However, after stacking the 3 × 3 convolution kernels, the training parameters are reduced, and assuming that the input feature matrix depth and the output feature matrix depth are both a, the following can be obtained by calculation:
number of parameters required to use a 7 × 7 convolution kernel:
7 × 7 × a × a = 49a²
the number of parameters required to stack three 3 × 3 convolution kernels:
3 × 3 × 3 × a × a = 27a²
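The receptive field recursion and the parameter comparison above can be checked with a short script, for example:

```python
def receptive_field(ksizes, strides):
    """Backward recursion F(i) = (F(i+1) - 1) * stride + ksize, starting from F = 1."""
    f = 1
    for ksize, stride in zip(reversed(ksizes), reversed(strides)):
        f = (f - 1) * stride + ksize
    return f

print(receptive_field([3, 3], [1, 1]))        # 5 -> two stacked 3x3 convs cover a 5x5 field
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7 -> three stacked 3x3 convs cover a 7x7 field

a = 256  # example channel count
print(7 * 7 * a * a)      # 49*a^2 parameters for one 7x7 convolution
print(3 * 3 * 3 * a * a)  # 27*a^2 parameters for three stacked 3x3 convolutions
```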
in the network model of VGG, it can be seen that the VGG network repeatedly uses convolution kernels of uniform size many times to extract more complex and expressive features. In the third, fourth and fifth blocks of the VGG-16 network: 256. 512 and 512 filters are used to extract complex features in turn, which is equivalent to a large 512 x 512 classifier with 3 convolutional layers. The importance of depth to the neural network is self evident from the structure of the VGG.
In view of the unavoidable degradation problem in deep networks, the remedy is to let some layers in the network skip over the next layer and connect directly to the layer after it, weakening the strict coupling between adjacent layers. The residual network is composed of basic Residual Blocks, as shown in fig. 8.
The shortcut branch establishes a cross-layer relation for the activation a; the forward propagation is computed as follows:
z[l+1]=W[l+1]a[l]+b[l+1]
a[l+1]=g(z[l+1])
z[l+2]=W[l+2]a[l+1]+b[l+2]
a[l+2]=g(z[l+2]+a[l])
the residual structure of ResNet also has two corresponding residual structures, BasicBlock and Bottleneck.
BasicBlock is used for networks with a small number of layers, two convolution networks of 3x3 are directly connected together, and for the condition that the number of channels is different, a larger channel is directly used as a main part, and zero filling operation is directly carried out on small channel parameters at a missing part.
And the purpose of bottleeck is to reduce the number of parameters and to perform calculation optimization on the residual block, and three convolution networks of 1x1, 3x3 and 1x1 are connected in series. Firstly, one 1x1 convolutional layer is used for dimensionality reduction so as to reduce calculation, and finally, the other 1x1 convolutional layer is used for restoration, so that the precision is maintained, and the calculation amount is reduced. In the case where the depths of the input feature and output feature matrices of both residual structures are 256 dimensions, the first convolution of 1x1 reduces the 256 dimension channel to 64 dimensions, and then finally restored by convolution of 1x1, the number of parameters used as a whole:
1×1×256×64+3×3×64×64+1×1×64×256=69632
The number of parameters without using the Bottleneck structure is:
3 × 3 × 256 × 256 × 2 = 1179648, about 17 times as many, so the network parameters are greatly reduced.
ResNet networks are divided into several variants according to which of the two residual structures is used and how many layers the network has; after investigation and comparison, the ResNet-50 network with the Bottleneck residual structure was finally chosen for building the deep learning model.
The Bottleneck residual structure is the basic structure of the ResNet-50 network and itself comes in two forms, Conv Block and Identity Block: the input and output dimensions of a Conv Block differ, while the input and output dimensions of an Identity Block are the same.
The Identity Block has 2 variable parameters, C and W, i.e., the C and W in the input shape (C, W, W).
C represents the number of channels of the input image, and the last two parameters of the input shape represent the height and width of the picture; when the height and width are equal the shape can be abbreviated as (C, W). When an input reaches an Identity Block, which has only the two variable parameters C and W (fig. 9), the input is recorded as x. On the left side of the block the input first passes through a convolution with kernel size 1 × 1, C/4 kernels, and stride 1, followed by a BN operation and a ReLU activation; then through a convolution with kernel size 3 × 3, C/4 kernels, and stride 1, again followed by BN and ReLU; and finally through a convolution with kernel size 1 × 1, C kernels, and stride 1, followed by BN. These three convolutions with their BN and ReLU operations form a function F(x). On the right side the input of the Identity Block is left unprocessed; F(x) + x is passed through an activation function to give the output of the Identity Block, which is still of shape (C, W, W), the same as the block's input.
When an input reaches a Conv Block (fig. 10), the block has four parameters: C and W are the input channel count and width, and C1 and S are the block's convolution kernel parameter and the stride of the convolutional layer on the right side. The input (C, W, W) first passes, on the left side of the block, through a convolution with kernel size 1 × 1, C1 kernels, and stride S, followed by BN and ReLU; then through a convolution with kernel size 3 × 3, C1 kernels, and stride 1, followed by BN and ReLU; and finally through a convolution with kernel size 1 × 1, 4·C1 kernels, and stride 1, followed by BN. These three convolutions with their BN and ReLU operations form a function F(x). On the right side, the input passes through a convolution with kernel size 1 × 1, 4·C1 kernels, and stride S, followed by BN, forming a function G(x); F(x) and G(x) have the same number of channels, which in general differs from that of x, so the Conv Block changes both the channel count and the size of its input.
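A PyTorch sketch of the Bottleneck structure described above is given below; with a projection shortcut it behaves like a Conv Block, otherwise like an Identity Block. The module and argument names are illustrative, not taken from the patent.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Sketch of the Bottleneck residual structure described above.
    projection=False -> Identity Block (shape preserved); projection=True -> Conv Block."""
    def __init__(self, in_channels, mid_channels, stride=1, projection=False):
        super().__init__()
        out_channels = mid_channels * 4
        self.branch = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, 1, stride=stride, bias=False),  # 1x1 reduce
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1, bias=False),     # 3x3
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, 1, bias=False),                # 1x1 restore
            nn.BatchNorm2d(out_channels),
        )
        if projection:  # Conv Block: shortcut is also convolved so channels/size match F(x)
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:           # Identity Block: shortcut passes the input through unchanged
            self.shortcut = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.branch(x) + self.shortcut(x))
```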
The overall network structure of ResNet-50 is shown in fig. 11 below and is divided into 5 stages:
The first stage processes the input picture; C denotes the number of channels of the input image, and the last two parameters of the input shape are the picture's height and width, abbreviated as (C, W) when they are equal. In the first stage the input picture is first convolved with 7 × 7 kernels at stride 2; the convolutional layer has 64 kernels, so its output also has 64 channels. The data are batch-normalized by a BN layer, activated by the ReLU linear rectification function, and passed to the second layer of the first stage, a max pooling operation with kernel size 3 × 3 and stride 2. After the first stage the output is (64, 56, 56).
In the second stage, the output of the first stage passes through a Conv Block with parameters (64, 56, 64, 1) and then through two Identity Blocks with parameters (256, 56); the output of the second stage is (256, 56, 56).
In the third stage, the output of the second stage passes through a Conv Block with parameters (256, 56, 128, 2) and then through three Identity Blocks with parameters (512, 28); the output of the third stage is (512, 28, 28).
In the fourth stage, the output of the third stage passes through a Conv Block with parameters (512, 28, 256, 2) and then through five Identity Blocks with parameters (1024, 14); the output of the fourth stage is (1024, 14, 14).
In the fifth stage, the output of the fourth stage passes through a Conv Block with parameters (1024, 14, 512, 2) and then through three Identity Blocks with parameters (2048, 7); the output of the fifth stage is (2048, 7, 7).
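Assuming a recent torchvision is available, a stock ResNet-50 could be adapted to the single-channel depth input and the 3- or 4-class output used here roughly as follows; the patent builds its own network, so this is only a sketch.

```python
import torch.nn as nn
from torchvision import models

def build_wire_classifier(num_classes):
    """ResNet-50 adapted to single-channel depth maps and 3 or 4 output classes."""
    model = models.resnet50(weights=None)
    # Stage 1 expects 3-channel RGB; replace it with a 1-channel 7x7, stride-2 convolution
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the 1000-way ImageNet head with the defect-classification head
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model_l1 = build_wire_classifier(num_classes=4)   # L1..L4: intact + three defect types
model_l5 = build_wire_classifier(num_classes=3)   # L5..L9: intact + two defect types
```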
6. Deep learning network training
The data read into the training network consist of a depth picture and its corresponding label. The network reads the txt content line by line with a trafficDataset function and then locates the corresponding picture from the path recorded in the txt file, pairing the picture with its label. 10% of all the read data and labels are randomly drawn as the network's validation set, and pre-trained weights are loaded before training; the optimizer is SGD with a momentum of 0.9.
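The reading step described above might be sketched as a small PyTorch dataset such as the following; the class name and exact line format are assumptions.

```python
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class DepthWireDataset(Dataset):
    """Reads 'path name label' lines from the annotation txt file (format assumed)."""
    def __init__(self, txt_path):
        with open(txt_path, "r", encoding="utf-8") as f:
            self.items = [line.split() for line in f if line.strip()]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx][0], self.items[idx][-1]
        depth = np.array(Image.open(path), dtype=np.float32) / 255.0  # single-channel BMP
        depth = torch.from_numpy(depth).unsqueeze(0)                   # shape (1, H, W)
        return depth, int(label)
```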
A cross-entropy function is used as the loss function; for the multi-class problem the cross-entropy function is as follows:
L = -(1/N) Σ_i Σ_{c=1..M} y_ic · log(p_ic)
wherein M denotes the number of classes; y_ic is an indicator variable (0 or 1) that equals 1 if class c is the true class of sample i and 0 otherwise; and p_ic denotes the predicted probability that sample i belongs to class c.
The training set batch_size is set to 10 and the validation set batch_size to 2. When training the L1 to L4 gold wire data sets, a four-class network is used with a learning rate of 0.001 and the training parameter epoch set to 80; when training the gold wire data sets at positions L5 to L9, a three-class network is used with a learning rate of 0.001 and an epoch parameter of 50, since the data volume is smaller and convergence is faster than for the L1 to L4 data sets.
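A training-loop sketch using the stated hyperparameters (SGD with momentum 0.9, learning rate 0.001, batch sizes 10 and 2, 80 or 50 epochs) could look like this; metric logging (recall, precision, f-score) is omitted for brevity.

```python
import torch
from torch.utils.data import DataLoader, random_split

def train(model, dataset, num_epochs=80, lr=0.001):
    """Training sketch with the hyperparameters stated above."""
    val_size = len(dataset) // 10                      # 10% of the data as the validation set
    train_set, val_set = random_split(dataset, [len(dataset) - val_size, val_size])
    train_loader = DataLoader(train_set, batch_size=10, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=2)

    criterion = torch.nn.CrossEntropyLoss()            # multi-class cross-entropy loss
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for epoch in range(num_epochs):
        model.train()
        for depth, label in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(depth), label)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            correct = sum((model(d).argmax(1) == l).sum().item() for d, l in val_loader)
        print(f"epoch {epoch}: val accuracy {correct / len(val_set):.3f}")
```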
The nine data sets are each fed into the classification network and trained for the corresponding number of epochs; the accuracy, recall, precision, f-score, and loss of each epoch are recorded during training, giving the results shown in figs. 12 to 15:
The index curves of the four experiments show that, as the number of training epochs grows, the network loss follows essentially the same trend and decreases continuously: it drops quickly over the first 10 epochs, more slowly between epochs 10 and 30, stabilizes at roughly 0.05 by around epoch 40, and finally settles at about 0.03. The f-score on the training set samples stabilizes at about 80% and the accuracy at about 90%.
Test and results
Nine models, L1 to L9, are obtained by training the ResNet neural network; they are loaded into the test network to evaluate the test sets.
Because the original data set contains few samples with a severely uneven class distribution, and the test set is a portion of the picture information randomly drawn from each class, the data in the test sets remain unevenly distributed and the test sets of the nine experiments are all small; single-image testing is therefore used for the defect classes with few samples. Taking the L1 test set as an example, the test results are shown in figs. 16 to 19, with the test-set picture on the left and the detected defect and its confidence on the right.
Testing the nine test sets separately, the accuracy on both the four-class and three-class tasks is close to 100%, with no more than one error per test set. No chip gold wire is missed: the detection rate for normal gold wires is 100%, and a chip with a defect fed into the model is never judged to be a normal chip, which safeguards the quality-inspection efficiency of the chip production process; chips with any of the three defect types will not slip through the net and be judged 'good', so defective chips will not reach the market.
Finally, the test code is packaged for the subsequent UI design of the detection system.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A semiconductor chip gold wire defect classification method based on deep learning is characterized by comprising the following steps:
a data acquisition step: shooting chips by using a light field camera to obtain central view images and depth information, wherein each central view image comprises two complete chips;
a pretreatment step: dividing the central visual angle image to obtain a gray scale image of the single chip;
gold wire segmentation step: respectively marking the outlines of gold wires of the gray scale image of the single chip;
a data set construction step: classifying the defects of the gray level image marked with the outline by combining the depth information to obtain a data set;
and (3) classification step: and classifying the gold wire defects of the semiconductor chip diagram by using the data set.
2. The deep learning-based semiconductor chip gold wire defect classification method according to claim 1, wherein the data set construction step further comprises augmenting the data set to obtain an augmented data set, and the neural network training step trains a neural network by using the augmented data set.
3. The deep learning-based semiconductor chip gold wire defect classification method according to claim 1, wherein the preprocessing step comprises:
s2.1, performing binarization operation on the obtained central visual angle diagram;
s2.2, performing image enhancement on the binarized central view angle image by adopting morphological filtering;
and S2.3, carrying out edge detection on the enhanced central view angle image.
4. The deep learning-based semiconductor chip gold wire defect classification method according to claim 1, wherein the gold wire segmentation step comprises: respectively naming N gold wires on the chip as L1-LN from left to right and from top to bottom, marking a gray scale image of the single chip in a polygon or point form by adopting segmentation software, respectively marking the gold wires on the chip with outlines in batch, and segmenting through a segmentation model.
5. The deep learning-based semiconductor chip gold wire defect classification method according to claim 1, further comprising:
training a neural network: training a neural network according to the data set;
and in the classification step, gold wire defect classification is carried out on the semiconductor chip diagram by using the trained neural network.
6. A semiconductor chip gold wire defect classification system based on deep learning is characterized by comprising:
a data acquisition module: shooting chips by using a light field camera to obtain central view images and depth information, wherein each central view image comprises two complete chips;
a preprocessing module: dividing the central visual angle image to obtain a gray scale image of the single chip;
gold wire segmentation module: respectively marking the outlines of gold wires of the gray scale image of the single chip;
a data set construction module: classifying the defects of the gray level image marked with the outline by combining the depth information to obtain a data set;
a classification module: and classifying the gold wire defects of the semiconductor chip diagram by using the data set.
7. The deep learning-based semiconductor chip gold wire defect classification system of claim 6, wherein the data set construction module further comprises an augmentation module for augmenting the data set to obtain an augmented data set, and the neural network training module trains the neural network by using the augmented data set.
8. The deep learning-based semiconductor chip gold wire defect classification system of claim 6, wherein the preprocessing module comprises:
a module S2.1 for performing a binarization operation on the obtained central view image;
a module S2.2 for performing image enhancement on the binarized central view image by morphological filtering;
and a module S2.3 for performing edge detection on the enhanced central view image.
9. The deep learning-based semiconductor chip gold wire defect classification system of claim 6, wherein the gold wire segmentation module is configured to: name the N gold wires on the chip L1 to LN from left to right and from top to bottom, annotate the grayscale image of the single chip with polygons or points using segmentation software so that the gold wires on the chip are marked with contours in batches, and segment them with a segmentation model.
10. The deep learning-based semiconductor chip gold wire defect classification system of claim 6, further comprising:
a neural network training module: training a neural network on the data set;
and the classification module classifies gold wire defects in semiconductor chip images by using the trained neural network.
CN202110626530.7A, filed 2021-06-04: Deep learning-based semiconductor chip gold wire defect classification method and system, published as CN113554054A (status: Pending)

Priority Applications (1)

Application Number: CN202110626530.7A
Priority Date: 2021-06-04
Filing Date: 2021-06-04
Title: Deep learning-based semiconductor chip gold wire defect classification method and system

Publications (1)

Publication Number: CN113554054A
Publication Date: 2021-10-26

Family

ID=78101990

Family Applications (1)

Application Number: CN202110626530.7A (status: Pending)
Title: Deep learning-based semiconductor chip gold wire defect classification method and system

Country Status (1)

Country: CN (CN113554054A)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060238A (en) * 2019-04-01 2019-07-26 桂林电子科技大学 Pcb board based on deep learning marks print quality inspection method
CN110930390A (en) * 2019-11-22 2020-03-27 郑州智利信信息技术有限公司 Chip pin missing detection method based on semi-supervised deep learning
CN111429408A (en) * 2020-03-11 2020-07-17 苏州杰锐思智能科技股份有限公司 Method for detecting gold wire of packaged chip
CN112816493A (en) * 2020-05-15 2021-05-18 奕目(上海)科技有限公司 Chip routing defect detection method and device
CN112701060A (en) * 2021-03-24 2021-04-23 惠州高视科技有限公司 Method and device for detecting bonding wire of semiconductor chip
CN112767399A (en) * 2021-04-07 2021-05-07 惠州高视科技有限公司 Semiconductor bonding wire defect detection method, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU XIAOMENG: "Research on Appearance Defect Recognition Algorithms for IC Chips Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology Series *
爱学习的数据喵: "Teddy Cup Competition, Problem B (Grand Prize Paper)", CSDN *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598815A (en) * 2019-09-17 2019-12-20 西南科技大学 UHF passive RFID-based metal structure health detection method
CN110598815B (en) * 2019-09-17 2022-03-25 西南科技大学 UHF passive RFID-based metal structure health detection method
CN115375679A (en) * 2022-10-24 2022-11-22 广东工业大学 Edge finding and point searching positioning method and device for defective chip

Similar Documents

Publication Title
CN108520274B (en) High-reflectivity surface defect detection method based on image processing and neural network classification
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
Cui et al. SDDNet: A fast and accurate network for surface defect detection
CN111415329B (en) Workpiece surface defect detection method based on deep learning
KR20210132566A (en) Method and system for classifying defects in wafer using wafer-defect images, based on deep learning
Wan et al. Ceramic tile surface defect detection based on deep learning
CN107966454A (en) A kind of end plug defect detecting device and detection method based on FPGA
CN111932511B (en) Electronic component quality detection method and system based on deep learning
CN112150460B (en) Detection method, detection system, device and medium
CN113554054A (en) Deep learning-based semiconductor chip gold wire defect classification method and system
CN110929795B (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN113781415B (en) Defect detection method, device, equipment and medium for X-ray image
CN113807378A (en) Training data increment method, electronic device and computer readable recording medium
CN112200790B (en) Cloth defect detection method, device and medium
Zhou et al. DeepInspection: Deep learning based hierarchical network for specular surface inspection
CN115035081B (en) Industrial CT-based metal internal defect dangerous source positioning method and system
CN113076989A (en) Chip defect image classification method based on ResNet network
CN107507130A (en) A kind of quickly QFN chip pins image obtains and amplification method
CN113205511B (en) Electronic component batch information detection method and system based on deep neural network
CN114429445A (en) PCB defect detection and identification method based on MAIRNet
CN112763506A (en) Flaw detection method and device with AOI and AI functions
CN113034432A (en) Product defect detection method, system, device and storage medium
Anitha et al. Solder Joint Defect Detection in PCBA Chip Components Based on the Histogram Analysis
US12020418B2 (en) Image processing method and system, and non-transitory computer readable medium
Peng et al. Research on Image Recognition and Grading Method of Apple Based on Machine Vision

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-10-26