CN114723757A - High-precision wafer defect detection method and system based on deep learning algorithm - Google Patents
- Publication number: CN114723757A
- Application number: CN202210643143.9A
- Authority: CN (China)
- Prior art keywords: wafer, precision, image, deep learning
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0004 — Industrial image inspection
- G06N 3/045 — Neural network architectures; combinations of networks
- G06N 3/08 — Neural network learning methods
- G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T 7/33 — Image registration using feature-based methods
- G06T 2207/10004 — Still image; photographic image
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/20221 — Image fusion; image merging
- G06T 2207/30148 — Semiconductor; IC; wafer
- Y02P 90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention belongs to the technical field of wafer inspection and provides a high-precision wafer defect detection method and system based on a deep learning algorithm. A plurality of local images of the wafer surface are collected; the local images are stitched and fused through feature-point extraction and matching to generate a high-precision global detail image; the high-precision global detail image is input into a trained convolutional neural network, which outputs the defect detection result. By feeding a high-precision global detail image, stitched and fused from multiple high-definition local images, into the convolutional neural network for defect detection, the invention overcomes the low detection precision of a global image shot directly by an industrial camera, as well as the inaccurate recognition and slow processing of conventional image detection algorithms, and thereby improves the accuracy of wafer defect detection.
Description
Technical Field
The invention belongs to the technical field of wafer detection, and particularly relates to a high-precision wafer defect detection method and system based on a deep learning algorithm.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
A wafer is the substrate used for manufacturing semiconductor circuits. Wafer raw materials are divided into first-, second- and third-generation semiconductor materials, which are complementary, with different characteristics and different uses; common materials include silicon, germanium, gallium arsenide, silicon carbide, aluminum nitride, zinc oxide, diamond, indium phosphide, gallium nitride, and the like. Taking silicon as an example, high-purity polycrystalline silicon is melted, a silicon seed crystal is dipped into the melt, and the seed is slowly pulled out to form a cylindrical monocrystalline silicon ingot; after the ingot is ground, polished and sliced, silicon wafers are obtained. During wafer manufacturing, defects may be introduced on the wafer surface by a series of processes such as single-crystal pulling, slicing, lapping, polishing, layer deposition by chemical vapor deposition, photolithography and optical development, doping, heat treatment, chemical mechanical polishing and dicing. To prevent defective wafers from flowing into the packaging process, optical inspection equipment is used to identify defects on the wafer surface, classify and mark them, assist in sorting the wafers, and analyze defect causes so that the manufacturing process can be improved. Among wafer defect types, surface foreign matter, crystal defects and mechanical damage (scratch marks) are the most common. Crystal defects are often caused by uneven heating during crystal growth and, by their nature, have a larger influence on the wafer manufacturing process than other surface defects. Mechanical damage generally arises in steps such as polishing and slicing during wafer manufacturing and is caused by chemical mechanical polishing; it is a serious wafer-surface defect that can severely affect the integrated-circuit chip.
Existing wafer inspection equipment examines only a single imaging result at a fixed magnification after scanning, so the defects that can be observed and identified are limited in type and number and cannot be analyzed in further detail. Meanwhile, conventional image-based visual inspection methods produce widely varying detection counts or positions when run under different imaging conditions, so the results are unstable and the detection effect is unsatisfactory. It is therefore necessary to provide an efficient, highly accurate, reliable and widely applicable defect detection apparatus and detection method.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a high-precision wafer defect detection method and system based on a deep learning algorithm, in which a high-precision global detail image, stitched and fused from multiple high-definition local images, is input into a convolutional neural network for defect detection, solving the problems of low detection precision of a global image shot directly by an industrial camera, as well as the inaccurate recognition and slow processing of conventional image detection algorithms.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
the invention provides a high-precision wafer defect detection method based on a deep learning algorithm;
a high-precision wafer defect detection method based on a deep learning algorithm comprises the following steps:
collecting a plurality of local images of the surface of a wafer;
the local images are spliced and fused through the extraction and matching of the feature points to generate a high-precision global detail image;
and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
Furthermore, a CCD high-definition digital camera is adopted, with 16 megapixels, a 60 fps acquisition frame rate, and 1080p acquisition resolution.
Furthermore, images of different positions on the wafer surface are collected by a camera, and the collected images are stored under progressive file names derived from their coordinate array.
Further, the process of stitching and fusing the local images comprises:
extracting feature points of each image by using a feature detection algorithm;
performing feature matching on the image according to the extracted feature points;
and splicing and fusing the matched images, and outputting a spliced high-precision global detail image.
Further, extracting the feature points of each image through an improved SIFT feature point detection algorithm;
and matching the images through an improved fast approximate nearest-neighbor (FLANN) feature matching algorithm and the RANSAC algorithm.
Further, the step of splicing and fusing the matched images comprises:
carrying out perspective transformation on the images to enable different images to be mapped to the same coordinate system, and carrying out mask processing;
converting the images to grayscale, extracting the feature region, computing the gradient map and difference image, computing minimum intensity values, searching for the optimal seam line, and overlaying the region masks;
and cropping the black edges of the composite picture with median filtering to obtain the stitched image.
Further, the wafer defect test-set images labeled with defect categories are input into the trained YOLO network model, which outputs the score, category and position of each defect.
The invention provides a high-precision wafer defect detection system based on a deep learning algorithm.
A high-precision wafer defect detection system based on a deep learning algorithm comprises an acquisition module, a splicing module and a detection module;
an acquisition module configured to: collecting a plurality of local images of the surface of a wafer;
a stitching module configured to: the local images are spliced and fused through the extraction and matching of the feature points to generate a high-precision global detail image;
a detection module configured to: and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
A third aspect of the present invention provides a computer readable storage medium, on which a program is stored, wherein the program, when executed by a processor, implements the steps in the method for detecting wafer defects with high precision based on deep learning algorithm according to the first aspect of the present invention.
A fourth aspect of the present invention provides an electronic device, including a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor executes the program to implement the steps in the method for detecting wafer defects based on deep learning algorithm according to the first aspect of the present invention.
The above one or more technical solutions have the following beneficial effects:
according to the method, the convolutional neural network improvement algorithm based on deep learning is trained through the existing data set samples, the weight with high confidence level is obtained, high-precision semiconductor wafer defect detection is carried out in an artificial intelligence mode, and compared with the traditional image recognition and machine learning method, the method is high in detection speed, high in recognition accuracy, low in deployment cost and high in working efficiency, and can help scientific research personnel to quickly analyze the wafer defect problem to a great extent, so that a corresponding solution improvement process is generated.
According to the invention, a superior image stitching and fusion algorithm is adopted, so that a more detailed high-precision global detail image can be obtained during image acquisition and synthesis, improving the accuracy of defect detection.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a flow chart of a method of the first embodiment;
FIG. 2 is a diagram of a YOLO network structure of the first embodiment;
FIG. 3 is a high-precision global detail image of a wafer;
FIG. 4 shows the defect detection result of the wafer;
fig. 5 is a system configuration diagram of the second embodiment.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and "comprising", and any variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
The embodiment discloses a high-precision wafer defect detection method based on a deep learning algorithm;
as shown in fig. 1, a method for detecting a wafer defect with high precision based on a deep learning algorithm includes:
s1: collecting a plurality of local images of the surface of a wafer;
A CCD high-definition digital camera is adopted, with 16 megapixels, a 60 fps acquisition frame rate, and 1080p acquisition resolution.
A camera collects images at different positions on the wafer surface, and the images are stored under progressive file names derived from their coordinate array, building a local image set. The naming array is used to judge whether images belong to the same row or column and to group them accordingly; feature points are extracted and images are matched within each row or column, and the high-precision global detail image is finally obtained through stage-wise stitching and fusion.
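As one possible sketch of this grouping step (the exact file-naming scheme is not specified in the patent, so the `r{row}_c{col}` pattern below is an assumption for illustration):

```python
import re
from collections import defaultdict
from pathlib import Path

def group_by_row(paths):
    """Group wafer tile images by row index parsed from the file name.

    Assumes a hypothetical 'r{row}_c{col}' naming pattern; the patent only
    states that tiles are saved under progressive coordinate-array names.
    """
    rows = defaultdict(list)
    pat = re.compile(r"r(\d+)_c(\d+)")
    for p in paths:
        m = pat.search(Path(p).stem)
        if m:
            r, c = int(m.group(1)), int(m.group(2))
            rows[r].append((c, p))
    # sort each row left-to-right so stitching can proceed in order
    return {r: [p for _, p in sorted(tiles)] for r, tiles in rows.items()}
```

Each row list can then be stitched into a strip, and the strips fused top to bottom as described below.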
S2: the method comprises the following steps of carrying out splicing fusion on local images to generate a high-precision global detail image, and specifically comprising the following steps:
s2-1: extracting feature points of each image by using a feature detection algorithm;
Feature points are extracted by an improved SIFT feature point detection algorithm: a DoG scale space is constructed with Gaussian kernels, extreme points are detected in the DoG scale space, and after detection, low-contrast extreme points and edge points are removed with two thresholds, yielding the image feature points and their feature vectors.
S2-2: according to the extracted feature points, performing coarse feature matching and fine feature matching on the image;
Coarse feature matching is performed with an improved fast approximate nearest-neighbor feature matching algorithm:
The feature vectors are stored in a K-D tree data structure; the fast library for approximate nearest neighbors (FLANN) is then used to find each feature's nearest and second-nearest neighbors, and matching is constrained by the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance: if the ratio is smaller than a threshold the match is retained, otherwise it is rejected.
Fine feature matching is performed with the RANSAC algorithm:
randomly choosing a RANSAC sample from a sample set, namely 4 matching point pairs;
calculating a transformation matrix M according to the 4 matching point pairs;
calculating the consensus set satisfying the current transformation matrix from the sample set, the transformation matrix M and the error measurement function, and returning the number of elements in the consensus set;
judging from the element count whether this is the largest (optimal) consensus set so far, and if so updating the current optimal consensus set;
and updating the current error probability p; if p is greater than the allowed minimum error probability, repeating the iteration until p is smaller than the minimum error probability.
S2-3: and splicing and fusing the matched images, and outputting a spliced high-precision global detail image.
Stitching and fusing a matched image pair comprises the following steps:
carrying out perspective transformation on the images to enable different images to be mapped to the same coordinate system, and carrying out mask processing;
converting to grayscale, extracting the feature region, computing the gradient map and difference image, computing minimum intensity values, searching for the optimal seam line, and overlaying the region masks;
and cropping the black edges of the composite picture with median filtering to obtain the stitched image.
All images in the local image set are stitched and fused by first stitching each row from left to right into a long strip and then stitching the strips from top to bottom into the full image.
S3: and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
The trained convolutional neural network is obtained by training an improved YOLO convolutional neural network on a data set built from the collected and organized images, yielding a weight file with 98% confidence.
The construction process of the data set comprises the following steps:
collecting multiple defect sample pictures under different conditions, and specifically comprising the following steps:
About 200 pictures of various defect samples are collected under different conditions and different magnifications in a dust-free laboratory;
the collected picture data are then expanded seven-fold with Mosaic data enhancement and similar methods, generating a final data set of about 1400 pictures.
Labeling the collected data set, specifically comprising the following steps:
The open-source LabelImg software is built and run in a Windows 10 64-bit operating system environment, and the defects in the wafer image sequence are labeled manually with LabelImg, ensuring that each defect is centered in its labeling box;
after labeling, the generated txt or xml file is saved; it contains the center coordinates and relative width and height of each wafer defect.
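The saved txt labels follow the usual YOLO convention (class index, then normalized center and size); a small parser for illustration:

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO-format txt label: class index followed by
    the normalized box center (cx, cy) and relative width/height, as saved
    after annotation. The exact field order is the standard YOLO one,
    assumed to match the patent's txt files."""
    fields = line.split()
    cls = int(fields[0])
    cx, cy, w, h = map(float, fields[1:5])
    return cls, (cx, cy, w, h)
```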
Processing a labeling box of the data set by using a K-means clustering algorithm, and specifically comprising the following steps of:
The labeling boxes of the wafer defect training data set are clustered with the K-Means algorithm. K cluster centers {C1, C2, C3, ..., Ck} are initialized, where 1 < k < n and n is the number of pictures in the data set; the Euclidean distance from each object to each cluster center is then calculated as

$$d(X_i, C_j) = \sqrt{\sum_t \left(X_{it} - C_{jt}\right)^2}$$

where $X_i$ is the $i$-th object, $C_j$ is the $j$-th cluster center, $X_{it}$ is the $t$-th attribute of the $i$-th object, and $C_{jt}$ is the $t$-th attribute of the $j$-th cluster center.

The distance from each object to each cluster center is compared in turn and the object is assigned to the cluster of the closest center, giving k clusters {S1, S2, S3, ..., Sk}; the mean of all objects in each cluster is then computed in every dimension:

$$C_l = \frac{1}{|S_l|} \sum_{X_i \in S_l} X_i$$

where $C_l$ is the mean of all objects in the $l$-th cluster in each dimension, $|S_l|$ is the number of objects in the $l$-th cluster, and $X_i$ is the $i$-th object in the $l$-th cluster.
Through calculation of the intersection-over-union relation between the labeling boxes and the cluster centers, the sizes of the 9 prior boxes obtained are: (7,7), (9,9), (12,11), (13,14), (20,15), (16,20), (24,23), (31,31), (43,41).
In the original YOLO algorithm, labeling boxes in the VOC dataset were clustered using the K-Means algorithm, and the resulting 9 anchor boxes were (12, 16), (19, 36), (40, 28), (36, 75), (76, 55), (72,146), (142,110), (192,243), (459,401), respectively.
The VOC data set contains many kinds of targets, whereas the wafer defect data set is small, so clustering must be redone according to the characteristics of the self-built data set's samples: the prior boxes obtained by K-Means clustering of the wafer defect training labels replace the network's original prior boxes.
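The anchor-clustering step can be sketched as K-Means over box widths and heights with the 1 - IoU distance that is customary for YOLO anchor selection (the patent mentions both Euclidean distance and the intersection ratio; this sketch uses the IoU form, which is an assumption):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster (width, height) label boxes into k anchor boxes using the
    1 - IoU distance common in YOLO anchor selection. A sketch of the
    clustering described above, not the patent's exact procedure."""
    rng = np.random.default_rng(seed)
    wh = np.asarray(wh, dtype=float)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every center, assuming aligned corners
        inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centers[None, :, 1])
        union = wh[:, 0] * wh[:, 1]
        union = union[:, None] + centers[:, 0] * centers[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)   # max IoU = min (1 - IoU)
        new = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]  # small to large
```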
The YOLO network is improved, and the network structure is shown in fig. 2.
CSPDarknet53 is used as the YOLO backbone network; a spatial pyramid pooling (SPP) layer is added as an additional neck module, and a path aggregation network (PANet) is added as the neck's feature fusion module.
Darknet is an open-source neural network framework written in C and CUDA; it is easy to install and supports both CPU and GPU computation. A 53-layer Darknet network is used as the YOLO backbone.
The main purpose of the spatial pyramid pooling layer is to generate a fixed-size output for an input of arbitrary size. The idea is as follows: a feature map of any size is divided into 16, 4 and 1 blocks, max pooling is applied within each block, and the pooled features are concatenated into an output of fixed dimension to satisfy the fully connected layer.
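The fixed-length property can be illustrated with a minimal numpy sketch of classic SPP (4x4, 2x2 and 1x1 grids, i.e. 16 + 4 + 1 = 21 blocks):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(4, 2, 1)):
    """Classic SPP on a single (C, H, W) feature map: max-pool into 4x4,
    2x2 and 1x1 grids (16 + 4 + 1 = 21 blocks) and concatenate, producing
    a fixed-length C*21 vector for any H and W. A minimal numpy sketch."""
    c, h, w = fmap.shape
    out = []
    for lv in levels:
        # bin edges that cover the map as evenly as possible
        ys = np.linspace(0, h, lv + 1).astype(int)
        xs = np.linspace(0, w, lv + 1).astype(int)
        for i in range(lv):
            for j in range(lv):
                block = fmap[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                out.append(block.max(axis=(1, 2)))   # per-channel max pool
    return np.concatenate(out)                       # shape (C * 21,)
```

Two inputs of different spatial size yield vectors of identical length, which is what lets the fully connected layer accept arbitrary-size feature maps.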
PANet creates a bottom-up path augmentation to shorten the information path and exploit the accurate localization signals present in the lower levels. To recover information broken between each proposal region and all feature levels, adaptive feature pooling is used to aggregate the features of all levels into each proposal region, avoiding arbitrarily assigned results. To capture a different view of each proposal region, mask prediction is augmented with a small fully connected layer, so that a better-quality mask can be produced and low-level information can propagate more easily.
Train the YOLO network model and tune the model parameters; the main steps are as follows:
S3-1: the training dataset is input into CSPDarknet53 and processed with the Swish activation function.
S3-2: continuously inputting the data processed by the S3-1 into the characteristic pyramid part for pooling operation; the part consists of SPP and PANET, the SPP structure is contained in the convolution of the last characteristic layer of CSPdacrnon 53, after the last characteristic layer of CSPdacrnon 53 is convoluted for three times, the SPP structure is respectively processed by utilizing the maximum pooling of four different scales, the sizes of the pooled kernels of the maximum pooling are respectively 13x13, 9x9, 5x5 and 1x1, the receptive field can be greatly increased, and the most obvious contextual characteristics can be separated; the PANET can be regarded as an example segmentation algorithm, and has the structural characteristic of repeated feature extraction, and the feature segmentation and extraction can be better carried out by using the PANET structure on three effective feature layers.
S3-3: and Yolo-Head carries out prediction according to the extracted features, wherein Yolo has three feature layers which are respectively positioned at the middle layer, the middle lower layer and the bottom layer, the shape of the three feature layers is respectively (76, 76, 256), (38, 38, 512), (19, 19, 1024), and the shape of the output layer is respectively (19, 19, 75), (38, 38, 75), (76, 76, 75).
S3-4: and (3) decoding the prediction result, wherein the prediction result of the feature layer corresponds to the positions of the three prediction frames, the reshape processing result is (N, 19,19,3, 85), (N, 38,38,3, 85), (N, 76,76,3, 85), each grid point is added with the corresponding x _ offset and y _ offset, the center of the prediction frame is obtained after the calculation is finished, and then the length and the width of the prediction frame are calculated by utilizing the priori frame and the combination of h and w, so that the position of the whole prediction frame is obtained.
S3-5: in the process of multiple iterations, methods such as Label smoothening, CIOU, learning rate cosine annealing attenuation and the like are used for optimizing and enhancing the training effect. Wherein, Label smoothening is to carry out a smoothening process on the Label, the original Label is 0,1, and becomes 0.005 (if it is a binary classification), 0.995 after smoothening, so that the model will not be over-fitted; the CIOU takes the distance, the overlapping rate, the scale and the punishment between the target and the anchor into consideration, so that the regression of the target frame becomes more stable; the cosine annealing attenuation method is characterized in that the learning rate rises first and then falls during training, linear rising is used during rising, and a simulated cos function falls during falling.
S3-6: and processing the predicted frame position results obtained through calculation in S3-4 and S3-5, taking out frames and scores with each type of score larger than obj _ threshold, and carrying out non-maximum inhibition by using the positions and scores of the frames to obtain a final result, wherein obj _ threshold is a set threshold.
The high-precision global detail image of the wafer under test is input into the trained YOLO network model, the scores and categories are computed, and the score, category and position of each defect are output; the high-precision global detail image of the wafer under test is shown in FIG. 3, and the resulting wafer defect detection result in FIG. 4.
Embodiment 2
This embodiment discloses a high-precision wafer defect detection system based on a deep learning algorithm;
as shown in fig. 5, a deep learning algorithm-based high-precision wafer defect detection system includes an acquisition module, a stitching module, and a detection module;
an acquisition module configured to: collecting a plurality of local images of the surface of a wafer;
a stitching module configured to: the local images are spliced and fused through the extraction and matching of the feature points to generate a high-precision global detail image;
a detection module configured to: and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
Embodiment 3
The object of this embodiment is to provide a computer-readable storage medium.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the deep-learning-based high-precision wafer defect detection method according to Embodiment 1 of the present disclosure.
Embodiment 4
The object of this embodiment is to provide an electronic device.
An electronic device comprises a memory, a processor and a program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the deep-learning-based high-precision wafer defect detection method according to Embodiment 1 of the present disclosure.
The steps involved in the apparatuses of Embodiments 2, 3 and 4 above correspond to Embodiment 1 of the method; for details, refer to the relevant description of Embodiment 1. The term "computer-readable storage medium" should be taken to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium capable of storing, encoding or carrying a set of instructions for execution by a processor that cause the processor to perform any of the methods of the present invention.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented with general-purpose computing devices: they can be realized as program code executable by a computing device and stored in memory for execution, or fabricated separately as individual integrated-circuit modules, or multiple of them can be fabricated into a single integrated-circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, this does not limit the scope of protection of the invention; those skilled in the art should understand that various modifications and variations can be made on the basis of the technical solutions of the present invention without inventive effort, and these still fall within the scope of protection.
Claims (10)
1. A high-precision wafer defect detection method based on a deep learning algorithm is characterized by comprising the following steps:
collecting a plurality of local images of the surface of a wafer;
the local images are spliced and fused through the extraction and matching of the feature points to generate a high-precision global detail image;
and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
2. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 1, wherein a CCD high-definition digital camera with 16 megapixels, a 60 fps acquisition frame rate and 1080P acquisition resolution is adopted.
3. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 2, wherein the camera is used to collect images at different positions on the wafer surface, and the images are stored named according to their coordinate sequence.
4. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 1, wherein the process of stitching and fusing the local images comprises:
extracting feature points of each image by using a feature detection algorithm;
performing feature matching on the image according to the extracted feature points;
and splicing and fusing the matched images, and outputting a spliced high-precision global detail image.
5. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 4, wherein the feature points of each image are extracted by an improved SIFT feature point detection algorithm;
and the images are matched by an improved fast nearest-neighbor feature matching algorithm and the RANSAC algorithm.
6. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 4, wherein stitching and fusing the matched images comprises:
applying a perspective transformation to the images so that different images are mapped into the same coordinate system, and performing mask processing;
converting the images into binary grayscale maps, extracting the feature regions, computing the gradient maps and difference images, computing the minimum intensity values, searching for the optimal suture line, and overlapping the region masks;
and trimming the black borders of the composite image using median filtering to obtain the stitched image.
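The optimal-suture-line search in the stitching step above is commonly realized as a dynamic program over the difference image of the overlap region. The sketch below is our illustration under that assumption, not the patented implementation: costs are accumulated row by row and the cheapest 8-connected vertical seam is backtracked.

```python
def find_seam(diff):
    """Dynamic-programming search for a vertical seam of minimum accumulated
    difference; diff is a 2D list (H rows x W cols). Returns one x per row."""
    H, W = len(diff), len(diff[0])
    cost = [row[:] for row in diff]
    # forward pass: each cell adds the cheapest of its three upper neighbors
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)
            cost[y][x] += min(cost[y - 1][lo:hi])
    # backtrack from the cheapest bottom cell, moving at most one column per row
    seam = [min(range(W), key=lambda x: cost[-1][x])]
    for y in range(H - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam.append(min(range(lo, hi), key=lambda x2: cost[y][x2]))
    seam.reverse()
    return seam
```

Blending the two overlapping images on either side of the returned seam keeps the transition along the path of least pixel difference, which is what makes the stitch visually seamless.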
7. The high-precision wafer defect detection method based on a deep learning algorithm as claimed in claim 1, wherein the wafer defect test-set images labeled with defect categories are input into the trained YOLO network model, which outputs the defect scores, categories and positions.
8. A high-precision wafer defect detection system based on a deep learning algorithm is characterized in that: the device comprises an acquisition module, a splicing module and a detection module;
an acquisition module configured to: collecting a plurality of local images of the surface of a wafer;
a stitching module configured to: the local images are spliced and fused through the extraction and matching of the feature points to generate a high-precision global detail image;
a detection module configured to: and inputting the high-precision global detail image into the trained convolutional neural network, and outputting a defect detection result.
9. Computer readable storage medium, on which a program is stored, which program, when being executed by a processor, carries out the steps of a method for high precision wafer defect inspection based on deep learning algorithm as claimed in any one of claims 1 to 7.
10. Electronic equipment comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the method for detecting wafer defects based on deep learning algorithm as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210643143.9A CN114723757A (en) | 2022-06-09 | 2022-06-09 | High-precision wafer defect detection method and system based on deep learning algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114723757A true CN114723757A (en) | 2022-07-08 |
Family
ID=82232378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210643143.9A Pending CN114723757A (en) | 2022-06-09 | 2022-06-09 | High-precision wafer defect detection method and system based on deep learning algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114723757A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116071360A (en) * | 2023-03-10 | 2023-05-05 | 苏州振畅智能科技有限公司 | Workpiece appearance defect detection method, electronic equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103997609A (en) * | 2014-06-12 | 2014-08-20 | 四川川大智胜软件股份有限公司 | Multi-video real-time panoramic fusion splicing method based on CUDA |
CN104655641A (en) * | 2015-01-31 | 2015-05-27 | 华南理工大学 | High-precision full-automatic FPC (Flexible Printed Circuit) defect detecting device and detecting process |
CN107301620A (en) * | 2017-06-02 | 2017-10-27 | 西安电子科技大学 | Method for panoramic imaging based on camera array |
CN109407547A (en) * | 2018-09-28 | 2019-03-01 | 合肥学院 | Multi-cam assemblage on-orbit test method and system towards panoramic vision perception |
CN110020985A (en) * | 2019-04-12 | 2019-07-16 | 广西师范大学 | A kind of video-splicing system and method for Binocular robot |
CN111080631A (en) * | 2019-12-20 | 2020-04-28 | 中国烟草总公司北京市公司 | Fault positioning method and system for detecting floor defects of spliced images |
CN111127425A (en) * | 2019-12-23 | 2020-05-08 | 北京至真互联网技术有限公司 | Target detection positioning method and device based on retina fundus image |
CN112862685A (en) * | 2021-02-09 | 2021-05-28 | 北京迈格威科技有限公司 | Image stitching processing method and device and electronic system |
CN113160052A (en) * | 2021-04-01 | 2021-07-23 | 华南理工大学 | Offshore culture area image splicing method based on non-uniform precision |
CN113160048A (en) * | 2021-02-02 | 2021-07-23 | 重庆高新区飞马创新研究院 | Suture line guided image splicing method |
CN113222982A (en) * | 2021-06-02 | 2021-08-06 | 上海应用技术大学 | Wafer surface defect detection method and system based on improved YOLO network |
Non-Patent Citations (3)
Title |
---|
Yang Huachao: "Research and Application of Local Invariant Image Features and Their Matching", 31 December 2013, Surveying and Mapping Press *
Wang Peijun et al. (eds.): "Photogrammetry", 31 May 2016, Wuhan University Press *
Qin Xujia et al.: "Stitching and fusion method for sequential remote sensing images based on the optimal seam line", Computer Science *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xia et al. | DOTA: A large-scale dataset for object detection in aerial images | |
CN111179217A (en) | Attention mechanism-based remote sensing image multi-scale target detection method | |
US9036915B2 (en) | Architectural pattern detection and modeling in images | |
US9002072B2 (en) | System for detection of non-uniformities in web-based materials | |
Nie et al. | Pavement distress detection based on transfer learning | |
CN112949338A (en) | Two-dimensional bar code accurate positioning method combining deep learning and Hough transformation | |
CN109033944B (en) | Method and system for classifying all-sky aurora images and positioning key local structure | |
Lomio et al. | Classification of building information model (BIM) structures with deep learning | |
CN108428220A (en) | Satellite sequence remote sensing image sea island reef region automatic geometric correction method | |
CN105574545B (en) | The semantic cutting method of street environment image various visual angles and device | |
Zhu et al. | Deep residual text detection network for scene text | |
CN111275010A (en) | Pedestrian re-identification method based on computer vision | |
CN116091946A (en) | Yolov 5-based unmanned aerial vehicle aerial image target detection method | |
CN114723757A (en) | High-precision wafer defect detection method and system based on deep learning algorithm | |
CN111882000A (en) | Network structure and method applied to small sample fine-grained learning | |
CN116071389A (en) | Front background matching-based boundary frame weak supervision image segmentation method | |
CN117152484A (en) | Small target cloth flaw detection method for improving YOLOv5s | |
Li et al. | Lightweight automatic identification and location detection model of farmland pests | |
Mansour et al. | Hierarchical SVM for Semantic Segmentation of 3D Point Clouds for Infrastructure Scenes | |
Kajabad et al. | YOLOv4 for urban object detection: Case of electronic inventory in St. Petersburg | |
CN116563844A (en) | Cherry tomato maturity detection method, device, equipment and storage medium | |
Maggiori et al. | Optimizing partition trees for multi-object segmentation with shape prior | |
Petitjean et al. | Clustering of satellite image time series under time warping | |
CN114913504A (en) | Vehicle target identification method of remote sensing image fused with self-attention mechanism | |
CN112465821A (en) | Multi-scale pest image detection method based on boundary key point perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||