WO2022121736A1 - A machine-learning-based CDSEM image virtual measurement method - Google Patents

A machine-learning-based CDSEM image virtual measurement method

Info

Publication number
WO2022121736A1
WO2022121736A1 (PCT/CN2021/134560; CN2021134560W)
Authority
WO
WIPO (PCT)
Prior art keywords
cdsem
image
lithography
neural network
network model
Prior art date
Application number
PCT/CN2021/134560
Other languages
English (en)
French (fr)
Inventor
李立人
时雪龙
燕燕
许博闻
周涛
Original Assignee
上海集成电路装备材料产业创新中心有限公司
上海集成电路研发中心有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海集成电路装备材料产业创新中心有限公司, 上海集成电路研发中心有限公司
Publication of WO2022121736A1 publication Critical patent/WO2022121736A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer

Definitions

  • The invention belongs to the field of semiconductor integrated circuit manufacturing and relates to a machine-learning-based CDSEM image virtual measurement method.
  • For a given pattern, once the focus and dose of the lithography machine are fixed, the lithography aerial image (Aerial Image) in the photoresist on the wafer is also determined.
  • Once the photoresist is fixed, the three-dimensional structure of the photoresist after development is determined.
  • Consequently, the CDSEM image captured by a scanning electron microscope (Scanning Electron Microscope, SEM) is also determined.
  • The quality of the lithography pattern is usually confirmed from the imaging results of these CDSEM images.
  • The optical proximity correction OPC (Optical Proximity Correction, OPC) model includes a series of parameters that are calibrated through test patterns (e.g., CDSEM images).
  • However, the test patterns cannot cover all kinds of lithography patterns, and the model contains empirical components, so the accuracy of the OPC model cannot be guaranteed. Therefore, other models independent of the OPC model are required to ensure the quality of the post-OPC pattern data.
  • To address this, the present invention provides a machine-learning-based virtual measurement method for CDSEM images.
  • The machine-learning-based CDSEM image virtual measurement method of the present invention includes:
  • Step S1: generating a training set and a validation set, which includes:
  • Step S11: providing a wafer and presetting the number of lithography processes as K, where K is an integer greater than or equal to 1;
  • Step S12: completing one lithography process flow on the wafer, and using a scanning electron microscope to scan the post-lithography wafer at Mi different coordinates to obtain Mi CDSEM images, where Mi is an integer greater than or equal to 10 and i is a value in 1, 2, 3, ..., K;
  • Step S14: determining whether the number of groups M of the lithography aerial image-CDSEM image data pairs is equal to N; if not, going to step S12; if so, going to step S15;
  • Step S15: dividing the N groups of the lithography aerial image-CDSEM image data pairs proportionally into a training set for model training and a validation set for validating the model, where the ratio of the numbers of data pairs in the training set and the validation set is N1:N2, with N = N1 + N2;
  • Step S2: aligning the coordinate positions of the lithography aerial image and the CDSEM image;
  • Step S3: based on the neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as target output, traversing the N1 groups of the lithography aerial image-CDSEM image data pairs in the training set to complete the training of the neural network model, and traversing the N2 groups of the lithography aerial image-CDSEM image data pairs in the validation set to complete the validation of the neural network model.
  • Step S3 specifically includes:
  • Step S31: providing an initial neural network model;
  • Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as target output, traversing the lithography aerial image-CDSEM image data pairs in the training set, and training starting from the initial neural network model to obtain a trained neural network model;
  • Step S33: traversing the lithography aerial image-CDSEM image data pairs in the validation set, validating the trained neural network model, and calculating the loss function of the validation set;
  • Step S34: determining whether the loss function is less than the set value; if so, stopping the training of the neural network model to obtain the final neural network model; if not, repeating steps S15 to S34; wherein the neural network model includes the mapping between the lithography aerial image and the CDSEM image.
  • The neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function; if the neural network model adopts the DCNN model, the loss function is a mean square error loss function; if the neural network model adopts the GAN model, the loss function is a cross-entropy loss function.
  • The number of groups N1 of the lithography aerial image-CDSEM image data pairs in the training set is a multiple of 7, and the number of groups N2 of the lithography aerial image-CDSEM image data pairs in the validation set is a multiple of 3.
  • Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates a corresponding virtual CDSEM image.
  • Step S5 is also included: measuring the critical dimension of the virtual CDSEM image generated by the final neural network model, and determining, according to the critical dimension, whether the OPC optical model needs to be corrected.
  • Step S5' is also included: judging whether the stochastic effects of one lithography process run are acceptable. This specifically includes:
  • S51': carrying out one lithography process flow under the lithography process conditions corresponding to the new lithography aerial image, and measuring to obtain an actual CDSEM image;
  • S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean square error of the pixel values of the two meets the accuracy requirements, it is determined that the stochastic effects of this lithography process run are acceptable.
  • The lithography aerial image and the CDSEM image have the same image size and resolution.
  • The beneficial effect of the present invention is to establish a mapping between the post-OPC mask pattern and the SEM image after the lithography process, and to generate SEM images through machine learning so as to ensure the quality of the post-lithography pattern.
  • This yields a validation model, independent of the OPC model, that can generate virtual reference SEM images from post-OPC geometric data.
  • FIG. 1 is a schematic flowchart of a virtual measurement method for CDSEM images based on machine learning in an embodiment of the present invention
  • FIG. 2 is a block diagram showing the architecture of a CDSEM image after a photolithography process is established based on machine learning in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a lithography aerial image obtained by calculating the post-OPC mask pattern through a strict optical model in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an experimental CDSEM image after photolithography of the mask pattern after OPC in the embodiment of the present invention
  • FIG. 5 is a schematic diagram of a virtual CDSEM image generated by the deep learning model after learning in the embodiment of the present invention
  • The present invention discloses a method for virtual measurement of CDSEM images based on machine learning.
  • In the lithography process, the lithography machine maps the pattern on the reticle onto the photoresist-coated wafer (Wafer); if the process flow is fixed, there is a definite correspondence between the CDSEM image and the lithography process parameters.
  • FIG. 1 is a schematic flowchart of a virtual measurement method for CDSEM images based on machine learning in an embodiment of the present invention.
  • the virtual measurement method for CDSEM images based on machine learning includes:
  • Step S1: generating a training set and a validation set, which specifically includes:
  • Step S11: providing a wafer and presetting the number of lithography processes as K, where K is an integer greater than or equal to 1;
  • Step S12: completing one lithography process flow on the wafer, and using a scanning electron microscope to scan the post-lithography wafer at Mi different coordinates to obtain Mi CDSEM images, where Mi is an integer greater than or equal to 10 and i is a value in 1, 2, 3, ..., K;
  • Step S14: determining whether the number of groups M of the lithography aerial image-CDSEM image data pairs is equal to N; if not, going to step S12; if so, going to step S15;
  • Step S15: dividing the N groups of the lithography aerial image-CDSEM image data pairs proportionally into a training set for model training and a validation set for validating the model, where the ratio of the numbers of data pairs in the training set and the validation set is N1:N2, with N = N1 + N2.
  • FIG. 2 is a functional block diagram of image-based offline lithography process stability control in an embodiment of the present invention.
  • Preferably, the ratio of the training set to the validation set is 7:3, in which case the training set includes 700 groups of the lithography aerial image-CDSEM image data pairs and the validation set includes 300 groups of the lithography aerial image-CDSEM image data pairs.
  • After the lithography aerial image-SEM image data pairs are obtained, the mapping relationship between the two can be determined by methods such as a deep convolutional neural network (DCNN: deep convolutional neural networks) or a generative adversarial network (GAN: generative adversarial networks).
  • FIG. 3 is a schematic diagram of a lithography aerial image obtained by rigorous optical model calculation from the post-OPC mask pattern in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an experimental CDSEM image after lithography of the post-OPC mask pattern in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a virtual CDSEM image generated by the trained deep learning model in an embodiment of the present invention.
  • Step S2: aligning the coordinate positions of the lithography aerial image and the CDSEM image.
  • The lithography aerial image and the CDSEM image have the same image size and resolution.
  • The image size depends on the specific situation and is, for example, 512×512.
  • Step S3: based on the neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as target output, traversing the N1 groups of the lithography aerial image-CDSEM image data pairs in the training set to complete the training of the neural network model, and traversing the N2 groups of the lithography aerial image-CDSEM image data pairs in the validation set to complete the validation of the neural network model.
  • Step S3 specifically includes:
  • Step S31: providing an initial neural network model; the initial neural network model may include an untrained neural network model.
  • Step S33: traversing the lithography aerial image-CDSEM image data pairs in the validation set, validating the trained neural network model, and calculating the loss function of the validation set;
  • Step S34: determining whether the loss function is less than the set value; if so, stopping the training of the neural network model to obtain the final neural network model; if not, repeating steps S15 to S34; wherein the final neural network model includes the mapping between the lithography aerial image and the CDSEM image.
  • Repeating step S15 means re-dividing the training set and the validation set without adding new lithography data, so as to improve the robustness and accuracy of the model and to avoid the situation where patterns of similar types are all assigned to the training set while other, untrained pattern types are assigned to the validation set.
  • The neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function; if the neural network model adopts the DCNN model, the loss function is a mean square error loss function; if the neural network model adopts the GAN model, the loss function is a cross-entropy loss function.
  • The deep convolutional neural network DCNN model may include an input layer, 13 intermediate layers and an output layer; the intermediate layers have the same structure, the convolution kernel size is 3×3, the width of each layer is 64 or 128 feature maps, and batch normalization is performed after each convolution layer.
  • The input layer performs only convolution and activation operations, and the output layer performs only a convolution operation.
  • Based on the final neural network model, step S4 is then performed.
  • Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates a corresponding virtual CDSEM image; wherein the lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image calculated from the EUV reticle pattern and process parameters, or a lithography aerial image at the same coordinates as the CDSEM image obtained for the wafer by calculation through the OPC optical model.
  • The method further includes step S5: measuring the critical dimension of the virtual CDSEM image generated by the final neural network model, and determining, according to the critical dimension, whether the OPC optical model needs to be corrected.
  • Step S5 specifically includes: obtaining the contour of the virtual CDSEM image and determining the critical dimension from the contour; and determining whether the critical dimension meets the process requirements, correcting the OPC optical model if it does not.
  • When the new lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image calculated according to the EUV reticle pattern and process parameters, the method further includes step S5': determining whether the stochastic effects of one lithography process run are acceptable.
  • That is, for the lithography aerial image of the EUV reticle, the final neural network model is used to generate a virtual CDSEM image, which serves as the reference CDSEM image for electron-beam defect scanning, and the reference CDSEM image is compared with the CDSEM image obtained after EUV lithography.
  • When the mean square error of the pixel values of the post-lithography CDSEM image and the reference CDSEM image meets the accuracy requirement, it is determined that the geometric data of the lithography pattern produced by the EUV lithography has no defect problem; otherwise, it is determined that the geometric data of the lithography pattern produced by the EUV lithography has stochastic defects.
  • Step S5' specifically includes:
  • S51': carrying out one lithography process flow under the lithography process conditions corresponding to the new lithography aerial image, and measuring to obtain an actual CDSEM image;
  • S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean square error of the pixel values of the two meets the accuracy requirements, it is determined that the stochastic effects of this lithography process run are acceptable.
  • In the model application stage, a lithography aerial image (Aerial image) is input and the model outputs the corresponding post-lithography virtual CDSEM image.
  • The virtual CDSEM image can be used as a standard independent of the OPC model.
  • It is compared with the post-lithography image simulated by the OPC model; an OPC model whose resulting mean square error is less than the preset accuracy is an acceptable OPC model, otherwise the OPC model is recalibrated with more experimental data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

A machine-learning-based CDSEM image virtual measurement method, comprising a step of generating a training set and a validation set, a step of aligning the coordinate positions of the lithography aerial image and the CDSEM image, and completing the training and validation of a neural network model based on the lithography aerial image-CDSEM image data groups. By establishing a mapping between the post-OPC mask pattern and the post-lithography CDSEM image, the method generates CDSEM images by machine learning and obtains a verification model independent of the OPC model, so as to guarantee post-lithography pattern quality.

Description

A machine-learning-based CDSEM image virtual measurement method
Cross-Reference
This application claims priority to Chinese patent application No. 202011459003.3 filed on December 11, 2020. The content of the above application is incorporated herein by reference.
Technical Field
The present invention belongs to the field of semiconductor integrated circuit manufacturing and relates to a machine-learning-based CDSEM image virtual measurement method.
Technical Background
In the lithography process flow of semiconductor integrated circuit manufacturing, for a given pattern, once the focus and dose of the lithography machine are fixed, the lithography aerial image (Aerial Image) in the photoresist on the wafer is also determined; once the photoresist is fixed, the three-dimensional structure of the photoresist after development is determined, and the CDSEM image captured by a scanning electron microscope (Scanning Electron Microscope, SEM) is then also determined. The quality of the lithography pattern is usually confirmed from the imaging results of this CDSEM image.
The optical proximity correction OPC (Optical Proximity Correction, OPC) model includes a series of parameters that are calibrated with test patterns (for example, CDSEM images). However, the test patterns cannot cover all kinds of lithography patterns, and the model contains empirical components, so the accuracy of the OPC model cannot be guaranteed. Therefore, other models independent of the OPC model are needed to guarantee the quality of the post-OPC pattern data.
Summary of the Invention
In view of the above problems of the OPC model, the present invention provides a machine-learning-based CDSEM image virtual measurement method. By establishing a mapping between the post-OPC mask pattern and the CDSEM image after the lithography process, CDSEM images are generated by machine learning, yielding a verification model independent of the OPC model so as to guarantee the post-lithography pattern quality.
To achieve the above object, the machine-learning-based CDSEM image virtual measurement method of the present invention includes:
Step S1: a step of generating a training set and a validation set, which includes:
Step S11: providing a wafer and presetting the number of lithography processes as K, where K is an integer greater than or equal to 1;
Step S12: completing one lithography process flow on the wafer, and scanning the post-lithography wafer at Mi different coordinates with a scanning electron microscope to obtain Mi CDSEM images, where Mi is an integer greater than or equal to 10 and i is a value in 1, 2, 3, ..., K;
Step S13: calculating the lithography aerial image at the same coordinates as each CDSEM image, and combining one CDSEM image with the corresponding lithography aerial image into one lithography aerial image-CDSEM image data pair, finally obtaining M = ∑(Mi) groups of the lithography aerial image-CDSEM image data pairs, where the lithography aerial image includes one two-dimensional image at a certain depth in the photoresist or at least two two-dimensional images at different depths in the photoresist;
Step S14: determining whether the number of groups M of the lithography aerial image-CDSEM image data pairs is equal to N; if not, executing step S12; if so, executing step S15; where
N = ∑(Mi), i = 1, 2, …, K
Step S15: dividing the N groups of the lithography aerial image-CDSEM image data pairs proportionally into a training set for model training and a validation set for validating the model, where the ratio of the numbers of groups of the lithography aerial image-CDSEM image data pairs in the training set and the validation set is N1:N2, with N = N1 + N2;
Step S2: aligning the coordinate positions of the lithography aerial image and the CDSEM image;
Step S3: based on a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as target output, traversing the N1 groups of the lithography aerial image-CDSEM image data pairs in the training set to complete the training of the neural network model, and traversing the N2 groups of the lithography aerial image-CDSEM image data pairs in the validation set to complete the validation of the neural network model.
Further, step S3 specifically includes:
Step S31: providing an initial neural network model;
Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as target output, traversing the lithography aerial image-CDSEM image data pairs in the training set, and training starting from the initial neural network model to obtain a trained neural network model;
Step S33: traversing the lithography aerial image-CDSEM image data pairs in the validation set, validating the trained neural network model, and calculating the loss function of the validation set;
Step S34: determining whether the loss function is less than a set value; if so, stopping the training of the neural network model to obtain a final neural network model; if not, repeating steps S15 to S34; where the neural network model includes the mapping between the lithography aerial image and the CDSEM image.
Further, the neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function; if the neural network model adopts the DCNN model, the loss function is a mean square error loss function; if the neural network model adopts the GAN model, the loss function is a cross-entropy loss function.
Further, the number of groups N1 of the lithography aerial image-CDSEM image data pairs in the training set is a multiple of 7, and the number of groups N2 of the lithography aerial image-CDSEM image data pairs in the validation set is a multiple of 3.
Further, the method also includes:
Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates a corresponding virtual CDSEM image.
Further, the method also includes step S5: measuring the critical dimension of the virtual CDSEM image generated by the final neural network model, and determining, according to the critical dimension, whether the OPC optical model needs to be corrected. This specifically includes:
S51: obtaining the contour of the virtual CDSEM image and determining the critical dimension from the contour;
S52: determining whether the critical dimension meets the process requirements, and if not, correcting the OPC optical model.
Further, the method also includes step S5': determining whether the stochastic effects of one lithography process run are acceptable. This specifically includes:
S51': carrying out one lithography process flow under the lithography process conditions corresponding to the new lithography aerial image, and measuring to obtain an actual CDSEM image;
S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean square error of the pixel values of the two meets the accuracy requirement, it is determined that the stochastic effects of this lithography process run are acceptable.
Further, the lithography aerial image and the CDSEM image have the same image size and resolution.
As can be seen from the above technical solution, the beneficial effect of the present invention is to establish a mapping between the post-OPC mask pattern and the SEM image after the lithography process, and to generate SEM images by machine learning so as to guarantee post-lithography pattern quality, that is, to establish a verification model independent of the OPC model that can generate virtual reference SEM images from post-OPC geometric data.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the machine-learning-based CDSEM image virtual measurement method in an embodiment of the present invention.
FIG. 2 is a block diagram of the architecture for establishing post-lithography CDSEM images based on machine learning in an embodiment of the present invention.
FIG. 3 is a schematic diagram of a lithography aerial image obtained by rigorous optical model calculation from the post-OPC mask pattern in an embodiment of the present invention.
FIG. 4 is a schematic diagram of an experimental CDSEM image after lithography of the post-OPC mask pattern in an embodiment of the present invention.
FIG. 5 is a schematic diagram of a virtual CDSEM image generated by the trained deep learning model in an embodiment of the present invention.
Detailed Description
The specific embodiments of the present invention are described in further detail below with reference to FIGS. 1-5.
It should be noted that, in the machine-learning-based CDSEM image virtual measurement method disclosed by the present invention, during the lithography process the lithography machine maps the pattern on the reticle onto the photoresist-coated wafer (Wafer) by means of EUV or UV light and the like; if the process flow is fixed, there is a definite correspondence between the CDSEM image and the lithography process parameters.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the machine-learning-based CDSEM image virtual measurement method in an embodiment of the present invention. As shown in FIG. 1, the machine-learning-based CDSEM image virtual measurement method includes:
Step S1: a step of generating a training set and a validation set, which specifically includes:
Step S11: providing a wafer and presetting the number of lithography processes as K, where K is an integer greater than or equal to 1;
Step S12: completing one lithography process flow on the wafer, and scanning the post-lithography wafer at Mi different coordinates with a scanning electron microscope to obtain Mi CDSEM images, where Mi is an integer greater than or equal to 10 and i is a value in 1, 2, 3, ..., K;
Step S13: calculating the lithography aerial image at the same coordinates as each CDSEM image, and combining one CDSEM image with the corresponding lithography aerial image into one lithography aerial image-CDSEM image data pair, finally obtaining M = ∑(Mi) groups of the lithography aerial image-CDSEM image data pairs, where the lithography aerial image includes one two-dimensional image at a certain depth in the photoresist or at least two two-dimensional images at different depths in the photoresist.
The at least two two-dimensional images at different depths in the photoresist include the light intensity information at different depths in three dimensions.
Step S14: determining whether the number of groups M of the lithography aerial image-CDSEM image data pairs is equal to N; if not, executing step S12; if so, executing step S15; where
N = ∑(Mi), i = 1, 2, …, K
Step S15: dividing the N groups of the lithography aerial image-CDSEM image data pairs proportionally into a training set for model training and a validation set for validating the model, where the ratio of the numbers of groups of the lithography aerial image-CDSEM image data pairs in the training set and the validation set is N1:N2, with N = N1 + N2.
Referring to FIG. 2, FIG. 2 is a functional block diagram of image-based offline lithography process stability control in an embodiment of the present invention. As shown in FIG. 2, both the training set used for model training and the validation set used for validating the model are obtained from multiple actual lithography process runs (for example, if 5 lithography runs are performed and the wafer is scanned at 200, 300, 50, 150 and 300 coordinates respectively, 1000 CDSEM images are finally obtained, i.e., N = 1000). The N groups of the lithography aerial image-CDSEM image data pairs are divided proportionally into a training set for model training and a validation set for validating the model; the ratio of the training set to the validation set is N1:N2, with N = N1 + N2. Preferably, the division may be performed with a training-set-to-validation-set ratio of 7:3, in which case the training set includes 700 groups of the lithography aerial image-CDSEM image data pairs and the validation set includes 300 groups of the lithography aerial image-CDSEM image data pairs.
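As a concrete illustration of the data bookkeeping in steps S12-S15, a minimal Python sketch of pairing and splitting is given below. The file lists, the `load_image` helper, and the use of a seeded random shuffle are illustrative assumptions and are not prescribed by this description.

```python
import random

def build_data_pairs(aerial_paths, sem_paths, load_image):
    """Steps S12-S13: pair each lithography aerial image with the CDSEM image
    taken at the same wafer coordinates (the two lists are assumed co-indexed)."""
    assert len(aerial_paths) == len(sem_paths)
    return [(load_image(a), load_image(s)) for a, s in zip(aerial_paths, sem_paths)]

def split_train_val(pairs, train_ratio=0.7, seed=0):
    """Step S15: divide the N data pairs into training and validation sets (N1:N2, e.g. 7:3)."""
    rng = random.Random(seed)
    shuffled = pairs[:]          # keep the original list intact
    rng.shuffle(shuffled)        # random re-division, as when step S15 is repeated
    n1 = int(len(shuffled) * train_ratio)
    return shuffled[:n1], shuffled[n1:]

# With N = 1000 pairs and train_ratio = 0.7: len(train) == 700, len(val) == 300.
```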
In some embodiments, after the lithography aerial image-SEM image data pairs are obtained, the mapping relationship between the two can be determined by methods such as a deep convolutional neural network (DCNN: deep convolutional neural networks) or a generative adversarial network (GAN: generative adversarial networks).
Referring to FIG. 3, FIG. 4 and FIG. 5: FIG. 3 is a schematic diagram of a lithography aerial image obtained by rigorous optical model calculation from the post-OPC mask pattern in an embodiment of the present invention; FIG. 4 is a schematic diagram of an experimental CDSEM image after lithography of the post-OPC mask pattern in an embodiment of the present invention; FIG. 5 is a schematic diagram of a virtual CDSEM image generated by the trained deep learning model in an embodiment of the present invention.
In some embodiments, since the actual post-lithography pattern coordinates may deviate from the corresponding pattern coordinates on the reticle, the following also needs to be performed before model training:
Step S2: aligning the coordinate positions of the lithography aerial image and the CDSEM image. Preferably, the lithography aerial image and the CDSEM image have the same image size and resolution. The image size depends on the specific situation and is, for example, 512×512.
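This description does not prescribe a particular registration algorithm for step S2. As one possible realization, assuming the residual misalignment is a pure translation, the sketch below registers a CDSEM image to its lithography aerial image with scikit-image phase correlation; the choice of library and of translation-only alignment are assumptions for illustration.

```python
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_pair(aerial, sem):
    """Step S2: register a CDSEM image to its lithography aerial image.

    Assumes both are 2-D arrays of the same size and resolution (e.g. 512x512)
    and that the misalignment is a pure translation."""
    offset, _, _ = phase_cross_correlation(aerial, sem)       # (row, col) shift of sem vs. aerial
    sem_aligned = nd_shift(sem, shift=offset, order=1, mode="nearest")
    return aerial, sem_aligned
```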
Step S3: based on a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as target output, traversing the N1 groups of the lithography aerial image-CDSEM image data pairs in the training set to complete the training of the neural network model, and traversing the N2 groups of the lithography aerial image-CDSEM image data pairs in the validation set to complete the validation of the neural network model.
Specifically, an image-to-image (Image To Image) approach is used: the corresponding CDSEM image is generated from the lithography aerial image in the exposed photoresist. The lithography aerial image serves as the input of the neural network model and the corresponding CDSEM image as its target output; through continuous training and validation of the neural network model and adjustment of its parameters, the mapping from the lithography aerial image to the CDSEM image is finally completed.
In some embodiments, step S3 specifically includes:
Step S31: providing an initial neural network model; the initial neural network model may include an untrained neural network model.
Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as target output, traversing the lithography aerial image-CDSEM image data pairs in the training set, and training the initial neural network model to obtain a trained neural network model;
Step S33: traversing the lithography aerial image-CDSEM image data pairs in the validation set, validating the trained neural network model, and calculating the loss function of the validation set;
Step S34: determining whether the loss function is less than a set value; if so, stopping the training of the neural network model to obtain the final neural network model; if not, repeating steps S15 to S34; where the final neural network model includes the mapping between the lithography aerial image and the CDSEM image.
Repeating step S15 here means re-dividing the training set and the validation set without adding new lithography data, so as to improve the robustness and accuracy of the model and to avoid the situation where patterns of similar types are all assigned to the training set while other, untrained pattern types are assigned to the validation set.
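A minimal PyTorch sketch of the S32-S34 loop, including the re-division of step S15 when the validation loss is not yet below the set value, might look as follows. The optimizer, learning rate, batch handling and stopping threshold are illustrative assumptions; `split_train_val` is the re-division helper sketched above, here assumed to operate on pairs of torch tensors of shape (1, H, W).

```python
import torch

def train_until_converged(model, pairs, loss_fn, threshold=1e-3, max_rounds=10):
    """Steps S32-S34: train on the training set, validate on the validation set,
    and re-divide the data pairs (step S15) while the loss is not below the set value."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)        # assumed optimizer / learning rate
    for round_idx in range(max_rounds):
        train_pairs, val_pairs = split_train_val(pairs, seed=round_idx)  # step S15: re-divide
        model.train()
        for aerial, sem in train_pairs:                               # step S32: traverse training set
            optimizer.zero_grad()
            loss = loss_fn(model(aerial.unsqueeze(0)), sem.unsqueeze(0))
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():                                         # step S33: validation loss
            val_loss = sum(loss_fn(model(a.unsqueeze(0)), s.unsqueeze(0)).item()
                           for a, s in val_pairs) / len(val_pairs)
        if val_loss < threshold:                                      # step S34: stop when below set value
            break
    return model  # final neural network model
```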
Further, the neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function; if the neural network model adopts the DCNN model, the loss function is a mean square error loss function; if the neural network model adopts the GAN model, the loss function is a cross-entropy loss function.
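For reference, the two loss functions named above can be written as follows, with ŷ the pixel values of the generated (virtual) CDSEM image, y the pixel values of the measured CDSEM image, and D the GAN discriminator; this is standard notation rather than text reproduced from the original description.

```latex
% MSE loss used with the DCNN model
\mathcal{L}_{\mathrm{MSE}} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}_i - y_i\bigr)^2
% Cross-entropy (adversarial) loss used with the GAN model
\mathcal{L}_{\mathrm{CE}} = -\,\mathbb{E}\bigl[\log D(y)\bigr] - \mathbb{E}\bigl[\log\bigl(1 - D(\hat{y})\bigr)\bigr]
```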
Further, the deep convolutional neural network DCNN model may include an input layer, 13 intermediate layers and an output layer; the intermediate layers have the same structure, the convolution kernel size is 3×3, the width of each layer is 64 or 128 feature maps, batch normalization is performed after each convolution layer, the input layer performs only convolution and activation operations, and the output layer performs only a convolution operation.
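A minimal PyTorch sketch of a DCNN with this layout (input convolution + ReLU, 13 intermediate convolution + batch-normalization + ReLU blocks, output convolution only, 3×3 kernels) is given below; the 64-feature-map width, single-channel input and output, and unit padding are assumptions consistent with, but not mandated by, the description. Trained with a mean square error loss, it corresponds to the DCNN branch described above.

```python
import torch.nn as nn

class AerialToSEM(nn.Module):
    """DCNN sketch: input layer (conv + ReLU), 13 intermediate layers
    (conv + batch norm + ReLU), output layer (conv only); 3x3 kernels."""
    def __init__(self, channels=64, mid_layers=13):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]   # input layer
        for _ in range(mid_layers):                                              # 13 intermediate layers
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))                      # output layer: conv only
        self.net = nn.Sequential(*layers)

    def forward(self, x):      # x: (batch, 1, H, W) lithography aerial image
        return self.net(x)     # virtual CDSEM image of the same spatial size
```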
Based on the final neural network model, step S4 is then performed.
Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates a corresponding virtual CDSEM image; where the lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image calculated from the EUV reticle pattern and process parameters, or the lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image obtained for the wafer by calculation through the OPC optical model.
In some embodiments, when the new lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image obtained by optical model calculation from the post-OPC mask pattern of the wafer, the method further includes step S5: measuring the critical dimension of the virtual CDSEM image generated by the final neural network model, and determining, according to the critical dimension, whether the OPC optical model needs to be corrected. Step S5 specifically includes:
S51: obtaining the contour of the virtual CDSEM image and determining the critical dimension from the contour;
S52: determining whether the critical dimension meets the process requirements, and if not, correcting the OPC optical model.
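The description does not fix a particular contour-extraction or CD-measurement algorithm for S51-S52. As one simple illustration, the sketch below extracts a contour from the virtual CDSEM image with scikit-image `find_contours` and reports the feature width along one scan row; the contour level, pixel size, scan row and tolerance are hypothetical parameters.

```python
import numpy as np
from skimage import measure

def measure_cd(virtual_sem, level=0.5, pixel_nm=1.0, row=None):
    """S51: extract a pattern contour from the virtual CDSEM image and derive a
    critical dimension (feature width) along one scan row (assumed to cross the feature)."""
    contours = measure.find_contours(virtual_sem, level)
    contour = max(contours, key=len)                      # assume the largest contour is the feature
    row = virtual_sem.shape[0] // 2 if row is None else row
    cols = contour[np.abs(contour[:, 0] - row) < 0.5, 1]  # contour crossings on that row
    return (cols.max() - cols.min()) * pixel_nm           # CD in nanometres

def cd_meets_spec(cd_nm, target_nm, tol_nm):
    """S52: check the measured CD against the process requirement;
    a False result means the OPC optical model is to be corrected."""
    return abs(cd_nm - target_nm) <= tol_nm
```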
In some embodiments, when the new lithography aerial image is a lithography aerial image at the same coordinates as the CDSEM image calculated from the EUV reticle pattern and process parameters, the method further includes step S5': determining whether the stochastic effects of one lithography process run are acceptable.
That is, for the lithography aerial image of the EUV reticle, the final neural network model is used to generate a virtual CDSEM image, which serves as the reference CDSEM image for electron-beam defect scanning, and the reference CDSEM image is compared with the CDSEM image obtained after EUV lithography; when the mean square error of the pixel values of the post-lithography CDSEM image and the reference CDSEM image meets the accuracy requirement, it is determined that the geometric data of the lithography pattern produced by the EUV lithography has no defect problem; otherwise, it is determined that the geometric data of the lithography pattern produced by the EUV lithography has stochastic defects.
Step S5' specifically includes:
S51': carrying out one lithography process flow under the lithography process conditions corresponding to the new lithography aerial image, and measuring to obtain an actual CDSEM image;
S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean square error of the pixel values of the two meets the accuracy requirement, it is determined that the stochastic effects of this lithography process run are acceptable.
That is, in the model application stage, a lithography aerial image (Aerial image) is input and the model outputs the corresponding post-lithography virtual CDSEM image. This virtual CDSEM image can be used as a standard independent of the OPC model and is compared with the post-lithography image simulated by the OPC model; an OPC model whose resulting mean square error is less than the preset accuracy is an acceptable OPC model, otherwise the OPC model is recalibrated with more experimental data.
The above are only preferred embodiments of the present invention, and the embodiments are not intended to limit the scope of patent protection of the present invention; therefore, any equivalent structural changes made using the contents of the description and drawings of the present invention shall likewise fall within the protection scope of the appended claims of the present invention.

Claims (10)

  1. A machine-learning-based CDSEM image virtual measurement method, characterized by comprising:
    Step S1: a step of generating a training set and a validation set, which includes:
    Step S11: providing a wafer and presetting the number of lithography processes as K, where K is an integer greater than or equal to 1;
    Step S12: completing one lithography process flow on the wafer, and scanning the post-lithography wafer at Mi different coordinates with a scanning electron microscope to obtain Mi CDSEM images, where Mi is an integer greater than or equal to 10 and i is a value in 1, 2, 3, ..., K;
    Step S13: calculating the lithography aerial image at the same coordinates as each CDSEM image, and combining one CDSEM image with the corresponding lithography aerial image into one lithography aerial image-CDSEM image data pair, finally obtaining M = ∑(Mi) groups of the lithography aerial image-CDSEM image data pairs, where the lithography aerial image includes one two-dimensional image at a certain depth in the photoresist or at least two two-dimensional images at different depths in the photoresist;
    Step S14: determining whether the number of groups M of the lithography aerial image-CDSEM image data pairs is equal to N; if not, executing step S12; if so, executing step S15; where
    N = ∑(Mi), i = 1, 2, …, K;
    Step S15: dividing the N groups of the lithography aerial image-CDSEM image data pairs proportionally into a training set for model training and a validation set for validating the model, where the ratio of the numbers of groups of the lithography aerial image-CDSEM image data pairs in the training set and the validation set is N1:N2, with N = N1 + N2;
    Step S2: aligning the coordinate positions of the lithography aerial image and the CDSEM image;
    Step S3: based on a neural network model, taking the lithography aerial image as input and the corresponding CDSEM image as target output, traversing the N1 groups of the lithography aerial image-CDSEM image data pairs in the training set to complete the training of the neural network model, and traversing the N2 groups of the lithography aerial image-CDSEM image data pairs in the validation set to complete the validation of the neural network model.
  2. The machine-learning-based CDSEM image virtual measurement method according to claim 1, characterized in that step S3 includes:
    Step S31: providing an initial neural network model;
    Step S32: taking the lithography aerial images in the training set as input and the corresponding CDSEM images as target output, traversing the lithography aerial image-CDSEM image data pairs in the training set, and training starting from the initial neural network model to obtain a trained neural network model;
    Step S33: traversing the lithography aerial image-CDSEM image data pairs in the validation set, validating the trained neural network model, and calculating the loss function of the validation set;
    Step S34: determining whether the loss function is less than a set value; if so, stopping the training of the neural network model to obtain a final neural network model; if not, repeating steps S15 to S34; where the neural network model includes the mapping between the lithography aerial image and the CDSEM image.
  3. The machine-learning-based CDSEM image virtual measurement method according to claim 2, characterized in that the neural network model is a convolution-based deep convolutional neural network (DCNN) model or a generative adversarial network (GAN) model, with ReLU as the activation function; if the neural network model adopts the DCNN model, the loss function is a mean square error loss function; if the neural network model adopts the GAN model, the loss function is a cross-entropy loss function.
  4. The machine-learning-based CDSEM image virtual measurement method according to claim 1, characterized in that the number of groups N1 of the lithography aerial image-CDSEM image data pairs in the training set is a multiple of 7, and the number of groups N2 of the lithography aerial image-CDSEM image data pairs in the validation set is a multiple of 3.
  5. The machine-learning-based CDSEM image virtual measurement method according to claim 1, characterized by further comprising:
    Step S4: based on the final neural network model, when a new lithography aerial image is input, the final neural network model generates a corresponding virtual CDSEM image.
  6. The machine-learning-based CDSEM image virtual measurement method according to claim 5, characterized by further comprising step S5: measuring the critical dimension of the virtual CDSEM image generated by the final neural network model, and determining, according to the critical dimension, whether the OPC optical model needs to be corrected.
  7. The machine-learning-based CDSEM image virtual measurement method according to claim 6, characterized in that step S5 includes:
    S51: obtaining the contour of the virtual CDSEM image and determining the critical dimension from the contour;
    S52: determining whether the critical dimension meets the process requirements, and if not, correcting the OPC optical model.
  8. The machine-learning-based CDSEM image virtual measurement method according to claim 5, characterized by further comprising step S5': determining whether the stochastic effects of one lithography process run are acceptable.
  9. The machine-learning-based CDSEM image virtual measurement method according to claim 8, characterized in that step S5' includes:
    S51': carrying out one lithography process flow under the lithography process conditions corresponding to the new lithography aerial image, and measuring to obtain an actual CDSEM image;
    S52': comparing the virtual CDSEM image generated by the final neural network model with the actual CDSEM image; if the mean square error of the pixel values of the two meets the accuracy requirement, it is determined that the stochastic effects of this lithography process run are acceptable.
  10. The machine-learning-based CDSEM image virtual measurement method according to claim 1, characterized in that the lithography aerial image and the CDSEM image have the same image size and resolution.
PCT/CN2021/134560 2020-12-11 2021-11-30 A machine-learning-based CDSEM image virtual measurement method WO2022121736A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011459003.3 2020-12-11
CN202011459003.3A CN112561873B (zh) 2020-12-11 2020-12-11 一种基于机器学习的cdsem图像虚拟测量方法

Publications (1)

Publication Number Publication Date
WO2022121736A1 true WO2022121736A1 (zh) 2022-06-16

Family

ID=75062324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134560 WO2022121736A1 (zh) 2020-12-11 2021-11-30 一种基于机器学习的cdsem图像虚拟测量方法

Country Status (2)

Country Link
CN (1) CN112561873B (zh)
WO (1) WO2022121736A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841378A (zh) * 2022-07-04 2022-08-02 埃克斯工业(广东)有限公司 晶圆特征参数预测方法、装置、电子设备及可读存储介质
CN117669473A (zh) * 2024-01-29 2024-03-08 全智芯(上海)技术有限公司 用于模型校准的方法、电子设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561873B (zh) * 2020-12-11 2022-11-25 上海集成电路装备材料产业创新中心有限公司 一种基于机器学习的cdsem图像虚拟测量方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017171891A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Systems, methods, and apparatuses for modeling reticle compensation for post lithography processing using machine learning algorithms
WO2019219826A1 (en) * 2018-05-18 2019-11-21 Carl Zeiss Smt Gmbh Method and apparatus for evaluating an unknown effect of defects of an element of a photolithography process
WO2019238372A1 (en) * 2018-06-15 2019-12-19 Asml Netherlands B.V. Machine learning based inverse optical proximity correction and process model calibration
WO2020135988A1 (en) * 2018-12-28 2020-07-02 Asml Netherlands B.V. Determining pattern ranking based on measurement feedback from printed substrate
US20200278604A1 (en) * 2019-02-28 2020-09-03 Taiwan Semiconductor Manufacturing Co., Ltd. Lithography model calibration
CN111985611A (zh) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 基于物理特征图与dcnn机器学习逆向光刻解的计算方法
CN112561873A (zh) * 2020-12-11 2021-03-26 上海集成电路装备材料产业创新中心有限公司 一种基于机器学习的cdsem图像虚拟测量方法

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4351928B2 (ja) * 2004-02-23 2009-10-28 株式会社東芝 マスクデータの補正方法、フォトマスクの製造方法及びマスクデータの補正プログラム
CN100474115C (zh) * 2006-04-04 2009-04-01 上海微电子装备有限公司 光刻机成像光学***像差现场测量方法
US8196068B2 (en) * 2009-04-30 2012-06-05 Synopsys, Inc. Modeling critical-dimension (CD) scanning-electron-microscopy (CD-SEM) CD extraction
JP6751871B2 (ja) * 2014-11-25 2020-09-09 ピーディーエフ ソリューションズ,インコーポレイテッド 半導体製造プロセスのための改善されたプロセス制御技術
US10648924B2 (en) * 2016-01-04 2020-05-12 Kla-Tencor Corp. Generating high resolution images from low resolution images for semiconductor applications
DE102017220872B4 (de) * 2017-11-22 2022-02-03 Carl Zeiss Smt Gmbh Verfahren und System zur Qualifizierung einer Maske für die Mikrolithographie
CN108228981B (zh) * 2017-12-19 2021-07-20 上海集成电路研发中心有限公司 基于神经网络的opc模型生成方法及实验图案的预测方法
US11222415B2 (en) * 2018-04-26 2022-01-11 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN111310407A (zh) * 2020-02-10 2020-06-19 上海集成电路研发中心有限公司 基于机器学习进行逆向光刻最优特征向量设计的方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017171891A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Systems, methods, and apparatuses for modeling reticle compensation for post lithography processing using machine learning algorithms
WO2019219826A1 (en) * 2018-05-18 2019-11-21 Carl Zeiss Smt Gmbh Method and apparatus for evaluating an unknown effect of defects of an element of a photolithography process
WO2019238372A1 (en) * 2018-06-15 2019-12-19 Asml Netherlands B.V. Machine learning based inverse optical proximity correction and process model calibration
WO2020135988A1 (en) * 2018-12-28 2020-07-02 Asml Netherlands B.V. Determining pattern ranking based on measurement feedback from printed substrate
US20200278604A1 (en) * 2019-02-28 2020-09-03 Taiwan Semiconductor Manufacturing Co., Ltd. Lithography model calibration
CN111985611A (zh) * 2020-07-21 2020-11-24 上海集成电路研发中心有限公司 基于物理特征图与dcnn机器学习逆向光刻解的计算方法
CN112561873A (zh) * 2020-12-11 2021-03-26 上海集成电路装备材料产业创新中心有限公司 一种基于机器学习的cdsem图像虚拟测量方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAN YAN, SHI XUELONG, ZHOU TAO, XU BOWEN, LI CHEN, LU YIFEI, GAO YING: "Machine Learning Virtual SEM Metrology", 2020 INTERNATIONAL WORKSHOP ON ADVANCED PATTERNING SOLUTIONS (IWAPS), IEEE, 5 November 2020 (2020-11-05) - 6 November 2020 (2020-11-06), pages 1 - 4, XP055941410, ISBN: 978-1-7281-7577-5, DOI: 10.1109/IWAPS51164.2020.9286804 *
ZHOU TAO, SHI XUELONG, YANYAN, LI CHEN, CHEN SHOUMIAN, ZHAO YUHANG, ZHOU WENZHAN, ZHOU KAN, ZENG XUAN: "An effective method of contour extraction for SEM image based on DCNN", 2020 INTERNATIONAL WORKSHOP ON ADVANCED PATTERNING SOLUTIONS (IWAPS), IEEE, 5 November 2020 (2020-11-05) - 6 November 2020 (2020-11-06), pages 1 - 4, XP055941418, ISBN: 978-1-7281-7577-5, DOI: 10.1109/IWAPS51164.2020.9286798 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841378A (zh) * 2022-07-04 2022-08-02 埃克斯工业(广东)有限公司 晶圆特征参数预测方法、装置、电子设备及可读存储介质
CN114841378B (zh) * 2022-07-04 2022-10-11 埃克斯工业(广东)有限公司 晶圆特征参数预测方法、装置、电子设备及可读存储介质
CN117669473A (zh) * 2024-01-29 2024-03-08 全智芯(上海)技术有限公司 用于模型校准的方法、电子设备及存储介质
CN117669473B (zh) * 2024-01-29 2024-04-19 全智芯(上海)技术有限公司 用于模型校准的方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN112561873A (zh) 2021-03-26
CN112561873B (zh) 2022-11-25

Similar Documents

Publication Publication Date Title
WO2022121736A1 (zh) 一种基于机器学习的cdsem图像虚拟测量方法
KR102441582B1 (ko) Mpc 검증 방법 및 그 검증 방법을 포함한 마스크 제조방법
JP4856047B2 (ja) マスクパターン寸法検査方法およびマスクパターン寸法検査装置
JP2001516898A (ja) 目視検査及び照合システム
JP2005309140A (ja) フォトマスク製造方法、フォトマスク欠陥修正箇所判定方法、及びフォトマスク欠陥修正箇所判定装置
CN112485976B (zh) 基于逆向刻蚀模型确定光学临近修正光刻目标图案的方法
US20080304029A1 (en) Method and System for Adjusting an Optical Model
CN111430261A (zh) 一种工艺检测方法及装置
JP5224853B2 (ja) パターン予測方法、パターン補正方法、半導体装置の製造方法、及びプログラム
JP3432639B2 (ja) マスクパターンの作成方法
CN111507059B (zh) 一种图形图像联合优化的光刻掩模优化方法、装置及电子设备
US7930654B2 (en) System and method of correcting errors in SEM-measurements
CN114326288A (zh) 增大光刻工艺窗口的方法、电子设备和存储介质
JP2016021008A (ja) マルチパターニング用マスクのパターン評価方法およびパターン評価装置
US7222327B2 (en) Photo mask, method of manufacturing photo mask, and method of generating mask data
CN112541545B (zh) 基于机器学习预测刻蚀工艺后cdsem图像的方法
CN115933305A (zh) 一种光掩模版图形的修正方法、装置、设备及介质
US20220283496A1 (en) Photomask and method for inspecting photomask
US20070141476A1 (en) More accurate and physical method to account for lithographic and etch contributions in OPC models
CN112578646B (zh) 一种基于图像的离线的光刻工艺稳定性控制方法
KR20080102648A (ko) 광근접효과 보정 방법
TW201931003A (zh) 微影光罩、用於確定此光罩之結構之影像的邊緣位置的方法以及用於實施此方法的系統
TW200418084A (en) Integrated circuit pattern designing method, exposure mask manufacturing method, exposure mask, and integrated circuit device manufacturing method
US20230132893A1 (en) Mask layout correction methods based on machine learning, and mask manufacturing methods including the correction methods
WO2021085522A1 (ja) 処理条件推定装置、方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902443

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.11.2023)