NL2024142A - Alignment method and associated alignment and lithographic apparatuses


Info

Publication number: NL2024142A
Authority: NL (Netherlands)
Prior art keywords: pupil, alignment, substrate, intensity, sensor
Application number: NL2024142A
Other languages: Dutch (nl)
Inventors: Filippo Alpeggiani, Sebastianus Adrianus Goorden
Original Assignee: ASML Netherlands B.V.
Application filed by ASML Netherlands B.V.
Priority to NL2024142A
Publication of NL2024142A

Landscapes

  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)

Abstract

Disclosed is a method of metrology such as alignment metrology. The method comprises obtaining a pupil image comprising a measured intensity distribution in a pupil plane relating to scattered radiation resultant from a measurement of a structure. An intensity term is determined from the pupil image, the intensity term comprising a processed intensity distribution in the pupil plane relating to the measurement of the structure. The method comprises determining a measurement value or correction therefor using the intensity term and a sensor term relating to sensor optics used to perform said measurement.

Description

BACKGROUND
Field of the Invention [0001] The present invention relates to methods and apparatus usable, for example, in the manufacture of devices by lithographic techniques, and to methods of manufacturing devices using lithographic techniques. The invention relates to metrology devices, and more specifically to metrology devices used for measuring position, such as alignment sensors, and to lithography apparatuses having such an alignment sensor.
Background Art [0002] A lithographic apparatus is a machine that applies a desired pattern onto a substrate, usually onto a target portion of the substrate. A lithographic apparatus can be used, for example, in the manufacture of integrated circuits (ICs). In that instance, a patterning device, which is alternatively referred to as a mask or a reticle, may be used to generate a circuit pattern to be formed on an individual layer of the IC. This pattern can be transferred onto a target portion (e.g. including part of a die, one die, or several dies) on a substrate (e.g., a silicon wafer). Transfer of the pattern is typically via imaging onto a layer of radiation-sensitive material (resist) provided on the substrate. In general, a single substrate will contain a network of adjacent target portions that are successively patterned. These target portions are commonly referred to as “fields”.
[0003] In the manufacture of complex devices, typically many lithographic patterning steps are performed, thereby forming functional features in successive layers on the substrate. A critical aspect of performance of the lithographic apparatus is therefore the ability to place the applied pattern correctly and accurately in relation to features laid down (by the same apparatus or a different lithographic apparatus) in previous layers. For this purpose, the substrate is provided with one or more sets of alignment marks. Each mark is a structure whose position can be measured at a later time using a position sensor, typically an optical position sensor. The lithographic apparatus includes one or more alignment sensors by which positions of marks on a substrate can be measured accurately. Different types of marks and different types of alignment sensors are known from different manufacturers and different products of the same manufacturer.
[0004] In other applications, metrology sensors are used for measuring exposed structures on a substrate (either in resist and/or after etch). A fast and non-invasive form of specialized inspection tool is a scatterometer in which a beam of radiation is directed onto a target on the surface of the substrate and properties of the scattered or reflected beam are measured. Examples of known scatterometers include angle-resolved scatterometers of the type described in US2006033921A1 and US2010201963A1. In addition to measurement of feature shapes by reconstruction, diffraction based overlay can be measured using such apparatus, as described in published patent application US2006066855A1. Diffraction-based overlay metrology using dark-field imaging of the diffraction orders enables overlay measurements on smaller targets. Examples of dark field imaging metrology can be found in international patent applications WO 2009/078708 and WO 2009/106279 which documents are hereby incorporated by reference in their entirety. Further developments of the technique have been described in published patent publications US20110027704A, US20110043791A, US2011102753A1, US20120044470A, US20120123581A,
US20130258310A, US20130271740A and WO2013178422A1. These targets can be smaller than the illumination spot and may be surrounded by product structures on a wafer. Multiple gratings can be measured in one image, using a composite grating target. The contents of all these applications are also incorporated herein by reference.
[0005] In some metrology applications, such as in position metrology using alignment sensors, a phase difference between different diffraction orders arises due to the light probing different optical aberrations of the optical system. Where this is constant (intra-wafer and between wafers) it can be quantified and calibrated for. However, where this induced phase difference is process dependent and/or stack dependent, present calibration methods are insufficient, resulting in alignment errors.
[0006] It would be desirable to address this issue and improve correction for such sensor aberration induced error.
SUMMARY OF THE INVENTION [0007] The invention in a first aspect provides a method of metrology comprising: obtaining a pupil image comprising a measured intensity distribution in a pupil plane relating to scattered radiation resultant from a measurement of a structure; determining an intensity term from the pupil image, the intensity term comprising a processed intensity distribution in the pupil plane relating to the measurement of the structure; and determining a measurement value or correction therefor using the intensity term and a sensor term relating to sensor optics used to perform said measurement.
[0008] Also disclosed is a computer program, metrology apparatus and a lithographic apparatus being operable to perform the method of the first aspect.
[0009] The above and other aspects of the invention will be understood from a consideration of the examples described below.
BRIEF DESCRIPTION OF THE DRAWINGS [0010] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 depicts a lithographic apparatus;
Figure 2 illustrates schematically measurement and exposure processes in the apparatus of Figure 1;
Figure 3 is a schematic illustration of a first alignment sensor adaptable according to an embodiment;
Figure 4 is a schematic illustration of a second alignment sensor adaptable according to an embodiment;
Figure 5 is a schematic illustration of an alternative metrology device also usable for alignment and adaptable according to an embodiment;
Figure 6 comprises (a) a pupil image of input radiation; (b) a pupil image of off-axis illumination beams illustrating an operational principle of the metrology device of Figure 5; and (c) a pupil image of off-axis illumination beams illustrating another operational principle of the metrology device of Figure 5;
Figure 7 is a flowchart describing a method according to an embodiment; and
Figure 8 comprises pupil intensity plots as a function of pupil location, conceptually illustrating the intensity metrics of different embodiments.
DETAILED DESCRIPTION OF EMBODIMENTS [0011] Before describing embodiments of the invention in detail, it is instructive to present an example environment in which embodiments of the present invention may be implemented.
[0012] Figure 1 schematically depicts a lithographic apparatus LA. The apparatus includes an illumination system (illuminator) IL configured to condition a radiation beam B (e.g., UV radiation or DUV radiation), a patterning device support or support structure (e.g., a mask table) MT constructed to support a patterning device (e.g., a mask) MA and connected to a first positioner PM configured to accurately position the patterning device in accordance with certain parameters; two substrate tables (e.g., a wafer table) WTa and WTb each constructed to hold a substrate (e.g., a resist coated wafer) W and each connected to a second positioner PW configured to accurately position the substrate in accordance with certain parameters; and a projection system (e.g., a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g., including one or more dies) of the substrate W. A reference frame RF connects the various components, and serves as a reference for setting and measuring positions of the patterning device and substrate and of features on them.
[0013] The illumination system may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic or other types of optical components, or any combination thereof, for directing, shaping, or controlling radiation.
[0014] The patterning device support MT holds the patterning device in a manner that depends on the orientation of the patterning device, the design of the lithographic apparatus, and other conditions, such as for example whether or not the patterning device is held in a vacuum environment. The patterning device support can use mechanical, vacuum, electrostatic or other clamping techniques to hold the patterning device. The patterning device support MT may be a frame or a table, for example, which may be fixed or movable as required. The patterning device support may ensure that the patterning device is at a desired position, for example with respect to the projection system.
[0015] The term “patterning device” used herein should be broadly interpreted as referring to any device that can be used to impart a radiation beam with a pattern in its cross-section such as to create a pattern in a target portion of the substrate. It should be noted that the pattern imparted to the radiation beam may not exactly correspond to the desired pattern in the target portion of the substrate, for example if the pattern includes phase-shifting features or so called assist features. Generally, the pattern imparted to the radiation beam will correspond to a particular functional layer in a device being created in the target portion, such as an integrated circuit.
[0016] As here depicted, the apparatus is of a transmissive type (e.g., employing a transmissive patterning device). Alternatively, the apparatus may be of a reflective type (e.g., employing a programmable mirror array of a type as referred to above, or employing a reflective mask). Examples of patterning devices include masks, programmable mirror arrays, and programmable LCD panels. Any use of the terms “reticle” or “mask” herein may be considered synonymous with the more general term “patterning device.” The term “patterning device” can also be interpreted as referring to a device storing in digital form pattern information for use in controlling such a programmable patterning device.
[0017] The term “projection system” used herein should be broadly interpreted as encompassing any type of projection system, including refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term “projection lens” herein may be considered as synonymous with the more general term “projection system”.
[0018] The lithographic apparatus may also be of a type wherein at least a portion of the substrate may be covered by a liquid having a relatively high refractive index, e.g., water, so as to fill a space between the projection system and the substrate. An immersion liquid may also be applied to other spaces in the lithographic apparatus, for example, between the mask and the projection system. Immersion techniques are well known in the art for increasing the numerical aperture of projection systems.
[0019] In operation, the illuminator IL receives a radiation beam from a radiation source SO. The source and the lithographic apparatus may be separate entities, for example when the source is an excimer laser. In such cases, the source is not considered to form part of the lithographic apparatus and the radiation beam is passed from the source SO to the illuminator IL with the aid of a beam delivery system BD including, for example, suitable directing mirrors and/or a beam expander. In other cases the source may be an integral part of the lithographic apparatus, for example when the source is a mercury lamp. The source SO and the illuminator IL, together with the beam delivery system BD if required, may be referred to as a radiation system.
[0020] The illuminator IL may for example include an adjuster AD for adjusting the angular intensity distribution of the radiation beam, an integrator IN and a condenser CO. The illuminator may be used to condition the radiation beam, to have a desired uniformity and intensity distribution in its cross section.
[0021] The radiation beam B is incident on the patterning device MA, which is held on the patterning device support MT, and is patterned by the patterning device. Having traversed the patterning device (e.g., mask) MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioner PW and position sensor IF (e.g., an interferometric device, linear encoder, 2-D encoder or capacitive sensor), the substrate table WTa or WTb can be moved accurately, e.g., so as to position different target portions C in the path of the radiation beam B. Similarly, the first positioner PM and another position sensor (which is not explicitly depicted in Figure 1) can be used to accurately position the patterning device (e.g., mask) MA with respect to the path of the radiation beam B, e.g., after mechanical retrieval from a mask library, or during a scan.
[0022] Patterning device (e.g., mask) MA and substrate W may be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2. Although the substrate alignment marks as illustrated occupy dedicated target portions, they may be located in spaces between target portions (these are known as scribe-lane alignment marks). Similarly, in situations in which more than one die is provided on the patterning device (e.g., mask) MA, the mask alignment marks may be located between the dies. Small alignment marks may also be included within dies, in amongst the device features, in which case it is desirable that the markers be as small as possible and not require any different imaging or process conditions than adjacent features. The alignment system, which detects the alignment markers, is described further below.
[0023] The depicted apparatus could be used in a variety of modes. In a scan mode, the patterning device support (e.g., mask table) MT and the substrate table WT are scanned synchronously while a pattern imparted to the radiation beam is projected onto a target portion C (i.e., a single dynamic exposure). The speed and direction of the substrate table WT relative to the patterning device support (e.g., mask table) MT may be determined by the (de-)magnification and image reversal characteristics of the projection system PS. In scan mode, the maximum size of the exposure field limits the width (in the non-scanning direction) of the target portion in a single dynamic exposure, whereas the length of the scanning motion determines the height (in the scanning direction) of the target portion. Other types of lithographic apparatus and modes of operation are possible, as is well known in the art. For example, a step mode is known. In so-called “maskless” lithography, a programmable patterning device is held stationary but with a changing pattern, and the substrate table WT is moved or scanned.
[0024] Combinations and/or variations on the above described modes of use or entirely different modes of use may also be employed.
[0025] Lithographic apparatus LA is of a so-called dual stage type which has two substrate tables WTa, WTb and two stations - an exposure station EXP and a measurement station MEA - between which the substrate tables can be exchanged. While one substrate on one substrate table is being exposed at the exposure station, another substrate can be loaded onto the other substrate table at the measurement station and various preparatory steps carried out. This enables a substantial increase in the throughput of the apparatus. The preparatory steps may include mapping the surface height contours of the substrate using a level sensor LS and measuring the position of alignment markers on the substrate using an alignment sensor AS. If the position sensor IF is not capable of measuring the position of the substrate table while it is at the measurement station as well as at the exposure station, a second position sensor may be provided to enable the positions of the substrate table to be tracked at both stations, relative to reference frame RF. Other arrangements are known and usable instead of the dual-stage arrangement shown. For example, other lithographic apparatuses are known in which a substrate table and a measurement table are provided. These are docked together when performing preparatory measurements, and then undocked while the substrate table undergoes exposure.
[0026] Figure 2 illustrates the steps to expose target portions (e.g. dies) on a substrate W in the dual stage apparatus of Figure 1. On the left hand side within a dotted box are steps performed at a measurement station MEA, while the right hand side shows steps performed at the exposure station EXP. From time to time, one of the substrate tables WTa, WTb will be at the exposure station, while the other is at the measurement station, as described above. For the purposes of this description, it is assumed that a substrate W has already been loaded into the exposure station. At step 200, a new substrate W' is loaded to the apparatus by a mechanism not shown. These two substrates are processed in parallel in order to increase the throughput of the lithographic apparatus.
[0027] Referring initially to the newly-loaded substrate W’, this may be a previously unprocessed substrate, prepared with a new photo resist for first time exposure in the apparatus. In general, however, the lithography process described will be merely one step in a series of exposure and processing steps, so that substrate W’ has been through this apparatus and/or other lithography apparatuses, several times already, and may have subsequent processes to undergo as well. Particularly for the problem of improving overlay performance, the task is to ensure that new patterns are applied in exactly the correct position on a substrate that has already been subjected to one or more cycles of patterning and processing. These processing steps progressively introduce distortions in the substrate that must be measured and corrected for, to achieve satisfactory overlay performance.
[0028] The previous and/or subsequent patterning step may be performed in other lithography apparatuses, as just mentioned, and may even be performed in different types of lithography apparatus. For example, some layers in the device manufacturing process which are very demanding in parameters such as resolution and overlay may be performed in a more advanced lithography tool than other layers that are less demanding. Therefore some layers may be exposed in an immersion type lithography tool, while others are exposed in a ‘dry’ tool. Some layers may be exposed in a tool working at DUV wavelengths, while others are exposed using EUV wavelength radiation.
[0029] At 202, alignment measurements using the substrate marks P1 etc. and image sensors (not shown) are used to measure and record alignment of the substrate relative to substrate table WTa/WTb. In addition, several alignment marks across the substrate W’ will be measured using alignment sensor AS. These measurements are used in one embodiment to establish a “wafer grid”, which maps very accurately the distribution of marks across the substrate, including any distortion relative to a nominal rectangular grid.
[0030] At step 204, a map of wafer height (Z) against X-Y position is measured also using the level sensor LS. Conventionally, the height map is used only to achieve accurate focusing of the exposed pattern. It may be used for other purposes in addition.
[0031] When substrate W’ was loaded, recipe data 206 were received, defining the exposures to be performed, and also properties of the wafer and the patterns previously made and to be made upon it. To these recipe data are added the measurements of wafer position, wafer grid and height map that were made at 202,204, so that a complete set of recipe and measurement data 208 can be passed to the exposure station EXP. The measurements of alignment data for example comprise X and Y positions of alignment targets formed in a fixed or nominally fixed relationship to the product patterns that are the product of the lithographic process. These alignment data, taken just before exposure, are used to generate an alignment model with parameters that fit the model to the data. These parameters and the alignment model will be used during the exposure operation to correct positions of patterns applied in the current lithographic step. The model in use interpolates positional deviations between the measured positions. A conventional alignment model might comprise four, five or six parameters, together defining translation, rotation and scaling of the ‘ideal’ grid, in different dimensions. Advanced models are known that use more parameters.
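Purely as an illustration of such a parametric alignment model (the document does not specify its exact form), a six-parameter linear wafer-grid model could be fitted by least squares roughly as follows; the function names and parameterization are assumptions, not the apparatus's actual implementation.

```python
import numpy as np

def fit_linear_wafer_model(nominal_xy, measured_xy):
    """Fit a six-parameter linear alignment model (translation, rotation,
    scaling/non-orthogonality) to mark positions by least squares.
    nominal_xy, measured_xy: (N, 2) arrays. Returns p = [tx, mx, rx, ty, ry, my]
    such that dx = tx + mx*x + rx*y and dy = ty + ry*x + my*y."""
    x, y = nominal_xy[:, 0], nominal_xy[:, 1]
    dx = measured_xy[:, 0] - x
    dy = measured_xy[:, 1] - y
    n = len(x)
    ones, zeros3 = np.ones((n, 1)), np.zeros((n, 3))
    A = np.block([[ones, x[:, None], y[:, None], zeros3],
                  [zeros3, ones, x[:, None], y[:, None]]])
    p, *_ = np.linalg.lstsq(A, np.concatenate([dx, dy]), rcond=None)
    return p

def interpolate_deviation(p, xy):
    """Interpolate the modeled positional deviation at exposure coordinates."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([p[0] + p[1] * x + p[2] * y,
                            p[3] + p[4] * x + p[5] * y])
```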
[0032] At 210, wafers W’ and W are swapped, so that the measured substrate W’ becomes the substrate W entering the exposure station EXP. In the example apparatus of Figure 1, this swapping is performed by exchanging the supports WTa and WTb within the apparatus, so that the substrates W, W’ remain accurately clamped and positioned on those supports, to preserve relative alignment between the substrate tables and substrates themselves. Accordingly, once the tables have been swapped, determining the relative position between projection system PS and substrate table WTb (formerly WTa) is all that is necessary to make use of the measurement information 202, 204 for the substrate W (formerly W’) in control of the exposure steps. At step 212, reticle alignment is performed using the mask alignment marks M1, M2. In steps 214, 216, 218, scanning motions and radiation pulses are applied at successive target locations across the substrate W, in order to complete the exposure of a number of patterns.
[0033] By using the alignment data and height map obtained at the measuring station in the performance of the exposure steps, these patterns are accurately aligned with respect to the desired locations, and, in particular, with respect to features previously laid down on the same substrate. The exposed substrate, now labeled W”, is unloaded from the apparatus at step 220, to undergo etching or other processes, in accordance with the exposed pattern.
[0034] The skilled person will know that the above description is a simplified overview of a number of very detailed steps involved in one example of a real manufacturing situation. For example, rather than measuring alignment in a single pass, often there will be separate phases of coarse and fine measurement, using the same or different marks. The coarse and/or fine alignment measurement steps can be performed before or after the height measurement, or interleaved.
[0035] In the manufacture of complex devices, typically many lithographic patterning steps are performed, thereby forming functional features in successive layers on the substrate. A critical aspect of performance of the lithographic apparatus is therefore the ability to place the applied pattern correctly and accurately in relation to features laid down in previous layers (by the same apparatus or a different lithographic apparatus). For this purpose, the substrate is provided with one or more sets of marks. Each mark is a structure whose position can be measured at a later time using a position sensor, typically an optical position sensor. The position sensor may be referred to as “alignment sensor” and marks may be referred to as “alignment marks”.
[0036] A lithographic apparatus may include one or more (e.g. a plurality of) alignment sensors by which positions of alignment marks provided on a substrate can be measured accurately. Alignment (or position) sensors may use optical phenomena such as diffraction and interference to obtain position information from alignment marks formed on the substrate. An example of an alignment sensor used in current lithographic apparatus is based on a self-referencing interferometer as described in US6961116. Various enhancements and modifications of the position sensor have been developed, for example as disclosed in US2015261097A1. The contents of all of these publications are incorporated herein by reference.
[0037] A mark, or alignment mark, may comprise a series of bars formed on or in a layer provided on the substrate or formed (directly) in the substrate. The bars may be regularly spaced and act as grating lines so that the mark can be regarded as a diffraction grating with a well-known spatial period (pitch). Depending on the orientation of these grating lines, a mark may be designed to allow measurement of a position along the X axis, or along the Y axis (which is oriented substantially perpendicular to the X axis). A mark comprising bars that are arranged at +45 degrees and/or -45 degrees with respect to both the X- and Y-axes allows for a combined X- and Y- measurement using techniques as described in US2009/195768A, which is incorporated by reference.
[0038] The alignment sensor may scan each mark optically with a spot of radiation to obtain a periodically varying signal, such as a sine wave. The phase of this signal is analyzed, to determine the position of the mark and, hence, of the substrate relative to the alignment sensor, which, in turn, is fixated relative to a reference frame of a lithographic apparatus. So-called coarse and fine marks may be provided, related to different (coarse and fine) mark dimensions, so that the alignment sensor can distinguish between different cycles of the periodic signal, as well as the exact position (phase) within a cycle. Marks of different pitches may also be used for this purpose.
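As a toy illustration of this phase-to-position principle (not the sensor's actual signal processing; the signal model and function name are assumptions), the phase of the scanned signal can be extracted by quadrature demodulation at the expected signal period and converted to a position within one period:

```python
import numpy as np

def aligned_position_from_scan(stage_positions, signal, period):
    """Estimate the mark position (modulo one signal period) from a scanned
    periodic alignment signal by quadrature (lock-in) demodulation."""
    k = 2.0 * np.pi / period
    s = signal - np.mean(signal)                       # remove DC offset
    in_phase = np.sum(s * np.cos(k * stage_positions))
    quadrature = np.sum(s * np.sin(k * stage_positions))
    phase = np.arctan2(quadrature, in_phase)
    return phase / (2.0 * np.pi) * period              # 2*pi of phase = one period

# Example: a signal of 1.6 um period shifted by 12 nm, scanned over 8 um
x = np.linspace(0.0, 8000.0, 2000)                     # stage position in nm
sig = 1.0 + 0.5 * np.cos(2.0 * np.pi * (x - 12.0) / 1600.0)
print(aligned_position_from_scan(x, sig, 1600.0))      # ~12 (nm, modulo one period)
```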
[0039] Measuring the position of the marks may also provide information on a deformation of the substrate on which the marks are provided, for example in the form of a wafer grid. Deformation of the substrate may occur by, for example, electrostatic clamping of the substrate to the substrate table and/or heating of the substrate when the substrate is exposed to radiation.
[0040] Figure 3 is a schematic block diagram of an embodiment of a known alignment sensor AS. Radiation source RSO provides a beam RB of radiation of one or more wavelengths, which is diverted by diverting optics onto a mark, such as mark AM located on substrate W, as an illumination spot SP. In this example the diverting optics comprises a spot mirror SM and an objective lens OL. The illumination spot SP, by which the mark AM is illuminated, may be slightly smaller in diameter than the width of the mark itself.
[0041] Radiation diffracted by the mark AM is collimated (in this example via the objective lens OL) into an information-carrying beam IB. The term “diffracted” is intended to include zero-order diffraction from the mark (which may be referred to as reflection). A self-referencing interferometer SRI, e.g. of the type disclosed in US6961116 mentioned above, interferes the beam IB with itself after which the beam is received by a photodetector PD. Additional optics (not shown) may be included to provide separate beams in case more than one wavelength is created by the radiation source RSO. The photodetector may be a single element, or it may comprise a number of pixels, if desired. The photodetector may comprise a sensor array.
[0042] The diverting optics, which in this example comprises the spot mirror SM, may also serve to block zero order radiation reflected from the mark, so that the information-carrying beam IB comprises only higher order diffracted radiation from the mark AM (this is not essential to the measurement, but improves signal to noise ratios).
[0043] Intensity signals SI are supplied to a processing unit PU. By a combination of optical processing in the block SRI and computational processing in the unit PU, values for X- and Y-position on the substrate relative to a reference frame are output.
[0044] A single measurement of the type illustrated only fixes the position of the mark within a certain range corresponding to one pitch of the mark. Coarser measurement techniques are used in conjunction with this to identify which period of a sine wave is the one containing the marked position. The same process at coarser and/or finer levels is repeated at different wavelengths for increased accuracy and/or for robust detection of the mark irrespective of the materials from which the mark is made, and materials on and/or below which the mark is provided.
[0045] Figure 4 illustrates a schematic of a cross-sectional view of another known alignment apparatus 400. In an example of this embodiment, alignment apparatus 400 may be configured to align a substrate (e.g., substrate W) with respect to a patterning device (e.g., patterning device MA). Alignment apparatus 400 may be further configured to detect positions of alignment marks on the substrate and to align the substrate with respect to the patterning device or other components of lithographic apparatus 100 or 100’ using the detected positions of the alignment marks. Such alignment of the substrate may ensure accurate exposure of one or more patterns on the substrate.
[0046] According to an embodiment, alignment apparatus 400 may include an illumination system 402, a beam splitter 414, an interferometer 426, a detector 428, and a signal analyzer 430, according to an example of this embodiment. Illumination system 402 may be configured to provide an electromagnetic narrow band radiation beam 404 having one or more passbands. In an example, the one or more passbands may be within a spectrum of wavelengths between about 400 nm and about 2.0 µm. In another example, the one or more passbands may be discrete narrow passbands within a spectrum of wavelengths between about 400 nm and about 2.0 µm.
[0047] Beam splitter 414 may be configured to receive radiation beam 404 and direct a radiation sub-beam 415 onto a substrate 420 placed on a stage 422. In one example, the stage 422 is movable along direction 424. Radiation sub-beam 415 may be configured to illuminate an alignment mark or a target 418 located on substrate 420. Alignment mark or target 418 may be coated with a radiation sensitive film in an example of this embodiment. In another example, alignment mark or target 418 may have one hundred and eighty degrees (i.e., 180°) symmetry. That is, when alignment mark or target 418 is rotated 180° about an axis of symmetry perpendicular to a plane of alignment mark or target 418, rotated alignment mark or target 418 may be substantially identical to an unrotated alignment mark or target 418. The target 418 on substrate 420 may be (a) a resist layer grating comprising bars that are formed of solid resist lines, or (b) a product layer grating, or (c) a composite grating stack in an overlay target structure comprising a resist grating overlaid or interleaved on a product layer grating. The bars may alternatively be etched into the substrate.
[0048] Beam splitter 414 may be further configured to receive diffraction radiation beam 419 and direct diffracted radiation sub-beam 429 towards interferometer 426, according to an embodiment.
[0049] In an example embodiment, diffracted radiation sub-beam 429 may be at least a portion of radiation sub-beam 415 that may be reflected from alignment mark or target 418. In an example of this embodiment, interferometer 426 comprises any appropriate set of optical elements, for example, a combination of prisms that may be configured to form two images of alignment mark or target 418 based on the received diffracted radiation sub-beam 429. Interferometer 426 may be further configured to rotate one of the two images with respect to the other of the two images 180° and recombine the rotated and unrotated images interferometrically. In some embodiments, the interferometer 426 can be a self-referencing interferometer (SRI), which is disclosed in US patent No. 6,628,406 (Kreuzer) and is incorporated by reference herein in its entirety.
[0050] In an embodiment, detector 428 may be configured to receive the recombined image via interferometer signal 427 and detect interference as a result of the recombined image when an alignment axis 421 of alignment apparatus 400 passes through a center of symmetry (not shown) of alignment mark or target 418. Such interference may be due to alignment mark or target 418 being 180° symmetrical, and the recombined image interfering constructively or destructively, according to an example embodiment. Based on the detected interference, detector 428 may be further configured to determine a position of the center of symmetry of alignment mark or target 418 and consequently, detect a position of substrate 420. According to an example, alignment axis 421 may be aligned with an optical beam perpendicular to substrate 420 and passing through a center of image rotation interferometer 426. Detector 428 may be further configured to estimate the positions of alignment mark or target 418 by implementing sensor characteristics and interacting with wafer mark process variations.
[0051] Another specific type of metrology sensor, which has both alignment and product/process monitoring metrology applications, has recently been described in European applications EP18195488.4 and EP19150245.9, which are incorporated herein by reference. This describes a metrology device with optimized coherence. More specifically, the metrology device is configured to produce a plurality of spatially incoherent beams of measurement illumination, each of said beams (or both beams of measurement pairs of said beams, each measurement pair corresponding to a measurement direction) having corresponding regions within their cross-section for which the phase relationship between the beams at these regions is known; i.e., there is mutual spatial coherence for the corresponding regions.
[0052] Such a metrology device is able to measure small pitch targets with acceptable (minimal) interference artifacts (speckle) and will also be operable in a dark-field mode. Such a metrology device may be used as a position or alignment sensor for measuring substrate position (e.g., measuring the position of a periodic structure or alignment mark with respect to a fixed reference position). However, the metrology device is also usable for measurement of overlay (e.g., measurement of relative position of periodic structures in different layers, or even the same layer in the case of stitching marks). The metrology device is also able to measure asymmetry in periodic structures, and therefore could be used to measure any parameter which is based on a target asymmetry measurement (e.g., overlay using diffraction based overlay (DBO) techniques or focus using diffraction based focus (DBF) techniques).
[0053] Figure 5 shows a possible implementation of such a metrology device. The metrology device essentially operates as a standard microscope with a novel illumination mode. The metrology device 500 comprises an optical module 505 comprising the main components of the device. An illumination source 510 (which may be located outside the module 505 and optically coupled thereto by a multimode fiber 515) provides a spatially incoherent radiation beam 520 to the optical module 505. Optical components 517 deliver the spatially incoherent radiation beam 520 to a coherent off-axis illumination generator 525. This component is of particular importance to the concepts herein and will be described in greater detail. The coherent off-axis illumination generator
525 generates a plurality (e.g., four) off-axis beams 530 from the spatially incoherent radiation beam 520. The characteristics of these off-axis beams 530 will be described in detail further below. The zeroth order of the illumination generator may be blocked by an illumination zero order block element 575. This zeroth order will only be present for some of the coherent off-axis illumination generator examples described in this document (e.g., phase grating based illumination generators), and therefore may be omitted when such zeroth order illumination is not generated. The off-axis beams 530 are delivered (via optical components 535 and a spot mirror 540) to an (e.g., high NA) objective lens 545. The objective lens focuses the off-axis beams 530 onto a sample (e.g., periodic structure/alignment mark) located on a substrate 550, where they scatter and diffract. The scattered higher diffraction orders 555+, 555- (e.g., +1 and -1 orders respectively), propagate back via the spot mirror 540, and are focused by optical component 560 onto a sensor or camera 565 where they interfere to form an interference pattern. A processor 580 running suitable software can then process the image(s) of the interference pattern captured by camera 565.
[0054] The zeroth order diffracted (specularly reflected) radiation is blocked at a suitable location in the detection branch; e.g., by the spot mirror 540 and/or a separate detection zero-order block element. It should be noted that there is a zeroth order reflection for each of the off-axis illumination beams, i.e. in the current embodiment there are four of these zeroth order reflections in total. As such, the metrology device operates as a “dark field” metrology device.
[0055] In one embodiment, the metrology device may also comprise a pupil imaging branch 542, with corresponding pupil camera 547. There are a number of reasons why pupil imaging may be desirable. By way of a single example, the illumination spot size on the substrate might be tunable. One application of such a tunable illumination spot size is to better implement a pupil metrology mode, as such a mode may benefit from having the illumination spot underfilling the target (to avoid unwanted scattering in overlapping pupil coordinates).
[0056] In an embodiment, a coherence scrambler may be provided such that the incoherent beams may actually be pseudo-spatially incoherent, e.g., generated from a coherent illumination source such as a laser, while undergoing one or more processes to mimic spatial incoherence. This may comprise making the coherent radiation multimode and ensemble averaging different realizations during the integration time of the detector. More specifically, in an embodiment, many (e.g., random) realizations of speckle patterns (which are spatially coherent patterns) are generated with, e.g., a laser and a multimode fiber and/or a rotating diffuser plate. An ensemble average over these random speckle pattern realizations is determined which averages out interference effects and therefore effectively mimics spatial incoherence (the interference is averaged out on the detector plane during its integration time).
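For intuition only (this is not the device's implementation), the numerical sketch below shows why averaging many independent speckle realizations during one detector integration washes out interference: the speckle contrast of the accumulated image falls roughly as the inverse square root of the number of realizations. The flat pupil amplitude and simple FFT propagation are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_detector_image(num_realizations, n=128):
    """Accumulate detector-plane intensity over independent speckle
    realizations (random pupil phases), mimicking spatial incoherence."""
    accumulated = np.zeros((n, n))
    pupil_amplitude = np.ones((n, n))  # idealized flat, fully filled pupil
    for _ in range(num_realizations):
        random_phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
        field = pupil_amplitude * np.exp(1j * random_phase)
        image_field = np.fft.fftshift(np.fft.fft2(field))  # pupil -> detector plane
        accumulated += np.abs(image_field) ** 2
    return accumulated / num_realizations

for m in (1, 16, 256):
    image = averaged_detector_image(m)
    print(m, round(image.std() / image.mean(), 3))  # speckle contrast ~ 1/sqrt(m)
```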
[0057] A main concept of the proposed metrology device is to induce spatial coherence in the measurement illumination only where required. More specifically, spatial coherence is induced between corresponding sets of pupil points in each of the off-axis beams 530. More specifically, a set of pupil points comprises a corresponding single pupil point in each of the off-axis beams, the set of pupil points being mutually spatially coherent, but where each pupil point is incoherent with respect to all other pupil points in the same beam. By optimizing the coherence of the measurement illumination in this manner, it becomes feasible to perform dark-field off-axis illumination on small pitch targets, but with minimal speckle artifacts as each off-axis beam 530 is spatially incoherent.
[0058] Figure 6 shows three pupil images to illustrate the concept. Figure 6(a) shows a first pupil image which relates to pupil plane P1 in Figure 5, and Figures 6(b) and 6(c) each show a second pupil image which relates to pupil plane P2 in Figure 5. Figure 6(a) shows (in cross-section) the spatially incoherent radiation beam 520, and Figures 6(b) and 6(c) show (in cross-section) the off-axis beams 530 generated by coherent off-axis illumination generator 525 in two different embodiments. In each case, the extent of the outer circle 595 corresponds to the maximum detection NA of the microscope objective; this may be, purely by way of an example, 0.95 NA.
[0059] The triangles 600 in each of the pupils indicate a set of pupil points that are spatially coherent with respect to each other. Similarly, the crosses 605 indicate another set of pupil points which are spatially coherent with respect to each other. The triangles are spatially incoherent with respect to the crosses and all other pupil points corresponding to beam propagation. The general principle (in the example shown in Figure 6(b)) is that each set of pupil points which are mutually spatially coherent (each coherent set of points) has identical spacings within the illumination pupil P2 as all other coherent sets of points. As such, in this embodiment, each coherent set of points is a translation within the pupil of all other coherent sets of points.
[0060] In Figure 6(b), the spacing between each pupil point of the first coherent set of points represented by triangles 600 must be equal to the spacing between each pupil point of the coherent set of points represented by crosses 605. ‘Spacing’ in this context is directional, i.e., the set of crosses (second set of points) is not allowed to be rotated with respect to the set of triangles (first set of points). As such, each of the off-axis beams 530 comprises by itself incoherent radiation; however the off-axis beams 530 together comprise identical beams having corresponding sets of points within their cross-section that have a known phase relationship (spatial coherence). It should be noted that it is not necessary for the points of each set of points to be equally spaced (e.g., the spacing between the four triangles 600 in this example is not required to be equal). As such, the off-axis beams 530 do not have to be arranged symmetrically within the pupil.
[0061] Figure 6(c) shows that this basic concept can be extended to providing for a mutual spatial coherence between only the beams corresponding to a single measurement direction where beams 530X correspond to a first direction (X-direction) and beams 530Y correspond to a second direction (Y-direction). In this example, the squares and plus signs each indicate a set of pupil points which correspond to, but are not necessarily spatially coherent with, the sets of pupil points represented by the triangles and crosses. However, the squares are mutually spatially coherent, as are the plus signs, and the squares are a geometric translation in the pupil of the plus signs. As such, in Figure 6(c), the off-axis beams are only pair-wise coherent.
[0062] In this embodiment, the off-axis beams are considered separately by direction, e.g., X direction 530X and Y direction 530Y. The pair of beams 530X which generate the captured X direction diffraction orders need only be coherent with one another (such that pair of points 600X are mutually coherent, as are pair of points 605X). Similarly the pair of beams 530Y which generate the captured Y direction diffraction orders need only be coherent with one another (such that pair of points 600Y are mutually coherent, as are pair of points 605Y). However, there does not need to be coherence between the pairs of points 600X and 600Y, nor between the pairs of points 605X and 605Y. As such there are pairs of coherent points comprised in the pairs of off-axis beams corresponding to each considered measurement direction. As before, for each pair of beams corresponding to a measurement direction, each pair of coherent points is a geometric translation within the pupil of all the other coherent pairs of points.
[0063] Alignment sensors such as those illustrated in Figures 3, 4 or 5, for example, measure the position of an alignment target on a substrate or wafer, by detecting and observing the interference pattern of the diffracted orders diffracted by a grating. Since the light coming from different diffracted orders follows different optical paths in the sensor, a phase difference between orders arises due to the light probing different optical aberrations of the optical system. The effect of this phase difference is a deviation of the detected aligned position from the “true” physical position of the target. The constant term of this variation (both intra-wafer and wafer-to-wafer) can be calibrated out with existing calibration procedures. Problems arise, however, when there is a variation in the intensity of the diffracted light as a function of the diffracted angle. Common causes for this variation are process-induced effects (such as stack thickness variation), which result in a variation of the angular reflectivity of the grating. Such variation results in a variation of the light intensity distribution at the pupil plane or Fourier plane of the objective lens of the system.
[0064] The intensity variation within the pupil results in a variation of the aligned position deviation. This variation represents a source of alignment accuracy error which is not correctable with the current calibration procedures.
[0065] To address this issue, a data-driven approach for error correction is proposed, which is able to better correct such process and/or stack dependent aligned position deviation. In some optional embodiments, it is proposed to use coherence properties and the symmetry of the sensor in determining the correction; two specific such embodiments will be explicitly described, a first relating to a self-referencing interferometer based alignment sensor such as illustrated in Figure 3 and 4 and the second relating to an optimized coherence alignment sensor such as illustrated by Figures 5 and 6.
[0066] The proposal disclosed herein comprises developing a fully scalable correction platform which can be extended to higher orders in the pupil intensity variations. Errors due to a complex redistribution of the intensity within the pupil or any possible inaccuracy in the knowledge of the optical aberrations of the sensor may be overcome by the proposed data-driven approach where a model is trained based on pupil images acquired under the same conditions as the target measurements.
[0067] Figure 7 is a flowchart of a method according to an embodiment. The method will be described in more detail below, but briefly the steps are:
• 700- perform a calibration to determine a sensor term for the optical system, the sensor term being defined for each scattering angle (i.e., for each position in the pupil plane) within an angle range comparable with the angular spread of light propagating through the sensor.
• 710- perform an alignment measurement while obtaining an image of the pupil plane comprising a measured intensity distribution of radiation scattered (i.e., diffracted and/or specularly reflected) from an alignment mark or other structure;
• 720- determine an intensity term from the pupil plane image, the intensity term being defined for each scattering angle (i.e., comprising a processed intensity distribution over each position in the pupil plane) within a given angular range, determined by the angular spread of light propagating through the system;
• 730- determine a correction for the aligned position by application of a model, such as a linear regression model, using the intensity term and sensor term.
[0068] Step 710 comprises obtaining an image of the pupil plane of the objective (light scattered from the target as a function of angular coordinates). This image can be obtained using an external device or an internal branch of the sensor if the sensor is so equipped. For example, such a pupil imaging branch is illustrated in Figure 5 (pupil imaging branch 542, with corresponding pupil camera 547).
[0069] It is proposed (e.g., in step 720) to extract a specific intensity related feature from the camera images of the pupil plane. This feature is referred to as the intensity term, $I_i$, because it is a function of the intensity of the pupil images. The intensity term is a set of scalar numbers, defined individually for pixels (e.g., every pixel) or locations within the diffracted spots in the pupil plane (and therefore for each individual scattering angle of the diffracted radiation). The index i labels these locations (or pixels). The details of the intensity term are described below.
[0070] In step 730, a linear regression model may be assumed in an embodiment, e.g., a model that describes the correction δ for an aligned position as given by the dot product of the intensity term $I_i$ with a sensor term $\phi_i$:
$$\delta = \sum_i I_i \,\phi_i \qquad (1)$$
[0071] The sensor term $\phi_i$ may be calibrated (e.g., in step 700) from a training set of data using a supervised learning approach. The details of the calibration procedure will be described below.
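By way of illustration only, the following sketch shows one way Equation (1) and the calibration of step 700 could be realized, under the assumption that the sensor term is obtained by ridge-regularized linear least squares from training targets whose aligned-position errors are known from a reference measurement; the function names, the placeholder intensity-term reduction and the regularization are assumptions, not part of the disclosed method.

```python
import numpy as np

def intensity_terms(pupil_images):
    """Placeholder for step 720: reduce each pupil image to its intensity
    term I_i (one scalar per pupil location), e.g. per Equation (2), (3) or
    (4). Here the images are simply flattened and normalized."""
    flat = pupil_images.reshape(len(pupil_images), -1).astype(float)
    return flat / flat.sum(axis=1, keepdims=True)

def calibrate_sensor_term(train_pupil_images, train_position_errors, ridge=1e-6):
    """Step 700: fit the sensor term phi_i so that Equation (1),
    delta = sum_i I_i * phi_i, reproduces the known aligned-position errors
    of the training targets (ridge-regularized least squares)."""
    I = intensity_terms(train_pupil_images)          # (n_targets, n_locations)
    normal_matrix = I.T @ I + ridge * np.eye(I.shape[1])
    return np.linalg.solve(normal_matrix, I.T @ np.asarray(train_position_errors))

def position_correction(pupil_image, phi):
    """Step 730: apply Equation (1) to a newly measured pupil image."""
    return float(intensity_terms(pupil_image[None, ...])[0] @ phi)
```

A non-linear model, as mentioned in the next paragraph, could replace the linear solve with any regressor trained on the same intensity-term features.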
[0072] In other embodiments a non-linear model is assumed in step 730.
[0073] There are a number of alternative approaches for defining the intensity term. A first example for the intensity term $I_i$ comprises using the normalized intensity of each pixel of a higher (i.e., not zeroth) diffraction order; e.g., in either of the +1 and the -1 diffracted order spots or any other higher order diffraction spot(s); e.g., +2, -2 orders in the case where other higher-order diffraction is captured in the pupil. Alternatively, or in combination, (e.g., normalized) zeroth order scattered radiation may be used.
[0074] By way of a specific example, and with reference to Figure 6(c), the intensity term may include the intensity distribution of the pupil points inside one of the two spots marked as 530X, or (as an alternative example) one of the two spots marked as 530Y, or any other diffracted or scattered (e.g., specularly reflected) spot that is captured in the pupil. The intensity term computed from spot 530X may include, as an example, the intensities at pupil points 600X and 605X and any other pupil point inside the spot, each intensity being normalized to the total intensity integrated inside the spot.
[0075] As such, for example, the intensity term $I_i$ may be defined as:
$$I_i = \frac{I_i^{(+1)}}{\sum_n I_n^{(+1)}} \qquad (2)$$
where $I_i^{(+1)}$ is the intensity for the ith pixel (of n pixels total) of the +1 diffraction order (although this could equally be the -1 order or any other higher diffraction order, or the zeroth order).
[0076] In such an embodiment, a single diffraction spot can be used. Alternatively, a correction can be determined for both spots (+1 and -1 order) separately and the results of these corrections averaged. As such, a correction δ may be determined from each of the diffraction spots based on a separate consideration of each spot effectively as if they were separate images (e.g., using Equation (1) for each spot separately and averaging the results). This averaging can help average out any processing or alignment mark asymmetry.
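As a minimal sketch of this first intensity-term variant (assuming the diffracted spots have already been located in the pupil image; the `spot_mask` inputs and function names are illustrative, not from the disclosure):

```python
import numpy as np

def intensity_term_single_order(pupil_image, spot_mask):
    """Equation (2): intensity of each pixel of one diffracted spot (+1, -1,
    another higher order or the zeroth order), normalized to the total
    intensity inside that spot. spot_mask is a boolean array selecting the
    spot's pixels in the pupil image."""
    spot = pupil_image[spot_mask].astype(float)
    return spot / spot.sum()

def averaged_correction(pupil_image, mask_plus, mask_minus, phi):
    """Determine a correction from the +1 and -1 spots separately (Equation
    (1) applied per spot) and average the results, as in paragraph [0076].
    Assumes both masks select the same number of pixels, ordered so that the
    same sensor term phi applies to either spot."""
    delta_plus = intensity_term_single_order(pupil_image, mask_plus) @ phi
    delta_minus = intensity_term_single_order(pupil_image, mask_minus) @ phi
    return 0.5 * (delta_plus + delta_minus)
```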
[0077] In a second embodiment, the intensity of a pixel at position $+r_i$ in the +1 diffracted spot is averaged (using any kind of average: e.g., linear, RMS) with the intensity of the pixel at a corresponding position $-r_i$ in the -1 diffracted spot. This method is more robust against effects such as tilts and grating asymmetry. A corresponding position may comprise a symmetrically opposite position in the pupil with respect to an axis of symmetry within the pupil (the pupil images will be nominally symmetrical, although in reality asymmetry may be observed due to mark asymmetry etc.).
[0078] In a further embodiment, the sensor symmetry can be explicitly taken into account and used in determining an intensity feature. In this context, sensor symmetry can be understood as describing the relationship that governs which pairs of pixels interfere to generate the fringe pattern. From the sensor design, such pairs of coherent pixels can be identified in the pupil plane. The actual equations for identifying the interfering pairs depend on the type of sensor.
[0079] For an embodiment of an alignment sensor, including but not limited to an SRI based alignment sensor as illustrated in Figure 3 or 4 (or an off-axis illumination equivalent), the intensity metric may comprise a more specific example of the previous embodiment, using a geometric mean. More specifically the intensity metric of such an embodiment may take the form:
$$I_i = \frac{\sqrt{I^{(+1)}(+r_i)\, I^{(-1)}(-r_i)}}{\sum_n \sqrt{I^{(+1)}(+r_n)\, I^{(-1)}(-r_n)}} \qquad (3)$$
where $I^{(+1)}(+r_i)$ is the intensity at location $+r_i$ in the +1 diffracted spot and $I^{(-1)}(-r_i)$ is the intensity at location $-r_i$ in the -1 diffracted spot. The location $-r_i$ may comprise a symmetrically opposite position to that of location $+r_i$ in the pupil with respect to an axis of symmetry within the pupil. A similar intensity term may be defined for any pair of opposite diffracted order spots (-2, +2) which are captured in the pupil.
[0080] For a sensor such as illustrated in Figures 5 and 6, the points which mutually interfere are those at the same relative position with respect to the positions of the chief rays $r^{(+1)}$ and $r^{(-1)}$. This can be appreciated by reference to Figure 6(c). The chief rays can be identified from the specific sensor optics; for example (considering only the X direction for now) the chief rays CR may be located at the respective centers of off-axis beams 530X. Each pair of coherent pixels comprises those pixels at the same position within their respective spots 530X with respect to the spot center; e.g., the two pixels labeled 600X comprise one such coherent pair, as do the two pixels labeled 605X. The same analysis may be made for the Y direction.
[0081] More specifically, for such an embodiment, the intensity term may be defined as:
$$I_i = \frac{\sqrt{I^{(+1)}\left(r^{(+1)} + r_i\right)\, I^{(-1)}\left(r^{(-1)} + r_i\right)}}{\sum_n \sqrt{I^{(+1)}\left(r^{(+1)} + r_n\right)\, I^{(-1)}\left(r^{(-1)} + r_n\right)}} \qquad (4)$$
where $I^{(+1)}(r^{(+1)} + r_i)$ and $I^{(-1)}(r^{(-1)} + r_i)$ describe the intensities of such a pair of coherent pixels; i.e., $I^{(+1)}(r^{(+1)} + r_i)$ is the intensity of a pixel displaced by $+r_i$ with respect to chief ray $r^{(+1)}$ in the +1 diffraction order and $I^{(-1)}(r^{(-1)} + r_i)$ is the intensity of a pixel displaced by $+r_i$ with respect to chief ray $r^{(-1)}$ in the -1 diffraction order.
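The two paired-pixel variants of Equations (3) and (4) could be sketched as follows, assuming the spot intensities, chief-ray pixel coordinates and pixel offsets have already been extracted from the pupil image (all inputs and names here are illustrative):

```python
import numpy as np

def intensity_term_symmetric_pairs(i_plus, i_minus):
    """Equation (3): geometric mean of the intensity at +r_i in the +1 spot
    and at the mirrored location -r_i in the -1 spot, normalized over the
    spot. i_plus[i] and i_minus[i] are assumed to be sampled at those
    symmetrically opposite pupil locations."""
    paired = np.sqrt(i_plus.astype(float) * i_minus.astype(float))
    return paired / paired.sum()

def intensity_term_chief_ray_pairs(pupil_image, chief_plus, chief_minus, offsets):
    """Equation (4): pair pixels at the same offset r_i from the chief ray of
    each spot (rather than at mirrored pupil positions) and take their
    geometric mean, normalized over the spot. chief_plus/chief_minus are
    (row, col) pixel coordinates of the chief rays; offsets is an (N, 2)
    integer array of displacements r_i."""
    rows_p, cols_p = chief_plus[0] + offsets[:, 0], chief_plus[1] + offsets[:, 1]
    rows_m, cols_m = chief_minus[0] + offsets[:, 0], chief_minus[1] + offsets[:, 1]
    paired = np.sqrt(pupil_image[rows_p, cols_p].astype(float)
                     * pupil_image[rows_m, cols_m].astype(float))
    return paired / paired.sum()
```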
[0082] Figure 8 conceptually illustrates each of the above examples. Each plot is a simplified 1D intensity plot across a pupil (with pupil coordinates NAi in one dimension across a linear axis passing through the center of symmetry of the pupil). In each case, the first column corresponds to the -1 diffraction order, the second column corresponds to the +1 diffraction order and the third column corresponds to the resultant intensity term $I_i$. The first row describes an embodiment using Equation (2), based on only the +1 diffraction order. Three specific pupil coordinates NAi are signified by the square, triangle and cross in each column. The second row describes an embodiment using Equation (3), averaged per pair of symmetrically opposite pixels. As such the pixel represented by the square, triangle and cross in the -1 order column will be averaged with the corresponding pixel in the second +1 order column. Therefore each pixel in the intensity metric column is essentially at the same level as each of its corresponding pixels in the -1 order and +1 order column (assuming a largely symmetrical pupil). The third row describes an embodiment using Equation (4). Note here that the corresponding pixels are no longer symmetrical in the pupil (around an axis of symmetry dividing the -1 order column and +1 order column). Instead the “cross” pixel is to the right of the “triangle” pixel (the latter corresponding to the chief ray) for both +1 and -1 orders, and similarly the “square” pixel is to the left of the “triangle” pixel for both orders (reference can be made again to Figure 6 to visualize why this is so). Again, the result of averaging the corresponding pairs of pixels is the intensity metric shown in the third row.
[0083] In all the above examples, the concepts are extendable to diffraction orders higher than the first diffraction order, and any reference to +1 and/or -1 diffraction orders can be taken to refer to any higher diffraction order(s). Also, while Equations (2), (3) and (4) all describe a normalized intensity metric, this normalization is not strictly necessary.
[0084] In an embodiment, a pupil measurement performed at a first wavelength may be used to correct an aligned position measurement performed at a second wavelength. More specifically, step 710 may comprise taking N measurements for N colors, where each measurement comprises an image plane image (e.g., a diffraction fringe image) and a pupil image for each color, and using all N pupil images measured at the different colors to correct all N measured positions for the different colors. In a similar manner, Y direction diffraction orders in the pupil can be used to correct X direction positions and vice versa; and/or pupils measured at a first polarization state can be used to correct positions measured at a second polarization state.
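Purely as an illustrative sketch of this multi-colour idea (the array layout and the sign convention of the correction are assumptions, not specified in the text), the intensity terms of all N colours may be concatenated into one feature vector, with one calibrated sensor-term row per colour:

```python
# Sketch only: correct all N per-colour aligned positions using the pupil
# intensity terms of all N colours (an Equation (1)-style model per colour).
import numpy as np

def correct_positions_all_colours(positions, intensity_terms, phi):
    """positions:       (N,) measured aligned positions, one per colour.
    intensity_terms: list of N flattened intensity terms (one per colour).
    phi:             (N, N * n_pix) calibrated sensor terms, one row per colour."""
    features = np.concatenate([t.ravel() for t in intensity_terms])
    corrections = phi @ features      # one correction delta per colour
    return positions - corrections   # sign convention assumed for illustration
```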
[0085] The calibration step 700 may be performed using a supervised, semi-supervised or unsupervised learning approach based on a set of calibration pupil images. These pupil images should be diverse (e.g., have differences in the pupil plane intensity) in a manner comparable with the actual pupil images recorded by the pupil camera on product targets.
[0086] Any suitable method for obtaining such diverse pupil images may be used and the specifics of such a method are not important. By way of example, a non-exhaustive list of embodiments for obtaining the calibration images may comprise (one or any combination of):
a) Measuring a set of product wafers having process-induced variations. This may be as part of a ‘recipe setup’ phase or in a ‘shadow mode’, i.e., measuring while running production and continuously checking whether the pupil metrology model parameters could be improved.
b) Measuring a set of wafers that have been specifically fabricated having controlled and/or uncontrolled stack thickness variation.
c) Measuring a set of targets on the same wafer having controlled and/or uncontrolled intra-wafer process variation.
d) Using a rotating / moving aperture to filter / modify the angular intensity of the illumination light, and/or to filter / modify the light intensity in any plane conjugate to the objective pupil plane.
e) Using a spatial light modulator or a similar device for the same purpose as embodiment d).
f) Modulating the light source intensity in synchronization with a scanning mechanism for the same purposes as embodiment d). This can be done, for example, by using a coherence scrambler (such as may be comprised in the device illustrated in Figure 5) and modulating the intensity of the laser in synchronization with the scan mirror.
g) Using a coherence scrambler to generate a set of speckle patterns in the illumination intensity. This can be done by keeping the scan mirror still so as to focus light on a fixed position on the fiber core. This will generate a speckle pattern at the other end of the fiber, which will be imaged on the pupil plane. A different position of the input fiber facet will result in a different speckle pattern.
h) Measuring a mark (e.g., a fiducial mark) using a range of colors. The light will go through a slightly different part of the pupil for each color. This will result in a different measured position as a function of color, and therefore as a function of position in the pupil. This information can be used for pupil metrology.
i) Measuring a set of targets with controlled design differences (e.g., different duty cycle, subsegmentation) to induce different angular diffraction profiles.
j) Measuring a set of gratings having different orientations and pitches, in a manner such that the corresponding spots in the pupil will overlap in a region of interest. In this way, for every pixel in this region of the pupil there will be multiple measurements. For some of these measurements, the pixel intensity will be zero; for others, it will be some non-zero value. This will provide enough variation in the pupil intensities for the calibration.
k) Using an external device, located on the wafer or fiducial, which enables modification of the angular diffraction intensity of a grating in a controlled manner.
l) If wafers have a process variation over the wafer, e.g., a layer thickness variation gradient from center to edge of the wafer, and if the layer thickness also changes (uniformly over the whole wafer) between wafers, it becomes possible to determine whether pupil metrology is well calibrated. If pupil metrology is not well calibrated, rings will appear in the measured alignment grids; these rings will disappear when the calibration is correct.
m) While doing recipe setup for a new process layer, using the calibration map determined for a previous (preferably similar) process layer as a starting point / initial guess, and then improving on this using any of the other methods described in this paragraph.
n) Estimating an aberration map (e.g., as an initial guess) based on previously built and calibrated sensors, possibly updated by taking into account a measured aberration map for the new sensor.
[0087] As a supervised approach is proposed in some embodiments, each calibration image may have a corresponding “ground truth”; i.e., the actual or known correction for the aligned position should be known. How this ground truth correction is obtained largely depends on the way in which the images have been collected. For embodiments a), b), c), i), j) the actual correction can be obtained by exposing the alignment mark and measuring the resultant overlay; e.g., using a metrology apparatus (e.g., a scatterometry based metrology apparatus such as typically used for overlay metrology). For embodiments d), e), f), g), k) the ground truth is automatically given, as with these examples the same targets are being imaged at the same position. For embodiments c), i), and j) the accuracy in the nominal position of the targets on the wafer might be enough to provide a ground truth. Embodiments h) and l) do not require any ground truth. These are intended to be non-exhaustive examples. Any technique in use, in development, or to be developed, which can quantify the aligned position variation can be used to obtain the ground truths.
[0088] As an optional step, the dimensionality of the problem can be reduced by projecting the intensity term on a suitable basis, for example Zernike polynomials. In this way, all vectors are reduced from the size of the image pixels to the size of the chosen basis. The basis can be optimized so that the dimensionality is reduced as much as possible, e.g., using a principal component analysis method or other suitable method.
[0089] Formally, this means the intensity term $\bar{I}_n$ becomes:
$$\bar{I}_n = \sum_i \bar{I}_i Z_i^{(n)} \qquad (5)$$
where $Z_i^{(n)}$ is the value of the nth element of the basis (e.g., Zernike polynomials) at pixel i. In this case, Equation (1) becomes:
$$\delta = \sum_n \bar{I}_n \phi_n \qquad (6)$$
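A short sketch of this projection step follows, assuming a precomputed basis matrix Z of shape (n_basis, n_pixels), e.g., Zernike polynomials sampled on the pupil grid; the names are illustrative choices, not from the disclosure.

```python
# Sketch only: project the per-pixel intensity term onto a reduced basis
# (Equation (5)) and evaluate the correction with the reduced sensor term
# (Equation (6)).
import numpy as np

def project_intensity_term(I_bar, Z):
    """I_bar: (n_pixels,) intensity term; Z: (n_basis, n_pixels) basis samples."""
    return Z @ I_bar                              # I_n = sum_i I_i * Z_i^(n)

def correction_from_reduced_term(I_bar_reduced, phi_reduced):
    return float(I_bar_reduced @ phi_reduced)     # delta = sum_n I_n * phi_n
```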
[0090] The linear problem in Equation (1) or (6) may be inverted to calibrate the sensor term $\phi_i$ from the intensity term of each calibration image and the corresponding “ground truth” $\delta$. In a possible embodiment, this inversion may be realized using a least-squares fit. Any other linear fitting algorithm could also be used.
[0091] Once the calibrated sensor term $\phi_i$ is fitted from the calibration data, it can be used to compute the correction for any subsequent image, using Equation (1) or (6).
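For illustration only, the least-squares calibration and its subsequent use might look as sketched below; the use of NumPy's lstsq is one possible choice of linear fitting algorithm, and regularized or other fitting schemes could equally be used.

```python
# Sketch only: calibrate the sensor term from calibration intensity terms and
# ground-truth corrections (Equation (1)/(6) inverted by least squares), then
# apply it to a subsequent measurement.
import numpy as np

def calibrate_sensor_term(intensity_terms, ground_truths):
    """intensity_terms: (n_images, n_features) stacked intensity-term vectors
    (pixels or basis coefficients); ground_truths: (n_images,) known corrections."""
    phi, *_ = np.linalg.lstsq(intensity_terms, ground_truths, rcond=None)
    return phi

def apply_correction(intensity_term, phi):
    return float(intensity_term @ phi)  # correction delta for a subsequent image
```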
[0092] It is expected that the concepts disclosed herein will find particular application for a sensor of the type illustrated in Figure 5, as this has a relatively large spot size in the pupil plane and a correspondingly higher-order intensity variation. However, they may be applicable to any suitable (image based) alignment sensor, particularly for difficult stacks which show greater intensity variation.
[0093] The description above has concentrated on diffraction-based alignment applications. However, it will be appreciated that the concepts described herein are also equally applicable to image-based alignment systems such as described in the prior published applications US2008043212A1 (Shibazaki) and US2011013165A1 (Kaneko). The disclosure of those prior applications is hereby incorporated herein by reference. In each case a pupil image can be obtained from a pupil plane of the image-based system and an intensity term determined from the pupil image using, for example, an intensity term similar to Equation (2), although the pixels used will not be those of a diffraction order but of any part of the pupil (or the whole pupil) comprising scattered radiation. Equation (1) can then be applied in the same manner as described, with a suitably calibrated sensor term.
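By analogy (a sketch only, not taken from the cited applications), for an image-based system the intensity term may simply be the normalized intensity over whichever pupil region contains scattered radiation:

```python
# Sketch only: normalized whole-pupil (or masked-region) intensity term for an
# image-based alignment system.
import numpy as np

def intensity_term_whole_pupil(pupil, mask=None):
    region = pupil.ravel() if mask is None else pupil[mask]
    return region / region.sum()
```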
[0094] The description above has concentrated on alignment applications for positioning a substrate (e.g., within a lithography apparatus/scanner, metrology apparatus or any other processing apparatus). In such applications, the target is commonly referred to as an alignment mark, and the metrology device referred to as an alignment sensor (such as illustrated in Figure 3). It is to be appreciated that the concepts disclosed herein are applicable to any optical metrology application which suffers from stack/processing dependent errors. This may comprise overlay or focus metrology (or metrology of another parameter of interest), and therefore the measurement of overlay or focus targets (whether dedicated targets or target areas of product structures). In fact, the device illustrated in Figure 5 has many applications including inter alia overlay and focus metrology. The skilled person will be able to readily adapt the teaching above for such other applications. For example, any reference to an alignment position may be substituted with an overlay/focus value or value for intensity asymmetry (or any other parameter which shows an undesired stack dependency).
[0095] While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described.
[0096] Although specific reference may have been made above to the use of embodiments of the invention in the context of optical lithography, it will be appreciated that the invention may be used in other applications, for example imprint lithography, and where the context allows, is not limited to optical lithography. In imprint lithography a topography in a patterning device defines the pattern created on a substrate. The topography of the patterning device may be pressed into a layer of resist supplied to the substrate whereupon the resist is cured by applying electromagnetic radiation, heat, pressure or a combination thereof. The patterning device is moved out of the resist leaving a pattern in it after the resist is cured.
[0097] The terms “radiation” and “beam” used herein encompass all types of electromagnetic radiation, including ultraviolet (UV) radiation (e.g., having a wavelength of or about 365, 355, 248, 193, 157 or 126 nm) and extreme ultra-violet (EUV) radiation (e.g., having a wavelength in the range of 1-100 nm), as well as particle beams, such as ion beams or electron beams.
[0098] The terms “lens” and “objective”, where the context allows, may refer to any one or combination of various types of optical components, including refractive, reflective, magnetic, electromagnetic and electrostatic optical components. Reflective components are likely to be used in an apparatus operating in the UV and/or EUV ranges.
[0099] The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. Other aspects of the invention are set out in the following numbered clauses.
1. A method of metrology comprising:
obtaining a pupil image comprising a measured intensity distribution in a pupil plane relating to scattered radiation resultant from a measurement of a structure;
determining an intensity term from the pupil image, the intensity term comprising a processed intensity distribution in the pupil plane relating to the measurement of the structure; and determining a measurement value or correction therefor using the intensity term and a sensor term relating to sensor optics used to perform said measurement.
2. A method according to clause 1, wherein the method of metrology comprises a method of position metrology in aligning an object, and said measurement value comprises an alignment value.
3. A method according to clause 1 or 2, wherein the step of determining a measurement value or correction therefor comprises applying a linear regression model to the intensity term and the sensor term.
4. A method as stated in any preceding clause, wherein the sensor term relates to aberration in the sensor optics as a function of position in the pupil plane.
5. A method as stated in any preceding clause, wherein the intensity term comprises an intensity distribution within the pupil plane relating to a single higher diffraction order.
6. A method according to clause 5, wherein the steps of determining an intensity term and determining a measurement value or correction therefor are performed separately for each diffraction order of a corresponding pair of higher diffraction orders, the step of determining a measurement value or correction therefor further comprising averaging the results corresponding to each respective diffraction order to obtain said measurement value or correction therefor.
7. A method as stated in any of clauses 1 to 4, wherein the intensity term comprises an intensity distribution within the pupil plane describing a distribution of averages of pairs of corresponding pixels, said pairs of corresponding pixels comprising a first pixel in a first diffraction order of a corresponding pair of higher diffraction orders and a corresponding second pixel in the other diffraction order of the corresponding pair of higher diffraction orders.
8. A method according to clause 7, wherein pairs of corresponding pixels comprise pixels in symmetrically opposite locations within the pupil plane.
9. A method according to clause 7, wherein the step of determining an intensity term comprises taking into account the symmetry of the sensor.
10. A method according to clause 9, wherein pairs of corresponding pixels comprise pairs of pixels which interfere to generate a fringe pattern imaged in performing said measurement, said measurement value being derived from said fringe pattern.
11. A method according to clause 10, wherein pairs of corresponding pixels comprise pixels in symmetrically opposite locations within the pupil plane.
12. A method according to clause 10, wherein pairs of corresponding pixels comprise pairs of pixels having a corresponding displacement within the pupil plane with respect to respective chief rays.
13. A method as stated in any preceding clause, wherein said intensity term is normalized.
14. A method as stated in any preceding clause, comprising projecting the intensity term on a suitable basis to reduce the dimensionality of the determining of the measurement value or correction therefor.
15. A method according to clause 14, wherein the suitable basis comprises Zernike polynomials.
16. A method as stated in any preceding clause, comprising an initial calibration step to calibrate said sensor term for the sensor optics.
17. A method according to clause 16, wherein the calibration step comprises: obtaining diverse calibration pupil images using said sensor optics; and performing an inversion to calibrate the sensor term from an intensity term derived from each calibration image and the corresponding known measurement values.
18. A method according to clause 17, wherein the obtained diverse calibration pupil images comprise respective known measurement values or known correction therefor.
19. A method as stated in any preceding clause, comprising performing said measurement to obtain said pupil image and measurement value.
20. A method as stated in any preceding clause, wherein the measurement value and the pupil image each relate to different respective wavelengths, and/or polarizations of measurement radiation and/or repetition directions of a repeating pattern of said structure.
21. A computer program comprising computer readable instructions operable to perform the method of any preceding clause.
22. A processor and associated storage medium, said storage medium comprising the computer program of clause 21 such that said processor is operable to perform the method of any of clauses 1 to 20.
23. A metrology device comprising the processor and associated storage medium of clause 22 so as to be operable to perform the method of any of clauses 1 to 20.
24. A lithographic apparatus comprising the metrology device of clause 23.
25. A lithographic apparatus according to clause 24, comprising:
a patterning device support for supporting a patterning device;
a substrate support for supporting a substrate; and wherein the metrology device is operable to determine an aligned position for one or both of the patterning device support and substrate support.

Claims (2)

1. A device adapted to illuminate a substrate.
NL2024142A 2019-11-01 2019-11-01 Alignment method and associated alignment and lithographic apparatuses NL2024142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2024142A NL2024142A (en) 2019-11-01 2019-11-01 Alignment method and associated alignment and lithographic apparatuses

Publications (1)

Publication Number Publication Date
NL2024142A true NL2024142A (en) 2019-11-25

Family ID: 68652284
