CN115546114A - Focusing method for critical dimension measurement

Info

Publication number
CN115546114A
Authority
CN
China
Prior art keywords
camera
focus
value
focusing
image
Legal status
Granted
Application number
CN202211129197.XA
Other languages
Chinese (zh)
Other versions
CN115546114B (en)
Inventor
田东卫
温任华
Current Assignee
Meijie Photoelectric Technology Shanghai Co ltd
Original Assignee
Meijie Photoelectric Technology Shanghai Co ltd
Application filed by Meijie Photoelectric Technology Shanghai Co ltd
Priority to CN202211129197.XA
Publication of CN115546114A
Application granted
Publication of CN115546114B
Legal status: Active

Classifications

    • G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection; G06T7/0004 Industrial image inspection
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality; G06T2207/10004 Still image, photographic image
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/30 Subject of image, context of image processing; G06T2207/30108 Industrial image inspection; G06T2207/30148 Semiconductor, IC, wafer

Abstract

The invention relates to a focusing method for critical dimension measurement. In this focusing method, before the critical dimension on the wafer is measured, focusing is performed so that the wafer reaches the focal plane of the camera: a second-order curve is fitted from the position data of the camera movement and the acquired image sharpness data, and the vertex coordinate of the second-order curve is calculated to obtain the position at which the image is sharpest. Fast, accurate and smooth focusing is thereby achieved, the focusing state of the region of interest can be reflected in real time, and it can be seen whether the image has reached its sharpest.

Description

Focusing method for critical dimension measurement
Technical Field
The present invention relates generally to the field of integrated circuit critical dimension measurement, and more particularly to a focusing method, or focusing technique, for critical dimension measurement.
Background
With the development of integrated circuit technology, semiconductor devices and processes have become increasingly complex. To guarantee the accuracy of each step in the semiconductor manufacturing flow, measuring the dimensions of semiconductor structures is a necessary step. CD-SEM is often used as the measurement tool; as an alternative, optical CD measurement can detect not only the critical dimension of a pattern such as photoresist, but also the relative dimensions of the pattern's cross-sectional profile. Whether the measurement uses optical critical dimension techniques, scanning electron microscopy, or another method that provides dimensional information about semiconductor wafers, alignment to the focal plane is involved.
Critical dimension measurement depends heavily on how the measured object is imaged, that is, on whether the image is sharp; if only a rough, blurred image of the object is available, the critical dimension measurement will necessarily deviate. The challenge is how to capture the critical dimension finely. In the prior art, imaging is often tuned only by coarsely adjusting the illumination; the scanning electron microscope image then becomes blurred, an accurate image cannot be obtained, and measurement cannot be performed. In other cases the scanning electron microscope image looks sharp when viewed but has not actually reached its best sharpness.
In the semiconductor industry, the fields of metrology and lithographic apparatus both involve auto-focusing. The auto-focus system of measurement equipment is a key technology that affects measurement performance: the focusing speed affects the throughput of the wafer production line, and the focusing accuracy affects the quality of the whole product. If the focusing accuracy is not high, mass-produced products are rejected as out of specification. How to guarantee highly precise focusing, and thereby obtain accurate images of the devices on the wafer, is a problem to be solved.
In order to ensure that critical dimensions meet their intended values, for example so that circuits do not improperly overlap or interact with each other, design rules define constraints such as the allowable distance between devices and interconnect lines and the allowable line widths. Such design-rule limits define critical ranges of line and space dimensions, such as the width of a line or of a space permitted in the manufactured circuit. A dimensional error indicates some instability in a critical part of the semiconductor process. Dimensional errors can arise from many sources, such as lens bending or aberrations in the optical system, mechanical causes, non-uniform thickness of the chemical or anti-reflective resist, or supplying the wrong energy, for example in the exposure radiation. It is therefore desirable to ensure that critical dimensions comply with the predetermined specifications.
Beyond ambiguities of this kind in metrology, the most demanding requirement in metrology is the precision of the image. The problem is how to guarantee that the image remains fine; otherwise subsequent attempts to improve the manufacturing process and correct semiconductor process offsets proceed without any reliable basis. The present application proposes the following embodiments to address these drawbacks.
It should be noted that the above background is provided only to give a clear and complete description of the technical solutions of the present application and to aid the understanding of those skilled in the art. The present application should not be considered limited to this particular application scenario merely because these solutions are set forth in the background section of the present application.
Disclosure of Invention
The application provides a focusing method for critical dimension measurement, wherein the method comprises the following steps:
before the critical dimension on the wafer is measured, focusing is performed so that the wafer reaches the focal plane of the camera: a second-order curve is fitted from the position data of the camera movement and the acquired image sharpness data, and the vertex coordinate of the second-order curve is calculated to obtain the position at which the image is sharpest.
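Restating the fit in formulas (an illustrative summary added for clarity; the symbol names below are chosen here and do not appear in the application): each sample pairs the camera's displacement from the focusing start point with the corresponding change in image sharpness, a parabola is fitted through these samples, and its vertex abscissa gives the sharpest position.

    \Delta z_i = z_i - z_{\mathrm{start}}, \qquad \Delta S_i = S_i - S_{\mathrm{start}}
    \Delta S \approx a\,\Delta z^{2} + b\,\Delta z + c \quad \text{(least-squares fit over the samples)}
    \Delta z^{*} = -\frac{b}{2a}, \qquad z_{\mathrm{best}} = z_{\mathrm{start}} + \Delta z^{*} = z_{\mathrm{start}} - \frac{b}{2a}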
The method described above, wherein: the position of a camera equipped with the microscope is repeatedly adjusted in the vertical-axis direction, the start position and the initial image sharpness of the focusing start point are recorded, and the real-time position and the real-time image sharpness of the camera are recorded after each position adjustment; the position data comprise several groups of position differences between the real-time positions and the start position, and the image sharpness data comprise several groups of sharpness differences between the real-time image sharpness values and the initial image sharpness.
The method described above, wherein: after each position adjustment, the position difference and the sharpness difference of the camera at the same position are taken together as the abscissa value and the ordinate value of one point on the second-order curve.
The method described above, wherein: when the real-time image sharpness and the initial image sharpness are calculated, an energy-gradient function or a Laplacian function is used as the sharpness evaluation function.
The method described above, wherein: if the absolute value of any position difference exceeds a specified travel value, the current position adjustment is ended, for example by exiting the position-adjustment loop currently being executed.
The method described above, wherein: a maximum number of adjustments on the vertical axis is specified, and the actual number of times the camera repeatedly adjusts its position in the vertical-axis direction is required not to exceed this maximum.
The method described above, wherein: the sharpest image position is the vertex coordinate of the second-order curve plus the start position of the focusing start point. The vertex coordinate referred to here may be understood simply as the abscissa of the vertex.
The method described above, wherein: if the coefficient of the quadratic term of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than a defined maximum focusing travel, focusing is considered successful.
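A short derivation, added here as an explanatory note rather than as wording from the application, records why these three conditions identify a usable focus for the fitted curve y = ax² + bx + c (x the position change, y the sharpness change):

    \frac{d}{dx}\bigl(ax^{2} + bx + c\bigr) = 2ax + b = 0 \;\Rightarrow\; x^{*} = -\frac{b}{2a}
    a < 0 \ \text{(the extremum is a sharpness maximum, not a minimum)}, \qquad 0 < -\frac{b}{2a} < \mathrm{AutoFocusTravel} \ \text{(the maximum lies inside the scanned stroke)}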
The method described above, wherein: if any one of the following is not satisfied, namely the coefficient of the quadratic term of the second-order curve being less than zero, the vertex coordinate being greater than zero, and the vertex coordinate being less than the defined maximum focusing travel, then the work stage must first be moved in the vertical-axis direction to search for the focus. The work stage referred to here generally includes the microscope, the camera, and so on.
The method described above, wherein: during the stage in which the camera position is repeatedly adjusted, the difference between every two adjacent sharpness differences obtained from successive position adjustments is taken; a variable term is defined that changes as the number of position adjustments increases, the current value of the variable term being equal to its previous value plus the current difference result; it is then judged whether the variable term is less than zero;
if so, the work stage is moved up by a distance and focusing is attempted again;
if not, the work stage is moved down by a distance and focusing is attempted again.
The method described above, wherein: the distance by which the work stage is moved up relative to the focusing start position is equal to one half of the specified travel value divided by the current focusing attempt count.
The method described above, wherein: the distance by which the work stage is moved down relative to the focusing start position is equal to one half of the specified travel value, or approximately one half of the specified travel value.
The method described above, wherein: the relative distance by which the camera is moved to the focus is the vertex coordinate of the second-order curve minus the current position of the camera. The vertex coordinate referred to here may be understood simply as the abscissa of the vertex.
The method described above, wherein: multiple focusing attempts are performed, and in any single focusing attempt the camera repeatedly adjusts its position in the vertical-axis direction.
The method described above, wherein: before the camera position is repeatedly adjusted, the microscope lens is brought closer to the wafer by three quarters of a specified travel value. Three quarters is given here as one example of a predetermined ratio.
The method described above, wherein: a stepping motor drives the camera to move in the vertical-axis direction, and a control unit controls the stepping motor and thus the movement of the camera;
the control unit is also used to fit the second-order curve and to calculate the vertex coordinate.
The present application further provides another focusing method for critical dimension measurement, wherein:
a plurality of auto-focus attempts is performed before the critical dimension on the wafer is measured;
in each attempt: the position of the camera is repeatedly adjusted in the vertical-axis direction to capture the camera's movement position data and the corresponding image sharpness data;
a second-order curve is fitted from the position data and the image sharpness data;
it is judged whether the second-order curve meets preset conditions; if so, focusing is determined to be successful; if not, the work stage is moved along the vertical axis to search for the focus. The work stage referred to here generally includes the microscope, the camera, and so on.
The method described above, wherein: in each attempt: the position of the camera is repeatedly adjusted in the vertical-axis direction, the start position and the initial image sharpness of the focusing start point are recorded, and the real-time position and the real-time image sharpness of the camera are recorded after each position adjustment; the position data comprise several groups of position differences between the real-time positions and the start position, and the image sharpness data comprise several groups of sharpness differences between the real-time image sharpness values and the initial image sharpness.
The method described above, wherein: after each position adjustment, the position difference and the sharpness difference of the camera at the same position are taken together as the abscissa value and the ordinate value of one point on the second-order curve.
The method described above, wherein: after each position adjustment of the camera, if the absolute value of any position difference exceeds a specified travel value, the current position adjustment is ended and the loop of repeatedly adjusting the camera position is exited.
The method described above, wherein: in each attempt: a maximum number of adjustments on the vertical axis is specified, and the actual number of times the camera repeatedly adjusts its position in the vertical-axis direction is required not to exceed this maximum.
The method described above, wherein: the sharpest image position is the vertex coordinate of the second-order curve plus the start position of the focusing start point.
The method described above, wherein: in each attempt: the preset conditions comprise the coefficient of the quadratic term of the second-order curve being less than zero, the vertex coordinate being greater than zero, and the vertex coordinate being less than a defined maximum focusing travel; if all of the preset conditions are met, focusing is considered successful.
The method described above, wherein: in each attempt: the preset conditions comprise the coefficient of the quadratic term of the second-order curve being less than zero, the vertex coordinate being greater than zero, and the vertex coordinate being less than a defined maximum focusing travel; if any one of the preset conditions is not met, the work stage is moved along the vertical axis to search for the focus.
The method described above, wherein: during the stage in which the camera position is repeatedly adjusted, the difference between every two adjacent sharpness differences obtained from successive position adjustments is taken; a variable term is defined that changes as the number of position adjustments increases, the current value of the variable term being equal to its previous value plus the current difference result; it is then judged whether the variable term is less than zero;
if so, the work stage is moved up by a distance and auto-focusing is attempted again;
if not, the work stage is moved down by a distance and auto-focusing is attempted again.
The method described above, wherein: the distance by which the work stage is moved up relative to the focusing start position is equal to one half of the specified travel value divided by the current focusing attempt count.
The method described above, wherein: the distance by which the work stage is moved down relative to the focusing start position is equal to one half of the specified travel value.
The method described above, wherein: the relative distance by which the camera is moved to the focus is the vertex coordinate of the second-order curve minus the current position of the camera.
The method described above, wherein: in each attempt: before the camera position is repeatedly adjusted, the microscope lens is brought closer to the wafer by a predetermined fraction of a specified travel value.
It should be noted that when the critical dimension of a wafer is measured, images taken at different positions are not necessarily in the focal plane, which causes large errors in the measured values; a conventional microscope seeks the focal plane by, for example, repeated manual focusing and trimming of the working distance, which leads to low measurement efficiency and poor accuracy. The auto-focusing technique described here completes focusing quickly, accurately and smoothly, reacts in real time to the focusing state of the regions of interest (for example, the image sharpness data may be the sharpness data of certain regions of interest on the wafer), indicates whether the image has reached its sharpest, and allows a judgement of whether the moment is suitable for measuring the dimension. The disadvantages raised in the background art are thereby resolved.
Drawings
In order that the above objects, features and advantages may be readily understood, a more particular description of the invention briefly described above is given with reference to the specific embodiments illustrated in the appended drawings.
FIG. 1 shows a stage carrying a wafer, together with a camera equipped with a microscope that moves up and down.
Fig. 2 shows how a second-order curve fitted from camera data is used to judge whether focusing is successful.
Fig. 3 is an embodiment in which the camera moves up and down during multiple repeated auto-focus attempts.
Fig. 4 shows that moving the camera up and down yields multiple sets of position data and image sharpness data.
Fig. 5 is a second-order curve fitted to multiple sets of position data and image sharpness data.
Fig. 6 is an example of accumulating the gradient values of all pixels as the sharpness evaluation function value.
Fig. 7 shows a Laplacian function that takes the sum of squared gradients over the pixels as the evaluation function.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying examples, which are intended to illustrate but not to limit the invention to the particular forms disclosed; embodiments falling within the scope of the invention as defined by the appended claims are included.
Referring to fig. 1, the background knowledge involved in the present application is first described. In semiconductor fabrication, the term wafer generally refers to the silicon wafers used to fabricate integrated circuits. A metrology stage, or motion stage 11, of the critical dimension metrology apparatus carries the wafer 10. A microscope and a camera CA cooperate, or are assembled together, to capture fine images of wafer detail. The microscope has high-power and low-power lenses, and the magnification can be switched manually or automatically within a series of lenses LN, for example from high power to medium power or to low power, or in the opposite direction from low power to medium power or to high power. This switching relationship among the lenses includes coaxial switching.
Referring to fig. 1, regarding the platform (CHUCK): the motion stage 11 is a dedicated tool that holds by suction and carries wafers during the production of various semiconductor silicon chips, and its main purpose is to carry wafers. Some documents also call this type of carrier a carrier or lifting mechanism, a wafer carrier or platform, a carrier platform, and so on. The motion stage is a carrying mechanism within the semiconductor equipment. References herein to a carrier stage include the platform (CHUCK) structure. The motion stage can move within the coordinate system along the abscissa X and the ordinate Y as required, and in some cases can rotate the wafer within the coordinate system or move it up and down along the Z axis as required.
Referring to fig. 1, the platform motion control module consists of an X axis, a Y axis, a θ axis and the CHUCK. Before the measuring equipment measures the critical dimension of the wafer, the platform motion control module must drive the CHUCK to move, thereby controlling the movement of the wafer. The θ axis can rotate; for example, rotating the θ axis rotates the CHUCK, which is equivalent to adjusting the angle θ by controlling the rotation of the motion stage.
Referring to fig. 1, a critical dimension measuring apparatus in the semiconductor industry includes at least the motion stage 11 and a camera CA equipped with a microscope. The critical dimension measuring apparatus can be a modification of an existing critical dimension measuring apparatus or a completely new critical dimension design. Since critical dimension measuring apparatus already exists in the semiconductor industry, a detailed description of it is omitted; it should be noted that all or part of the features of prior-art critical dimension measuring apparatus can be applied to the measuring apparatus of the present application. When this application refers to a critical dimension measuring apparatus, it is taken by default to include all or part of these prior-art features.
Referring to fig. 1, an image Image1 captured by the camera CA provides pixel coordinates. The work stage referred to herein generally includes the microscope and the camera CA fitted or assembled with it.
Referring to fig. 1, the focusing Z-axis motion module of the camera CA can consist of a Z axis that moves up and down. When the wafer is placed on a measuring platform such as the stage 11, the wafer must lie at the focal plane of the camera CA for the camera's field of view to be clear and of high resolution; during this process the Z-axis motion module drives the camera and lens up and down to find the focal plane at which the camera's field of view is sharpest, that is, to find the focal plane of the critical dimension structure on the wafer.
Referring to fig. 1, regarding adjustment of the position relative to the focal plane: the Z-axis stepping motor moves and thereby drives the camera up and down to adjust its position relative to the focal plane. How the motor moves together with the camera belongs to the prior art, and currently existing critical dimension measuring equipment basically adopts such a structure, so it is not described separately in detail. Likewise, the motor and the camera equipped with the microscope belong to the known art and are not described further.
Referring to fig. 1, automated (motorized) microscopy is well established. Like a conventional manual microscope, it typically moves the observed sample with three degrees of freedom: horizontal movement along the X and Y axes and vertical movement along the Z axis. Moving the lens along the Z axis directly determines the object distance of the microscopic optical system and hence the focusing and imaging result. The technical features of an automated microscope, in whole or in part, can be applied to the microscope and its camera in the figure.
Referring to fig. 1, to keep the operating process controllable and stable, the Z axis, like the X and Y axes, needs a high-precision positioning function, but the design conditions for the Z-axis positioning system differ from those of the X and Y axes. On the one hand, gravity acting on the Z axis ensures that there is no backlash in the downward direction: the lens barrel is driven by a screw-slider pair, and under gravity the slider remains in close contact with the screw at all times, so the slider begins to move on the screw as soon as the screw rotates and no idle stroke occurs. On the other hand, when a lens with high magnification and short depth of field is used, a clear image can only be observed when the Z-axis height lies within the range in which the optical system is in focus. A slight up-or-down movement (only a few micrometers, against a total travel of thousands of micrometers) makes the defocused image very blurred, so the current Z-axis height cannot be inferred from a defocused image; and where the field of view contains wide blank regions not covered by the sample, the focal plane cannot be determined from the microscopic image either, so the Z-axis height cannot be measured at an arbitrary position.
Referring to fig. 1, the important term critical dimension (CD) is explained before the present application is described. In the manufacture of semiconductor integrated circuit photomasks and in the photolithography process, in order to evaluate and control the pattern-processing accuracy of the process, the industry specially designs line patterns that reflect the characteristic line width of the integrated circuit; these are called critical dimensions. The lithography example below illustrates why critical dimension measurement is so important; in fact many more processes involve critical dimension measurement. The industry term critical dimension may also be replaced by critical dimension structure or critical dimension mark.
Referring to fig. 1, in the integrated circuit manufacturing process a photoresist is first coated on the surface of the wafer. The photoresist is then exposed through a photomask, after which a post-exposure bake is performed. For positive-tone chemically amplified resists, this initiates a deprotection reaction that allows the developer to dissolve the resist in the exposed areas more readily, so that the resist in the exposed areas is removed during subsequent development to produce the desired resist pattern. Post-development inspection may follow, including, for example, electron-microscope or optical metrology of the critical dimensions of the photoresist pattern to determine whether it meets specification. Only if the specification is met is the etching process performed to transfer the photoresist pattern to the wafer, which is enough to show the importance of the measurement.
Referring to fig. 1, efficient and accurate measurement is the measuring ruler of the semiconductor mass-production line and plays an important role in monitoring and preventing process variation. The present application explains the use of critical dimension measurement in large-scale integrated circuit manufacturing and the problems related to it. Critical dimension measurement depends heavily on how the measured object is imaged, that is, on whether the image is sharp; if only a rough, blurred image of the object is available, deviation in the critical dimension measurement is clearly unavoidable. This problem becomes especially acute at the micron and even nanometer scale.
Referring to fig. 1, the present application relates to an auto-focus method and scheme for critical dimension measurement; focusing is decisive for the sharpness of the critical dimension image captured under these conditions. It is therefore necessary first to introduce sharpness evaluation based on image quality: sharpness can be used to analyze quantitatively whether an image is clear enough, and if the image quality does not meet the sharpness requirement, the inferior image obviously cannot be used in the nanometer- and micrometer-scale measurement field. When the image is processed it is treated as a two-dimensional discrete matrix, and a gradient function can be used to extract the gray-level information of the image and thereby judge its sharpness.
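As a minimal sketch of such gradient-based sharpness evaluation (written in Java for illustration; the class and method names are not from the application, and the exact forms below are common textbook variants of the energy-gradient and Laplacian scores referred to in figs. 6 and 7):

    // Gradient-based sharpness scores for a grayscale image stored row-major as img[y][x].
    final class SharpnessSketch {
        // Energy-gradient style score: accumulate squared forward differences in x and y.
        static double energyGradient(double[][] img) {
            double score = 0.0;
            for (int y = 0; y < img.length - 1; y++) {
                for (int x = 0; x < img[y].length - 1; x++) {
                    double dx = img[y][x + 1] - img[y][x];
                    double dy = img[y + 1][x] - img[y][x];
                    score += dx * dx + dy * dy;
                }
            }
            return score; // larger value -> sharper image
        }
        // Laplacian style score: accumulate squared 4-neighbour Laplacian responses.
        static double laplacian(double[][] img) {
            double score = 0.0;
            for (int y = 1; y < img.length - 1; y++) {
                for (int x = 1; x < img[y].length - 1; x++) {
                    double lap = img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1] - 4.0 * img[y][x];
                    score += lap * lap;
                }
            }
            return score;
        }
    }

Either score can serve as the real-time sharpness def used in the flow below; the application allows either the energy-gradient or the Laplacian form.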
Referring to fig. 1, the first technical problem to be solved by the present application is as follows: when the critical dimension of the wafer is measured, the images at different positions are not necessarily in the focal plane, which easily causes large errors in the measured values. In the conventional scheme the microscope is refocused manually again and again and the working distance is trimmed continually, resulting in low efficiency and poor accuracy. The auto-focusing technique disclosed in the present application completes focusing quickly, accurately and smoothly, and reflects the focusing state of the region of interest in real time.
Referring to fig. 1, the second technical problem to be solved by the present application is as follows: given that prior-art critical dimension measurement methods involve complicated procedures and slow measurement (for example, focusing three times over and repeatedly trimming the measurement distance), the focusing procedure within the critical dimension measurement step needs to be simplified, the measurement efficiency per unit time improved, the time the wafer spends in the measurement stage of the whole production line reduced, and the accuracy of critical dimension measurement improved.
Referring to fig. 1, regarding the implementation of auto-focusing: the system can be divided into an image acquisition module and a focusing-adjustment module; the focusing-adjustment module is the Z-axis motion module and adjusts the focus for the acquired images. The image algorithm then processes the acquired image and judges whether the current position lies on the focal plane, so as to drive the Z axis (usually the up-and-down axis) to move and adjust.
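A minimal structural sketch of this division into modules (the interface and method names below are hypothetical and chosen only to illustrate the boundaries described above):

    // Hypothetical module boundaries for the auto-focus system described above.
    interface ImageAcquisition {
        double[][] grabGrayImage();           // capture one grayscale frame
    }
    interface ZAxisDrive {
        void moveZDirect(double millimetres); // positive = down, negative = up (assumed convention)
        double currentZ();                    // current Z-axis position
    }
    // The focusing algorithm consumes both modules: it grabs frames, scores their
    // sharpness, and commands Z-axis moves until the focal plane is reached.
    interface AutoFocuser {
        boolean focus(ImageAcquisition camera, ZAxisDrive zAxis);
    }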
Referring to fig. 1, regarding adjustment of the position relative to the focal plane: the Z-axis stepping motor moves and drives the camera up and down so as to adjust the position with respect to the focal plane. When the wafer is placed on the measuring platform, it must be located at the focal plane of the camera for the camera's field of view to be clear and of high resolution; at this point the Z-axis motion module drives the camera and lens up and down to find the focal plane at which the camera's field of view is sharpest.
Referring to fig. 1, regarding the Z-axis motion module: it is first of all related to the stepping motor, whose running speed and position can be controlled accurately without feedback; at low running speed and low power it can take over the role of a servo motor. In terms of step size, the stepping motor is unaffected by many interference factors, such as the magnitude of the voltage or current, the voltage or current waveform, temperature variations, and so on.
Referring to fig. 1, regarding the stroke of the Z-axis motion module: for a Z axis driven by a stepping motor, for example, the minimum stroke is the linear displacement produced by one pulse, which can be calculated as follows.
First, the step angle of the stepping motor is determined, as is commonly indicated on the motor. For example, a step angle of 1.8 degrees means 360/1.8 = 200, that is, 200 pulses are required for one full revolution of the motor.
Secondly, it is determined whether the motor driver uses subdivision (microstepping) and the subdivision number is checked; the dial on the driver can be inspected to confirm the subdivision setting. For example, if the motor driver is set to 4 subdivisions, then following the 200-pulse calculation above, 200 × 4 = 800, meaning 800 pulses are required for one revolution of the motor.
Furthermore, the length advanced per revolution of the motor shaft, i.e. the lead, is determined: for a lead screw, pitch × number of thread starts equals the lead; for a rack-and-pinion drive, the travel per revolution follows from the pitch diameter (m × z).
The lead divided by the number of pulses per revolution (lead/pulses) equals the linear displacement of one pulse. The distance commanded of the stepping motor should generally be greater than or equal to this minimum stroke; otherwise the stepping motor will not respond.
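A worked instance of this calculation (the screw lead used below is an assumed value chosen purely so that the numbers line up with the minimum stroke adopted in the next paragraph): a 1.8-degree motor with 4 subdivisions needs 800 pulses per revolution, so a lead of 0.0624 mm gives 0.0624 / 800 = 0.000078 mm per pulse.

    // Worked example: minimum Z-axis stroke (linear displacement of one pulse).
    // The lead value is an assumption for illustration only.
    public class MinStrokeExample {
        public static void main(String[] args) {
            double stepAngleDeg = 1.8;   // step angle of the stepping motor
            int subdivisions = 4;        // driver subdivision (microstepping) setting
            double leadMm = 0.0624;      // assumed lead per revolution, in millimetres
            double pulsesPerRev = (360.0 / stepAngleDeg) * subdivisions; // 200 * 4 = 800
            double onceStep = leadMm / pulsesPerRev;                     // displacement of one pulse
            System.out.printf("pulses per revolution = %.0f, onceStep = %.6f mm%n", pulsesPerRev, onceStep);
            // prints: pulses per revolution = 800, onceStep = 0.000078 mm
        }
    }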
Referring to fig. 1, an alternative example assumes that the minimum stroke of the Z-axis motion module is 0.000078 mm, i.e. it is assumed that the camera satisfies the condition that the minimum stroke is 0.000078 mm. The stroke differs when different stages are used.
Referring to fig. 1, a single movement step distance onceStep is defined in an alternative example (e.g. onceStep = 0.000078 mm).
Referring to fig. 1, the auto-focusing stroke AutoFocusTravel is defined in an alternative example. This auto-focusing stroke parameter is determined by the flatness of the product being measured, such as the wafer, and is essentially the maximum stroke over which the Z axis moves up and down while focusing. For example, assume that AutoFocusTravel takes the value 0.04 mm.
Referring to fig. 1, a maximum number of auto-focus attempts is defined in an alternative example.
Referring to fig. 1, the number of auto-focus attempts is defined as cnt in an alternative example; cnt is counted continuously across the cycles.
Referring to fig. 1, the current position on the Z axis is defined as Zc in an alternative example.
Referring to fig. 1, an alternative example defines the maximum auto-focus count MAX_FRAME_COUNT, which is the maximum number of Z-axis adjustments.
Referring to fig. 1, the number of Z-axis adjustments is defined as m_focus_cnt in an alternative example.
Referring to fig. 1, a method MoveZDirect(onceStep) is defined in an alternative example. For example, moving down one step along the vertical axis (Z axis) is MoveZDirect(onceStep); conversely, moving up one step along the vertical axis (Z axis) is MoveZDirect(-onceStep). A positive argument of this method represents downward movement and a negative argument represents upward movement.
Referring to FIG. 1, the first array (m_focus_X[]) in the alternative example records the statistics of the Z-axis position changes.
Referring to FIG. 1, the second array (m_focus_Y[]) in the alternative example records the statistics of the image sharpness changes.
Referring to fig. 1, the Z-axis position m_focus_z of the focus start point is defined in an alternative example.
Referring to fig. 1, the image sharpness m_focus_def of the focus start point is defined in an alternative example.
Referring to fig. 1, the focusing referred to in the present application includes the following calculation process.
Referring to fig. 1, a temporary variable up_load is defined for the calculation, initially as double up_load = travel. double is a computer-language type, namely the double-precision floating-point type. The application may run on a computer, a server or a similar processing unit. Other alternatives for the processing unit include a field-programmable gate array, a complex programmable logic device or field-programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor or integrated circuit, or a software or firmware program stored in a memory, and so on. The keyword double in front of a quantity indicates that its type is double-precision floating point, and the int type used below is the identifier for defining an integer variable.
Referring to fig. 1, before metrology is performed on the critical dimensions, a number of auto-focus attempts are performed. The repeated auto-focus attempts can be expressed in a computer language as for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). The count cnt increases from an initial value of zero until the maximum number of auto-focus attempts AutoFocusTryCnt is reached; that is, cnt stops increasing once the condition cnt < AutoFocusTryCnt is no longer satisfied. The self-increment of cnt is written in the computer language as cnt++.
Referring to fig. 1, within each execution of an auto-focus attempt: the camera and the like (such as the work stage carrying the microscope) repeatedly adjust their position in the vertical-axis direction, i.e. the Z-axis direction. The process of repeatedly adjusting the camera position on the Z axis can be expressed in a computer language as for (m_focus_cnt = 0; m_focus_cnt < MAX_FRAME_COUNT; ++m_focus_cnt). The number of times the camera has been adjusted on the vertical axis is m_focus_cnt. The for statement here is a loop statement.
Referring to fig. 1, the Z-axis adjustment count m_focus_cnt increases from an initial value of zero until the maximum Z-axis adjustment count MAX_FRAME_COUNT is reached; that is, if the condition m_focus_cnt < MAX_FRAME_COUNT is no longer satisfied during the repeated Z-axis adjustment, m_focus_cnt stops increasing. The increment of m_focus_cnt is written in the computer language as ++m_focus_cnt.
Referring to fig. 1, within each execution of an auto-focus attempt: before the camera position is repeatedly adjusted, the microscope lens is preferably brought closer to the wafer by a predetermined fraction (e.g. three quarters) of the specified stroke (e.g. travel). In an alternative embodiment, before the camera position is repeatedly adjusted, the microscope lens approaches the sample, i.e. the wafer: the lens first covers 3/4 of the stroke toward the sample, expressed in a computer language as MoveZDirect(-up_load * 3 / 4).
Referring to fig. 1, the image sharpness is denoted by F, for example double def = F. Sharpness evaluation based on image gradients has been described above: sharpness can be used to analyze quantitatively whether the image is clear enough, and if the image quality does not meet the sharpness requirement, the poor-quality image cannot be used in micrometer- or even nanometer-scale wafer measurement. The evaluation of image sharpness is expressed by a mathematical evaluation function (see, for example, figs. 6 and 7). def is the real-time image sharpness.
Referring to FIG. 1, the current Z-axis position is denoted Zc, and the real-time Z-axis coordinate is denoted z_pos.
Referring to FIG. 1, the real-time Z-axis coordinate z_pos is obtained, for example double z_pos = Zc. Note that in the first of the repeated camera position adjustments, i.e. when m_focus_cnt == 0 in the computer-language example, the focus start point is assigned to m_focus_z and m_focus_def. This first adjustment, or focus start point, can be expressed in the computer language as if (m_focus_cnt == 0) { m_focus_z = z_pos; m_focus_def = def; }. Here m_focus_z is the Z-axis position of the focus start point and m_focus_def is the image sharpness at the focus start point.
Referring to FIG. 1, the x-coordinate of the quadratic function, i.e. of the second-order curve, is the amount of change of the Z-axis position. The captured camera movement position data can be expressed in the computer language as m_focus_X[m_focus_cnt] = z_pos - m_focus_z, where the array of x-coordinates of the quadratic curve includes m_focus_X[m_focus_cnt].
Referring to fig. 1, the y-coordinate of the quadratic function, i.e. of the second-order curve, is the amount of change of the image sharpness. Similarly, the captured image sharpness data can be expressed in the computer language as m_focus_Y[m_focus_cnt] = def - m_focus_def, where the array of y-coordinates of the quadratic curve includes m_focus_Y[m_focus_cnt].
Referring to fig. 1, if the moved distance exceeds the specified travel (e.g. travel), the adjustment is ended; that is, if the distance the camera has moved on the Z axis exceeds the specified travel, the current Z-axis position adjustment is finished. The out-of-travel condition can be expressed in the computer language as if (Math.abs(z_pos - m_focus_z) > Math.abs(travel)) break. Math.abs denotes the absolute value of a number, such as the absolute value of z_pos - m_focus_z or of travel. break indicates that the current for loop must be exited, e.g. the for loop that increments m_focus_cnt. When this happens m_focus_cnt stops increasing until the next round of auto-focus attempts is entered; in such a situation m_focus_cnt has likely not yet reached MAX_FRAME_COUNT.
Referring to fig. 1, after the Z axis has been adjusted several times, the camera movement position data and the corresponding image sharpness data have been captured: the position data from the repeated position adjustments include m_focus_X[m_focus_cnt], and the image sharpness data from the repeated position adjustments include m_focus_Y[m_focus_cnt]. The Z-axis adjustment itself is represented by MoveZDirect.
Referring to FIG. 1, the statistics of the Z-axis position changes are captured (first array m_focus_X[]).
Referring to fig. 1, the statistics of the sharpness changes are captured (second array m_focus_Y[]).
Referring to fig. 1, the focusing data m_focus_X[] and m_focus_Y[] are fitted to a second-order curve.
Referring to fig. 1, the second-order curve is given by the equation y = ax² + bx + c.
Referring to fig. 1, the vertex coordinate of the second-order curve, i.e. the abscissa of the vertex, -b/(2*a), is calculated.
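The application does not prescribe a particular fitting routine; as one minimal sketch, the second-order curve can be obtained by ordinary least squares over the recorded points (m_focus_X[i], m_focus_Y[i]) by solving the 3 × 3 normal equations, and the vertex abscissa then follows directly (the class and method names below are illustrative):

    // Least-squares fit of y = a*x^2 + b*x + c through n sampled points; returns {a, b, c}.
    final class QuadraticFit {
        static double[] fitQuadratic(double[] x, double[] y, int n) {
            double s0 = n, s1 = 0, s2 = 0, s3 = 0, s4 = 0, t0 = 0, t1 = 0, t2 = 0;
            for (int i = 0; i < n; i++) {
                double xi = x[i], xi2 = xi * xi;
                s1 += xi; s2 += xi2; s3 += xi2 * xi; s4 += xi2 * xi2;
                t0 += y[i]; t1 += xi * y[i]; t2 += xi2 * y[i];
            }
            // Normal equations: [s4 s3 s2; s3 s2 s1; s2 s1 s0] * [a b c]^T = [t2 t1 t0]^T,
            // solved here by Cramer's rule for brevity.
            double d  = det3(s4, s3, s2, s3, s2, s1, s2, s1, s0);
            double da = det3(t2, s3, s2, t1, s2, s1, t0, s1, s0);
            double db = det3(s4, t2, s2, s3, t1, s1, s2, t0, s0);
            double dc = det3(s4, s3, t2, s3, s2, t1, s2, s1, t0);
            return new double[] { da / d, db / d, dc / d };
        }
        static double vertexX(double[] abc) {
            return -abc[1] / (2.0 * abc[0]);   // abscissa of the vertex, -b/(2a)
        }
        private static double det3(double a11, double a12, double a13,
                                   double a21, double a22, double a23,
                                   double a31, double a32, double a33) {
            return a11 * (a22 * a33 - a23 * a32) - a12 * (a21 * a33 - a23 * a31) + a13 * (a21 * a32 - a22 * a31);
        }
    }

At least three distinct sample positions are needed for the system to be solvable; with fewer points the determinant d is zero and the fit is undefined.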
Referring to fig. 1, the position m_focus_best at which the image is sharpest is calculated: the vertex coordinate plus the Z-axis position of the focus start point gives the sharpest image position, m_focus_best = -b/(2*a) + m_focus_z. The sharpest image position is therefore related both to the vertex coordinate of the second-order curve and to the Z-axis position m_focus_z of the focus start point.
Referring to fig. 1, if the coefficient of the quadratic term of the second-order curve y = ax² + bx + c is less than zero, i.e. a < 0, the vertex coordinate is greater than zero, i.e. (-b/(2*a)) > 0, and the vertex coordinate is less than the defined maximum focusing travel, i.e. (-b/(2*a)) < AutoFocusTravel, focusing is considered successful. In the computer language this can be expressed as if (a < 0 && (-b/(2*a)) > 0 && (-b/(2*a)) < AutoFocusTravel) break. Here break indicates that focusing has succeeded and the stage does not need to be moved to find the focus. Note that the predetermined conditions, including the three items above, must be satisfied simultaneously for focusing to be successful; if any one of them is not satisfied, focusing is unsuccessful.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): the predetermined conditions include the coefficient of the quadratic term of the second-order curve being less than zero (a < 0), the vertex coordinate being greater than zero ((-b/(2*a)) > 0), and the vertex coordinate being less than the defined maximum focusing travel ((-b/(2*a)) < AutoFocusTravel). If any one of the predetermined conditions is not met, the camera is moved a certain distance along the vertical axis before the focus is searched for again. In other words, if these conditions are not satisfied the focus does not lie within the current stroke, and the stage must be moved before the focus can be found.
Referring to fig. 1, in an alternative example dir is defined as the sum of the differences between adjacent image sharpness values; in the initial state, for example, double dir = 0. The Z-axis adjustment count is m_focus_cnt as described above. During the stage in which the camera position is repeatedly adjusted, the difference between every two adjacent sharpness differences obtained from successive position adjustments is taken as a difference result. Two adjacent sharpness differences are denoted m_focus_Y[m] and m_focus_Y[m+1], and subtracting them gives the difference result m_focus_Y[m+1] - m_focus_Y[m]. Here m is an index variable smaller than the Z-axis adjustment count m_focus_cnt; m represents the position-adjustment index.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): during the stage in which the camera position is repeatedly adjusted on the Z axis, the difference between every two adjacent sharpness differences obtained from successive position adjustments is taken as the difference result m_focus_Y[m+1] - m_focus_Y[m]. As the camera position continues to be adjusted, a variable term is defined that changes as the number of position adjustments (m or m_focus_cnt) increases, the current value of the variable term being equal to its previous value plus the current difference result. The evolution of the variable term dir can be expressed in the computer language as for (int m = 0; m < m_focus_cnt; m++) { dir += m_focus_Y[m+1] - m_focus_Y[m]; }. The meaning of dir += m_focus_Y[m+1] - m_focus_Y[m] in the computer language is that the current variable term dir equals its previous value plus the current difference result, i.e. m_focus_Y[m+1] - m_focus_Y[m]. In other words, dir is the sum of the differences between adjacent image sharpness values, which means the same thing.
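A small side note on this accumulation, added here as an observation rather than as wording from the application: because adjacent differences telescope, dir reduces to the final sharpness change minus the first one, so its sign simply records whether sharpness ended higher or lower than at the focus start point. A defensive sketch (the class and method names and the explicit sample count are illustrative):

    final class SharpnessTrendSketch {
        // Accumulate the differences of adjacent sharpness changes; the bound m + 1 < samples
        // keeps the loop from reading past the last recorded sample.
        static double sharpnessTrend(double[] sharpnessDiff, int samples) {
            double dir = 0.0;
            for (int m = 0; m + 1 < samples; m++) {
                dir += sharpnessDiff[m + 1] - sharpnessDiff[m];
            }
            // Telescoping: dir == sharpnessDiff[samples - 1] - sharpnessDiff[0]; since the first
            // sharpness difference at the focus start point is zero, dir is the final sharpness change.
            // dir < 0 means sharpness fell, so the stage is moved up; otherwise it is moved down.
            return dir;
        }
    }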
Referring to fig. 1, it must be judged whether the variable term obtained after each change is less than zero. If so, the camera or the work stage is moved up by a distance and focusing is attempted again; if not, the camera or the work stage is moved down by a distance and focusing is attempted again.
Referring to fig. 1, if the variable term becomes less than zero, the stage can be moved up by part of the stroke and focusing attempted again. For example, if the variable term is less than zero, the stage moves up a distance before refocusing, e.g. if (dir < 0) up_load = travel / (2 * (cnt + 1)). That is, the distance the camera moves up relative to the focusing start position equals one half of the specified travel value (travel) divided by the current focusing attempt count (the current count is written cnt + 1; note that the first attempt has cnt = 0, so for ease of understanding the current count is defined as cnt + 1).
Referring to fig. 1, if the variable term is not less than zero (i.e. any case other than dir < 0), the camera is moved down by half a stroke relative to the focusing start position and focusing is attempted again. The counterpart of if (dir < 0) is else up_load = travel / 2. That is, the distance the camera moves down relative to the focusing start position equals one half of the specified travel value (travel).
Referring to fig. 1, at this point a number of auto-focus attempts have been performed; the repetition of auto-focus attempts can be expressed in the computer language as for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). When no further focusing attempts are performed, or after this attempt loop has ended, the camera is moved by the relative distance dis to the focus, i.e. the sharpest image position m_focus_best minus the current Z-axis position. In an alternative example this is denoted by the method MoveZDirect(dis), with double dis = m_focus_best - Zc. Auto-focusing is then considered finished.
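Bringing the pieces above together, the following end-to-end sketch shows one reading of the flow just described. It reuses the QuadraticFit sketch given earlier; the Rig interface, the numeric constants, the stepping direction inside the scan and the way the re-seek stroke up_load is applied are assumptions made for illustration, not the application's verbatim routine.

    // One-interpretation sketch of the auto-focus flow; helper names are assumed.
    public class AutoFocusSketch {
        interface Rig {
            void moveZDirect(double mm);   // positive = down, negative = up (assumed)
            double currentZ();             // Zc
            double grabSharpness();        // capture a frame and score it (def = F)
        }
        static final int MAX_FRAME_COUNT = 20;          // max Z adjustments per attempt (assumed)
        static final int AUTO_FOCUS_TRY_CNT = 3;        // max auto-focus attempts (assumed)
        static final double ONCE_STEP = 0.000078;       // single movement step, mm
        static final double TRAVEL = 0.04;              // AutoFocusTravel, mm

        static double autoFocus(Rig rig) {
            double upLoad = TRAVEL;
            double best = rig.currentZ();               // fall-back: stay where we are
            for (int cnt = 0; cnt < AUTO_FOCUS_TRY_CNT; cnt++) {
                rig.moveZDirect(-upLoad * 3.0 / 4.0);   // approach the sample by 3/4 of the stroke
                double[] x = new double[MAX_FRAME_COUNT];
                double[] y = new double[MAX_FRAME_COUNT];
                double focusZ = 0, focusDef = 0;
                int samples = 0;
                for (int m = 0; m < MAX_FRAME_COUNT; m++) {
                    double def = rig.grabSharpness();
                    double zPos = rig.currentZ();
                    if (m == 0) { focusZ = zPos; focusDef = def; }
                    x[m] = zPos - focusZ;               // Z-position change, m_focus_X
                    y[m] = def - focusDef;              // sharpness change, m_focus_Y
                    samples = m + 1;
                    if (Math.abs(x[m]) > Math.abs(TRAVEL)) break;   // travel exceeded
                    rig.moveZDirect(ONCE_STEP);         // step and sample again (direction assumed)
                }
                if (samples < 3) continue;              // not enough points to fit a parabola
                double[] abc = QuadraticFit.fitQuadratic(x, y, samples);  // y = a x^2 + b x + c
                double a = abc[0], b = abc[1];
                double vertex = -b / (2.0 * a);
                best = vertex + focusZ;                 // m_focus_best
                if (a < 0 && vertex > 0 && vertex < TRAVEL) break;        // focusing succeeded
                // Focus not inside the scanned stroke: decide where to look next.
                double dir = 0.0;
                for (int m = 0; m + 1 < samples; m++) dir += y[m + 1] - y[m];
                upLoad = (dir < 0) ? TRAVEL / (2.0 * (cnt + 1))  // sharpness fell: smaller re-seek stroke, look up
                                   : TRAVEL / 2.0;               // otherwise: look down
            }
            rig.moveZDirect(best - rig.currentZ());     // MoveZDirect(dis), dis = m_focus_best - Zc
            return best;
        }
    }

In a real tool the Rig calls would wrap the stepping-motor driver and the camera/sharpness pipeline; everything else is ordinary arithmetic on the recorded (Δz, Δsharpness) samples.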
Referring to fig. 2, the focusing method for critical dimension metrology includes steps SP1 to SP7. Step SP1 acquires the image sharpness data and the Z-axis position data of the camera. The Z-axis position data include m_focus_X[m_focus_cnt], a data item in array form. The image sharpness data include m_focus_Y[m_focus_cnt], also in array form, which can be extracted from the image information Image1 captured by the camera.
Referring to fig. 2, step SP2 mainly fits a second-order curve from the camera movement position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt], and calculates the vertex coordinate of the second-order curve, which for y = ax² + bx + c equals the abscissa value -b/(2*a) of the vertex. Strictly speaking the vertex also has an ordinate value (4ac - b²)/(4a), but the present application is concerned with the abscissa of the vertex rather than its ordinate, and refers to the vertex coordinate directly and generically as the abscissa value -b/(2*a); in this application the vertex coordinate therefore means the abscissa value of the vertex.
Referring to fig. 2, step SP3 calculates the sharpest image position m_focus_best = -b/(2*a) + m_focus_z: the vertex coordinate plus the Z-axis position m_focus_z of the focus start point gives the position at which the image is sharpest.
Referring to fig. 2, step SP4 judges whether focusing has succeeded: if the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero and the vertex coordinate is less than the defined maximum focusing travel, focusing is considered successful. If so, the focusing-success flag of step SP5 is used.
Referring to fig. 2, step SP4 judges whether focusing has succeeded: if it is not the case that the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero and the vertex coordinate is less than the maximum focusing travel (that is, if any of these conditions fails), the camera is moved a certain distance along the vertical axis and the focus is then searched for. This branch is represented by step SP6, which indicates that the work stage or the camera must be moved to find the focus.
Referring to fig. 2, the branch in which the judgement result is "else" is represented by step SP6: in step SP6, during the stage in which the camera position is repeatedly adjusted, the difference between every two adjacent sharpness differences obtained from successive position adjustments is taken (for example, the difference between the adjacent sharpness differences m_focus_Y[m+1] and m_focus_Y[m]). A variable term dir is defined that changes as the number of position adjustments increases, and is computed as follows: the current dir equals its previous value plus the current difference result (e.g. m_focus_Y[m+1] minus m_focus_Y[m]). Finally it is judged whether the current variable term is less than zero (i.e. whether if (dir < 0) holds). One key effect of working with sharpness differences in this way is to avoid computing a sharpness difference directly between an out-of-focus real-time image and the in-focus initial image sharpness: it prevents a gradient-based evaluation of a real-time image whose edge-pixel gray values change little from being set against an initial image whose edge-pixel gray values change strongly, and thereby avoids errors in the second-order curve. Such errors are hidden and hard to notice. An image with large changes in the gray values of its edge pixels is sharper and has larger gradient values than an image with small changes in the gray values of its edge pixels.
Referring to fig. 2, if the judgement if (dir < 0) is true, the stage is moved up a distance and focusing is attempted again. The distance the camera moves up relative to the focusing start position equals one half of the specified travel value (travel) divided by the current focusing attempt count, i.e. up_load = travel / (2 * (cnt + 1)). up_load is applied through MoveZDirect.
Referring to fig. 2, if not (the judgement if (dir < 0) is false), the stage is moved down a distance and focusing is attempted again. The distance the camera moves down relative to the focusing start position equals one half of the specified travel value (travel); the distance moved down equals up_load = travel / 2. up_load is applied through MoveZDirect.
Referring to fig. 2, step SP7 follows step SP5 or step SP6, but SP7 is not essential. If step SP7 is performed, the camera or the stage is moved by the relative distance dis to the focus, i.e. the previously obtained sharpest image position m_focus_best minus the current Z-axis position Zc. MoveZDirect(dis) represents the process of moving the camera or stage by the relative distance dis, with double dis = m_focus_best - Zc. Since the sharpest image position is closely related to the vertex coordinate, this is loosely described as obtaining the relative distance dis to the focus from the vertex coordinate and the current Z-axis position. Auto-focusing is then finished.
Referring to fig. 2, the image sharpness data of step SP1 are acquired by moving the Z axis: moving the camera changes m_focus_Y[m_focus_cnt] = def - m_focus_def, and the changes in m_focus_Y provide the material, or source data, for the ordinate of the fitted second-order curve, serving as the basis of the y-coordinate of the quadratic function.
Referring to fig. 2, the position data of the camera in step SP1 are acquired by moving the Z axis: moving the camera changes m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the changes in m_focus_X provide the material, or source data, for the abscissa of the fitted second-order curve, serving as the basis of the x-coordinate of the quadratic function.
Referring to fig. 2, step SP1 requires the camera equipped with the microscope to repeatedly adjust its position in the vertical-axis direction, to record the start position (m_focus_z) and initial image sharpness (m_focus_def) of the focus start point, and to record the real-time position (z_pos) and real-time image sharpness (def) of the camera after each position adjustment. The abscissa of the fitted second-order curve, i.e. the x-coordinate, is then the Z-axis position change m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the ordinate, i.e. the y-coordinate, is the image sharpness change m_focus_Y[m_focus_cnt] = def - m_focus_def.
Referring to fig. 2, the position data of step SP1 comprise a plurality of position differences between the real-time position and the start position. For example, the position data include m_focus_X[0] = z_pos0 - m_focus_z, where z_pos0 is the actual real-time position at m_focus_cnt = 0; m_focus_X[1] = z_pos1 - m_focus_z, where z_pos1 is the actual real-time position at m_focus_cnt = 1; m_focus_X[2] = z_pos2 - m_focus_z, where z_pos2 is the actual real-time position at m_focus_cnt = 2; and so on. As m_focus_cnt increases, sufficient abscissa information is provided.
Referring to fig. 2, the image sharpness data of step SP1 comprise a plurality of sharpness differences between the real-time image sharpness and the initial image sharpness. For example, m_focus_Y[0] = def0 - m_focus_def, where def0 is the real-time image sharpness captured at m_focus_cnt = 0; m_focus_Y[1] = def1 - m_focus_def, where def1 is the real-time image sharpness captured at m_focus_cnt = 1; m_focus_Y[2] = def2 - m_focus_def, where def2 is the real-time image sharpness captured at m_focus_cnt = 2; and so on. As m_focus_cnt increases, sufficient ordinate information is provided.
Referring to fig. 2, after each position adjustment of the camera in step SP1, the position difference and the sharpness difference of the camera at the same position are regarded, respectively, as the abscissa value and the ordinate value of one point on the quadratic function, i.e., the second-order curve. For example, after the position adjustment at m_focus_cnt = 1, the position difference m_focus_X[1] and the sharpness difference m_focus_Y[1] at that position are taken as the abscissa and ordinate of the same point on the second-order curve; after the position adjustment at m_focus_cnt = 2, m_focus_X[2] and m_focus_Y[2] are taken as the abscissa and ordinate of the same point. Note that def - m_focus_def is a sharpness difference, i.e., a sharpness change amount.
Referring to fig. 2, step SP1 ends the current position adjustment if the absolute value of any position difference exceeds the specified travel value (travel). That is, this round of Z-axis position adjustment is finished, and m_focus_cnt stops counting.
Referring to fig. 2, step SP1 specifies a maximum number of Z-axis adjustments MAX_FRAME_COUNT; the actual number of adjustments m_focus_cnt, i.e., the number of times the camera repeatedly adjusts its position in the vertical-axis direction, must remain below this maximum. The maximum number of autofocus adjustments, i.e., the maximum number of Z-axis adjustments, is defined as MAX_FRAME_COUNT. This prevents the position adjustment from continuing without ever leaving the loop and keeps the measurement process from falling into endless adjustment.
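As a hedged sketch of the step SP1 acquisition loop, the following C++ fragment records the start position m_focus_z and the initial sharpness m_focus_def, then repeatedly moves the Z-axis while filling the arrays m_focus_X and m_focus_Y, stopping when the travel value is exceeded or MAX_FRAME_COUNT is reached. The hooks moveZ, readZ and readSharpness, the fixed step argument and the function name are assumptions introduced for illustration.

// Hedged sketch of the step SP1 data-acquisition loop (helper hooks are assumed, not from the patent).
#include <cmath>
#include <functional>
#include <vector>

void AcquireFocusData(const std::function<void(double)>& moveZ,
                      const std::function<double()>& readZ,
                      const std::function<double()>& readSharpness,
                      double travel, double step, int MAX_FRAME_COUNT,
                      std::vector<double>& m_focus_X,
                      std::vector<double>& m_focus_Y)
{
    const double m_focus_z   = readZ();            // start position of the focus start point
    const double m_focus_def = readSharpness();    // initial image sharpness at the start point

    for (int m_focus_cnt = 0; m_focus_cnt < MAX_FRAME_COUNT; ++m_focus_cnt) {
        moveZ(step);                               // adjust the camera position on the Z-axis
        const double z_pos = readZ();              // real-time position after the adjustment
        const double def   = readSharpness();      // real-time image sharpness

        m_focus_X.push_back(z_pos - m_focus_z);    // abscissa source data of the fit
        m_focus_Y.push_back(def - m_focus_def);    // ordinate source data of the fit

        if (std::fabs(z_pos - m_focus_z) > travel) // travel value exceeded
            break;                                 // end the current position adjustment
    }
}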
Referring to fig. 3, the present embodiment is a further optimization based on fig. 2. It requires performing a plurality of autofocus attempts (counted by cnt) before performing metrology on the critical dimension on the wafer; in an alternative example, the maximum number of autofocus attempts is defined as AutoFocusTryCnt. As shown, the actual number of attempts cnt must be less than the maximum number of attempts AutoFocusTryCnt. Each autofocus attempt, i.e., any single autofocus attempt, includes the flow of steps SP1 to SP5 of fig. 2, or the flow of steps SP1 to SP6. Step SP7 of fig. 2 may still be used after each autofocus attempt or after the end of any single autofocus attempt.
Referring to fig. 3, a focusing method for critical dimension measurement: multiple autofocus attempts are performed before metrology is performed on the critical dimensions on the wafer (focusing continues to be attempted as long as cnt < AutoFocusTryCnt). In each autofocus attempt, i.e., any single autofocus attempt (e.g., cnt = 0, 1, 2, 3, etc.), the camera is required to repeatedly adjust its position in the vertical-axis direction many times (the adjustment continues as long as m_focus_cnt < MAX_FRAME_COUNT) in order to capture the position data of the camera movement and the corresponding image sharpness data. The number of focusing attempts is recorded in cnt, which is incremented each time a focusing attempt is performed. The number of position adjustments is recorded in m_focus_cnt, which is incremented each time a position adjustment is performed. For each value that cnt may take, the camera performs m_focus_cnt position adjustment actions in the Z-axis direction.
Referring to fig. 3, a focusing method for critical dimension measurement: a second-order curve is again fitted according to the position data and the image sharpness data. Step SP2 indicates that the second-order curve y = ax² + bx + c is fitted based on the position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt]. Since the second-order curve is known at this point, the sharpest position of the image is apparent. The aforementioned step SP3 may be omitted, or retained, in this embodiment; both are allowed.
Referring to fig. 3, a focusing method for critical dimension measurement: it is judged whether the second-order curve satisfies the predetermined conditions; if so, focusing is considered successful; if not, the camera is moved a distance on the vertical axis and the focus is then searched for again, as in step SP4.
Referring to fig. 3, the predetermined conditions include at least: the second-order coefficient of the second-order curve is less than zero, i.e., a < 0; the vertex coordinate is greater than zero, i.e., (-b/(2a)) > 0; and (-b/(2a)) < AutoFocusTravel, i.e., the vertex coordinate is less than the maximum value of the focusing travel. Focusing is considered successful if the predetermined conditions are satisfied simultaneously, as in step SP5. If any of the predetermined conditions is not satisfied, the camera is moved a distance on the vertical axis and the focus is then searched for again, as in step SP6.
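The predetermined conditions can be checked compactly once the coefficients of y = ax² + bx + c are known; the sketch below is one possible, illustrative formulation (the function name is assumed, and AutoFocusTravel follows the notation above).

// Hedged sketch of the predetermined-condition check on the fitted curve y = a*x*x + b*x + c.
bool FocusConditionSatisfied(double a, double b, double AutoFocusTravel)
{
    if (a >= 0.0) return false;                  // the quadratic coefficient must be negative
    const double vertex = -b / (2.0 * a);        // vertex abscissa of the second-order curve
    return vertex > 0.0 && vertex < AutoFocusTravel;
}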
Referring to fig. 3, the negative determination result is handled by step SP6: in step SP6, during the stage in which the camera position is repeatedly adjusted many times, the difference is taken between any two adjacent sharpness differences obtained by successive position adjustments (for example, between the two adjacent sharpness differences m_focus_Y[m+1] and m_focus_Y[m]). In step SP6, this differencing of sharpness differences may be required at each autofocus attempt, or at any single autofocus attempt. A variable term dir is defined that changes as the number of position adjustments increases: the current value of dir equals its previous value plus the current differencing result (e.g., m_focus_Y[m+1] minus m_focus_Y[m]). In an alternative example, the current differencing result is taken to be the next sharpness difference, e.g., m_focus_Y[m+1], minus the sharpness difference at the current position adjustment, e.g., m_focus_Y[m].
Referring to fig. 3, for example, assuming m = 3, the current variable term dir3 equals the value dir2 at the previous position adjustment plus the current differencing result (m_focus_Y[4] minus m_focus_Y[3]). Continuing under the same assumption, the current dir2 equals the value dir1 at the previous position adjustment plus its differencing result (m_focus_Y[3] minus m_focus_Y[2]); likewise, the current dir1 equals the value dir0 at the previous position adjustment plus its differencing result (m_focus_Y[2] minus m_focus_Y[1]); and so on. Finally, it is judged each time whether the variable term is smaller than zero (i.e., whether if (dir < 0) is true). In summary, the variable term corresponding to the current adjustment count equals the value of the variable term at the previous position adjustment plus the differencing result at the current position adjustment.
Referring to fig. 3, if yes (if (dir < 0) is judged true), focusing is attempted again after moving up a distance. The distance that the camera moves up relative to the start position of focusing equals the specified travel value (travel) divided by two and by the current number of focusing attempts, i.e., up_load = travel / (2 × (cnt + 1)). Since the focusing count starts from zero but it is not customary to speak of a 'zeroth attempt', the current number of focusing attempts is more naturally written as cnt + 1. For example, the first focusing attempt corresponds to cnt = 0, so cnt + 1 = 1; the second focusing attempt corresponds to cnt = 1, so cnt + 1 = 2. Stated more strictly, the distance moved up from the start position of focusing equals the specified travel value (travel) divided by two and by the number of focusing attempts that have actually occurred plus one (i.e., cnt + 1); the different formulations give the same upward distance, up_load = travel / (2 × (cnt + 1)).
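To make the shrinking upward step concrete, a tiny illustrative program (the travel value of 120 is assumed purely for this example, in arbitrary units) prints up_load for the first few attempts.

// Illustrative only: print the upward retry distance for an assumed travel value of 120.
#include <cstdio>

int main()
{
    const double travel = 120.0;                      // assumed travel value for this example
    for (int cnt = 0; cnt < 4; ++cnt)
        std::printf("attempt %d: up_load = %.1f\n",   // up_load = travel / (2 * (cnt + 1))
                    cnt + 1, travel / (2.0 * (cnt + 1)));
    return 0;
}

It prints 60.0, 30.0, 20.0 and 15.0, showing how the upward retry distance decreases as the attempt count grows.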
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): the position of the camera is repeatedly adjusted in the vertical-axis direction many times; the start position of the focus start point and the initial image sharpness are recorded; and the real-time position and the real-time image sharpness of the camera are recorded after each position adjustment.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): the camera repeatedly adjusts its position in the vertical-axis direction many times, and the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, ...] comprise a plurality of position differences between the real-time position and the start position.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): the camera repeatedly adjusts its position in the vertical-axis direction many times, and the image sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, ...] comprise a plurality of sharpness differences between the real-time image sharpness and the initial image sharpness.
Referring to fig. 3, after each position adjustment of the camera (e.g., m_focus_cnt = 0, 1, 2, 3, etc.), the position difference and the sharpness difference of the camera at the same position are regarded, respectively, as the abscissa value and the ordinate value of one point on the second-order curve.
Referring to fig. 3, the position difference m_focus_X[0] and the sharpness difference m_focus_Y[0] under the same position condition (e.g., m_focus_cnt = 0) are regarded, respectively, as the abscissa value and the ordinate value of the same point on the second-order curve.
Referring to fig. 3, the position difference m_focus_X[3] and the sharpness difference m_focus_Y[3] under the same position condition (e.g., m_focus_cnt = 3) are regarded, respectively, as the abscissa value and the ordinate value of the same point on the second-order curve.
Referring to fig. 3, after each position adjustment of the camera (e.g., m_focus_cnt = 0, 1, 2, 3, etc.), if any position difference z_pos - m_focus_z is greater than the specified travel value travel, the current position adjustment is ended and the loop in which the camera repeatedly adjusts its position is exited. z_pos - m_focus_z is a position difference, i.e., a position variable.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): a maximum number MAX_FRAME_COUNT of adjustments on the vertical axis is specified, and the actual number of adjustments m_focus_cnt, over which the camera repeatedly adjusts its position in the Z-axis direction, is required to be smaller than the maximum MAX_FRAME_COUNT.
Referring to fig. 3, the sharpest position m_focus_best of the image is the vertex coordinate of the second-order curve plus the start position of the focus start point: m_focus_best = -b/(2a) + m_focus_Z; that is, the vertex coordinate plus the Z-axis position of the focus start point yields the sharpest position m_focus_best of the image.
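As a hedged numeric illustration (the values a = -0.02, b = 0.8 and m_focus_Z = 1000 are assumed purely for this example and do not come from the text): the vertex abscissa is -b/(2a) = -0.8/(2 × (-0.02)) = 20, so with a focus start position of 1000 (in the same stage units) the sharpest position would be m_focus_best = 20 + 1000 = 1020.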
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): it is judged whether the second-order curve satisfies the predetermined conditions. The predetermined conditions have already been explained above and are not repeated here.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): when the predetermined conditions are not satisfied, i.e., the focus does not lie within the current travel, the stage must continue to be moved to search for the focus. Moving the stage to search for the focus has been explained above and is not repeated here.
Referring to fig. 3, while the attempt upper limit has not been reached (i.e., cnt < AutoFocusTryCnt), the multiple autofocus attempts should not end. The focusing attempt executes the flow of steps SP1 to SP5 or steps SP1 to SP6 when cnt = 0, again when cnt = 1, again when cnt = 2, and so on. The loop is exited and the attempts end when cnt = AutoFocusTryCnt.
Referring to fig. 3, once the attempt upper limit is reached (i.e., cnt = AutoFocusTryCnt), the multiple autofocus attempts should end; the maximum value of cnt is therefore AutoFocusTryCnt minus one. If step SP7 is executed, the camera or the stage is required to move by the relative focus distance dis, i.e., the previously determined sharpest position m_focus_best minus the current Z-axis position Zc, with double dis = m_focus_best - Zc. By this point the autofocus is finished, and the image of the critical dimension structure on the wafer has the highest resolution and highest sharpness attainable at that time. The focusing described above already substantially achieves the purpose of the present application set forth in the background section; moving the camera to the relative focus distance is nevertheless a preferable embodiment of autofocus.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, etc.): before the camera position is repeatedly adjusted, or before each adjustment of the camera position, in an alternative example the microscope lens is first brought toward the wafer by a predetermined proportional value (e.g., 3/4) of the specified travel value (travel). Step SP0 shows this process of bringing the lens toward the wafer to a predetermined distance. Lens-to-sample (i.e., wafer) approach: first, a distance above the sample of approximately the specified travel value multiplied by the predetermined proportion is traversed. An example such as MoveZDirect(-up_load × 3 ÷ 4) shows the lens first covering 3/4 of the travel above the sample. In step SP0 the lens-wafer approach is represented by MoveZDirect.
Referring to fig. 3, while the attempt upper limit has not been reached (i.e., cnt < AutoFocusTryCnt), the multiple autofocus attempts should not end. When step SP0 is employed, the focusing attempt executes the flow of steps SP0 to SP5 or steps SP0 to SP6 when cnt = 0, again when cnt = 1, again when cnt = 2, and so on.
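A minimal sketch of the step SP0 approach move is given below, assuming the same relative-move hook as in the earlier sketches; the default proportional value of 3/4 and the negative sign (direction toward the wafer) are assumptions of the sketch, mirroring the MoveZDirect(-up_load × 3 ÷ 4) example above.

// Hedged sketch of step SP0: bring the lens toward the wafer by a fraction of the travel-related value.
#include <functional>

void ApproachWafer(const std::function<void(double)>& moveZ,
                   double up_load, double predeterminedScale = 0.75)
{
    moveZ(-up_load * predeterminedScale);   // e.g. MoveZDirect(-up_load * 3 / 4)
}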
Referring to fig. 4, the process of acquiring the position data of the camera is implemented by moving the Z-axis, because the movement of the camera or the stage causes m_focus_X[m_focus_cnt] = z_pos - m_focus_z to change; as the fitting source data of the abscissa of the second-order curve, the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, ...] exhibit the character of an array.
Referring to fig. 4, the process of acquiring the image sharpness data is implemented by moving the Z-axis, because the movement of the camera or the stage causes m_focus_Y[m_focus_cnt] = def - m_focus_def to change; as the fitting source data of the ordinate of the second-order curve, the sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, ...] likewise exhibit the character of an array.
See fig. 5, an example of a second-order fit. Given a data sequence (x_i, y_i) with i = 0, 1, 2, 3, ..., m, the set of data is fitted with a quadratic polynomial. The following calculations and simplifications outline the process of second-order curve fitting.
p(x) = a_0 + a_1 x + a_2 x^2
The mean square error of the fitting function and the data sequence is made based on p (x):
Q(a_0, a_1, a_2) = \sum_{i=0}^{m} [p(x_i) - y_i]^2 = \sum_{i=0}^{m} (a_0 + a_1 x_i + a_2 x_i^2 - y_i)^2    (1)
From the extremum principle for multivariate functions, the minimum of Q(a_0, a_1, a_2) satisfies:

\frac{\partial Q}{\partial a_k} = 2 \sum_{i=0}^{m} (a_0 + a_1 x_i + a_2 x_i^2 - y_i)\, x_i^k = 0, \quad k = 0, 1, 2    (2)
The foregoing conditions expand into the three linear equations

a_0 (m+1) + a_1 \sum_{i=0}^{m} x_i + a_2 \sum_{i=0}^{m} x_i^2 = \sum_{i=0}^{m} y_i
a_0 \sum_{i=0}^{m} x_i + a_1 \sum_{i=0}^{m} x_i^2 + a_2 \sum_{i=0}^{m} x_i^3 = \sum_{i=0}^{m} x_i y_i
a_0 \sum_{i=0}^{m} x_i^2 + a_1 \sum_{i=0}^{m} x_i^3 + a_2 \sum_{i=0}^{m} x_i^4 = \sum_{i=0}^{m} x_i^2 y_i
Formula (2) is thus simplified to the matrix form:

\begin{pmatrix} m+1 & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{pmatrix}
Referring to fig. 5, the array X[n] uses the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, ...].
Referring to fig. 5, the array Y[n] uses the sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, ...].
Referring to fig. 5, according to the discretization characteristic of the data sequence (m _ focus _ X, m _ focus _ Y), the data sequence is fitted by a quadratic polynomial, and the relationship Y = ax can be calculated by algorithm fitting 2 + bx + c. Step SP2 is to fit a second order curve according to the position data of the camera movement and the acquired image sharpness data, and calculate the vertex coordinates of the second order curve, for example, to be equal to the second order curve y = ax 2 The abscissa value-b/(2 a) of the vertex coordinates of + bx + c.
Alternatively, relation (1) can be simplified in matrix form. Given the data sequence (x_i, y_i) fitted by a quadratic polynomial, collect the coefficients into the vector a = (a_0, a_1, a_2)^T, the observations into y = (y_0, y_1, ..., y_m)^T, and let A be the matrix whose i-th row is (1, x_i, x_i^2). The mean square error is then Q(a_0, a_1, a_2) = \lVert A a - y \rVert^2, and the analysis of the minimum of Q(a_0, a_1, a_2) simplifies to the normal equations

A^T A\, a = A^T y
It can be seen that the above simplifications differ slightly in form but lead to the same final result. Solving according to the above principle yields the coefficients a_0, a_1, a_2 of the second-order function and the related mathematical terms.
Referring to fig. 5, second-order curve fitting: given two arrays x[n], y[n] of length n, and assuming both arrays are discrete, the relation y = ax² + bx + c can be calculated by algorithmic fitting. The process of computing, by means of a fitting function, the relation between the two arrays x[n] and y[n] may thus be referred to as second-order curve fitting.
See fig. 5: similarly, y = ax² + bx + c or p(x) = a_0 + a_1 x + a_2 x^2 is mathematically a quadratic relation. The former contains the quadratic-term coefficient a, the first-order-term coefficient b, and the constant term c; x is the abscissa and y is the ordinate. In the latter expression, the quadratic-term coefficient is a_2, the first-order coefficient a_1, and the constant term a_0.
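A self-contained C++ sketch of the second-order fit described for fig. 5 is given below: it accumulates the sums of the normal equations from two arrays and solves the 3×3 system by Gauss-Jordan elimination, returning a, b and c of y = ax² + bx + c. It is a generic least-squares routine written to match the formulas above; it is not claimed to be the exact routine of the patent.

// Hedged sketch: fit y = a*x^2 + b*x + c to two arrays by least squares (normal equations).
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

bool FitQuadratic(const std::vector<double>& x, const std::vector<double>& y,
                  double& a, double& b, double& c)
{
    if (x.size() != y.size() || x.size() < 3) return false;

    // Accumulate the sums appearing in the normal equations.
    double s1 = 0, s2 = 0, s3 = 0, s4 = 0, t0 = 0, t1 = 0, t2 = 0;
    const double n = static_cast<double>(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double xi = x[i], xi2 = xi * xi;
        s1 += xi;        s2 += xi2;
        s3 += xi2 * xi;  s4 += xi2 * xi2;
        t0 += y[i];      t1 += xi * y[i];   t2 += xi2 * y[i];
    }

    // Augmented normal equations for (c, b, a), i.e. constant, linear and quadratic coefficients.
    double M[3][4] = { { n,  s1, s2, t0 },
                       { s1, s2, s3, t1 },
                       { s2, s3, s4, t2 } };

    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < 3; ++col) {
        int pivot = col;
        for (int r = col + 1; r < 3; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[pivot][col])) pivot = r;
        std::swap(M[col], M[pivot]);
        if (std::fabs(M[col][col]) < 1e-12) return false;     // degenerate data
        for (int r = 0; r < 3; ++r) {
            if (r == col) continue;
            const double f = M[r][col] / M[col][col];
            for (int k = col; k < 4; ++k) M[r][k] -= f * M[col][k];
        }
    }
    c = M[0][3] / M[0][0];
    b = M[1][3] / M[1][1];
    a = M[2][3] / M[2][2];
    return true;
}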
Referring to fig. 6, the expression of the energy gradient function F is the relational expression described below, which is the summation of all pixel gradient values as a sharpness evaluation function value. Similarly, the same is true for the metrology image and its pixels for critical dimensions.
F = \sum_{xp} \sum_{yp} \{ [F(xp+1, yp) - F(xp, yp)]^2 + [F(xp, yp+1) - F(xp, yp)]^2 \}
Wherein F(xp, yp) represents the gray value of the corresponding pixel point (xp, yp); the larger the accumulated value of F, the sharper the image. The camera captures an image Image1, and Image1 contains the gray value of each pixel point (xp, yp). Step SP1 collects the image sharpness data, and the energy gradient function provides the basis on which step SP1 evaluates image sharpness. The image sharpness data may be extracted from the image information Image1 captured by the camera.
Referring to fig. 6, the energy gradient function: the sum of the squares of the differences between the gray values of adjacent pixels in the x direction and in the y direction is taken as the gradient value of each pixel point, and the gradient values of all pixels are accumulated to serve as the sharpness evaluation function value.
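A compact sketch of the energy-gradient evaluation of fig. 6 on a grayscale image stored row-major in a std::vector follows; the image representation and the function name are assumptions introduced for illustration.

// Hedged sketch of the energy-gradient sharpness function of fig. 6.
#include <cstddef>
#include <vector>

double EnergyGradientSharpness(const std::vector<double>& gray, int width, int height)
{
    auto at = [&](int xp, int yp) {
        return gray[static_cast<std::size_t>(yp) * width + xp];   // row-major gray value F(xp, yp)
    };
    double F = 0.0;
    for (int yp = 0; yp + 1 < height; ++yp)
        for (int xp = 0; xp + 1 < width; ++xp) {
            const double dx = at(xp + 1, yp) - at(xp, yp);        // gray-value difference in x
            const double dy = at(xp, yp + 1) - at(xp, yp);        // gray-value difference in y
            F += dx * dx + dy * dy;                               // squared gradient of this pixel
        }
    return F;   // the larger F, the sharper the image
}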
Referring to fig. 7, among sharpness evaluation methods based on image gradients, besides the energy gradient function there is also the Laplacian (Laplace) function: a gradient matrix G(xp, yp) is obtained by convolving the Laplace operator with the gray value of each pixel point of the image, and the sum of the squares of the gradients of all pixel points is taken as the evaluation function.
F = \sum_{xp} \sum_{yp} [G(xp, yp)]^2
Note that F(xp, yp) is used to represent the gray value of the corresponding pixel point (xp, yp), and the larger the value of F, the sharper the image.
In addition, the expression of G(xp, yp) is

G(xp, yp) = F(xp, yp) \otimes L,

i.e., the convolution of the image gray values with the Laplace operator L.
Examples (but not limited to these examples) of the Laplace operator L in the function G(xp, yp) are as follows; one commonly used 3×3 Laplacian template is

L = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}
Referring to fig. 7, step SP1 collects the image sharpness data and may use the Laplacian function in addition to the summation of the gradient values of all pixels of fig. 6 as the sharpness evaluation function value. When the real-time image sharpness and the initial image sharpness are calculated, either the energy gradient function or the Laplace function is used as the sharpness evaluation function.
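A corresponding sketch of the Laplacian variant of fig. 7 follows; the particular 3×3 template is only one common choice (the text above explicitly allows other examples), and the image representation again follows the assumption used in the energy-gradient sketch.

// Hedged sketch of the Laplacian sharpness evaluation of fig. 7 (template chosen for illustration).
#include <cstddef>
#include <vector>

double LaplacianSharpness(const std::vector<double>& gray, int width, int height)
{
    static const int L[3][3] = { { 0,  1, 0 },
                                 { 1, -4, 1 },
                                 { 0,  1, 0 } };          // one common Laplacian template (assumed example)
    auto at = [&](int xp, int yp) {
        return gray[static_cast<std::size_t>(yp) * width + xp];
    };

    double F = 0.0;
    for (int yp = 1; yp < height - 1; ++yp)
        for (int xp = 1; xp < width - 1; ++xp) {
            double G = 0.0;                               // convolution response G(xp, yp)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    G += L[dy + 1][dx + 1] * at(xp + dx, yp + dy);
            F += G * G;                                   // sum of squared responses
        }
    return F;
}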
Referring to fig. 1, as mentioned above, the motors driving the camera and the microscope and their stage are controlled by a computer or server or associated processing unit. Other alternatives of the processing unit: a field programmable gate array, a complex programmable logic device or a field programmable analog gate array, or a semi-custom ASIC or processor or microprocessor, digital signal processor or integrated circuit or software firmware program stored in memory, or the like. Steps SP1 to SP7 of fig. 2 may also be implemented by a computer or a server or a processing unit, as may be the implementation of steps SP0 to SP7 of fig. 3.
Referring to fig. 1, a focus method for critical dimension measurement: before measuring the critical dimension of the wafer, focusing is performed to make the wafer reach the focal plane of the camera: and fitting a second-order curve or a quadratic function according to the position data of the camera movement and the acquired image definition data, and calculating the vertex coordinates of the second-order curve to obtain the clearest position of the image. Such a focus method for critical dimension measurement is specifically implemented in the example of fig. 2. The image comprises a wafer image shot by the camera through the microscope, and the clearest position of the image comprises the position of the camera at the clearest moment of the wafer image shot by the camera through the microscope. The roughness and the appearance of the surface of the wafer are different in different process stages, the structural characteristics of the critical dimension are changeable, and the focusing method is suitable for the structural appearances of various critical dimensions. Especially, the images at different positions are not always in the focus plane when the critical dimension of the wafer is measured, which results in a large error of the measured value, and the focusing method can always self-adaptively find the best position for shooting the wafer and the critical dimension thereof and the clearest position of the image.
Referring to fig. 1, a focus method for critical dimension measurement: before the critical dimension of the wafer is measured, executing a plurality of automatic focusing attempts; in each attempt: repeatedly adjusting the position of the camera in the vertical axis direction for multiple times to capture the position data of the movement of the camera and the corresponding image definition data; fitting a second-order curve or a fitting quadratic function according to the position data and the image definition data; judging whether the second-order curve meets a preset condition, if so, considering that focusing is successful; if not, the camera is moved on the vertical axis for a certain distance, and then the focus is searched. The position data is position information in the Z-axis or vertical axis direction. Such a focusing method for critical dimension measurement is specifically implemented in the examples of fig. 2 and 3.
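Pulling the pieces together, a hedged control-flow sketch of the attempt loop of figs. 2 and 3 might look as follows. It reuses the illustrative helpers sketched earlier (AcquireFocusData, FitQuadratic, FocusConditionSatisfied, Sp6RelativeMove), equates the maximum focusing travel with the travel value for simplicity, and is intended only to show the control flow, not the patented implementation.

// Hedged control-flow sketch of the attempt loop, built from the helpers sketched above.
#include <functional>
#include <vector>

// Forward declarations of the helpers sketched in earlier sections.
void AcquireFocusData(const std::function<void(double)>&, const std::function<double()>&,
                      const std::function<double()>&, double, double, int,
                      std::vector<double>&, std::vector<double>&);
bool FitQuadratic(const std::vector<double>&, const std::vector<double>&,
                  double&, double&, double&);
bool FocusConditionSatisfied(double, double, double);
double Sp6RelativeMove(const std::vector<double>&, double, int);

bool AutoFocus(const std::function<void(double)>& moveZ,
               const std::function<double()>& readZ,
               const std::function<double()>& readSharpness,
               double travel, double step, int MAX_FRAME_COUNT, int AutoFocusTryCnt)
{
    for (int cnt = 0; cnt < AutoFocusTryCnt; ++cnt) {            // multiple autofocus attempts
        std::vector<double> m_focus_X, m_focus_Y;
        const double m_focus_z = readZ();                        // focus start point of this attempt

        AcquireFocusData(moveZ, readZ, readSharpness, travel, step,
                         MAX_FRAME_COUNT, m_focus_X, m_focus_Y); // step SP1

        double a, b, c;
        if (FitQuadratic(m_focus_X, m_focus_Y, a, b, c) &&       // step SP2
            FocusConditionSatisfied(a, b, travel)) {             // steps SP4/SP5
            const double m_focus_best = -b / (2.0 * a) + m_focus_z;
            moveZ(m_focus_best - readZ());                       // step SP7: go to the sharpest position
            return true;                                         // focusing succeeded
        }
        moveZ(Sp6RelativeMove(m_focus_Y, travel, cnt));          // step SP6: move and retry
    }
    return false;                                                // attempts exhausted
}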
While the above specification concludes with claims defining the preferred embodiments of the invention that are presented in conjunction with the specific embodiments disclosed, it is not intended to limit the invention to the specific embodiments disclosed. Various alterations and modifications will no doubt become apparent to those skilled in the art after having read the above description. Therefore, the appended claims should be construed to cover all such variations and modifications as fall within the true spirit and scope of the invention. Any and all equivalent ranges and contents within the scope of the claims should be considered to be within the intent and scope of the present invention.

Claims (29)

1. A focusing method for critical dimension measurement is characterized in that:
before measuring the critical dimension on the wafer, focusing is performed to make the wafer reach the focal plane of the camera: and fitting a second-order curve according to the position data of the movement of the camera and the acquired image definition data, and calculating the vertex coordinates of the second-order curve to obtain the clearest position of the image.
2. The method of claim 1, wherein:
repeatedly adjusting the position of a camera provided with the microscope in the vertical axis direction, recording the initial position and the initial image definition of a focusing initial point, and recording the real-time position and the real-time image definition of the camera after each position adjustment;
the position data comprises position difference values of a plurality of groups of real-time positions and initial positions, and the image definition data comprises definition difference values of a plurality of groups of real-time image definitions and initial image definitions.
3. The method of claim 2, wherein:
and after the camera adjusts the position each time, the position difference value and the definition difference value of the camera at the same position are respectively regarded as an abscissa value and an ordinate value which correspond to one point on the second-order curve at the same time.
4. The method of claim 2, wherein:
and when the real-time image definition and the initial image definition are calculated, an energy gradient function or a Laplace function is used as a definition evaluation function.
5. The method of claim 2, wherein:
and if the absolute value of any position difference value exceeds a specified travel value, ending the current position adjustment.
6. The method of claim 2, wherein:
the maximum number of times of adjustment in a vertical axis is specified, and the actual number of times of adjustment in which the camera repeatedly adjusts the position in the vertical axis direction is required not to exceed the maximum number of times.
7. The method of claim 2, wherein:
the clearest position of the image is the vertex coordinate of the second-order curve plus the initial position of the focusing starting point.
8. The method of claim 2, wherein:
and if the second-order coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than a defined focusing stroke maximum value, the focusing is considered to be successful.
9. The method of claim 2, wherein:
if the second-order coefficient of the second-order curve is not satisfied with any one of the conditions that the second-order coefficient is smaller than zero, the vertex coordinate is larger than zero and the vertex coordinate is smaller than a defined focusing stroke maximum value, the camera is moved on the vertical axis for a certain distance, and then the focus is searched.
10. The method of claim 9, wherein:
in the stage of repeatedly adjusting the position of the camera for multiple times, the difference between any two adjacent definition difference values obtained by successively adjusting the position of the camera is made; defining a variable term which changes with the increase of the position adjustment times, and the current variable term is equal to the current difference result added to the previous value; judging whether the variable item is less than zero;
if yes, moving up a distance and trying to focus;
if not, the focusing is tried after moving downwards for a certain distance.
11. The method of claim 10, wherein:
the distance that the camera moves up relative to the start position of focus is equal to: one half of the specified run value is divided by the current number of foci.
12. The method of claim 10, wherein:
the distance that the camera is moved down relative to the start position of focus is equal to: one specifying half of the run value.
13. The method according to claim 8 or 9, characterized in that:
the camera is moved to the focus relative distance, i.e. the position where the image is sharpest minus the current position of the camera.
14. The method of claim 2, wherein:
multiple focus attempts are performed, with the camera repeatedly adjusting position in the vertical axis direction in any single focus attempt.
15. The method of claim 2, wherein:
before repeatedly adjusting the position of the camera, the microscope lens is close to the wafer to a preset proportional value of a designated stroke value.
16. The method of claim 1, wherein:
a stepping motor drives the camera to move in the vertical axis direction, and a control unit controls the stepping motor to control the movement of the camera;
the control unit is also used for fitting a second order curve and calculating vertex coordinates.
17. A focusing method for critical dimension measurement, comprising:
performing a plurality of autofocus attempts before performing measurements on critical dimensions on a wafer;
in each attempt: repeatedly adjusting the position of the camera in the vertical axis direction for multiple times so as to capture the moving position data of the camera and corresponding image definition data;
fitting a second-order curve according to the position data and the image definition data;
judging whether the second-order curve meets a preset condition, if so, determining that focusing is successful; if not, the camera is moved on the vertical axis for a certain distance, and then the focus is searched.
18. The method of claim 17, wherein:
in each attempt: repeatedly adjusting the position of the camera in the vertical axis direction for multiple times, recording the initial position and the initial image definition of the focusing initial point, and recording the real-time position and the real-time image definition of the camera after each position adjustment;
the position data comprises position difference values of a plurality of groups of real-time positions and initial positions, and the image definition data comprises definition difference values of a plurality of groups of real-time image definitions and initial image definitions.
19. The method of claim 18, wherein:
and after the camera adjusts the position each time, respectively taking the position difference and the definition difference of the camera at the same position as an abscissa value and an ordinate value corresponding to one point on the second-order curve at the same time.
20. The method of claim 18, wherein:
and after the camera adjusts the position each time, if the absolute value of any position difference value exceeds a specified travel value, finishing the current position adjustment, and jumping out of the cycle of repeatedly adjusting the position of the camera for many times.
21. The method of claim 18, wherein:
in each attempt: the maximum number of times of adjustment in a vertical axis is specified, and the actual number of times of adjustment in which the camera repeatedly adjusts the position in the vertical axis direction is required not to exceed the maximum number of times.
22. The method of claim 18, wherein:
the clearest position of the image is the vertex coordinate of the second-order curve plus the initial position of the focus starting point.
23. The method of claim 18, wherein:
in each attempt: the preset conditions comprise that the coefficient of the quadratic term of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than a defined focusing stroke maximum value, and the focusing is considered to be successful if the preset conditions are met.
24. The method of claim 18, wherein:
in each attempt: the preset conditions comprise that the coefficient of the second-order term of the second-order curve is smaller than zero, the vertex coordinate is larger than zero, and the vertex coordinate is smaller than a defined focusing travel maximum value, and if any one of the preset conditions is not met, the camera is moved on the vertical axis for a certain distance to search the focus.
25. The method of claim 24, wherein:
in each attempt: in the stage of repeatedly adjusting the position of the camera for multiple times, the difference is made between any two adjacent definition difference values obtained by successively adjusting the position of the camera; defining a variable term which changes with the increase of the position adjustment times, and the current variable term is equal to the current difference result added to the previous value; judging whether the variable item is less than zero;
if yes, the camera moves upwards for a certain distance, and then automatic focusing is retried;
if not, the camera moves downwards for a distance, and then automatic focusing is tried again.
26. The method of claim 25, wherein:
the distance that the camera moves up relative to the start position of focus is equal to: one half of the specified run length value is divided by the current number of foci.
27. The method of claim 25, wherein:
the distance that the camera is moved down relative to the start position of focus is equal to: one specifying half of the run value.
28. The method according to claim 23 or 24, characterized in that:
the camera is moved to the focus relative distance, i.e. the position where the image is sharpest minus the current position of the camera.
29. The method of claim 17, wherein:
in each attempt: before repeatedly adjusting the position of the camera, the microscope lens approaches the wafer to a preset proportion value of a specified travel value.
CN202211129197.XA 2022-09-16 2022-09-16 Focusing method for critical dimension measurement Active CN115546114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211129197.XA CN115546114B (en) 2022-09-16 2022-09-16 Focusing method for critical dimension measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211129197.XA CN115546114B (en) 2022-09-16 2022-09-16 Focusing method for critical dimension measurement

Publications (2)

Publication Number Publication Date
CN115546114A true CN115546114A (en) 2022-12-30
CN115546114B CN115546114B (en) 2024-01-23

Family

ID=84727273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211129197.XA Active CN115546114B (en) 2022-09-16 2022-09-16 Focusing method for critical dimension measurement

Country Status (1)

Country Link
CN (1) CN115546114B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03153015A (en) * 1989-11-10 1991-07-01 Nikon Corp Method and apparatus for alignment
US20050109959A1 (en) * 2003-11-24 2005-05-26 Mitutoyo Corporation Systems and methods for rapidly automatically focusing a machine vision inspection system
TW201428418A (en) * 2012-11-09 2014-07-16 Kla Tencor Corp Method and system for providing a target design displaying high sensitivity to scanner focus change
CN105097579A (en) * 2014-05-06 2015-11-25 无锡华润上华科技有限公司 Measuring method, etching method, and forming method of semiconductor device
CN107197151A (en) * 2017-06-16 2017-09-22 广东欧珀移动通信有限公司 Atomatic focusing method, device, storage medium and electronic equipment
CN110646933A (en) * 2019-09-17 2020-01-03 苏州睿仟科技有限公司 Automatic focusing system and method based on multi-depth plane microscope
CN115020174A (en) * 2022-06-15 2022-09-06 上海精测半导体技术有限公司 Method for measuring and monitoring actual pixel size of charged particle beam scanning imaging equipment


Also Published As

Publication number Publication date
CN115546114B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
TWI572990B (en) Method of applying a pattern to a substrate, device manufacturing method and lithographic apparatus for use in such methods
US7477396B2 (en) Methods and systems for determining overlay error based on target image symmetry
TWI630636B (en) Method and apparatus for inspection
US7808643B2 (en) Determining overlay error using an in-chip overlay target
US20160223476A1 (en) Metrology method, metrology apparatus and device manufacturing method
JP3002351B2 (en) Positioning method and apparatus
KR20190137132A (en) Method and apparatus for optimization of lithography process
US11194258B2 (en) Method and apparatus for determining a fingerprint of a performance parameter
TW201907246A (en) Level sensor apparatus, method of measuring topographical variation across a substrate, method of measuring variation of a physical parameter related to a lithographic process, and lithographic apparatus
TWI625610B (en) Methods for controlling lithographic apparatus, lithographic apparatus and device manufacturing method
US8097473B2 (en) Alignment method, exposure method, pattern forming method, and exposure apparatus
JPH06349696A (en) Projection aligner and semiconductor manufacturing device using it
TWI672569B (en) Method for monitoring a characteristic of illumination from a metrology apparatus
US7095904B2 (en) Method and apparatus for determining best focus using dark-field imaging
CN115546114A (en) Focusing method for critical dimension measurement
US20180210332A1 (en) Spatial-frequency matched wafer alignment marks, wafer alignment and overlay measurement and processing using multiple different mark designs on a single layer
US6914666B2 (en) Method and system for optimizing parameter value in exposure apparatus and exposure apparatus and method
TW201714023A (en) Methods for controlling lithographic apparatus, lithographic apparatus and device manufacturing method
TWI747725B (en) Method for controlling a manufacturing process and associated apparatuses
TW202236007A (en) Method and apparatus for imaging nonstationary object
CN115547909A (en) Method for wafer definition positioning
CN110209016B (en) Exposure apparatus, exposure method, and method of manufacturing article
US20040075099A1 (en) Position detecting method and apparatus
JP2003203855A (en) Exposure method and aligner, and device manufacturing method
Xie et al. Defocus compensation scheme for femtosecond laser direct writing of large-area non-periodic micro-structures

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant