CN115546114B - Focusing method for critical dimension measurement - Google Patents

Focusing method for critical dimension measurement

Info

Publication number
CN115546114B
CN115546114B (application CN202211129197.XA)
Authority
CN
China
Prior art keywords
camera
focus
image
focusing
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211129197.XA
Other languages
Chinese (zh)
Other versions
CN115546114A (en)
Inventor
Tian Dongwei (田东卫)
Wen Renhua (温任华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meijie Photoelectric Technology Shanghai Co ltd
Original Assignee
Meijie Photoelectric Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meijie Photoelectric Technology Shanghai Co ltd
Priority to CN202211129197.XA
Publication of CN115546114A
Application granted
Publication of CN115546114B
Status: Active

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis › G06T7/0002 Inspection of images, e.g. flaw detection › G06T7/0004 Industrial image inspection
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10004 Still image; Photographic image
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/30 Subject of image; Context of image processing › G06T2207/30108 Industrial image inspection › G06T2207/30148 Semiconductor; IC; Wafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Testing Or Measuring Of Semiconductors Or The Like (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The invention relates to a focusing method for critical dimension measurement. Before a critical dimension on a wafer is measured, focusing is performed to bring the wafer into the focal plane of the camera: a second-order curve is fitted from the camera's movement-position data and the acquired image-sharpness data, and the vertex coordinates of that curve are computed to obtain the position of sharpest focus. This enables fast, accurate, and smooth focusing, and allows real-time feedback on the focusing condition of the region of interest and on whether the captured image has reached maximum sharpness.

Description

Focusing method for critical dimension measurement
Technical Field
The present invention relates generally to the field of integrated-circuit critical dimension measurement, and more particularly to a focusing method and focusing technique for critical dimension measurement.
Background
With the development of integrated-circuit processes, semiconductor devices and processes are becoming increasingly complex. To guarantee the accuracy of each step during semiconductor manufacturing, dimensional measurement of semiconductor structures is a necessary step. CD-SEM (critical-dimension scanning electron microscopy) is a commonly used measurement method; alternatively, optical critical-dimension (OCD) metrology can measure not only the CD of a photoresist-like pattern but also dimensions related to the pattern's profile. Whether optical CD metrology, scanning electron microscopy, or any other technique providing dimensional information about the semiconductor wafer is used, focusing is involved.
Critical dimension measurement depends heavily on whether the image of the object is sharp; if only a coarse, blurred image is available, the measurement will inevitably deviate. The difficulty lies in obtaining a fine image of the critical dimension. In the prior art this is often attempted by coarsely adjusting the illumination, and the scanning-electron-microscope image frequently remains blurred, so an accurate image cannot be obtained and measurement cannot proceed. Alternatively, the SEM image may look sharp to the eye while in fact falling short of optimal sharpness.
The fields of metrology equipment and lithographic equipment in the semiconductor industry both involve autofocus. The autofocus system of a measuring device is a key technology affecting measurement performance: focusing speed affects the throughput of the wafer production line, and focusing accuracy affects the quality of the finished product. If focusing accuracy is poor, products manufactured in volume will simply be rejected and scrapped. How to guarantee highly precise focusing, and thereby capture accurate images of the devices on a wafer, is the problem to be solved.
In order to ensure that critical dimensions meet their intended values, for example so that circuits do not overlap or interact improperly with each other, design rules define constraints such as the allowable spacing between devices and interconnect lines and the allowable linewidth. These design rules define critical ranges of line and space dimensions, such as the width of lines permitted in the fabricated circuit. Dimensional errors indicate instability in critical parts of the semiconductor process. Such errors may arise from many sources, for example lens curvature or aberrations in the optical system, uneven thickness of the mechanical, chemical, or anti-reflective resist layers, or the delivery of incorrect energy, such as the exposure radiation dose. It is therefore necessary to ensure that critical dimensions comply with predetermined specifications.
Beyond such measurement concerns, the most demanding requirement in metrology is a precise image. The problem is that the prior art offers no method for ensuring the fineness of the image, which still leaves room for improvement; without a fine image, subsequent attempts to improve the manufacturing process and reduce semiconductor process offsets cannot succeed. It is against these drawbacks that the present application proposes the following examples.
It should be noted that the foregoing description of the background art is provided only to facilitate a clear and complete description of the technical solutions of the present application and to aid understanding by those skilled in the art. The present application is not to be considered limited to such specific application scenarios merely because these schemes are set forth in the background section of the present application.
Disclosure of Invention
The application provides a focusing method for critical dimension measurement, wherein:
before the critical dimension on the wafer is measured, focusing is performed to bring the wafer into the focal plane of the camera: a second-order curve is fitted from the camera's movement-position data and the acquired image-sharpness data, and the vertex coordinates of the curve are computed to obtain the position of sharpest focus.
The method, wherein: the position of a camera equipped with a microscope is adjusted repeatedly along the vertical axis; the starting position of the focusing start point and the initial image sharpness are recorded, and after each position adjustment the camera's real-time position and real-time image sharpness are recorded; the position data comprise multiple position differences between the real-time position and the starting position, and the image-sharpness data comprise the corresponding sharpness differences between the real-time image sharpness and the initial image sharpness.
The method, wherein: after each position adjustment, the camera's position difference and sharpness difference at the same position are taken, respectively, as the abscissa and ordinate of a single point on the second-order curve.
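As a concrete illustration of this step, the sketch below fits a second-order curve to the (position difference, sharpness difference) pairs and computes the vertex abscissa. The function name and sample values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fit_focus_curve(position_diffs, sharpness_diffs):
    """Fit y = a*x^2 + b*x + c to (position diff, sharpness diff) samples
    and return the coefficients and the vertex abscissa."""
    a, b, c = np.polyfit(position_diffs, sharpness_diffs, 2)
    vertex_x = -b / (2.0 * a)  # abscissa of the parabola's vertex
    return (a, b, c), vertex_x

# Synthetic samples whose sharpest point lies at x = 1.5 (illustrative values)
xs = [-3.0, -1.5, 0.0, 1.5, 3.0]
ys = [-0.8 * (x - 1.5) ** 2 + 5.0 for x in xs]
coeffs, vertex = fit_focus_curve(xs, ys)
print(round(vertex, 3))  # 1.5
```

Adding this vertex abscissa to the starting position of the focusing start point then yields the sharpest position, as the method states.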
The method, wherein: when the real-time image sharpness and the initial image sharpness are computed, an energy-gradient function or a Laplacian function is used as the sharpness evaluation function.
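The two evaluation functions named here can be sketched in pure Python on a grey-level matrix; the function names and test images are illustrative assumptions, not the patent's implementation.

```python
def energy_gradient(img):
    """Energy-gradient sharpness: sum of squared horizontal and vertical
    first differences over the image (a 2-D list of grey levels)."""
    h, w = len(img), len(img[0])
    s = 0
    for y in range(h - 1):
        for x in range(w - 1):
            dx = img[y][x + 1] - img[y][x]
            dy = img[y + 1][x] - img[y][x]
            s += dx * dx + dy * dy
    return s

def laplacian_sharpness(img):
    """Laplacian sharpness: sum of squared 4-neighbour Laplacian responses."""
    h, w = len(img), len(img[0])
    s = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            s += lap * lap
    return s

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]  # strong local contrast
flat = [[128, 128, 128]] * 3                       # no contrast at all
print(energy_gradient(sharp) > energy_gradient(flat))          # True
print(laplacian_sharpness(sharp) > laplacian_sharpness(flat))  # True
```

Both functions grow with image contrast, which is why they serve as sharpness scores: a defocused image has smaller grey-level gradients and therefore a smaller value.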
The method, wherein: if the absolute value of any position difference exceeds a specified travel value, the current position adjustment ends; that is, the current position-adjustment loop is exited.
The method, wherein: a maximum number of adjustments along the vertical axis is specified, and the actual number of repeated position adjustments of the camera along the vertical axis must not exceed this maximum.
The method, wherein: the position of sharpest focus is the vertex coordinate of the second-order curve plus the starting position of the focusing start point. The vertex coordinate here may refer to the vertex abscissa.
The method, wherein: focusing is considered successful if the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than a defined maximum focusing travel.
The method, wherein: if any one of the conditions (quadratic coefficient less than zero, vertex coordinate greater than zero, vertex coordinate less than the defined maximum focusing travel) is not satisfied, the stage is first moved along the vertical axis to search for the focus. The stage referred to here generally carries the microscope, the camera, and the like.
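The three-part success test described in the two preceding paragraphs can be expressed compactly; the function name and values below are illustrative assumptions.

```python
def focus_succeeded(a, vertex_x, max_travel):
    """The fit counts as a successful focus only if the parabola opens
    downward (quadratic coefficient a < 0) and its vertex abscissa lies
    strictly between zero and the defined maximum focusing travel."""
    return a < 0 and 0 < vertex_x < max_travel

print(focus_succeeded(-0.8, 1.5, 10.0))   # True: valid downward parabola
print(focus_succeeded(0.2, 1.5, 10.0))    # False: curve opens upward
print(focus_succeeded(-0.8, 12.0, 10.0))  # False: vertex beyond max travel
```

An upward-opening parabola means the sampled sweep bracketed a sharpness minimum rather than a maximum, so the fit gives no usable focus and the pre-search stage movement described next is triggered.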
The method, wherein: during the stage of repeatedly adjusting the camera position, a difference is taken between every two adjacent sharpness differences obtained as the camera position is adjusted in sequence; a variable term is defined that changes as the number of position adjustments increases, the current value of the variable term being equal to its previous value plus the current difference result; whether the variable term is less than zero is then judged;
if so, the stage moves up a certain distance and focusing is retried;
if not, the stage moves down a certain distance and focusing is retried.
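This trend test can be sketched as follows, combined with the half-stroke move distances the method defines. All names, the sign convention (positive means up), and the sample values are illustrative assumptions.

```python
def pre_search_move(sharpness_diffs, stroke, attempt):
    """Decide which way to move the stage before retrying focus.
    Accumulate the differences of adjacent sharpness values; a negative
    accumulator means sharpness is trending down, so move up by
    (stroke / 2) / attempt; otherwise move down by stroke / 2."""
    trend = 0.0
    for prev, cur in zip(sharpness_diffs, sharpness_diffs[1:]):
        trend += cur - prev
    if trend < 0:
        return +(stroke / 2.0) / attempt  # move the stage up
    return -(stroke / 2.0)                # move the stage down

print(pre_search_move([5.0, 4.0, 2.5], stroke=8.0, attempt=2))  # 2.0 (up)
print(pre_search_move([1.0, 2.0, 3.5], stroke=8.0, attempt=2))  # -4.0 (down)
```

Dividing the upward step by the attempt count shrinks the search stride on each retry, so successive attempts home in on the focal plane instead of overshooting it.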
The method, wherein: the distance the stage moves up relative to the starting position of the focus equals half of the specified travel value divided by the current focus-attempt count.
The method, wherein: the distance the stage moves down relative to the starting position of the focus equals half of the specified travel value, or approximately half of it.
The method, wherein: the camera is moved by a relative focal distance, i.e. the vertex coordinate of the second-order curve minus the camera's current position. The vertex coordinate here may refer to the vertex abscissa.
The method, wherein: multiple focus attempts are performed, with the camera being repeatedly repositioned in the vertical axis direction in any single focus attempt.
The method, wherein: before the camera position is adjusted repeatedly, the microscope lens is brought toward the wafer to three-quarters of the specified travel value. Three-quarters is one example of a predetermined ratio value.
The method, wherein: a stepper motor drives the camera along the vertical axis, and a control unit controls the stepper motor and thereby the camera's movement;
the control unit is also used to fit the second-order curve and compute the vertex coordinates.
The application also provides another focusing method for critical dimension measurement, wherein:
multiple autofocus attempts are performed before the critical dimension on the wafer is measured;
in each attempt: the camera adjusts its position repeatedly along the vertical axis so as to acquire the camera's movement-position data and the corresponding image-sharpness data;
a second-order curve is fitted from the position data and the image-sharpness data;
it is judged whether the second-order curve satisfies a predetermined condition; if so, focusing is considered successful; if not, the stage is moved along the vertical axis to search for the focus. The stage referred to here generally carries the microscope, the camera, and the like.
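One possible orchestration of a single autofocus attempt is simulated end to end below with a toy camera model whose sharpness peaks at a known height. The class, parameter names, and the quadratic sharpness model are all illustrative assumptions, not the patent's implementation.

```python
import numpy as np

class SimCamera:
    """Toy stand-in for the Z-axis camera: sharpness peaks at z = 2.0."""
    def __init__(self, z=0.0):
        self.z = z
    def sharpness(self):
        return 10.0 - (self.z - 2.0) ** 2

def autofocus_attempt(cam, step=0.5, max_steps=8, stroke=5.0):
    """One focus attempt: sweep the camera in steps, record (position diff,
    sharpness diff) pairs, fit a second-order curve, check the predetermined
    condition, and report the fitted sharpest absolute position."""
    z0, s0 = cam.z, cam.sharpness()
    xs, ys = [], []
    for _ in range(max_steps):
        cam.z += step
        dx = cam.z - z0
        if abs(dx) > stroke:  # past the specified travel value: stop sweeping
            break
        xs.append(dx)
        ys.append(cam.sharpness() - s0)
    a, b, _ = np.polyfit(xs, ys, 2)
    vertex = -b / (2.0 * a)
    ok = a < 0 and 0 < vertex < stroke  # the predetermined condition
    return ok, z0 + vertex              # sharpest absolute position

ok, best_z = autofocus_attempt(SimCamera(z=0.0))
print(ok, round(best_z, 3))  # True 2.0
```

When `ok` is false, the surrounding retry loop would move the stage along the vertical axis (as described above) and call the attempt again.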
The method, wherein: in each attempt: the position of the camera is adjusted repeatedly along the vertical axis; the starting position of the focusing start point and the initial image sharpness are recorded, and after each position adjustment the camera's real-time position and real-time image sharpness are recorded; the position data comprise multiple position differences between the real-time position and the starting position, and the image-sharpness data comprise the corresponding sharpness differences between the real-time image sharpness and the initial image sharpness.
The method, wherein: after each position adjustment, the camera's position difference and sharpness difference at the same position are taken, respectively, as the abscissa and ordinate of a single point on the second-order curve.
The method, wherein: after each camera position adjustment, if the absolute value of any position difference exceeds the specified travel value, the current position adjustment ends and the loop of repeated camera position adjustments is exited.
The method, wherein: in each attempt: a maximum number of adjustments along the vertical axis is specified, and the actual number of repeated position adjustments of the camera along the vertical axis must not exceed this maximum.
The method, wherein: the position of sharpest focus is the vertex coordinate of the second-order curve plus the starting position of the focusing start point.
The method, wherein: in each attempt: the predetermined condition is that the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than the defined maximum focusing travel; focusing is considered successful when the predetermined condition is met.
The method, wherein: in each attempt: the predetermined condition is that the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than the defined maximum focusing travel; the stage is moved along the vertical axis to search for the focus when any part of the predetermined condition is not met.
The method, wherein: during the stage of repeatedly adjusting the camera position, a difference is taken between every two adjacent sharpness differences obtained as the camera position is adjusted in sequence; a variable term is defined that changes as the number of position adjustments increases, the current value of the variable term being equal to its previous value plus the current difference result; whether the variable term is less than zero is then judged;
if so, the stage moves up a certain distance and autofocus is retried;
if not, the stage moves down a certain distance and autofocus is retried.
The method, wherein: the distance the stage moves up relative to the starting position of the focus equals half of the specified travel value divided by the current focus-attempt count.
The method, wherein: the distance the stage moves down relative to the starting position of the focus equals half of the specified travel value.
The method, wherein: the camera is moved by a relative focal distance, i.e. the vertex coordinate of the second-order curve minus the camera's current position.
The method, wherein: in each attempt: before the camera position is adjusted repeatedly, the microscope lens is brought toward the wafer to a predetermined ratio of the specified travel value.
It should be noted that when measuring the critical dimension of a wafer, the images at different positions are not necessarily on the focal plane, so large errors appear in the measured values; a conventional microscope requires manual, repeated focusing and trimming of the working distance, resulting in low measurement efficiency and poor accuracy. The autofocus technique described here achieves fast, accurate, and smooth focusing; it responds in real time to the focusing condition of regions of interest (for example, the image-sharpness data of regions of interest on the wafer), reports whether the image has reached maximum sharpness, and allows judging whether the timing of the dimension measurement is reasonable, thereby overcoming the deficiencies of the prior art.
Drawings
So that the above-recited objects, features, and advantages of the present application can be understood in detail, a more particular description of the invention, briefly summarized below, is given with reference to the appended drawings.
Fig. 1 shows a stage carrying a wafer, together with a camera equipped with a microscope that can move up and down.
Fig. 2 shows a second-order curve obtained from camera data and fitting, used to judge whether focusing succeeded.
Fig. 3 shows an embodiment in which the camera moves up and down over multiple repeated autofocus attempts.
Fig. 4 shows the camera moving up and down to obtain multiple sets of position data and image-sharpness data.
Fig. 5 shows a second-order curve fitted from multiple sets of position data and image-sharpness data.
Fig. 6 is an example in which the sum of the gradient values of all pixels serves as the sharpness-evaluation value.
Fig. 7 shows the sum of squared gradients of the pixel points, computed via the Laplacian function, used as the evaluation function.
Detailed Description
The technical solutions of the invention will now be described clearly and completely in connection with the following embodiments. The described embodiments are only some, not all, of the possible embodiments; all other embodiments obtained on this basis by persons skilled in the art without inventive effort fall within the scope of the invention.
Referring to fig. 1, the background knowledge relevant to the present application is described first. In semiconductor fabrication, a wafer generally refers to a silicon wafer used to fabricate integrated circuits. The metrology stage, or motion stage 11, of the CD metrology apparatus is configured to carry the wafer 10. The microscope and camera CA cooperate, or are assembled together, to capture fine images of wafer detail. The microscope has high- and low-magnification lenses, and the magnification can be switched manually or automatically among a series of lenses LN: for example, from a high-power lens to a medium- or low-power lens, or in the opposite direction, from a low-power lens to a medium- or high-power lens. Such switching relationships include on-axis switching.
Referring to fig. 1, regarding the platform (chuck): it is a dedicated fixture for holding and carrying wafers in the various silicon-wafer production processes, and the motion stage 11 is mainly used to carry the wafer. Some documents also call such carriers a susceptor, a lifting mechanism, a wafer carrier tray or platform, a load-bearing platform, and so on. The motion stage is a carrying mechanism within the semiconductor equipment; the load-bearing stage referred to here includes the chuck structure. The motion stage can move along the X and Y axes of the coordinate system as required, and in some cases can rotate the wafer or move it up and down along the Z axis as required.
Referring to fig. 1, the platform motion-control module consists of an X axis, a Y axis, a θ axis, and the chuck. Before the measuring equipment measures the wafer's critical dimension, the platform motion-control module must move the chuck, thereby achieving motion control of the wafer. The θ axis can rotate; rotating the θ axis rotates the chuck, which is equivalent to adjusting the angle θ by controlling the rotation of the motion stage.
Referring to fig. 1, a critical dimension measuring apparatus in the semiconductor industry includes at least the motion stage 11 and a camera CA equipped with a microscope. The CD measuring device may be a modification of an existing device or an entirely new design. Since CD measurement apparatuses already exist in the semiconductor industry, their details are not repeated here; all or part of the technical features of prior-art CD measurement apparatuses may be applied to the measuring apparatus of the present application, and references herein to the CD measurement equipment include those prior-art features by default.
Referring to fig. 1, an image Image1 captured by the camera CA provides pixel coordinates. The stage referred to here typically carries the microscope, the camera CA cooperating or assembled with it, and the like.
Referring to fig. 1, the focusing Z-axis movement module of the camera CA consists of a Z axis that can move up and down. When a wafer is placed on the measurement platform, such as platform 11, the wafer must lie in the camera's focal plane for the field of view of camera CA to be sharp and high-resolution; the Z-axis movement module then moves up and down together with the camera and lens to find the focal plane at which the camera's view is sharpest, i.e. the focal plane in which the critical-dimension structures on the wafer lie.
Referring to fig. 1, regarding adjustment of the distance to the focal plane: the Z-axis stepper motor drives the camera up and down to adjust its distance from the focal plane. How the motor moves the camera is prior art; existing CD measurement devices essentially adopt this structure, so it is not described again here. Likewise, the motor and its microscope-equipped camera, which are well known in the art, will not be described again.
Referring to fig. 1, motorized-microscope technology is mature. Like a conventional manual microscope, a motorized microscope typically moves the observed sample with three degrees of freedom: horizontal movement along the X and Y axes and vertical movement along the Z axis. Moving the lens along the Z axis directly determines the object distance of the micro-optical system and thus the focusing and imaging quality. Some or all of the technical features of the motorized microscope can be applied to the microscope and its camera in the figure.
Referring to fig. 1, to keep the operating process controlled and stable, the Z axis, like the X and Y axes, requires high-precision positioning, but the design conditions for the Z-axis positioning system differ from those of the X and Y axes. On the one hand, gravity ensures the Z axis has no dead zone in the downward direction: the lens barrel is basically driven by a lead-screw-and-slider pair, and under gravity the slider remains in close contact with the screw, so the slider begins to move as soon as the screw rotates and no idle travel occurs. On the other hand, with a high-magnification, short-depth-of-field lens, a sharp image is observed only when the Z-axis height places the optical system exactly in focus. A slight vertical movement (only a few micrometers, against a total stroke of thousands of micrometers) leaves the defocused image very blurred, so the current Z-axis height cannot be measured from the image; moreover, where the sample leaves the field of view uncovered and entirely blank, the focal-plane region cannot be determined from the microscope image, so the Z-axis height cannot be measured at every position.
Referring to fig. 1, the term critical dimension (CD) is explained before proceeding. In processes such as photomask fabrication and photolithography for semiconductor integrated circuits, the industry deliberately designs special line patterns, called critical dimensions, that reflect the characteristic linewidth of the integrated circuit and are used to evaluate and control the pattern-processing precision of the process. Below, CD measurement is explained mainly with photolithography as the example, although CD measurement actually involves more steps. In industry, the term critical dimension may be replaced by the terms critical-dimension structure or critical-dimension mark.
Referring to fig. 1, in the integrated-circuit manufacturing process, photoresist is first coated on the wafer surface and then exposed through a photomask, followed by a post-exposure bake. For positive-tone chemically amplified photoresists, the bake triggers a deprotection reaction that makes the resist in the exposed areas more soluble in the developer, so the exposed resist can be removed during subsequent development to produce the desired resist pattern. Post-development inspection may follow: the critical dimensions of the resist pattern are measured after development, for example by electron microscopy or optical metrology, to determine whether they meet specification. Only if the specification is met is the etching process performed to transfer the resist pattern onto the wafer; this alone demonstrates the importance of the measurement.
Referring to fig. 1, efficient and accurate metrology is the measuring ruler that keeps a semiconductor mass-production line running smoothly, and metrology plays a critical role in monitoring and preventing process deviations. This application explains the use of critical dimension measurement in large-scale integrated-circuit production and the related problems. CD measurement depends heavily on whether the image of the object is sharp; if the image is only a rough blur, the measurement will clearly deviate. This problem is particularly pronounced at the micrometer and even nanometer scales.
Referring to fig. 1, the present application relates to an autofocus method and scheme for CD metrology; focusing is critical to the definition of critical dimensions and to the CD images captured under these conditions. It is therefore necessary to introduce image-quality-based sharpness evaluation: sharpness is used to analyze quantitatively whether an image is sufficiently sharp, and images that fail the sharpness requirement are clearly unusable in the nano- and micro-scale metrology fields. During image processing, the image is treated as a two-dimensional discrete matrix, and gradient functions can extract the image's grey-level information to judge its sharpness.
Referring to fig. 1, the technical problem to be solved by the present application can be summarized as follows: during measurement, the critical dimension of the wafer is not necessarily in the focal plane at every imaged position, which easily causes large errors in the measured values. In the traditional scheme, manual repeated focusing and continual trimming of the microscope's working distance lead to low efficiency and very poor accuracy. The autofocus technique disclosed in this application achieves fast, accurate, and smooth focusing and reflects the focusing condition of a region of interest in real time.
Referring to fig. 1, the approach to the technical problem is as follows: given the complex flow and slow speed of prior-art CD measurement (for example, the need to refocus and repeatedly trim the measurement distance), the focusing flow within the CD measurement step must be simplified so as to improve measurement efficiency per unit time, reduce the time a wafer spends in the measurement stage of the production line, and improve CD measurement accuracy.
Referring to fig. 1, regarding the autofocus implementation: the system can be divided into an image-acquisition module and a focusing module. The Z-axis movement module adjusts the focus of the acquired image, and image-algorithm processing then judges whether the current position is on the focal plane so as to drive the Z axis (usually the up-down axis) to move and adjust.
Referring to fig. 1, regarding adjustment of the distance to the focal plane: the Z-axis stepper motor drives the camera up and down to adjust its distance from the focal plane. When a wafer is placed on the measurement platform, it must lie in the camera's focal plane for the field of view to be sharp and high-resolution, and the Z-axis movement module moves up and down with the camera and lens to find the focal plane at which the camera's view is sharpest.
Referring to fig. 1, regarding the Z-axis movement module: it is built around a stepping motor, whose running speed and position can be accurately controlled in open loop, without feedback, so that it can replace a servo motor where the running speed and power requirements are low. The step size of a stepping motor is essentially immune to various interference factors, such as the magnitude of the voltage or current, the voltage-current waveform, and temperature changes.
Referring to fig. 1, regarding the travel of the Z-axis movement module: the Z-axis is implemented by a stepping motor, and its minimum stroke is the linear displacement of one pulse, which can be calculated as follows.
First, determine the step angle of the stepping motor, which is typically marked on the motor. For example, a step angle of 1.8 degrees means 360/1.8 = 200, that is, 200 pulses are required for one revolution of the motor.
Second, determine whether the motor driver uses subdivision (microstepping) and check the subdivision factor, which can be confirmed by observing the DIP switches on the driver. For example, if the driver is set to 4 subdivisions, then combining this with the aforementioned 200 pulses gives 200 × 4 = 800, i.e. 800 pulses are required for one revolution of the motor.
Furthermore, determine the length of travel per revolution of the motor shaft, i.e. the lead: if the transmission is a screw rod, the lead equals the thread pitch; if it is a gear-rack transmission, the lead equals the pitch circle circumference (π·m·z).
Finally, the lead divided by the pulses per revolution (lead/pulses) equals the linear displacement of one pulse. In general, the commanded movement distance of the stepping motor must be greater than or equal to this minimum stroke, otherwise the stepping motor will not respond.
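The minimum-stroke calculation above can be sketched as follows. This is only an illustrative sketch; the 1.8-degree step angle, 4× subdivision and 4 mm lead are example numbers, not values from the application.

```python
# Minimum-stroke (one-pulse displacement) calculation sketch.
# Step angle, subdivision and lead values below are illustrative assumptions.

def min_stroke_mm(step_angle_deg: float, microsteps: int, lead_mm: float) -> float:
    """Linear displacement of one pulse = lead / pulses per revolution."""
    pulses_per_rev = (360.0 / step_angle_deg) * microsteps
    return lead_mm / pulses_per_rev

# 1.8-degree motor with 4x subdivision -> 200 * 4 = 800 pulses per revolution,
# so a 4 mm lead gives 4 / 800 = 0.005 mm per pulse.
print(min_stroke_mm(1.8, 4, 4.0))
```

A commanded move is then rounded to a whole number of pulses, which is why moves shorter than this value produce no motion.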
Referring to fig. 1, in an alternative example, assume that the minimum stroke of the Z-axis movement module is 0.000078 mm, i.e. the camera stage satisfies the condition that the minimum stroke is 0.000078 mm. This stroke varies between different stages.
Referring to fig. 1, a single movement step is defined in an alternative example (e.g., onceStep = 0.000078 mm).
Referring to fig. 1, an autofocus travel is defined in an alternative example. This parameter is determined according to the flatness of the product to be measured, such as a wafer, and is essentially the maximum stroke taken when the Z-axis moves up and down to find the focal plane. For example, autoFocusTravel may be assumed to take 0.04 mm.
Referring to fig. 1, a maximum number of autofocus attempts autoFocusTryCnt is defined in an alternative example.
Referring to fig. 1, the number of autofocus attempts is defined as cnt in an alternative example; cnt is counted continuously across the attempt cycles.
Referring to fig. 1, in an alternative example, the current position on the Z-axis is defined as Zc.
Referring to fig. 1, in an alternative example, a maximum number of autofocus, max_frames_count, is defined, where the maximum number of autofocus is the maximum number of Z-axis adjustments.
Referring to fig. 1, in an alternative example, the Z-axis adjustment number is defined as m_focus_cnt.
Referring to fig. 1, a method MoveZDirect(onceStep) is defined in an alternative example. For example, the process of moving one step down along the vertical axis (Z-axis) is MoveZDirect(onceStep); conversely, the process of moving one step up along the vertical axis is MoveZDirect(-onceStep). Positive and negative values inside the brackets of the method represent downward and upward movement, respectively.
Referring to fig. 1, the first array (m_focus_X[]) in the alternative example is a statistic of the Z-axis position change.
Referring to fig. 1, the second array (m_focus_Y[]) in the alternative example is a statistic of the image sharpness change.
Referring to fig. 1, a Z-axis position m_focus_z of a focus start point is defined in an alternative example.
Referring to fig. 1, an image sharpness m_focus_def of a focus start point is defined in an alternative example.
Referring to fig. 1, focusing referred to in this application includes the following calculation process.
Referring to fig. 1, a temporary variable up_load used in the calculation is defined, initially double up_load = travel (the specified travel value). double is a computer-language type, namely the double-precision floating-point type. The present application may run on a computer, a server, or a similar processing unit. Other alternatives for the processing unit include: a field programmable gate array, a complex programmable logic device or a field programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor or integrated circuit, or software/firmware stored in memory, and the like. The notation double in front of a value indicates that the value is of double-precision floating-point type; hereinafter, int is the identifier for defining an integer-type variable.
Referring to fig. 1, before performing metrology on critical dimensions, multiple autofocus attempts are performed; repeated autofocus attempts can be exemplified in computer language as for (cnt = 0; cnt < autoFocusTryCnt; cnt++). The count cnt increases from an initial value of zero until the maximum number of autofocus attempts autoFocusTryCnt is reached; that is, cnt stops increasing when the condition cnt < autoFocusTryCnt is no longer satisfied. The self-increment operation on cnt is written in computer language as cnt++.
Referring to fig. 1, in each execution of an autofocus attempt: the camera and its associated equipment (e.g., a stage carrying a microscope) are repeatedly repositioned in the vertical, i.e. Z-axis, direction, and this repeated repositioning can be exemplified in computer language as for (m_focus_cnt = 0; m_focus_cnt < max_frames_count; ++m_focus_cnt). The number of camera adjustments on the vertical axis is m_focus_cnt. The for statement is a loop statement.
Referring to fig. 1, the Z-axis adjustment count m_focus_cnt increases from an initial value of zero until the maximum number of Z-axis adjustments max_frames_count is reached; that is, if the condition m_focus_cnt < max_frames_count is not satisfied during the repeated adjustment in the Z-axis direction, m_focus_cnt stops increasing. The self-increment of m_focus_cnt is written in computer language as ++m_focus_cnt.
Referring to fig. 1, in each execution of an autofocus attempt: before iteratively adjusting the camera position, the microscope lens is preferably brought toward the sample by a predetermined ratio (e.g., three-quarters) of the specified travel value (e.g., travel). In an alternative embodiment, before iteratively adjusting the camera position, the lens first moves 3/4 of the stroke relative to the sample, i.e. the wafer, exemplified in computer language as MoveZDirect(-up_load * 3 / 4), the negative sign indicating upward movement.
Referring to fig. 1, image sharpness is denoted F; for example, double def = F. As described above, sharpness evaluation is based on image gradients: sharpness quantitatively indicates whether the image is sufficiently clear, and if the image quality does not meet the sharpness requirement, the inferior image cannot be used in micro-scale or nano-scale wafer measurement. The judgment of image sharpness is explained below with mathematical expressions. def is the real-time image sharpness.
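The application denotes the gradient-based sharpness metric F without giving its formula, so the following is only an assumed sketch using a common squared-difference (Tenengrad-style) measure: an in-focus image has large gray-level changes at edges and therefore a larger score.

```python
# Hypothetical gradient-based sharpness measure; the metric F in the text is
# not specified, so this squared-difference form is an illustrative assumption.

def sharpness(image):
    """Sum of squared horizontal and vertical gray-level differences."""
    h, w = len(image), len(image[0])
    f = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                f += (image[y][x + 1] - image[y][x]) ** 2  # horizontal gradient
            if y + 1 < h:
                f += (image[y + 1][x] - image[y][x]) ** 2  # vertical gradient
    return f

blurred = [[10, 12, 14], [12, 14, 16], [14, 16, 18]]  # gentle gray ramp
sharp   = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]   # strong edges
print(sharpness(sharp) > sharpness(blurred))
```

Any monotone gradient-based metric would serve the same role in the focusing loop, since only relative changes in sharpness are used.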
Referring to fig. 1, the current Z-axis position is denoted Zc. The real-time Z-axis coordinate is expressed in terms of z_pos.
Referring to fig. 1, the real-time Z-axis coordinate z_pos is acquired; for example, double z_pos = Zc. Note that on the first adjustment of the iterative camera positioning, exemplified in computer language by m_focus_cnt == 0, i.e. the focus start point, the values of m_focus_z and m_focus_def are assigned. The first adjustment, or focus start point, can be exemplified in computer language as if (m_focus_cnt == 0) { m_focus_z = z_pos; m_focus_def = def; }. Here m_focus_z is the Z-axis position of the focus start point and m_focus_def is the image sharpness of the focus start point.
Referring to fig. 1, the x-coordinate of the quadratic function or second-order curve is the change in Z-axis position. The camera movement position data can be extracted in computer language as m_focus_X[m_focus_cnt] = z_pos - m_focus_z, where the array of x-coordinates of the second-order curve comprises m_focus_X[m_focus_cnt].
Referring to fig. 1, the y-coordinate of the quadratic function or second-order curve is the change in image sharpness. Likewise, the captured image sharpness data can be exemplified in computer language as m_focus_Y[m_focus_cnt] = def - m_focus_def, where the array of y-coordinates of the second-order curve comprises m_focus_Y[m_focus_cnt].
Referring to fig. 1, when the moving distance exceeds the specified travel, the adjustment ends. That is, if the distance the camera has moved on the Z axis exceeds the specified travel, the current round of Z-axis position adjustment ends. Exceeding the travel can be exemplified in computer language as if (Math.abs(z_pos - m_focus_z) > Math.abs(travel)) break. Math.abs denotes the absolute value of a number, such as z_pos - m_focus_z or travel. break indicates jumping out of the current for loop, i.e. the loop in which m_focus_cnt is incremented. After jumping out, m_focus_cnt no longer increases until the next round of autofocus attempts begins; when this situation occurs, m_focus_cnt has likely not yet reached max_frames_count.
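The inner adjustment loop described above (record the start point, step the Z axis, accumulate position and sharpness changes, and break once the travel is exceeded) can be sketched as follows. The hardware-facing callables get_z, get_sharpness and move_z are placeholders for the real stage and camera interfaces, not names from the application.

```python
# Sketch of one autofocus attempt's inner loop. get_z / get_sharpness / move_z
# stand in for real hardware calls (assumed interfaces, not from the source).

def collect_focus_data(get_z, get_sharpness, move_z,
                       once_step, travel, max_frames_count):
    m_focus_x, m_focus_y = [], []
    m_focus_z = get_z()            # Z position of the focus start point
    m_focus_def = get_sharpness()  # image sharpness at the focus start point
    for _ in range(max_frames_count):
        z_pos, defv = get_z(), get_sharpness()
        m_focus_x.append(z_pos - m_focus_z)   # Z-position change
        m_focus_y.append(defv - m_focus_def)  # sharpness change
        if abs(z_pos - m_focus_z) > abs(travel):
            break                             # moved past the specified travel
        move_z(once_step)                     # one step further down
    return m_focus_x, m_focus_y, m_focus_z, m_focus_def
```

In use, the returned arrays play the roles of m_focus_X[] and m_focus_Y[] for the curve fit that follows.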
Referring to FIG. 1, after the Z axis is adjusted multiple times, the camera moving position data and corresponding image sharpness data can be captured, wherein the position data for multiple times of position adjustment comprises m_focus_X [ m_focus_cnt ], and the image sharpness data for multiple times of position adjustment comprises m_focus_Y [ m_focus_cnt ]. The Z-axis adjustment is denoted by MoveZDirect.
Referring to FIG. 1, statistics of Z-axis position variation are captured (first class array m_focus_X [ ]).
Referring to fig. 1, statistics of the amount of change in the sharpness of the image (second class array m_focus_y [ ]) are captured.
Referring to fig. 1, focused data m_focus_x [ ] and m_focus_y [ ] are fitted to a second order curve.
Referring to fig. 1, the second-order curve is represented by the formula y = ax² + bx + c.
Referring to fig. 1, the vertex coordinate -b/(2*a), i.e. the abscissa value of the vertex of the second-order curve, is calculated.
Referring to fig. 1, the clearest image position m_focus_best is calculated: the vertex abscissa plus the focus start Z-axis position yields the clearest image position, m_focus_best = -b/(2*a) + m_focus_z. The clearest image position is thus related both to the vertex of the second-order curve and to the Z-axis position m_focus_z of the focus start point.
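The fit-and-vertex step can be sketched as below: a least-squares fit of y = a·x² + b·x + c to the collected (m_focus_X, m_focus_Y) data, then -b/(2a) plus the start position. The normal-equation solver is only a self-contained sketch; the application does not specify the fitting routine (a library call such as numpy.polyfit would do the same job).

```python
# Sketch of the curve-fitting step: least-squares quadratic fit via normal
# equations, then best focus = vertex abscissa + focus start position.

def fit_quadratic(xs, ys):
    """Return (a, b, c) minimizing sum((a*x^2 + b*x + c - y)^2)."""
    n = len(xs)
    s = [sum(x ** k for x in xs) for k in range(5)]                 # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], n,    t[0]]]
    for i in range(3):                       # Gauss-Jordan with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * w for v, w in zip(m[r], m[i])]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return a, b, c

def best_focus(xs, ys, m_focus_z):
    a, b, c = fit_quadratic(xs, ys)
    return -b / (2 * a) + m_focus_z          # vertex abscissa + start position

xs = [0.0, 0.01, 0.02, 0.03, 0.04]
ys = [-(x - 0.02) ** 2 for x in xs]          # toy data peaking at x = 0.02
print(best_focus(xs, ys, 5.0))               # ~5.02 for this toy data
```

Note that the success checks that follow (a < 0, vertex inside the travel) guard against using a vertex computed from a badly conditioned or upward-opening fit.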
Referring to fig. 1, focusing is considered successful if the quadratic coefficient of the second-order curve y = ax² + bx + c is smaller than zero, i.e. a < 0; the vertex coordinate is greater than zero, i.e. (-b/(2*a)) > 0; and the vertex coordinate is smaller than the defined maximum focusing travel, i.e. (-b/(2*a)) < autoFocusTravel. In computer language this is exemplified as if (a < 0 && (-b/(2*a)) > 0 && (-b/(2*a)) < autoFocusTravel) break. Here break indicates successful focusing, with no need to move the stage to find the focus. Note that the predetermined conditions, including the three above, must be satisfied simultaneously to indicate successful focusing; if any one of them fails, focusing is unsuccessful.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): the predetermined conditions include that the quadratic coefficient of the second-order curve is smaller than zero (a < 0), that the vertex coordinate is greater than zero ((-b/(2*a)) > 0), and that the vertex coordinate is smaller than the defined maximum focusing travel ((-b/(2*a)) < autoFocusTravel). If any of the predetermined conditions fails, the camera is moved a distance on the vertical axis and the focus is sought again. In other words, failing the conditions means the focus is not within the current stroke, and the stage must be moved to search for the focus again.
Referring to fig. 1, dir is defined in an alternative example as the sum of the differences between adjacent image sharpness values. In the initial state, for example, double dir = 0. The Z-axis adjustment count is m_focus_cnt. In the stage where the camera position is adjusted repeatedly, the differences between any two adjacent sharpness changes obtained from successive position adjustments are computed. Two adjacent sharpness changes are denoted m_focus_Y[m] and m_focus_Y[m+1], and their difference is m_focus_Y[m+1] - m_focus_Y[m]. m is defined as an integer variable smaller than the Z-axis adjustment count m_focus_cnt; m in fact indexes the position adjustments.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): in the stage where the camera position is repeatedly adjusted on the Z axis, the differences m_focus_Y[m+1] - m_focus_Y[m] between adjacent sharpness changes obtained from successive adjustments are computed. As the camera position keeps being adjusted, a variable term is defined that changes as the adjustment count (m or m_focus_cnt) increases: the current variable term equals its previous value plus the current difference. The evolution of the variable term dir can be exemplified in computer language as for (int m = 0; m < m_focus_cnt - 1; m++) { dir += m_focus_Y[m+1] - m_focus_Y[m]; }. The expression dir += m_focus_Y[m+1] - m_focus_Y[m] means: the current variable term dir equals its previous value plus the current difference result m_focus_Y[m+1] - m_focus_Y[m]. In other words, dir is the sum of the differences between adjacent image sharpness values, which is the same meaning.
Referring to fig. 1, it is necessary to determine whether the variable term of each change is less than zero. If yes, the camera or the workbench moves upwards for a certain distance and then tries to focus; if not, the camera or the workbench moves down for a certain distance and then tries to focus.
Referring to fig. 1, if the variable term becomes less than zero, the camera may be moved up and focusing attempted again. For example, if (dir < 0) up_load = travel / (2 * (cnt + 1)); i.e. when dir is below zero the camera moves up a distance for refocusing. The distance the camera moves up relative to the focus start position equals one half of the specified travel value (travel) divided by the current focusing count (the current focusing count is denoted cnt + 1; note that the first attempt corresponds to cnt = 0, so the current count is written as cnt + 1 for ease of understanding).
Referring to fig. 1, if the variable term becomes not less than zero (the case where dir < 0 does not hold), the camera is moved down by half a stroke relative to the focus start position and focusing is attempted again. Relative to if (dir < 0), the example is else up_load = travel / 2. That is, the camera moves down a distance relative to the focus start position equal to one half of the specified travel value (travel).
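The retry decision described in the last few paragraphs can be sketched as one small function: accumulate dir over adjacent sharpness differences, then pick the next offset from the focus start position. The sign handling (up vs. down, i.e. the argument to MoveZDirect) is left to the caller; the function only returns the magnitude, as in the up_load examples.

```python
# Sketch of the refocusing decision after a failed attempt: dir is the sum of
# adjacent sharpness differences; dir < 0 (sharpness falling) -> move up by
# travel/(2*(cnt+1)); otherwise -> move down by travel/2.

def next_up_load(m_focus_y, travel, cnt):
    d = 0.0
    for m in range(len(m_focus_y) - 1):
        d += m_focus_y[m + 1] - m_focus_y[m]  # adjacent sharpness difference
    if d < 0:
        return travel / (2 * (cnt + 1))       # up-shift shrinks with each retry
    return travel / 2                         # fixed half-travel down-shift

print(next_up_load([0.0, -0.1, -0.3], 0.04, 1))  # falling sharpness, 2nd try
print(next_up_load([0.0, 0.2, 0.5], 0.04, 0))    # rising sharpness
```

Dividing by the attempt count on the upward branch makes successive retries probe progressively smaller offsets above the start point.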
Referring to fig. 1, multiple autofocus attempts have now been performed. The iteration of autofocus attempts can be exemplified in computer language as for (cnt = 0; cnt < autoFocusTryCnt; cnt++). When no further focusing attempt is performed, or after the attempt cycle ends, the camera is moved by the relative distance dis to the focus, i.e. the vertex position minus the current Z-axis position. This is represented in an alternative example by the method MoveZDirect(dis), with double dis = m_focus_best - Zc. Autofocus is then considered complete.
Referring to fig. 2, the focusing method for critical dimension measurement includes steps SP1 to SP7. Step SP1 collects image sharpness data and Z-axis position data of the camera. The Z-axis position data includes m_focus_X[m_focus_cnt], a class of data in array form. The image sharpness data includes m_focus_Y[m_focus_cnt], also in array form, which can be extracted from the image information Image1 captured by the camera.
Referring to fig. 2, step SP2 mainly fits a second-order curve from the camera movement position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt], and calculates the vertex coordinates of the second-order curve, e.g. the abscissa value -b/(2*a) of the vertex of the curve y = ax² + bx + c. Strictly speaking, the vertex coordinates also include an ordinate value (4ac - b²)/(4*a), but this application focuses on the abscissa value of the vertex rather than its ordinate value, and colloquially refers to the abscissa value -b/(2*a) simply as the vertex coordinate; therefore, when this application refers to the vertex coordinate, the abscissa value of the vertex is meant.
Referring to fig. 2, step SP3 calculates the image clearest position m_focus_best= -b/(2*a) +m_focus_z. The vertex coordinates plus the focus start Z-axis position m_focus_z yields the position where the image is the clearest.
Referring to fig. 2, step SP4 judges whether focusing succeeded: focusing is considered successful if the quadratic coefficient of the second-order curve is smaller than zero, the vertex coordinate is greater than zero, and the vertex coordinate is smaller than the defined maximum focusing travel. A positive determination is indicated by the focusing success flag of step SP5.
Referring to fig. 2, step SP4 judges whether focusing succeeded: if any of the conditions, namely the quadratic coefficient of the second-order curve smaller than zero, the vertex coordinate greater than zero, and the vertex coordinate smaller than the maximum focusing travel, is not satisfied, the camera is moved a distance on the vertical axis and the focus is sought again. A negative determination is indicated by step SP6, in which case the stage or camera must be moved to find the focus.
Referring to fig. 2, the negative determination result is represented by step SP6: in step SP6, in the stage where the camera position is adjusted repeatedly, the differences between adjacent sharpness changes obtained from successive adjustments are computed (for example, the adjacent sharpness changes m_focus_Y[m+1] and m_focus_Y[m] are differenced). A variable term dir is defined that changes as the adjustment count increases, calculated as follows: the current variable term dir equals its previous value plus the current difference result (e.g., m_focus_Y[m+1] minus m_focus_Y[m]). Finally, it is judged whether each updated variable term is less than zero (i.e., whether if (dir < 0) holds). One key role of this is to avoid using out-of-focus images, i.e. to avoid computing a sharpness difference between an out-of-focus real-time image and the initial image sharpness at normal focus. It prevents the sharpness difference from being evaluated, on an image-gradient basis, between a real-time image whose edge-pixel gray values change little and an initial image whose edge-pixel gray values change greatly, thereby avoiding errors in the second-order curve. Such errors are hidden and hard to perceive. An image with large changes in edge-pixel gray values is sharp and has larger gradient values than an image with smaller changes.
Referring to fig. 2, if if (dir < 0) is determined to be true, focusing is attempted after moving up a certain distance. The distance the camera moves up relative to the focus start position equals one half of the specified travel value (travel) divided by the current focusing count, i.e. up_load = travel / (2 * (cnt + 1)). up_load is the argument passed to MoveZDirect.
Referring to fig. 2, otherwise (the judgment if (dir < 0) is negative), focusing is attempted after moving down a certain distance. The camera moves down a distance relative to the focus start position equal to one half of the specified travel value (travel), i.e. up_load = travel / 2. up_load is the argument passed to MoveZDirect.
Referring to fig. 2, step SP7 is performed after step SP5 or step SP6, but SP7 is not required. If step SP7 is performed, the camera or stage is moved by the relative distance dis to the focus, i.e. the previously computed clearest image position m_focus_best minus the current Z-axis position Zc. MoveZDirect(dis) represents the process of moving the camera or stage by the relative distance dis, with double dis = m_focus_best - Zc. Since the clearest image position is closely related to the vertex coordinate, moving by the relative distance dis can colloquially be understood as the vertex position minus the current Z-axis position. Autofocus then ends.
Referring to fig. 2, the process of acquiring image sharpness data in step SP1 is implemented by moving the Z axis, because moving the camera position causes changes in m_focus_Y[m_focus_cnt] = def - m_focus_def, and the change in m_focus_Y provides the material, or source data, for the ordinate of the fitted second-order curve.
Referring to fig. 2, the process of collecting camera position data in step SP1 is implemented by moving the Z axis, because moving the camera position causes changes in m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the change in m_focus_X provides the material, or source data, for the abscissa of the fitted second-order curve.
Referring to fig. 2, step SP1 requires the camera equipped with the microscope to repeatedly adjust its position in the vertical axis direction, to record the start position (m_focus_z) and initial image sharpness (m_focus_def) of the focus start point, and to record the real-time position (z_pos) and real-time image sharpness (def) after each adjustment of the camera. The abscissa of the fitted second-order curve, that is, the x-coordinate, is then the Z-axis position change m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the ordinate, that is, the y-coordinate, is the image sharpness change m_focus_Y[m_focus_cnt] = def - m_focus_def.
Referring to fig. 2, the step SP1 position data includes multiple sets of position differences between the real-time position and the start position. For example, the position data includes the position difference m_focus_X[0] = z_pos0 - m_focus_z, where z_pos0 is the actual real-time position when m_focus_cnt = 0; m_focus_X[1] = z_pos1 - m_focus_z, where z_pos1 is the actual real-time position when m_focus_cnt = 1; m_focus_X[2] = z_pos2 - m_focus_z, where z_pos2 is the actual real-time position when m_focus_cnt = 2; and so on. A sufficient amount of abscissa information is provided as m_focus_cnt increases.
Referring to fig. 2, the step SP1 image sharpness data includes multiple sets of sharpness differences between the real-time image sharpness and the initial image sharpness. The sharpness difference m_focus_Y[0] = def0 - m_focus_def, where def0 is the real-time image sharpness when m_focus_cnt = 0; m_focus_Y[1] = def1 - m_focus_def, where def1 is the real-time sharpness captured when m_focus_cnt = 1; m_focus_Y[2] = def2 - m_focus_def, where def2 is the real-time sharpness captured when m_focus_cnt = 2; and so on. A sufficient amount of ordinate information is provided as m_focus_cnt increases.
Referring to fig. 2, after each adjustment of the camera position in step SP1, the position difference and the sharpness difference obtained with the camera at the same position are respectively regarded as the abscissa and ordinate values of one point on the second-order function or curve. For example, the position difference m_focus_X[1] and the sharpness difference m_focus_Y[1], obtained with the camera at the same position after the adjustment at m_focus_cnt = 1, are regarded as the abscissa and ordinate of the same point on the second-order curve; likewise, the position difference m_focus_X[2] and the sharpness difference m_focus_Y[2] at m_focus_cnt = 2 are regarded as the abscissa and ordinate of the same point. Note that def - m_focus_def is the sharpness difference, i.e. the amount of sharpness change.
Referring to fig. 2, step SP1 ends the current position adjustment if the absolute value of any position difference exceeds the specified travel value (travel); that is, the current round of Z-axis position adjustment ends and m_focus_cnt stops counting.
Referring to fig. 2, step SP1 specifies the maximum number of adjustments max_frames_count on the Z axis, and the actual number of adjustments m_focus_cnt, over which the camera repeatedly adjusts its position in the vertical direction, must remain smaller than this maximum. The maximum number of autofocus adjustments, i.e. the maximum number of Z-axis adjustments, is defined as max_frames_count. This prevents the loop from adjusting the position endlessly without being able to exit, and keeps the measurement process from falling into ceaseless, never-ending adjustment.
Referring to fig. 3, this embodiment is a further optimization on the basis of fig. 2, requiring multiple autofocus attempts (counted by cnt) before measurements are performed on critical dimensions on the wafer; in an alternative example, a maximum number of autofocus attempts autoFocusTryCnt is defined. As shown, the actual number cnt of repeated autofocus attempts must be smaller than the maximum number autoFocusTryCnt. Each autofocus attempt, or any single attempt, includes the flow of steps SP1 to SP5 in fig. 2, or a single attempt includes the flow of steps SP1 to SP6. Step SP7 of fig. 2 may still be used after each autofocus attempt or after any single attempt ends.
Referring to fig. 3, in the focusing method for critical dimension measurement: multiple autofocus attempts are performed before measurements are made on critical dimensions on the wafer (focusing continues as long as cnt < autoFocusTryCnt). In each autofocus attempt (e.g., cnt = 0, 1, 2, 3, etc.), the camera must be adjusted repeatedly in the vertical direction (continuing as long as m_focus_cnt < max_frames_count) to capture the camera movement position data and the corresponding image sharpness data. The number of focusing attempts is recorded with cnt, and each attempt performed requires cnt to self-increment. The number of position adjustments is recorded with m_focus_cnt, and each adjustment requires m_focus_cnt to self-increment. For each value that cnt takes, the camera performs m_focus_cnt adjustments in the Z-axis direction.
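The nested-loop structure just described (an outer attempt loop bounded by autoFocusTryCnt, each attempt internally bounded by max_frames_count) can be sketched at the top level as follows. run_attempt is a hypothetical placeholder for one attempt's collect-fit-judge sequence, not a name from the application.

```python
# Top-level sketch of the nested autofocus loops: up to auto_focus_try_cnt
# attempts; run_attempt (assumed callable) performs one attempt's Z-axis
# adjustments, curve fit and success check, returning True on success.

def autofocus(run_attempt, auto_focus_try_cnt):
    for cnt in range(auto_focus_try_cnt):
        if run_attempt(cnt):       # fitted curve passed the predetermined checks
            return cnt + 1         # number of attempts actually used
    return auto_focus_try_cnt      # gave up after the maximum number of attempts

# e.g. a toy attempt that only succeeds on the third try:
print(autofocus(lambda cnt: cnt == 2, 5))
```

Bounding both loops is what guarantees the procedure terminates even when no focal plane is found within the travel.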
Referring to fig. 3, the focusing method for critical dimension measurement also requires fitting a second-order curve from the position data and the image sharpness data. Step SP2 fits a second-order curve y = ax² + bx + c from the position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt]. Since the second-order curve is known at this point, the clearest image position is apparent. The aforementioned step SP3 may be omitted or retained in this embodiment; both are allowed.
Referring to fig. 3, in the focusing method for critical dimension measurement: judge whether the second-order curve meets the predetermined conditions; if yes, focusing is considered successful; if not, the camera is moved a distance on the vertical axis and the focus is sought again, as in step SP4.
Referring to fig. 3, the predetermined conditions include at least: the quadratic coefficient of the second-order curve is smaller than zero, i.e. a < 0; the vertex coordinate is greater than zero, i.e. (-b/(2*a)) > 0; and the vertex coordinate is smaller than the maximum focusing travel, i.e. (-b/(2*a)) < autoFocusTravel. If the predetermined conditions are satisfied simultaneously, focusing is considered successful, as in step SP5. If any one of the predetermined conditions is not satisfied, the camera is moved a distance on the vertical axis to find the focus, as in step SP6.
Referring to fig. 3, the negative determination result is represented by step SP6: in step SP6, in the stage where the camera position is adjusted repeatedly, the differences between adjacent sharpness changes obtained from successive adjustments are computed (for example, the adjacent sharpness changes m_focus_Y[m+1] and m_focus_Y[m] are differenced). In step SP6, such sharpness differencing may be needed in each autofocus attempt or in any single attempt. A variable term that changes as the adjustment count increases, and its calculation, are defined as follows: the current variable term dir equals the previous value of the variable term plus the current difference result (e.g., m_focus_Y[m+1] minus m_focus_Y[m]). In an alternative example, the current difference result is understood as the sharpness change at the next adjustment, e.g. m_focus_Y[m+1], minus the sharpness change at the current adjustment, e.g. m_focus_Y[m].
Referring to fig. 3, for example, assuming m = 3, the current variable term dir3 equals the value dir2 at the previous adjustment plus the current difference result (m_focus_Y[4] minus m_focus_Y[3]). Working backwards on this assumption, dir2 equals dir1 at the previous adjustment plus its current difference result (m_focus_Y[3] minus m_focus_Y[2]); continuing likewise, dir1 equals dir0 at the previous adjustment plus its current difference result (m_focus_Y[2] minus m_focus_Y[1]). Finally, it is judged whether each updated variable term is less than zero (i.e., whether if (dir < 0) holds). In general, the variable term at the current adjustment count equals the variable term at the previous adjustment plus the difference result at the current adjustment.
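A useful observation about the recurrence above: summing adjacent differences telescopes, so the final dir equals the last sharpness change minus the first one; its sign therefore indicates whether sharpness rose or fell over the whole stroke. A small numeric check with toy m_focus_Y values:

```python
# Numeric check of the dir recurrence: dir accumulates adjacent differences
# m_focus_Y[m+1] - m_focus_Y[m], which telescopes to last - first.

ys = [0.0, 0.3, 0.5, 0.4, 0.1]   # toy m_focus_Y values (sharpness changes)
dir_ = 0.0
for m in range(len(ys) - 1):
    dir_ += ys[m + 1] - ys[m]    # the recurrence from the text
print(abs(dir_ - (ys[-1] - ys[0])) < 1e-12)  # telescoping sum
```

Here dir ends up positive (sharpness net-increased over the stroke), which in the method's decision corresponds to the move-down branch.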
Referring to fig. 3, if (dir < 0) is determined to be true, focusing is attempted again after moving up a certain distance. The distance the camera moves up relative to the start position of focusing equals: one half of a specified travel value (travel) divided by the current focusing attempt number, i.e., up_load = travel/(2*(cnt+1)). Since the attempt count cnt starts from zero by default, but speaking of a "zeroth attempt" does not conform to habit, the current attempt number is more naturally expressed as cnt + 1. For example, when cnt = 0 the current attempt number (cnt + 1) is one, and one attempt has in fact been made; when cnt = 1 the current attempt number (cnt + 1) is two, and a second attempt is indeed being made. Stated more strictly with respect to the initial focusing position, the up-shift distance equals one half of the above specified travel value (travel) divided by a total count, where the total count equals the number of focusing attempts that have actually occurred plus one (i.e., cnt + 1); the up-shift distances obtained from the different phrasings are the same, up_load = travel/(2*(cnt+1)).
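The up-shift distance above can be sketched in Python (a minimal sketch; the function name is an assumption, while travel, cnt and the formula follow the text):

```python
def up_move_distance(travel, cnt):
    """Distance to move the camera up from the focusing start position.

    Per the text: half the specified travel value divided by the
    current attempt number, where cnt counts attempts from zero, so
    the current attempt number is cnt + 1.
    """
    return travel / (2 * (cnt + 1))
```

For example, with travel = 8.0 the first attempt (cnt = 0) moves up 4.0 and the second attempt (cnt = 1) moves up 2.0, so the search range shrinks with each retry.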
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical-axis direction, the start position of the focusing start point and the initial image sharpness are recorded, and the real-time position and real-time image sharpness are recorded after each adjustment of the camera position.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical-axis direction, and the position data m_focus_x[m_focus_cnt = 0, 1, 2, 3, …] comprise multiple groups of position differences between the real-time position and the start position.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical-axis direction, and the image sharpness data m_focus_y[m_focus_cnt = 0, 1, 2, 3, …] comprise multiple groups of sharpness differences between the real-time image sharpness and the initial image sharpness.
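The bookkeeping of the paragraphs above can be sketched as follows. This is one possible Python reading under stated assumptions (the function name and data layout are illustrative; the first recorded sample is taken as the start position and start sharpness):

```python
def collect_focus_data(positions, sharpness):
    """Build the fitting source arrays from recorded samples.

    positions[0] and sharpness[0] play the role of the start position
    m_focus_z and the initial sharpness m_focus_def; every later
    sample yields one (position difference, sharpness difference)
    pair, as in m_focus_x[cnt] = z_pos - m_focus_z and
    m_focus_y[cnt] = def - m_focus_def in the text.
    """
    m_focus_z, m_focus_def = positions[0], sharpness[0]
    m_focus_x = [z - m_focus_z for z in positions[1:]]
    m_focus_y = [d - m_focus_def for d in sharpness[1:]]
    return m_focus_x, m_focus_y
```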
Referring to fig. 3, after each camera position adjustment (e.g., m_focus_cnt = 0, 1, 2, 3, …), the position difference and the sharpness difference of the camera under the same position condition are regarded simultaneously as the abscissa value and the ordinate value of one point on the second-order curve.
Referring to fig. 3, the position difference m_focus_x0 and the sharpness difference m_focus_y0 under the same position condition (e.g., m_focus_cnt=0) are respectively regarded as the abscissa and ordinate values of the same point on the second-order curve.
Referring to fig. 3, the position difference m_focus_x3 and the sharpness difference m_focus_y3 under the same position condition (e.g., m_focus_cnt=3) are respectively regarded as the abscissa and ordinate values of the same point on the second-order curve.
Referring to fig. 3, after each position adjustment (e.g., m_focus_cnt = 0, 1, 2, 3, …), if the absolute value of any position difference z_pos - m_focus_z exceeds the specified travel value travel, the current position adjustment ends and the flow jumps out of the loop in which the camera repeatedly adjusts its position. Here z_pos - m_focus_z is a position difference, i.e., a position variable.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): a maximum number of adjustments max_frame_count on one vertical axis is specified, and the actual number m_focus_cnt of repeated camera position adjustments in the Z-axis direction is required to be smaller than the maximum number max_frame_count.
Referring to fig. 3, the position m_focus_best at which the image is sharpest is the vertex abscissa of the second-order curve plus the start position of the focusing start point: m_focus_best = -b/(2*a) + m_focus_z, i.e., the vertex abscissa plus the Z-axis position of the focusing start point yields the sharpest image position m_focus_best.
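The relation above can be written directly (a minimal Python sketch; the function name is an assumption, the formula is the one in the text):

```python
def best_focus_position(a, b, m_focus_z):
    """Sharpest image position m_focus_best: the parabola vertex
    abscissa -b/(2*a) plus the Z-axis start position of the sweep."""
    return -b / (2 * a) + m_focus_z
```

For a fitted curve y = -x^2 + 4x starting from Z = 10.0, the vertex abscissa is 2.0 and the sharpest position is therefore 12.0.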
Referring to fig. 3, in each trial (e.g., cnt=0, 1, 2, 3 … …, etc.) link: and judging whether the second-order curve meets a preset condition. The predetermined conditions have been explained above and will not be described again.
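The predetermined condition stated earlier (quadratic coefficient below zero, vertex abscissa positive and within the defined focus travel maximum) can be sketched as a single check. This is a hypothetical Python sketch; the function and parameter names are assumptions:

```python
def focus_succeeded(a, vertex, focus_travel_max):
    """Predetermined condition from the text: the curve opens
    downward (a < 0) and the vertex abscissa lies strictly between
    zero and the defined focus travel maximum."""
    return a < 0 and 0 < vertex < focus_travel_max
```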
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): when the above condition, i.e., the predetermined condition, is not met, that is, the focal point is evidently not within the current stroke, the stage must continue to move in order to search for the focal point. Moving the stage to search for the focal point has been explained above and is not described again.
Referring to fig. 3, when the timing point of the attempt upper limit has not been reached (for example cnt < AutoFocusTryCnt): the multiple autofocus attempts should not end. Each focusing attempt requires executing the flow of steps SP1 to SP5 or of steps SP1 to SP6: once when cnt = 0, again when cnt = 1, again when cnt = 2, and so on. The loop is exited when cnt = AutoFocusTryCnt.
Referring to fig. 3, when the timing point reaches the attempt upper limit (e.g., cnt = AutoFocusTryCnt): the multiple autofocus attempts should end. For example, the maximum value taken by cnt during the attempts equals AutoFocusTryCnt minus one. If step SP7 is performed, the camera or stage is required to move by the relative focusing distance dis, i.e., the previously found sharpest image position m_focus_best minus the current Z-axis position Zc: double dis = m_focus_best - Zc. Autofocus ends at this point, and the resolution and sharpness of the image of the critical-dimension structures on the wafer are then highest. In essence, the foregoing focusing has already achieved the objects set forth in the background section of this application; moving the camera by the relative focusing distance is likewise a preferred embodiment for achieving autofocus.
Referring to fig. 3, in each attempt (e.g., cnt = 0, 1, 2, 3, …): before iteratively adjusting the camera position, or before each adjustment of the camera position, in an alternative example the microscope lens is brought close to the wafer by a predetermined ratio value (for example 3/4) of the specified travel value (travel). Step SP0 shows the process of bringing the lens to a predetermined distance from the wafer. The lens approaches the sample, i.e., the wafer: it first descends above the sample by approximately the specified travel value multiplied by the predetermined ratio. For example, MoveZDirect(-up_load*3/4) shows the lens travelling 3/4 of a stroke toward the sample. Step SP0 denotes the lens-to-wafer approach as MoveZDirect.
Referring to fig. 3, when the timing point of the attempt upper limit has not been reached (for example cnt < AutoFocusTryCnt): the multiple autofocus attempts should not end. Each focusing attempt requires executing the flow of steps SP0 to SP5 or of steps SP0 to SP6: once when cnt = 0, again when cnt = 1, again when cnt = 2, and so on. This is an example of the flow when step SP0 is employed.
Referring to fig. 4, the process of acquiring the camera position data is implemented by moving the Z-axis: because the camera or the stage changes position, m_focus_x[m_focus_cnt] = z_pos - m_focus_z changes, so the position data m_focus_x[m_focus_cnt = 0, 1, 2, 3, …], serving as the fitting source data for the abscissa of the second-order curve, exhibit array characteristics.
Referring to fig. 4, the process of acquiring the image sharpness data is likewise implemented by moving the Z-axis: because the camera or the stage changes position, m_focus_y[m_focus_cnt] = def - m_focus_def changes, so the sharpness data m_focus_y[m_focus_cnt = 0, 1, 2, 3, …] form multiple groups serving as the fitting source data for the ordinate of the second-order curve.
Referring to fig. 5, a second-order fitting example. Given a data sequence (x_i, y_i), i = 0, 1, 2, 3, …, m, the set of data is fitted using a quadratic polynomial. The following calculations and simplifications generally describe the process of second-order curve fitting.
p(x) = a_0 + a_1*x + a_2*x^2
Based on p(x), the mean square error between the fitting function and the data sequence is formed:
Q(a_0, a_1, a_2) = Σ_{i=0}^{m} [a_0 + a_1*x_i + a_2*x_i^2 - y_i]^2
From the extremum principle for functions of several variables, the minimum of Q(a_0, a_1, a_2) satisfies:
∂Q/∂a_k = 0, k = 0, 1, 2
The above formulas for ∂Q/∂a_k simplify to:
Σ_{i=0}^{m} 2*[a_0 + a_1*x_i + a_2*x_i^2 - y_i]*x_i^k = 0, k = 0, 1, 2
referring to fig. 5, the array X [ n ] uses the position data m_focus_x [ m_focus_cnt=0, 1,2,3 … ].
Referring to fig. 5, the array Y n uses the sharpness data m_focus_y [ m_focus_cnt=0, 1, 2, 3 … ].
Referring to fig. 5, treating the data sequence (m_focus_x, m_focus_y) as discrete samples and fitting the set of data with a quadratic polynomial, the relation y = a*x^2 + b*x + c can be calculated by an algorithmic fit. Step SP2 fits a second-order curve from the camera moving position data and the acquired image sharpness data and calculates the vertex coordinates of the second-order curve, for example the vertex abscissa of the second-order curve y = a*x^2 + b*x + c, which equals -b/(2*a).
Alternatively, the relations for ∂Q/∂a_k can be simplified into matrix form:
| m+1      Σx_i     Σx_i^2 |   | a_0 |   | Σy_i       |
| Σx_i     Σx_i^2   Σx_i^3 | * | a_1 | = | Σx_i*y_i   |
| Σx_i^2   Σx_i^3   Σx_i^4 |   | a_2 |   | Σx_i^2*y_i |
Given the data sequence (x_i, y_i) and fitting the second-order polynomial to the set of data, (a_0, a_1, a_2) in matrix form is thus related to the observations (y_0, y_1, …, y_m) in matrix form; the simplification associated with the minimum of Q(a_0, a_1, a_2) is this system of normal equations.
It can be seen that the simplified forms above differ slightly in appearance, but the end result is the same: solving by this principle yields the coefficients a_0, a_1, a_2 of the second-order function.
Referring to fig. 5, second-order curve fitting: given two arrays x[n], y[n] of length n, both treated as discrete samples, the relation y = a*x^2 + b*x + c can be calculated by an algorithmic fit. The process of calculating the relationship between the two arrays x[n], y[n] by means of a fitting function may therefore be referred to as second-order curve fitting.
Referring to fig. 5, expressions such as y = a*x^2 + b*x + c or p(x) = a_0 + a_1*x + a_2*x^2 are mathematically quadratic function relations. The former comprises a quadratic-term coefficient a, a first-order-term coefficient b and a constant term c, with x the abscissa and y the ordinate. The latter comprises the quadratic-term coefficient a_2, the first-order-term coefficient a_1 and the constant term a_0.
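As an illustration of the fitting procedure, here is a minimal Python sketch that recovers the coefficients a, b, c from the two arrays via the 3x3 normal equations derived above. The patent gives no source code; the function name and the plain-Python solver are assumptions:

```python
def fit_quadratic(x, y):
    """Least-squares fit of y = a*x^2 + b*x + c; returns (a, b, c)."""
    # Power sums S[k] = sum(x_i^k) and moment sums T[k] = sum(y_i * x_i^k).
    S = [sum(xi ** k for xi in x) for k in range(5)]
    T = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Augmented normal-equation system in the unknowns (c, b, a).
    M = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    c, b, a = (M[i][3] / M[i][i] for i in range(3))
    return a, b, c
```

With the coefficients in hand, the vertex abscissa -b/(2*a) used in step SP2 follows directly.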
Referring to fig. 6, the energy gradient function F accumulates all pixel gradient values as the sharpness evaluation function value and may be expressed by the relation:
F = Σ_xp Σ_yp { [f(xp+1, yp) - f(xp, yp)]^2 + [f(xp, yp+1) - f(xp, yp)]^2 }
The same applies to metrology images of critical dimensions and their pixels.
Here f(xp, yp) represents the gray value of the corresponding pixel point (xp, yp), and the larger the value of F, the clearer the image. The camera may capture an Image1, the Image1 including the gray values of pixels such as (xp, yp). Step SP1 collects image sharpness data, and the energy gradient function provides the basis for how step SP1 evaluates image sharpness. The image sharpness data may be extracted from the image information Image1 photographed by the camera.
Referring to fig. 6, the energy gradient function: the sum of squares of differences between gray values of adjacent pixels in the x-direction and the y-direction is taken as a gradient value of each pixel point, and all pixel gradient values are accumulated as a definition evaluation function value.
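The energy gradient function just described can be sketched in Python (a minimal sketch; the function name and the boundary handling, which skips the last row and column, are assumptions):

```python
def energy_gradient(img):
    """Energy gradient sharpness: for each pixel, the squared
    differences to its right and lower neighbours are summed, and all
    pixel gradient values are accumulated (img is a 2-D list of gray
    values)."""
    h, w = len(img), len(img[0])
    total = 0
    for yp in range(h - 1):
        for xp in range(w - 1):
            dx = img[yp][xp + 1] - img[yp][xp]
            dy = img[yp + 1][xp] - img[yp][xp]
            total += dx * dx + dy * dy
    return total
```

A perfectly uniform image scores zero, while sharper edges raise the score, which is why the value can serve as the sharpness evaluation function of step SP1.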
Referring to fig. 7, in the image-gradient-based sharpness evaluation methods, besides the energy gradient function, a Laplacian function may be used for the calculation: a gradient matrix is obtained by convolving a Laplacian operator with the gray values of all pixel points of the image, and the sum of the squares of the gradients of all pixel points is taken as the evaluation function.
Note that f(xp, yp) represents the gray value of the corresponding pixel point (xp, yp), and the larger the value of the evaluation function F, the clearer the image.
In addition, G(xp, yp) has the expression:
G(xp, yp) = f(xp, yp) ⊗ L
that is, the convolution of the gray image with a Laplacian kernel L, with the evaluation function F = Σ_xp Σ_yp G(xp, yp)^2.
An example of L in the Laplace-related function G(xp, yp) is given below, but L is not limited to this example:
L = | 0   1   0 |
    | 1  -4   1 |
    | 0   1   0 |
Referring to fig. 7, step SP1 acquires image sharpness data; besides accumulating the pixel gradient values of fig. 6 as the sharpness evaluation function value, a Laplacian function may be used. When the real-time image sharpness and the initial image sharpness are calculated, an energy gradient function or a Laplacian function is used as the sharpness evaluation function.
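The Laplacian-based evaluation can be sketched as follows. This is a Python sketch under stated assumptions: the common 4-neighbour kernel is used purely as an illustration (the text notes L is not limited to one form), and only interior pixels are convolved:

```python
def laplacian_sharpness(img):
    """Laplacian evaluation function: convolve the gray image with a
    Laplacian kernel L and accumulate the squared responses G^2 over
    the interior pixels (img is a 2-D list of gray values)."""
    L = [[0, 1, 0],
         [1, -4, 1],
         [0, 1, 0]]
    h, w = len(img), len(img[0])
    total = 0
    for yp in range(1, h - 1):
        for xp in range(1, w - 1):
            g = sum(L[i][j] * img[yp + i - 1][xp + j - 1]
                    for i in range(3) for j in range(3))
            total += g * g
    return total
```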
Referring to fig. 1, as previously mentioned, the motors driving the camera and the microscope and their working stage are controlled by a computer, a server or an associated processing unit. Alternatives to the processing unit include: a field-programmable gate array, a complex programmable logic device or field-programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor or integrated circuit, a software or firmware program stored in a memory, and the like. Steps SP1 to SP7 of fig. 2 can equally be implemented by a computer, server or processing unit, as can steps SP0 to SP7 of fig. 3.
Referring to fig. 1, focusing method for critical dimension measurement: before the critical dimension of the wafer is measured, focusing is performed so that the wafer reaches the focal plane of the camera: a second-order curve, i.e., a quadratic function, is fitted from the camera moving position data and the acquired image sharpness data, and the vertex coordinates of the second-order curve are calculated to obtain the position at which the image is sharpest. Specific implementation measures of such a focusing method for critical dimension measurement are given in the example of fig. 2 and elsewhere. The image comprises a wafer image photographed by the camera through the microscope, and the sharpest image position comprises the camera position at the moment when the wafer image photographed by the camera through the microscope is sharpest. The surface roughness and morphology of the wafer differ at different process stages and the structural features of the critical dimension vary widely, yet the focusing method suits a wide variety of critical-dimension structural morphologies. In particular, when the critical dimension of the wafer is measured, images at different positions are not necessarily located on the focal plane, so larger errors would appear in the measured values; with this focusing method, the best and sharpest positions for photographing the wafer and its critical dimensions can always be found adaptively.
Referring to fig. 1, focusing method for critical dimension measurement: a plurality of autofocus attempts are performed before measurements of a critical dimension of the wafer are performed; in each attempt: the camera repeatedly adjusts its position a plurality of times in the vertical-axis direction so as to acquire the camera's moving position data and the corresponding image sharpness data; a second-order curve, i.e., a quadratic function, is fitted from the position data and the image sharpness data; whether the second-order curve meets a predetermined condition is judged, and if so, focusing is considered successful; if not, the camera is moved a certain distance on the vertical axis and the focal point is then searched for. The position data are position information in the Z-axis, i.e., vertical-axis, direction. Such focusing methods for critical dimension measurement are given in the examples of fig. 2 and fig. 3.
The foregoing description and drawings set forth exemplary embodiments of the specific structure of the embodiments, and the above disclosure presents presently preferred embodiments, but is not intended to be limiting. Various alterations and modifications will no doubt become apparent to those skilled in the art after having read the above description. Therefore, the appended claims should be construed to cover all such variations and modifications as fall within the true spirit and scope of the invention. Any and all equivalent ranges and contents within the scope of the claims should be considered to be within the intent and scope of the present invention.

Claims (29)

1. A focusing method for critical dimension measurement, characterized in that:
before measuring the critical dimension on the wafer, focusing is performed to make the wafer reach the focal plane of the camera: fitting a second-order curve by using the moving position data of the camera and the acquired image definition data, and calculating the vertex coordinates of the second-order curve to obtain the clearest position of the image; the image definition data corresponds to the moving position data of the camera, the image definition data is extracted from the image information shot by the camera, the image is regarded as a two-dimensional discrete matrix when the image is processed, and the gradient function is utilized to acquire image gray information for judging the image definition; the x-coordinate of the second order curve is the amount of change in the Z-axis position, and the y-coordinate of the second order curve is the amount of change in the image sharpness.
2. The method according to claim 1, characterized in that:
repeatedly adjusting the position of a camera provided with a microscope in the vertical axis direction, recording the initial position of a focusing initial point and the initial image definition, and recording the real-time position and the real-time image definition of the camera after each position adjustment;
the position data comprises a plurality of groups of position differences between the real-time position and the initial position, and the image definition data comprises a plurality of groups of definition differences between the real-time image definition and the initial image definition.
3. The method according to claim 2, characterized in that:
after the camera adjusts the position each time, the position difference value and the definition difference value of the camera under the condition of the same position are respectively regarded as an abscissa value and an ordinate value corresponding to one point on the second-order curve at the same time.
4. The method according to claim 2, characterized in that:
when the real-time image definition and the initial image definition are calculated, an energy gradient function or a Laplacian function is utilized as a definition evaluation function.
5. The method according to claim 2, characterized in that:
and if the absolute value of any position difference value exceeds a specified travel value, ending the current position adjustment.
6. The method according to claim 2, characterized in that:
the maximum number of adjustments in one vertical axis is specified, and the actual number of adjustments requiring the camera to repeatedly adjust the position in the vertical axis direction does not exceed the maximum number.
7. The method according to claim 2, characterized in that:
the most clear position of the image is the vertex coordinates of the second order curve plus the starting position of the focusing starting point.
8. The method according to claim 2, characterized in that:
focusing is considered successful if the quadratic coefficient of the second-order curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than a defined focus travel maximum.
9. The method according to claim 2, characterized in that:
if any one of the quadratic coefficient of the second order curve smaller than zero, the vertex coordinate larger than zero and the vertex coordinate smaller than a defined focusing stroke maximum is not satisfied, the camera is moved on the vertical axis for a distance and then the focus is searched.
10. The method according to claim 9, wherein:
in the stage of repeatedly adjusting the position of the camera for many times, making a difference based on any two adjacent sharpness differences obtained by sequentially adjusting the position of the camera; defining a variable item to change along with the increase of the position adjustment times, wherein the current variable item is equal to the previous value of the variable item plus the current difference result; judging whether the variable item is smaller than zero or not;
If yes, a distance is moved upwards, and focusing is tried again;
if not, then move down a distance and try focusing again.
11. The method according to claim 10, wherein:
the distance the camera moves up relative to the start position of focus is equal to: half of a given stroke value is divided by the current number of foci.
12. The method according to claim 10, wherein:
the distance the camera moves down relative to the start position of focusing is equal to: one half of a specified travel value.
13. The method according to claim 8 or 9, characterized in that:
the camera is moved to a relative distance of focus, i.e. the position at which the image is the sharpest minus the current position of the camera.
14. The method according to claim 2, characterized in that:
multiple focus attempts are performed, with the camera being repeatedly repositioned in the vertical axis direction in any single focus attempt.
15. The method according to claim 2, characterized in that:
before the camera position is repeatedly adjusted, the microscope lens is brought close to the wafer to a predetermined ratio of the specified stroke value.
16. The method according to claim 1, characterized in that:
a stepping motor drives the camera to move in the vertical axis direction, and a control unit controls the stepping motor to control the movement of the camera; the control unit is also used for fitting a second order curve and calculating vertex coordinates.
17. A focusing method for critical dimension measurement, characterized in that:
performing a plurality of autofocus attempts before performing measurements on critical dimensions on the wafer;
in each attempt: the camera is repeatedly adjusted in the vertical axis direction for a plurality of times to acquire moving position data of the camera and corresponding image definition data, wherein the image definition data is extracted from image information shot by the camera, and the image is regarded as a two-dimensional discrete matrix and image gray information is acquired by utilizing a gradient function for judging the image definition when the image is processed;
fitting a second-order curve according to the position data and the image definition data, wherein the x coordinate of the second-order curve is the change amount of the Z-axis position, and the y coordinate of the second-order curve is the change amount of the image definition;
judging whether the second-order curve meets a preset condition, if so, considering that focusing is successful; if not, the camera is moved on the vertical axis for a certain distance and then the focus is found.
18. The method according to claim 17, wherein:
in each attempt: repeatedly adjusting the position of the camera in the vertical axis direction for a plurality of times, recording the initial position of a focusing initial point and the initial image definition, and recording the real-time position and the real-time image definition of the camera after each position adjustment;
The position data comprises a plurality of groups of position differences between the real-time position and the initial position, and the image definition data comprises a plurality of groups of definition differences between the real-time image definition and the initial image definition.
19. The method according to claim 18, wherein:
after the camera adjusts the position each time, the position difference value and the definition difference value of the camera under the condition of the same position are respectively regarded as an abscissa value and an ordinate value corresponding to one point on the second-order curve at the same time.
20. The method according to claim 18, wherein:
after the camera adjusts the position each time, if the absolute value of any position difference value exceeds a designated travel value, ending the current position adjustment, and jumping out of the circulation of repeatedly adjusting the position of the camera for a plurality of times.
21. The method according to claim 18, wherein:
in each attempt: the maximum number of adjustments in one vertical axis is specified, and the actual number of adjustments requiring the camera to repeatedly adjust the position in the vertical axis direction does not exceed the maximum number.
22. The method according to claim 18, wherein:
the most clear position of the image is the vertex coordinates of the second order curve plus the starting position of the focusing starting point.
23. The method according to claim 18, wherein:
in each attempt: the predetermined condition includes that the quadratic coefficient of the second order curve is smaller than zero, the vertex coordinate is larger than zero, the vertex coordinate is smaller than a defined focusing travel maximum value, and focusing is considered successful when the predetermined condition is met.
24. The method according to claim 18, wherein:
in each attempt: the preset conditions comprise that the quadratic term coefficient of the second order curve is smaller than zero, the vertex coordinate is larger than zero, the vertex coordinate is smaller than a defined focusing stroke maximum value, and the camera is moved for a distance on the vertical axis and then the focus is found when any one of the preset conditions is not met.
25. The method according to claim 24, wherein:
in each attempt: in the stage of repeatedly adjusting the position of the camera for many times, making a difference based on any two adjacent sharpness differences obtained by sequentially adjusting the position of the camera; defining a variable item to change along with the increase of the position adjustment times, wherein the current variable item is equal to the previous value of the variable item plus the current difference result; judging whether the variable item is smaller than zero or not;
if yes, the camera moves upwards for a certain distance and then retries automatic focusing;
If not, the camera moves down a distance and then re-attempts auto-focusing.
26. The method according to claim 25, wherein:
the distance the camera moves up relative to the start position of focus is equal to: half of a given stroke value is divided by the current number of foci.
27. The method according to claim 25, wherein: the distance the camera moves down relative to the start position of focusing is equal to: one half of a specified travel value.
28. The method according to claim 23 or 24, characterized in that: the camera is moved to a relative distance of focus, i.e. the position at which the image is the sharpest minus the current position of the camera.
29. The method according to claim 17, wherein: in each attempt: before repeatedly adjusting the camera position, the microscope lens is brought close to the wafer to a predetermined ratio of the specified travel value.
CN202211129197.XA 2022-09-16 2022-09-16 Focusing method for critical dimension measurement Active CN115546114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211129197.XA CN115546114B (en) 2022-09-16 2022-09-16 Focusing method for critical dimension measurement


Publications (2)

Publication Number Publication Date
CN115546114A CN115546114A (en) 2022-12-30
CN115546114B true CN115546114B (en) 2024-01-23


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03153015A (en) * 1989-11-10 1991-07-01 Nikon Corp Method and apparatus for alignment
TW201428418A (en) * 2012-11-09 2014-07-16 Kla Tencor Corp Method and system for providing a target design displaying high sensitivity to scanner focus change
CN105097579A (en) * 2014-05-06 2015-11-25 无锡华润上华科技有限公司 Measuring method, etching method, and forming method of semiconductor device
CN107197151A (en) * 2017-06-16 2017-09-22 广东欧珀移动通信有限公司 Atomatic focusing method, device, storage medium and electronic equipment
CN110646933A (en) * 2019-09-17 2020-01-03 苏州睿仟科技有限公司 Automatic focusing system and method based on multi-depth plane microscope
CN115020174A (en) * 2022-06-15 2022-09-06 上海精测半导体技术有限公司 Method for measuring and monitoring actual pixel size of charged particle beam scanning imaging equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7030351B2 (en) * 2003-11-24 2006-04-18 Mitutoyo Corporation Systems and methods for rapidly automatically focusing a machine vision inspection system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant