CN114296935A - Method, electronic device, and storage medium for optical proximity correction - Google Patents

Method, electronic device, and storage medium for optical proximity correction

Info

Publication number
CN114296935A
CN114296935A
Authority
CN
China
Prior art keywords
subtask
tasks
task
execution
following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111657509.XA
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Manufacturing EDA Co Ltd
Original Assignee
Advanced Manufacturing EDA Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Manufacturing EDA Co Ltd filed Critical Advanced Manufacturing EDA Co Ltd
Priority to CN202111657509.XA priority Critical patent/CN114296935A/en
Publication of CN114296935A publication Critical patent/CN114296935A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Methods, electronic devices, and storage media for optical proximity correction are disclosed. The method includes: in response to receiving information for a plurality of tasks for a layout, determining priorities of the plurality of tasks; dividing each of the plurality of tasks into a plurality of subtasks, each subtask corresponding one-to-one to a physical position range on the layout; determining a dependency relationship between a following subtask in a task of following priority and a preceding subtask in a task of preceding priority to obtain a determination result, wherein the dependency relationship indicates that the execution of the following subtask depends on the execution state of the preceding subtask corresponding to the same physical range on the layout; and scheduling execution of the following subtask based on the determination result. Embodiments of the disclosure can improve the usability of user operation and the efficiency of task execution.

Description

Method, electronic device, and storage medium for optical proximity correction
Technical Field
Embodiments of the present disclosure relate generally to the field of semiconductor manufacturing technology and, more particularly, to methods, electronic devices, and computer-readable storage media for optical proximity correction.
Background
As wafer processing continues to evolve, the density and complexity of masks are increasing. Generally, a computation engine for Optical Proximity Correction (OPC) decomposes a task into a set of subtasks that are executed through distributed processing, and finally fuses the subtask results into a final execution result. For example, a typical OPC correction task may be divided into hundreds of thousands of subtasks running on thousands of CPUs, with a total run time of several tens of hours.
The OPC calculation engine typically performs different types of tasks depending on the user's configuration. If a set of tasks has input-output dependencies, strict logical serial timing must be guaranteed. This requires the user to pay additional attention to the inputs and outputs between preceding and following tasks. In addition, because strict logical serial timing must be maintained between tasks, the running time of the whole task set becomes too long.
Disclosure of Invention
According to an example embodiment of the present disclosure, an improved scheme for optical proximity correction is provided.
In a first aspect of the disclosure, a method for optical proximity correction is provided. The method includes: determining priorities of a plurality of tasks in response to receiving information for the plurality of tasks for a layout; dividing each of the plurality of tasks into a plurality of subtasks, each subtask corresponding one-to-one to a physical position range on the layout; determining a dependency relationship between a following subtask in a task of following priority and a preceding subtask in a task of preceding priority, and obtaining a determination result, wherein the dependency relationship indicates that the execution of the following subtask depends on the execution state of the preceding subtask corresponding to the same physical range on the layout; and scheduling execution of the following subtask based on the determination result.
In a second aspect of the present disclosure, an electronic device is provided. The electronic device includes a processor and a memory coupled with the processor, the memory having instructions stored therein that, when executed by the processor, cause the device to perform actions. The actions include: determining priorities of a plurality of tasks in response to receiving information for the plurality of tasks for a layout; dividing each of the plurality of tasks into a plurality of subtasks, each subtask corresponding one-to-one to a physical position range on the layout; determining a dependency relationship between a following subtask in a task of following priority and a preceding subtask in a task of preceding priority, and obtaining a determination result, wherein the dependency relationship indicates that the execution of the following subtask depends on the execution state of the preceding subtask corresponding to the same physical range on the layout; and scheduling execution of the following subtask based on the determination result.
In some embodiments, the actions further include: determining the business logic configurations of the plurality of tasks; and reading the business logic configurations of the plurality of tasks into a memory space, so that the business logic configurations of the plurality of tasks are in the same process during the execution of the plurality of tasks.
In some embodiments, determining the priorities of the plurality of tasks includes: determining the priorities of the tasks based on the process sequence corresponding to each task.
In some embodiments, dividing each of the plurality of tasks into a plurality of subtasks includes: dividing the layout into a cell matrix; and making the region processed by each subtask of each of the plurality of tasks correspond one-to-one to the physical position range of one cell in the cell matrix.
In some embodiments, determining the dependency relationship between a following subtask in the task of following priority and a preceding subtask in the task of preceding priority includes: determining whether the preceding subtask corresponding to the same physical range on the layout as the following subtask has finished executing; and determining that the following subtask has a dependency relationship with the preceding subtask when the preceding subtask has not finished executing.
In some embodiments, the method further comprises: and after each previous subtask is executed, updating the state of the previous subtask.
In some embodiments, scheduling execution of the subsequent subtasks based on the determination includes: indicating to start executing the following subtask if the determination result indicates that the following subtask does not depend on the preceding subtask; and if the determination result indicates that the following subtask depends on the preceding subtask, indicating that the following subtask waits for completion of the preceding subtask.
In some embodiments, the actions further comprise: storing the execution result of the previous subtask in a disk space; and executing the following subtask based on the execution result stored in the disk space in a case where the following subtask depends on the preceding subtask.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a flow diagram of an example of a typical OPC multi-level application;
FIG. 2 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 3 illustrates a flow diagram of a method for optical proximity correction according to some embodiments of the present disclosure;
FIG. 4 illustrates an architectural diagram of a fine dependency manager, according to some embodiments of the present disclosure;
FIG. 5 illustrates an architectural diagram of an OPC calculation engine in accordance with some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
The OPC process flow includes a number of steps and can be briefly divided into: an OPC pre-processing stage, such as defining the ideal wafer lithographic imaging target so as to meet the process requirements for etching; an OPC stage, in which the mask is adjusted through modeling and simulation so that the imaging effect of the lithography system continuously approaches the ideal wafer lithographic imaging target; and an OPC post-processing stage, in which the OPC results are typically cross-verified from other perspectives.
A typical OPC calculation engine is usually designed to process only one job at run time, with different job tasks concatenated through scripts or manual operation. As mentioned above, because strict logical serial timing must be maintained between tasks, the overall task runs too long.
FIG. 1 illustrates a flow diagram 100 of an example of a typical OPC multi-level application. As shown in FIG. 1, it contains three jobs (also called tasks), JOB0', JOB1' and JOB2', denoted J0', J1' and J2', respectively. For each job, the user starts a corresponding OPC engine (one of OPC engine 0', OPC engine 1', and OPC engine 2') through the business logic (recipe) to execute the user-defined business logic. Business logic may be understood as various processing rules, which may differ or be the same. In actual business logic, a later-stage task needs to use the output of one or more previous-stage tasks as input or reference. That is, the next-stage task can only be executed after the previous stage or stages have finished. Therefore, the total duration of performing the three tasks equals the sum of their respective durations.
In addition, viewed as a whole, JOB0', JOB1' and JOB2' must be strictly serialized, and the user must also associate the inputs and outputs of the front and rear stages. To do this, the different job tasks must be concatenated by the user either through scripts or manually. For example, the priority or execution order of the individual tasks is specified by manually clicking on the tasks or through additional program execution parameters, e.g., specifying that JOB0' executes first, JOB1' second, and JOB2' last.
The above method mainly has the following disadvantages. The OPC task usually requires distributed processing, so the computation is spread over different computers; data association can therefore only happen through disk files, and the running speed is affected by file-system performance (disk I/O, network disk speed). In addition, because communication between preceding and following tasks goes through hard-disk interaction, the user must pay extra attention to the inputs and outputs between tasks, so usability is poor. Moreover, the subtasks have locality characteristics: some run fast and some run slow, and the running time of the entire task may be delayed for a long time by a few slow subtasks.
For this reason, there is a need for an improved method to increase user-friendliness and to reduce processing time of tasks.
According to an embodiment of the present disclosure, a solution for optical proximity correction is presented. In this solution, priorities of a plurality of tasks are determined in response to receiving information for the plurality of tasks for a layout. Each of the plurality of tasks is divided into a plurality of subtasks, each subtask corresponding to a respective portion of the layout. A dependency relationship between a following subtask in the task of following priority and a preceding subtask in the task of preceding priority is determined, and a determination result is obtained, wherein the dependency relationship indicates that the execution of the following subtask depends on the execution state of the preceding subtask corresponding to the same physical range on the layout. Based on the determination result, execution of the following subtask is scheduled.
According to this solution, each task is divided into a plurality of subtasks, and the execution time of a following subtask can be scheduled and optimized based on the execution states of the subtasks and the dependency relationships between preceding and following subtasks, so that tasks that would otherwise execute serially can be converted into at least partially parallel execution. The execution efficiency of the tasks can therefore be significantly improved. In addition, the user does not need to pay attention to switching between tasks, so the usability of user operation can also be significantly improved.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. Referring to FIG. 2, there is shown a schematic diagram of an example environment 200 in which various embodiments of the present disclosure can be implemented. As shown in FIG. 2, the example environment 200 includes a computing device 210 and a client 220.
In some embodiments, computing device 210 may interact with client 220. For example, computing device 210 may receive input messages from client 220 and output feedback messages to client 220. In some embodiments, an input message from client 220 may specify a task to be processed, and computing device 210 may process that task. In some embodiments, computing device 210 may send control messages to client 220 to control client 220 to perform tasks. For example, computing device 210 may assign pending tasks to client 220 for execution and receive the execution results fed back from client 220.
In some embodiments, computing device 210 may include, but is not limited to, a personal computer, a server computer, a handheld or laptop device, a mobile device (such as a mobile phone, a personal digital assistant (PDA), a media player, etc.), a consumer electronic product, a minicomputer, a mainframe computer, a cloud computing resource, and the like. In some embodiments, client 220 may likewise include, but is not limited to, any of the above device types.
It should be understood that the description of the structure and functionality of the example environment 200 is for exemplary purposes only and is not intended to limit the scope of the subject matter described herein. The subject matter described herein may be implemented in various structures and/or functions.
The technical solutions described above are only used for illustration and do not limit the invention. It is to be understood that the example environment 200 may have other various implementations. To more clearly explain the principles of the disclosed solution, it will be described in more detail below with reference to fig. 3.
FIG. 3 illustrates a flow diagram of a method for optical proximity correction, according to some embodiments of the present disclosure. For example, the method 300 may be implemented by the computing device 210 as shown in FIG. 2. The method 300 is described below in conjunction with fig. 4 and 5. FIG. 4 illustrates an architectural diagram of a fine dependency manager, according to some embodiments of the present disclosure. FIG. 5 illustrates an architectural diagram of an OPC calculation engine according to some embodiments of the present disclosure. It is to be understood that the method 300 may also include additional blocks not shown and/or may omit certain blocks shown. The scope of the present disclosure is not limited in this respect.
At block 302, in response to receiving information for a plurality of tasks for a layout, priorities of the plurality of tasks are determined. In some embodiments, input information may be received from the outside, the input information specifying a task to be performed, such as a task of processing the layout. The input information may be provided in various ways, for example by receiving a message or by receiving a file; embodiments of the present disclosure are not limited in this respect. Each job or task has corresponding business logic, which may specify what processing to perform for the task, how to perform it, and so on. For example, the business logic may specify that the layout is rotated by 90 degrees and cut into cells with dimensions of 10 nm by 10 nm. As another example, the business logic may specify that a square is stretched by 5 nm on each of its left and right sides, then rotated by 90 degrees, and that its area is then calculated. "Stretching a square" and "extending the left and right sides by 5 nm" are business logic, which may also be referred to as business rules, while the values 5 nm and 90 degrees may be referred to as the business logic configuration, i.e., the configuration of the set of operations that the user wishes to perform. Performing a corresponding action according to the above rules may be referred to as a job or task. For example, performing the stretching action may be referred to as job or task JOB0, rotating by 90 degrees as JOB1, and calculating the area of the resulting rectangle as JOB2. In some embodiments, the tasks may be performed by an OPC engine. Engines are typically located on multiple computers; as mentioned above, a typical OPC correction task is divided into hundreds of thousands of subtasks running on thousands of CPUs.
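As a concrete illustration of the rules and configurations described above, the following Python sketch applies the stretch, rotate, and area jobs in priority order to a 10 nm by 10 nm square. All names and structures here are illustrative assumptions, not the disclosed engine's API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative only: each job pairs a business rule with its business
# logic configuration, as in the stretch/rotate/area example above.
@dataclass
class Job:
    name: str
    priority: int                 # lower value = earlier process stage
    rule: Callable[..., Any]      # the business rule to apply
    config: dict = field(default_factory=dict)  # business logic configuration

def stretch(shape, delta_nm):
    # extend the square by delta_nm on each of the left and right sides
    w, h = shape
    return (w + 2 * delta_nm, h)

def rotate90(shape):
    w, h = shape
    return (h, w)

def area(shape):
    w, h = shape
    return w * h

jobs = [
    Job("JOB0", 0, stretch, {"delta_nm": 5}),
    Job("JOB1", 1, rotate90),
    Job("JOB2", 2, area),
]

result = (10, 10)  # a 10 nm x 10 nm square
for job in sorted(jobs, key=lambda j: j.priority):
    result = job.rule(result, **job.config)
print(result)  # -> 200 (area of the 10 nm x 20 nm rectangle)
```

Note that this sketch runs the jobs strictly serially in one process; the scheme below shows how the engine relaxes exactly that restriction.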
In some embodiments, an input file is received from the user, containing, for example, information about a task to be performed. The user may specify some rules, i.e., specify the business logic, and the engine may then perform the corresponding actions to execute the task. In some embodiments of the present disclosure, the business logic corresponding to each process (job) or task is visible to the schedulers of the other processes. For example, in some embodiments, a scheduler may be a software-implemented application. The business logic of all jobs is visible to each level of scheduler, so each scheduler knows the business logic of the other jobs and can therefore schedule the execution of the subtasks reasonably.
In some embodiments, the user may specify the order of processing, or priority, of the plurality of tasks as desired. In some embodiments, the priority may also be specified by the manager.
In some embodiments, the business logic configurations of the plurality of tasks may be determined from the input information. In addition, the business logic configurations of the plurality of tasks can all be read into a memory space, so that the business logic configurations are in the same process during the execution of the tasks, and the execution state of each subtask is stored in memory. For example, one business logic configuration may be stored at addresses 1 to 10000, another at addresses 20000 to 30000, and another at addresses 40000 to 50000. Of course, this is merely an example, and aspects of the present disclosure are not limited thereto.
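A minimal sketch of reading all business logic configurations into one shared in-process structure follows; the dictionary stands in for the memory space (the address ranges in the text are only an example), and all names are assumptions:

```python
# Illustrative sketch: all business logic configurations live in one
# in-process structure, so every scheduler reads them from memory
# rather than exchanging them through disk files.
CONFIG_SPACE = {}  # stands in for the shared memory space

def load_configs(configs):
    for task, cfg in configs.items():
        CONFIG_SPACE[task] = cfg  # same process, no disk I/O needed

load_configs({
    "JOB0": {"rule": "stretch", "delta_nm": 5},
    "JOB1": {"rule": "rotate", "degrees": 90},
    "JOB2": {"rule": "area"},
})
print(sorted(CONFIG_SPACE))  # -> ['JOB0', 'JOB1', 'JOB2']
```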
In a typical OPC flow, the business logic of each task is invisible to the other tasks: a following task neither knows nor cares about the specific business logic of the preceding tasks, so the various tasks execute in different processes. Furthermore, as mentioned above, the data of different tasks are usually located on different nodes (e.g., computers, handheld devices, etc.), so data association can only happen through disk files, and the running speed is affected by disk I/O performance. In addition, because communication between preceding and following tasks goes through hard-disk interaction, the user must pay extra attention to the inputs and outputs between tasks; that is, the user must handle the dependency relationships, so usability is poor, and the data interaction time is long. In some embodiments of the present disclosure, the business logic configurations of the multiple tasks are all read into a memory space, so that they are in the same process during execution; the user therefore does not need to pay attention to switching between tasks. In addition, reading information from memory is much faster than exchanging information through disk files. In this way, during the execution of a following task, the execution status of the preceding task can be known and it can be determined whether a dependency relationship exists between them. On this basis, the execution of different tasks can be scheduled reasonably; that is, the execution of a following task is not restricted to begin only after the preceding task has fully completed. The execution efficiency of multiple tasks can thus be significantly improved, and the execution time greatly reduced.
In some embodiments, the plurality of tasks are prioritized based on a process order corresponding to each task of the plurality of tasks. In some embodiments, the user may specify other priorities or execution orders according to actual needs.
At block 304, each of the plurality of tasks is divided into a plurality of subtasks, each of which may correspond to a respective portion of the layout. For example, FIG. 4 illustrates a schematic diagram of an architecture 400 of a fine dependency manager according to some embodiments of the present disclosure. In FIG. 4, 402 is a view of task JOB0, which generally corresponds to a layout. Four subtasks t0-0 through t0-3 are shown, each corresponding to one pane (cell) in the task view. In some embodiments, after each task is divided into subtasks, the execution time of a following subtask may be determined by determining the dependency of that following subtask, in the task of following priority, on a preceding subtask in the task of preceding priority; in this way, the execution of the tasks may be optimized.
In some embodiments, dividing each of the plurality of tasks into a plurality of subtasks includes making the region processed by each subtask of each task correspond one-to-one to the physical position range of one cell in the cell matrix.
This is further described below in conjunction with FIG. 4. As noted above, 402 is a view of task JOB0, with four subtasks t0-0 through t0-3 each corresponding to a pane in the task view, and 404 is a view of task JOB1; both views generally correspond to the layout. Subtask t1-0 of JOB1 corresponds to the physical position range of one pane, i.e., one cell region, in its task view. The pane corresponding to subtask t1-0 of JOB1 covers the same physical range as the panes corresponding to the four subtasks t0-0 through t0-3 in the view of JOB0. In other words, the region on the layout processed by subtask t1-0 is the same region as that processed by the four subtasks t0-0 through t0-3. In the embodiment shown, although the cells into which view 402 of JOB0 is divided differ in size from the cells of view 404 of JOB1, for each task the subtasks correspond one-to-one to physical position ranges on the layout, i.e., to particular cells.
Referring again to FIG. 4, 408 represents a view of the layout; view 402 of JOB0 and view 404 of JOB1 each correspond to view 408 of the layout. In other words, the view representing the layout is visible to all tasks, and each subtask corresponds to a predetermined physical range on the layout. For example, subtask t1-0 of JOB1 corresponds to one pane in its task view, while the four subtasks t0-0 through t0-3 in the view of JOB0 correspond to a sub-matrix of four panes (cells); these subtasks cover the same physical range on the layout.
In this way, by making a following subtask correspond to the same physical range as one or more preceding subtasks (specifically, by dividing the layout into a cell matrix and making the region processed by each subtask correspond one-to-one to the physical position range of one cell in that matrix), the execution states of the preceding subtasks covering the same physical range can be queried before a following subtask executes, so it can be determined when the following subtask may start. This converts fully serial execution of the tasks into serial execution of only some subtasks, with the tasks as a whole executing in parallel.
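The correspondence between one coarse cell of a following task and the sub-matrix of finer cells of a preceding task can be sketched as follows, with a hypothetical helper that assumes two uniform grids whose cell edges differ by an integer ratio:

```python
# Hypothetical helper: which fine-grid cells of a preceding task fall
# under one coarse-grid cell of a following task, when the coarse cell
# edge is `ratio` times the fine cell edge (two uniform grids assumed).
def covered_cells(coarse_row, coarse_col, ratio):
    return [(coarse_row * ratio + r, coarse_col * ratio + c)
            for r in range(ratio) for c in range(ratio)]

# JOB1's subtask t1-0 at coarse cell (0, 0) with a 2x edge ratio covers
# the same physical range as JOB0's four subtasks t0-0 through t0-3.
print(covered_cells(0, 0, 2))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```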
At block 306, the dependency relationship between a following subtask in the task of following priority and a preceding subtask in the task of preceding priority is determined, thereby obtaining a determination result. The dependency relationship indicates that the execution of the following subtask depends on the execution state of the preceding subtask corresponding to the same physical range on the layout. In a typical OPC flow, the tasks are simply executed serially; neither the subdivision of tasks nor the dependency relationships between subdivided subtasks are considered. In some embodiments of the present disclosure, by subdividing each task into subtasks and determining the dependency relationships between following subtasks and preceding subtasks, the execution order of the tasks can be optimized and the execution efficiency improved.
In some embodiments, determining a dependency relationship between a following subtask in the following-priority task and a preceding subtask in the preceding-priority task includes determining whether a preceding subtask corresponding to the same physical scope on the layout as the following subtask has completed execution, and in a case where the preceding subtask has not completed execution, determining that the following subtask has a dependency relationship with the preceding subtask. In some embodiments, the fine dependency manager 406 may be queried for the status of prior subtasks, where the scheduler may update the status of the subtasks to the fine dependency manager 406 after each prior subtask has completed execution. In some embodiments, the scheduler also notifies the fine dependency manager 406 where the execution results are stored after each preceding subtask has completed execution.
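This dependency check can be sketched with a toy stand-in for the fine dependency manager 406 (class and method names are assumptions, not the patent's API): the scheduler marks preceding subtasks as done, and a following subtask has a dependency while any preceding subtask covering the same physical range is unfinished.

```python
# Toy stand-in for the fine dependency manager 406 (names are
# assumptions): per task, a map of cell -> state (0 = unfinished,
# 1 = finished), updated by the scheduler as subtasks complete.
class FineDependencyManager:
    def __init__(self):
        self.states = {}  # task name -> {cell: 0 or 1}

    def mark_done(self, task, cell):
        self.states.setdefault(task, {})[cell] = 1

    def has_dependency(self, prior_task, cells):
        # a following subtask depends on the prior task while any
        # preceding subtask over the same physical range is unfinished
        done = self.states.get(prior_task, {})
        return any(done.get(cell, 0) == 0 for cell in cells)

mgr = FineDependencyManager()
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]  # range covered by t1-0
for cell in cells[:3]:                    # t0-0 .. t0-2 finish
    mgr.mark_done("JOB0", cell)
print(mgr.has_dependency("JOB0", cells))  # -> True (t0-3 unfinished)
mgr.mark_done("JOB0", (1, 1))             # t0-3 finishes
print(mgr.has_dependency("JOB0", cells))  # -> False, t1-0 may start
```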
The operation of the scheduler and the fine dependency manager is further described below with reference to FIG. 5, which illustrates a schematic diagram of an architecture 500 of an OPC calculation engine according to some embodiments of the present disclosure. As shown in FIG. 5, a user 502 and an OPC engine 504 are shown. The OPC engine 504 may include the fine dependency manager 406 as well as scheduler 0, scheduler 1, and scheduler 2. Three tasks are also shown, JOB0, JOB1, and JOB2, each with a corresponding scheduler: scheduler 0, scheduler 1, and scheduler 2, respectively.
Task JOB0 does not depend on any other task because it has the highest priority, so it can be executed directly. Each time one of its subtasks finishes, this is reported to the fine dependency manager 406, which may update the state of that subtask. In some embodiments, the states of the individual subtasks in each task may be stored in memory in the form of a matrix. As the subtasks are executed, the state of each subtask in the matrix is continually updated (as indicated by arrow S0); for example, a finished subtask's state is updated to 1, while an unexecuted subtask's state remains 0. This storage scheme is merely exemplary; the present disclosure is not limited thereto, and various modifications are possible. For JOB1, since its priority is lower than that of JOB0, its subtasks may depend on the subtasks in JOB0. Therefore, before execution, scheduler 1 needs to query whether a subtask depends on the subtasks at the corresponding position in JOB0, specifically, whether the preceding subtasks have finished executing. For example, as shown in FIG. 5, before scheduling subtask t1-0 to execute, scheduler 1 of JOB1 queries whether there is a dependency on the preceding subtasks corresponding to subtask t1-0, i.e., t0-0 through t0-3 of JOB0; in other words, it queries their execution states. If there is a dependency (i.e., they have not all finished), scheduler 1 may notify (as indicated by arrow S1) the following subtask to suspend execution and wait for the preceding subtasks t0-0 through t0-3 to finish. If those subtasks have all finished executing, scheduler 1 of JOB1 can confirm that t1-0 may be executed. In addition, after the preceding subtasks t0-0 through t0-3 have finished, scheduler 1 may notify (as indicated by arrow S1) that the following subtask can now be executed.
After a following subtask of JOB1 completes execution, its execution state is likewise updated to the fine dependency manager 406. As can be seen in FIG. 5, S1 is a double-headed arrow, indicating that the interaction between scheduler 1 and the fine dependency manager 406 is bidirectional: the scheduler both queries the status of preceding subtasks and updates the execution status of its own completed subtasks.
For task JOB2, similarly to JOB1, before a subtask is executed, scheduler 2 needs to query whether the subtask depends on the subtasks at the corresponding positions in JOB0 and JOB1, i.e., to query the execution state of the preceding subtasks. If a dependency is unsatisfied, scheduler 2 may notify (as indicated by arrow S2) the following subtask to suspend execution and wait for the preceding subtasks in JOB0 and/or JOB1 to complete. After the preceding subtasks have completed execution, scheduler 2 may notify (as indicated by arrow S2) the following subtask that it may now be executed. Since JOB2 is the task with the lowest priority, it is not necessary to update the execution states of its subtasks to the fine dependency manager 406 after they complete. In this way, the user only needs to define the execution order and need not be concerned with the dependencies, thereby improving ease of use. Furthermore, the dependency relationships may be determined by the schedulers, and through the schedulers working in cooperation with the fine dependency manager 406, the execution order of the subtasks can be optimized, improving execution efficiency.
At block 308, execution of the following subtask is scheduled based on the determination result. In some embodiments, scheduling execution of the following subtask based on the determination result includes: if the determination result indicates that the following subtask does not depend on the preceding subtask, instructing the following subtask to start executing; if the determination result indicates that the following subtask depends on the preceding subtask, instructing the following subtask to wait for the preceding subtask to finish executing. After the subtasks in each task have all completed execution, they can be combined to form a complete task output.
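The scheduling decision at block 308 can be sketched as follows. This is a minimal illustration with hypothetical helper names: `depends_done` stands in for a query to the fine dependency manager, and `run` for the actual subtask execution:

```python
import time

def schedule_subtask(subtask, depends_done, run, poll_s=0.001):
    # depends_done() would query the fine dependency manager for the
    # execution states of the preceding subtasks in the same physical range.
    while not depends_done():
        time.sleep(poll_s)      # suspend: a preceding subtask is unfinished
    return run(subtask)         # instruct the following subtask to execute

# Simulate three preceding subtasks finishing one by one.
remaining = [3]
def depends_done():
    remaining[0] -= 1
    return remaining[0] <= 0

result = schedule_subtask("t1-0", depends_done, lambda s: s + " done")
```

A polling loop is used here purely for brevity; an event- or callback-based notification (as the arrows S1/S2 suggest) would avoid busy-waiting.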
As described above in connection with FIGS. 4 and 5, based on the determination result, the scheduler may instruct the following subtask to suspend execution or to proceed. That is, the scheduler may implement functions such as querying status, distributing tasks, and orchestrating the execution times of subtasks. In some embodiments, the scheduler corresponding to each task may schedule a respective engine to perform the respective task. The respective engines may be located on the same computing device (e.g., a computer) or on different computing devices.
In some embodiments, the method further comprises storing the execution results of the preceding subtasks in a file (e.g., on a disk or a network hard drive) or in a disk space. In a case where the following subtask depends on the preceding subtask, the following subtask is then executed based on that stored execution result.
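A minimal sketch of this file-based handoff, assuming a shared result directory and a hypothetical naming scheme for per-subtask result files (neither is specified by the disclosure):

```python
import json
import os
import tempfile

# A preceding subtask writes its result to a shared disk area; a dependent
# following subtask loads it before executing.
def store_result(result_dir, job, cell, result):
    path = os.path.join(result_dir, f"job{job}_cell{cell}.json")
    with open(path, "w") as f:
        json.dump(result, f)

def load_result(result_dir, job, cell):
    path = os.path.join(result_dir, f"job{job}_cell{cell}.json")
    with open(path) as f:
        return json.load(f)

result_dir = tempfile.mkdtemp()
store_result(result_dir, 0, 0, {"corrected_edges": [1, 2, 3]})
prev = load_result(result_dir, 0, 0)   # following subtask consumes this
```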
In some embodiments, as mentioned previously, the business logic may specify that the layout is rotated 90 degrees and cut into cells of size 10 nm by 10 nm. The OPC engine can perform the rotation and cutting programmatically to form the subtasks of the JOB. When a subtask completes, the corresponding processing result is stored in an area such as a disk space, and is obtained through the fine dependency manager 406 by other JOBs for their dependent subtasks.
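Under the assumption of a rectangular layout measured in nanometers, the rotation-and-cutting step might be sketched as follows; the function name and the cell representation are illustrative only:

```python
# Hypothetical sketch: divide a (rotated) layout into fixed-size cells,
# each cell becoming one subtask with a one-to-one physical position range.
def cut_into_cells(width_nm, height_nm, cell_nm=10):
    # Rotating the layout by 90 degrees swaps width and height.
    width_nm, height_nm = height_nm, width_nm
    cells = []
    for y in range(0, height_nm, cell_nm):
        for x in range(0, width_nm, cell_nm):
            # Each cell is (x_min, y_min, x_max, y_max) in nanometers.
            cells.append((x, y, x + cell_nm, y + cell_nm))
    return cells

subtasks = cut_into_cells(20, 30)   # 6 cells -> 6 subtasks per job
```

Because every job is cut with the same grid, the cell index gives the one-to-one correspondence between a following subtask and its preceding subtasks.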
The above embodiments are merely illustrative, and the embodiments of the present disclosure are not limited to the above embodiments, but may be variously modified.
In some embodiments of the present disclosure, parallel processing can be realized by introducing a fine dependency manager and schedulers to manage the subtask dependency relationships between different procedures (tasks). The fine dependency manager may be designed primarily based on the following three task characteristics. 1. The global view of the OPC layout seen by different procedures is consistent. 2. Each job is divided into a set of subtasks according to a preset physical dimension (which may be set by the user, for example, or by the OPC engine based on other considerations). 3. The dependencies between the subtasks of different processes are confined to the same physical scope. Based on these three points, a new OPC engine architecture can be realized: each scheduler updates the status of its completed subtasks to the fine dependency manager, and each scheduler queries whether the preceding-stage subtasks depended on by a subtask to be executed have completed. On this basis, the execution of the tasks can be optimized and execution efficiency improved.
In some embodiments of the present disclosure, the global dependencies between jobs are subdivided by the fine dependency manager into subtask-level local dependencies. Meanwhile, multiple jobs are configured together, with a priority order among them that matches the characteristics of the distributed tasks, thereby improving both ease of use and efficiency. The fine dependency manager refines the dependencies to the level of subtasks, thereby obtaining job-level parallelism at run time. In addition, the engine can acquire the configurations of the different jobs simultaneously, so the user need not specify the details of switching between tasks, which further improves usability.
In some embodiments of the disclosure, a pipeline-based OPC distributed computing engine is designed to combine a plurality of jobs into a pipeline, where each job is a process (stage) on the pipeline. The user only needs to configure the business logic of the different processes, and the OPC engine creates an independent subtask scheduler for each process. The business logic configuration of each process is visible to the schedulers of the other processes, so the user does not need to care about the switching logic between processes. Because the multiple sets of logic are visible to the different process schedulers, data interaction between processes can be accomplished through memory interaction. Therefore, viewed as a whole, the processes have no strict precedence relationship: the execution-time dependencies of each process's subtasks can be independently resolved through the check results of the fine dependency manager, and the actual run-time behavior is that of a parallel pipeline.
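The parallel-pipeline behavior described above can be illustrated with a small threading sketch (hypothetical names). Each scheduler thread waits only on the same-cell subtask of the immediately preceding process; because that process in turn waited on its own predecessor, all higher-priority dependencies are satisfied transitively:

```python
import threading

def run_pipeline(num_jobs, num_cells, work):
    # done[job][cell] is set when that subtask has completed execution
    # (the role played by the fine dependency manager's state matrix).
    done = [[threading.Event() for _ in range(num_cells)]
            for _ in range(num_jobs)]

    def scheduler(job):
        for cell in range(num_cells):
            if job > 0:                 # query the preceding process
                done[job - 1][cell].wait()
            work(job, cell)             # execute the subtask
            done[job][cell].set()       # update execution state

    threads = [threading.Thread(target=scheduler, args=(j,))
               for j in range(num_jobs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

log = []
run_pipeline(3, 4, lambda j, c: log.append((j, c)))
# Subtasks of different jobs interleave, but for each cell the order
# job 0 -> job 1 -> job 2 is preserved.
```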
According to the embodiments of the present disclosure, the dependency relationships between subtasks are kept transparent to the user; that is, the engine knows the mutual dependencies between the pieces of business logic, so that serial operation can be turned into parallel operation. Meanwhile, the user's configuration is simpler, improving ease of use.
The embodiments of the present disclosure can also provide a uniform implementation framework for other features and are compatible with the case where jobs have a global serial dependency. As long as a task is abstracted into a process and placed at a proper position in the pipeline, the engine can automatically trigger its operation according to the dependency rules.
In the embodiments of the present disclosure, three JOBs are taken as an example for description, but the embodiments of the present disclosure are not limited thereto; any other number of JOBs, for example hundreds or even thousands of JOBs, may be used.
Fig. 6 illustrates a schematic block diagram of an example device 600 that can be used to implement embodiments of the present disclosure. For example, the electronic device of the present disclosure may be implemented by the device 600. As shown, device 600 includes a Central Processing Unit (CPU) 601 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processing unit 601 performs the various methods and processes described above, such as the method 300. For example, in some embodiments, the method 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by CPU 601, one or more steps of method 300 described above may be performed. Alternatively, in other embodiments, CPU 601 may be configured to perform method 300 by any other suitable means (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for optical proximity correction, comprising:
determining priorities of a plurality of tasks for a layout in response to receiving information for the plurality of tasks;
dividing each task of the plurality of tasks into a plurality of subtasks, wherein each subtask of the plurality of subtasks corresponds one-to-one to a physical position range on the layout;
determining the dependency relationship between a subsequent subtask in the task with the subsequent priority and a previous subtask in the task with the previous priority, and obtaining a determination result, wherein the dependency relationship represents that the execution of the subsequent subtask depends on the execution state of the previous subtask corresponding to the same physical range on the layout; and
scheduling execution of the following subtask based on the determination result.
2. The method of claim 1, further comprising:
determining a business logic configuration of the plurality of tasks; and
reading the business logic configurations of the plurality of tasks into a memory space, so that the business logic configurations of the plurality of tasks are in the same process during execution of the plurality of tasks.
3. The method of claim 1, wherein determining priorities of the plurality of tasks comprises:
determining the priorities of the tasks based on the process sequence corresponding to each task in the tasks.
4. The method of claim 1, wherein dividing each of the plurality of tasks into a plurality of subtasks comprises:
dividing the layout into a cell matrix; and
causing the area processed by each subtask of each task of the plurality of tasks to respectively correspond one-to-one to the physical position range of one cell area in the cell matrix.
5. The method of claim 1, wherein determining the dependency relationship between the subsequent subtask in the task with the subsequent priority and the previous subtask in the task with the previous priority comprises:
determining whether the previous subtask corresponding to the same physical range on the layout as the subsequent subtask has finished executing; and
determining that the subsequent subtask has a dependency relationship with the previous subtask in a case where the previous subtask has not finished executing.
6. The method of claim 1, further comprising:
updating the state of each of the preceding subtasks after the preceding subtask has been executed.
7. The method of claim 1, wherein scheduling, based on the determination result, execution of the following subtask comprises:
instructing to start executing the following subtask if the determination result indicates that the following subtask is not dependent on the preceding subtask; or
instructing the following subtask to wait for the preceding subtask to finish executing if the determination result indicates that the following subtask depends on the preceding subtask.
8. The method of claim 1, further comprising:
storing the execution result of the previous subtask in a disk space; and
executing the following subtask based on the execution result stored in the disk space in a case where the following subtask depends on the preceding subtask.
9. An electronic device, comprising:
a processor; and
a memory coupled with the processor, the memory having instructions stored therein that, when executed by the processor, cause the device to perform acts comprising:
determining priorities of a plurality of tasks for a layout in response to receiving information for the plurality of tasks;
dividing each task of the plurality of tasks into a plurality of subtasks, wherein each subtask of the plurality of subtasks corresponds one-to-one to a physical position range on the layout;
determining the dependency relationship between a subsequent subtask in the task with the subsequent priority and a previous subtask in the task with the previous priority, and obtaining a determination result, wherein the dependency relationship represents that the execution of the subsequent subtask depends on the execution state of the previous subtask corresponding to the same physical range on the layout; and
scheduling execution of the following subtask based on the determination result.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method for optical proximity correction according to any one of claims 1-8.
CN202111657509.XA 2021-12-30 2021-12-30 Method, electronic device, and storage medium for optical proximity correction Pending CN114296935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111657509.XA CN114296935A (en) 2021-12-30 2021-12-30 Method, electronic device, and storage medium for optical proximity correction


Publications (1)

Publication Number Publication Date
CN114296935A true CN114296935A (en) 2022-04-08

Family

ID=80973103


Country Status (1)

Country Link
CN (1) CN114296935A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination