CN113608854A - Task scheduling method and device for layout verification, server and storage medium - Google Patents

Task scheduling method and device for layout verification, server and storage medium

Info

Publication number
CN113608854A
Authority
CN
China
Prior art keywords
subtasks
slave
task
processor
layout
Prior art date
Legal status
Pending
Application number
CN202110907237.8A
Other languages
Chinese (zh)
Inventor
王帅龙
戴斌华
刘艳霞
Current Assignee
Shenzhen Huada Jiutian Technology Co ltd
Original Assignee
Shenzhen Huada Jiutian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huada Jiutian Technology Co ltd
Priority to CN202110907237.8A
Publication of CN113608854A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Design And Manufacture Of Integrated Circuits (AREA)

Abstract

The invention provides a task scheduling method, device, server and storage medium for layout verification. A master processor reads the layout data, divides the total task process for verifying the layout data into a plurality of subtasks, rates the subtasks, and rates a plurality of slave processors. The subtasks are then scheduled according to their rating data: the unassigned subtasks are assigned, one by one in order of their ratings, to those slave processors that have not yet been assigned a task and whose ratings match the ratings of the subtasks. Tasks can thus be scheduled and distributed efficiently and evenly, which improves the operating efficiency of the multi-machine system and, in turn, the running speed of the layout verification tool in multi-machine mode.

Description

Task scheduling method and device for layout verification, server and storage medium
Technical Field
The present disclosure relates to the field of computer aided design of integrated circuits, and in particular, to a task scheduling method, apparatus, server and storage medium for layout verification in semiconductor integrated circuit design.
Background
With the development of integrated circuit technology, chip feature sizes keep shrinking and the integration level of a single chip keeps rising; as chip scale grows, the number of design rules that must be verified at each stage of integrated circuit design increases. Design rule checking (DRC) of integrated circuit layouts and layout-versus-schematic (LVS) consistency checking have therefore become increasingly important: they play a major role in eliminating errors, reducing design cost and lowering the risk of design failure. In very large scale integrated circuit design, the layout scale expands rapidly, and completing the verification of a design within an acceptable time has become a problem that every major EDA vendor urgently needs to solve.
Parallel processing involves two key techniques: distributed processing and multithreading. Distributed processing divides a problem that requires very large computing power into many small parts, distributes those parts to many computers for processing, and finally combines the partial results into a final result. Multithreading is a parallel processing mode whose processing granularity is finer than that of distributed processing: the unit handled by each processing node in distributed processing is a process, whereas the unit handled in multithreading is a thread. Every program running on a system is a process, and each process contains one or more threads; a process can also be the dynamic execution of an entire program or a part of one. A thread is a set of instructions, or a special segment of a program, that can execute independently within the program; it can also be understood as the context in which the code runs. A thread is essentially a lightweight process responsible for executing multiple tasks within a single program, and the scheduling and execution of multiple threads is usually handled by the operating system. A thread is a single sequential control flow in a program; running multiple threads simultaneously in a single program to accomplish different tasks is called multithreading. Threads differ from processes in that multiple processes have separate code and data spaces, while multiple threads share a data space, each thread having its own execution stack and program counter as its execution context. Running a thread consumes the computer's memory resources and CPU.
When a layout verification tool handles the layout of a very large scale circuit, process or thread scheduling can be described by a producer-consumer model. In single-machine mode, hardware limits such as the memory of a single processor can cause the tool to fail, so a multi-machine mode is required, in which the layout data is distributed to a plurality of processors that run the tool; the multi-machine mode of a layout verification tool is usually designed as a master-slave system. In a master-slave multi-machine system, tasks must be scheduled and distributed, assigning the tasks to be processed to the slave processors of the system. Existing task scheduling methods lack quantitative analysis of the interactions between subtasks in the cooperative process, so task decomposition and resource allocation become disconnected. They also lack a comprehensive evaluation of task weight and resource capacity, so high-importance tasks easily occupy an excessive share of the resources and the allocation becomes only locally optimal. Such methods are not suitable for the multi-machine system of a layout verification tool: improper task allocation leaves some slave processors idle, the efficiency of the multi-machine system drops, and the running speed of the layout verification tool in multi-machine mode is severely affected.
Disclosure of Invention
In order to solve the technical problems, the disclosure provides a task scheduling method, a task scheduling device, a server and a storage medium for layout verification, which can efficiently and uniformly schedule and distribute tasks to improve the operating efficiency of a multi-machine system and further improve the operating speed of a layout verification tool in a multi-machine mode.
In one aspect, the present disclosure provides a task scheduling method for layout verification, applied to a master-slave multi-machine system, where the multi-machine system includes a master processor and a plurality of slave processors communicatively connected to the master processor, where the task scheduling method includes:
reading layout data by using the master processor;
dividing the total task process for verifying the layout data into a plurality of subtasks;
ranking the plurality of subtasks, and ranking the plurality of slave processors;
and scheduling the plurality of subtasks according to the rating data of the plurality of subtasks, and assigning the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks.
Preferably, the step of dividing the total task process for verifying the layout data into a plurality of subtasks includes:
dividing the total task process for verifying the layout data into the plurality of subtasks according to the task requirements and the configuration information of the plurality of slave processors, and assigning corresponding task numbers,
wherein the division of the total task process is performed with the layout structure layer as the unit, or with a region within a layout structure layer as the unit.
Preferably, the task scheduling method further includes:
acquiring the task numbers of the assigned subtasks in real time, and updating the information of the unassigned subtasks according to the task numbers.
Preferably, the step of ranking the plurality of subtasks comprises:
rating each of the unassigned subtasks according to the total number of all figures in its layout structure layer, or according to the total number of figures in its corresponding region, to obtain the rating data of the plurality of subtasks,
wherein the larger the total number of figures, the higher the rating of the corresponding subtask.
Preferably, the step of ranking the plurality of slave processors comprises:
acquiring and identifying the configuration and performance information of the plurality of slave processors, analyzing and judging the capability of each slave processor to execute subtasks according to that information, and rating the plurality of slave processors to obtain the rating data of the plurality of slave processors,
wherein the configuration and performance information of a slave processor includes the number and model of CPUs, memory information, disk information, and operating system information; the more powerful a slave processor's ability to execute subtasks, the higher its rating.
Preferably, the step of scheduling the plurality of subtasks according to the rating data of the plurality of subtasks, and assigning the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks includes:
acquiring task information and rating data of any unassigned subtask;
monitoring the states of the plurality of slave processors, and acquiring the rating data of the slave processors that have not been assigned subtasks;
selecting the unassigned subtasks one by one in order of their ratings and, for each, selecting according to its rating data a slave processor that has not been assigned a subtask and has the same rating; if no slave processor with the same rating is available, selecting a slave processor that has not been assigned a subtask and has a higher rating;
and completing the scheduling of the current unassigned subtask, and updating the task information of the unassigned subtasks.
In another aspect, the present disclosure further provides a task scheduling device for layout verification, applied to a master-slave multi-machine system, where the multi-machine system includes a master processor and a plurality of slave processors communicatively connected to the master processor, where the task scheduling device is disposed on the master processor, and includes:
a layout data reading-in component for reading layout data;
the task management component is in communication connection with the layout data reading component and is used for dividing a total task process for verifying the layout data into a plurality of subtasks and grading the subtasks;
the slave management component is in communication connection with the plurality of slave processors and is used for grading the plurality of slave processors;
and the task scheduling component is configured to schedule the plurality of subtasks according to the rating data of the plurality of subtasks, and to assign the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks.
Preferably, the task scheduling device further includes:
a display component, communicatively connected to the master processor, for displaying the result of scheduling the unassigned subtasks to the slave processors that have not been assigned tasks;
and a storage component, connected to the task management component and the slave management component respectively, for storing the layout data, the scheme by which the total task process for verifying the layout data is divided into the plurality of subtasks together with the rating data of the subtasks, and the rating data of the plurality of slave processors.
In another aspect, the present disclosure further provides a server, including:
a processor;
a memory for storing one or more programs;
wherein, when the aforementioned one or more programs are executed by the aforementioned processor, the aforementioned processor implements the task scheduling method for layout verification as described above.
In yet another aspect, the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the task scheduling method for layout verification as described above.
The beneficial effects of this disclosure are as follows. With the task scheduling method, device, server and storage medium for layout verification provided by the disclosure, the master processor reads the layout data, divides the total task process for verifying the layout data into a plurality of subtasks, rates the subtasks, and rates the plurality of slave processors; the subtasks are then scheduled according to their rating data, and the unassigned subtasks are assigned, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks. The slave processors are thereby used to the greatest possible extent, and the drop in multi-machine efficiency caused by slave processors left idle through improper task allocation is avoided. Tasks can therefore be scheduled and distributed efficiently and evenly, the operating efficiency of the multi-machine system is improved, and the running speed of the layout verification tool in multi-machine mode is further improved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 illustrates a flowchart of a task scheduling method for layout verification according to an embodiment of the present disclosure;
FIG. 2 is a block diagram schematically illustrating a master-slave multi-computer system to which the task scheduling method shown in FIG. 1 is applied;
fig. 3 is a schematic structural diagram of a task scheduling device for layout verification according to a second embodiment of the present disclosure;
FIG. 4 is a diagram illustrating an application model of a task scheduling component corresponding to the second embodiment shown in FIG. 3;
fig. 5 shows a schematic structural diagram of a server provided in the third embodiment of the present disclosure.
Detailed Description
To facilitate an understanding of the present disclosure, the present disclosure will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present disclosure are set forth in the accompanying drawings. However, the present disclosure may be embodied in different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure.
The present disclosure is described in detail below with reference to the accompanying drawings.
Embodiment one:
fig. 1 shows a flowchart of a task scheduling method for layout verification according to an embodiment of the present disclosure, and fig. 2 shows a block diagram of a master-slave multi-machine system to which the task scheduling method shown in fig. 1 is applied.
On one hand, the first embodiment of the present disclosure provides a task scheduling method for layout verification, which may be executed by an electronic device equipped with an EDA design tool; the electronic device may be, for example, a server or a terminal device. In practical applications the method is applied to a master-slave multi-machine system (as shown in fig. 2), which includes a master processor 10 and a plurality of slave processors 20 communicatively connected to the master processor 10. Referring to fig. 1, the task scheduling method may specifically include the following steps S110 to S140:
step S110: the layout data is read by a host processor.
In the layout design of an integrated circuit, the schematic diagram is usually designed first; the outline and size of the printed circuit board (PCB) are then set, environmental parameters are configured according to the designer's own habits, data such as the netlist and component packages are imported, and the working parameters are set, usually including the board-layer settings and the routing rules. After this preparation is completed, the components can be placed, followed by automatic routing, manual adjustment of unreasonable parts of the drawing, and finally design verification, which produces a layout. A layout is the series of geometric figures into which integrated circuit design converts the designed, simulated and optimized circuit; the layout figures contain all the physical information of the devices, such as the dimensions of the integrated circuit and the topology definition of each layer, and the integrated circuit manufacturer fabricates masks according to the layout information.
The layout verification workload is large. Verification of the layout mainly covers two aspects: design rule checking (DRC) and layout-versus-schematic comparison (LVS). In layout design, the size, shape and position of each component in the chip, its connections with other elements, and so on are carefully considered so that a very compact layout and optimal circuit performance can be obtained. While drawing the layout, a small part of the structure of devices with complex, repeated structures (such as high-voltage I/O switching devices) sometimes fails to meet the design requirements, and these local structures need to be adjusted after verification. The usual verification and adjustment method is to decompose the layout structure layers of the whole device and then modify them so that the device meets the performance requirements of the overall circuit. However, the layout data of an integrated circuit containing many devices, i.e. the gds file, is very large, which leads to a long design and verification cycle; when the layout of a very large scale circuit is involved, a single processor may even fail to run the tool because of hardware limits such as memory, which greatly reduces verification efficiency. Pre-layout simulation of the circuit and post-layout simulation also consume a large amount of time, because the processing speed of existing Spice simulation software for post-layout simulation cannot meet designers' requirements. Therefore, in a master-slave multi-machine system, tasks need to be scheduled reasonably and resources allocated reasonably, and the tasks to be processed must be assigned to the corresponding slave processors of the multi-machine system.
Step S120: and dividing the total task process for verifying the layout data into a plurality of subtasks.
In step S120, the method may specifically include: dividing the total task process for verifying the layout data into the plurality of subtasks according to the task requirements and the configuration information of the plurality of slave processors, and assigning corresponding task numbers, wherein the division is performed with the layout structure layer as the unit, or with a region within a layout structure layer as the unit. Specifically, for example, when the layout of a very large scale circuit is involved, the number and distribution of figures differ between layout structure layers. For a layer with few figures, the verification subtasks can be divided with the layout structure layer as the unit, matched to the execution capability (configuration and performance information) of the processors. For a layer with many, complexly distributed figures, dividing by whole layers would require a higher-performance processor and a longer verification time, and if the structure then needs to be optimized, the cycle of modification and repeated verification becomes longer, which is not conducive to overall efficiency. To shorten the execution time of a single subtask, further lower the performance requirement on the processors, and improve the overall efficiency of layout verification through reasonable resource allocation, such a layer can instead be divided into regions according to the number of figures, the division performed with these regions as the unit, the verification task split into a plurality of subtasks, and corresponding task numbers assigned.
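As a minimal sketch of this splitting step (Python; the data model, names and threshold rule are assumptions, since the patent does not prescribe an implementation), each layout structure layer, or each region of a figure-heavy layer, becomes one numbered subtask:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class SubTask:
    task_id: int                                  # task number assigned by the master processor
    layer: str                                    # layout structure layer the subtask verifies
    region: Optional[Tuple[int, int, int, int]]   # region within the layer, or None for the whole layer
    figure_count: int                             # total number of figures covered by the subtask

def split_total_task(layers: Dict[str, List[Tuple[Tuple[int, int, int, int], int]]],
                     region_threshold: int) -> List[SubTask]:
    """Split the total verification task by layer, or by region for figure-heavy layers.

    `layers` maps a layer name to a list of (region, figure_count) pairs;
    `region_threshold` is the figure count above which a layer is split by region.
    """
    subtasks: List[SubTask] = []
    next_id = 1
    for name, regions in layers.items():
        total = sum(count for _, count in regions)
        if total <= region_threshold:
            # few figures: one subtask for the whole layout structure layer
            subtasks.append(SubTask(next_id, name, None, total))
            next_id += 1
        else:
            # many figures: one subtask per region of the layer
            for region, count in regions:
                subtasks.append(SubTask(next_id, name, region, count))
                next_id += 1
    return subtasks
```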
Optionally, the task scheduling method may further include:
acquiring the task numbers of the assigned subtasks in real time, and updating the information of the unassigned subtasks according to the task numbers, so that the assignment and completion status of each subtask can be tracked dynamically, which improves the assignment efficiency of the unassigned subtasks and the overall efficiency of layout design and verification.
Step S130: the plurality of subtasks are ranked, and the aforementioned plurality of slave processors are ranked.
In step S130, the step of rating the plurality of subtasks may specifically include:
rating each of the unassigned subtasks according to the total number of all figures in its layout structure layer, or according to the total number of figures in its corresponding region, to obtain the rating data of the plurality of subtasks, wherein the larger the total number of figures, the higher the rating of the corresponding subtask.
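A sketch of this rating rule, reusing the `SubTask` records from the splitting sketch above; the level boundaries are illustrative assumptions, the only stated requirement being that more figures yield a higher rating:

```python
def rate_subtasks(subtasks, level_bounds=(10_000, 1_000_000)):
    """Rate subtasks by total figure count: the more figures, the higher the level.

    The boundary values in `level_bounds` are assumptions; the patent only states
    that a larger total number of figures maps to a higher rating.
    """
    ratings = {}
    for t in subtasks:
        # level 1 for the smallest subtasks, up to len(level_bounds) + 1 for the largest
        ratings[t.task_id] = 1 + sum(t.figure_count > bound for bound in level_bounds)
    return ratings
```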
The step of ranking the plurality of slave processors may specifically comprise:
obtaining and identifying the configuration and performance information of the plurality of slave processors 20, analyzing and judging the capability of each slave processor 20 to execute subtasks according to that information, and rating the plurality of slave processors 20 to obtain the rating data of the plurality of slave processors 20,
wherein the configuration and performance information of a slave processor 20 includes the number and model of CPUs, memory information, disk information, and operating system information; the more powerful the ability of a slave processor 20 to execute subtasks, the higher its rating.
Through this double rating of the subtasks and of the slave processors 20 that execute them, the status of the subtasks and the capability of the slave processors 20 to execute assigned subtasks can be obtained more clearly, so that operators can perform subtask assignment intuitively and quickly. This effectively improves the utilization of the slave processors 20, avoids the repeated re-assignment that follows a failure when the number of figures in a subtask is too large for a slave processor 20 to handle, effectively reduces the running time, improves the efficiency of matching subtasks to slave processors 20, and greatly improves the overall efficiency of layout verification.
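By analogy, the slave-processor rating can be sketched from the configuration and performance information listed above; the scoring formula and the boundary values below are assumptions, the only stated requirement being that a more capable slave processor 20 receives a higher rating:

```python
from dataclasses import dataclass

@dataclass
class SlaveInfo:
    name: str        # e.g. "slave1"
    cpu_count: int   # number of CPUs
    cpu_model: str   # CPU model string
    memory_gb: int   # memory information
    disk_gb: int     # disk information
    os_info: str     # operating system information

def rate_slaves(slaves, cpu_bounds=(8, 32), mem_bounds=(32, 128)):
    """Rate slave processors from their configuration and performance information.

    The combination rule (limited by the weaker of CPU and memory) and the boundary
    values are illustrative assumptions.
    """
    ratings = {}
    for s in slaves:
        cpu_level = 1 + sum(s.cpu_count > b for b in cpu_bounds)
        mem_level = 1 + sum(s.memory_gb > b for b in mem_bounds)
        ratings[s.name] = min(cpu_level, mem_level)
    return ratings
```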
Step S140: the plurality of subtasks are scheduled according to the rating data of the plurality of subtasks, and the unassigned subtasks are assigned, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks.
In step S140, the step of scheduling the plurality of subtasks according to their rating data and assigning the unassigned subtasks to the slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks may specifically include:
acquiring the rating data of any unallocated subtask;
monitoring the states of the plurality of slave processors 20, and acquiring the rating data of the plurality of slave processors 20 which are not allocated with the subtasks;
from the perspective of the unassigned subtasks: the currently unassigned subtasks are selected one by one in order of their ratings, and for each, a slave processor 20 that has not been assigned a subtask and has the same rating is selected according to the rating data; if no slave processor 20 with the same rating is available, a slave processor 20 one level higher that has not been assigned a subtask is selected; if that does not exist either, a slave processor 20 two levels higher is selected, and so on recursively until a slave processor 20 suitable for the assignment is found. From the perspective of a slave processor 20 in the idle state: if there are currently unassigned subtasks, the subtask with the same rating as the slave processor 20 is selected first according to the rating data; if there is no subtask with the same rating, a subtask whose rating differs by one level is selected; if there is still none, a subtask whose rating differs by two levels is selected, and so on recursively until a subtask suitable for assignment is found;
and scheduling to complete the current unallocated subtasks, and updating the task information of the unallocated subtasks.
For all slave processors 20, a single slave processor 20 executes only one subtask at a time, and the master processor 10 assigns it a new subtask after it has completed the current one.
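The matching rule in this step can be sketched from both perspectives; the function names and the tie-breaking are assumptions, but the "same rating first, then one level at a time" recursion follows the text above:

```python
def pick_slave_for_task(task_level, idle_slaves, slave_ratings, max_level):
    """From the perspective of an unassigned subtask: prefer an idle slave with the
    same rating, then one level higher, then two, and so on; return None if no
    idle slave of a sufficient rating exists."""
    for level in range(task_level, max_level + 1):
        for slave in idle_slaves:
            if slave_ratings[slave] == level:
                return slave
    return None

def pick_task_for_slave(slave_level, pending_tasks, task_ratings):
    """From the perspective of an idle slave: prefer a pending subtask with the same
    rating, otherwise the one whose rating is closest (a one-level difference before
    a two-level difference, and so on)."""
    if not pending_tasks:
        return None
    return min(pending_tasks, key=lambda t: abs(task_ratings[t] - slave_level))
```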
Therefore, the task scheduling method for layout verification according to this embodiment makes the greatest possible use of the slave processors and avoids the loss of multi-machine efficiency caused by slave processors left idle through improper task allocation, so that tasks can be scheduled and distributed efficiently and evenly, the operating efficiency of the multi-machine system is improved, and the running speed of the layout verification tool in multi-machine mode is further improved.
Embodiment two:
fig. 3 shows a schematic structural diagram of a task scheduling device for layout verification according to a second embodiment of the present disclosure, and fig. 4 shows a schematic application model corresponding to the task scheduling component in the second embodiment shown in fig. 3.
Referring to fig. 3, a second embodiment of the present disclosure provides a task scheduling device 100 for layout verification. The task scheduling device 100 is likewise applied to a master-slave multi-machine system (as shown in fig. 2); assume that the master-slave multi-machine system includes a master processor 10 and three slave processors 20, numbered slave 1, slave 2 and slave 3 to distinguish them, and that a layout verification task needs to be completed by this master-slave multi-machine system. The task scheduling device 100 is provided in the master processor 10 of the master-slave multi-machine system and includes: a layout data reading component 110, a task management component 120, a slave management component 130, and a task scheduling component 140.
Alternatively, the layout data reading component 110 located on the master processor 10 may be used to read the complete layout data and store it on the master processor 10. The task management component 120 is communicatively connected to the layout data reading component 110; it may divide the total task process for verifying the layout data into a plurality of subtasks according to the read layout data, the task requirements, the information on the slave processors 20, and so on, and assign corresponding numbers (the division is performed with the layout structure layer, or a region within a layout structure layer, as the unit, as described in the first embodiment and not repeated here). After the task management component 120 completes the division, it may rate all the subtasks and obtain the rating data of the plurality of subtasks. The rating criterion is the total number of figures in all layout structure layers corresponding to the subtask, or the total number of figures in the region corresponding to the subtask; the larger the total number of figures, the higher the rating of the subtask.
For example, the task management component 120 recognizes that the layout has 5 layout structure layers and therefore divides the task into 5 subtasks with the layout structure layer as the unit, numbered task 1 through task 5. The task management component 120 then rates all the subtasks; assume that the result of the rating is: task 1 and task 2 are rated level 1, task 3 and task 4 are rated level 2, and task 5 is rated level 3.
Optionally, the slave management component 130 is communicatively connected to the plurality of slave processors 20 (slave 1, slave 2, and slave 3) in the master-slave multi-machine system, and rates the plurality of slave processors 20 to obtain their rating data. The rating is based on the configuration and performance information of each slave processor 20, including the number and model of CPUs, memory information, disk information, operating system information, and so on; the better the performance of a slave processor 20, the higher its rating. Assume that the result of rating the plurality of slave processors 20 (slave 1, slave 2, and slave 3) is: slave 1 is rated level 1, slave 2 is rated level 2, and slave 3 is rated level 3.
Optionally, the task scheduling component 140 is communicatively connected to the task management component 120 and the slave management component 130, respectively, and schedules and assigns the unassigned subtasks one by one in order of their ratings, based on the rating data of all the subtasks held by the task management component 120, the rating data of all the slave processors 20 held by the slave management component 130, and so on. Specifically, in combination with the application model of the task scheduling component 140 shown in fig. 4, in one implementation the program flow of the task scheduling component 140 is as follows:
101: the task scheduling section 140 determines whether or not information of the slave processor in the idle state transferred from the slave management section 130 is received. If not, turning to 101, and if yes, turning to 102;
102: the task scheduling component 140 queries the task management component 120 whether there are any unassigned subtasks. If not, turning to 103, and if yes, turning to 104;
103: the task scheduling unit 140 queries the slave management unit 130 whether all slave processors are in an idle state, i.e. whether the current subtasks have been completed. If not, turning to 102, and waiting for the generation of a new subtask or waiting for the completion of all subtasks; if yes, the task scheduling component 140 ends the operation;
104: the task scheduling unit 140 allocates an appropriate subtask to the currently idle slave processor 20, and after the subtask allocation is completed, the flow goes to 101.
The working principle of the task scheduling device 100 is described in detail below with reference to an example:
(1) The slave management component 130 monitors the status of all the slave processors 20; when the task on one of the slave processors 20 is completed and that slave processor becomes idle, the slave management component 130 transmits the idle-slave-processor information to the task scheduling component 140.
(2) The task scheduling component 140 queries the task management component 120 for information about unassigned subtasks. If there is currently no unassigned task, the slave processor 20 remains idle and waits for a new subtask to be generated or for the other slave processors 20 to finish their tasks. If there is an unassigned subtask, the task scheduling component 140 preferentially assigns a subtask with the same rating as the slave processor 20; if there is no subtask with the same rating, it assigns a subtask whose rating differs by one level, and if there is still none, a subtask whose rating differs by two levels, recursing in this way until a subtask suitable for assignment is found.
(3) After the assignment of a subtask is completed, the task scheduling component 140 transfers the number of the assigned subtask to the task management component 120, and the task management component 120 then updates the unassigned-subtask information.
Specifically, for example, assume the slave management component 130 detects that slave 1, slave 2 and slave 3 are all idle. The slave management component 130 then passes the information of slave 3, slave 2 and slave 1 to the task scheduling component 140 in order of rating from high to low. The task scheduling component 140 first receives the information that slave 3 is idle and begins assigning a subtask to it; it queries the task management component 120 for the unassigned subtasks, which are currently task 1, task 2, task 3, task 4 and task 5. The task scheduling component 140 preferentially assigns slave 3 a subtask with the same rating as slave 3, i.e. a level-3 task; the unassigned level-3 task is task 5, so task 5 is assigned to slave 3. Next, the task scheduling component 140 receives the information that slave 2 is idle and begins assigning it a subtask; the currently unassigned subtasks are task 1, task 2, task 3 and task 4. It preferentially assigns slave 2 a subtask with the same rating; the unassigned level-2 subtasks are task 3 and task 4, either of which could be assigned to slave 2; assume task 4 is assigned to slave 2. Then the task scheduling component 140 receives the information that slave 1 is idle, begins assigning it a task, and queries the unassigned subtasks, which are now task 1, task 2 and task 3. It preferentially assigns slave 1 a subtask with the same rating; the unassigned level-1 subtasks are task 1 and task 2, either of which could be assigned to slave 1; assume task 2 is assigned to slave 1.
At this point slave 1, slave 2 and slave 3 are all executing subtasks: slave 1 is executing task 2, slave 2 is executing task 4, and slave 3 is executing task 5; the currently unassigned tasks are task 1 and task 3. Assume slave 2 completes task 4 first. The slave management component 130 detects that slave 2 is idle and passes this information to the task scheduling component 140, which then begins assigning a subtask to slave 2. It preferentially assigns a subtask with the same rating as slave 2, i.e. a level-2 subtask; the unassigned level-2 subtask is task 3, so task 3 is assigned to slave 2. Slave 1, slave 2 and slave 3 are now all executing tasks, and the only unassigned subtask is task 1. Assume slave 3 then completes task 5. The slave management component 130 detects that slave 3 is idle and passes this information to the task scheduling component 140, which begins assigning a subtask to slave 3. The only unassigned subtask is task 1, whose rating is 2, while the rating of slave 3 is 3, so the task scheduling component 140 assigns task 1 to slave 3. All three slaves are again executing tasks: slave 1 is executing task 2, slave 2 is executing task 3, and slave 3 is executing task 1; there is no unassigned subtask.
Assume slave 1 then completes task 2. The slave management component 130 detects that slave 1 is idle and passes this information to the task scheduling component 140. The task scheduling component 140 queries the task management component 120 and finds no unassigned subtask, so slave 1 remains idle, waiting either for a new subtask to be generated or for all slave processors 20 to become idle, i.e. for all subtasks to be completed.
In one implementation case, assume task 1 fails to execute on slave 3. The slave management component 130 obtains the failure information and passes it to the task management component 120, which updates task 1 to an unassigned subtask of level 1. The task scheduling component 140 then learns from the task management component 120 that there is a newly unassigned task 1 and assigns task 1 to slave 1 for execution; meanwhile, the slave management component 130 detects that slave 3 is idle and passes this information to the task scheduling component 140. The task scheduling component 140 queries the task management component 120 and finds no unassigned subtask, so slave 3 remains idle, waiting for a new subtask or for all slave processors 20 to become idle, i.e. for all subtasks to be completed.
Assume slave 2 subsequently completes task 3. The slave management component 130 detects that slave 2 is idle and passes this information to the task scheduling component 140. The task scheduling component 140 queries the task management component 120 and finds no unassigned subtask, so slave 2 and slave 3 both remain idle, waiting for a new subtask or for all slave processors 20 to become idle. Once slave 1 completes task 1 and becomes idle, all subtasks have been completed, the three slaves exit, and the master-slave multi-machine system also exits and ends its operation.
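The assignment order of this walkthrough can be replayed compactly (ratings copied from the example above; the tie-breaking between equally rated tasks is arbitrary, as in the narrative):

```python
# task number -> rating, slave -> rating, as in the example above
task_ratings = {1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
slave_ratings = {"slave1": 1, "slave2": 2, "slave3": 3}

pending = set(task_ratings)
for slave in sorted(slave_ratings, key=slave_ratings.get, reverse=True):  # idle slaves reported high rating first
    # prefer the pending subtask whose rating is closest to the slave's rating
    task = min(pending, key=lambda t: abs(task_ratings[t] - slave_ratings[slave]))
    pending.remove(task)
    print(f"{slave} -> task {task}")
# e.g. slave3 -> task 5, slave2 -> task 3 or 4, slave1 -> task 1 or 2
```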
Optionally, the task scheduling device 100 may further include a display component and a storage component (not shown). The display component is communicatively connected to the master processor 10 and is used to display the result of scheduling the unassigned subtasks to the slave processors 20 that have not been assigned tasks. The storage component is connected to the task management component 120 and the slave management component 130 respectively, and is used to store the layout data, the scheme by which the total task process for verifying the layout data is divided into the plurality of subtasks together with the rating data of the subtasks, and the rating data of the plurality of slave processors 20.
Therefore, the task scheduling device for layout verification according to the second embodiment of the present disclosure makes the greatest possible use of the slave processors 20 and avoids the loss of multi-machine efficiency caused by slave processors left idle through improper task allocation, so that tasks can be scheduled and distributed efficiently and evenly, the operating efficiency of the multi-machine system is improved, and the running speed of the layout verification tool in multi-machine mode is further improved.
Embodiment three:
Fig. 5 shows a schematic structural diagram of a server provided in the third embodiment of the present disclosure.
Referring to fig. 5, the present disclosure also presents a block diagram of an exemplary server suitable for use in implementing embodiments of the present disclosure. It should be understood that the server shown in fig. 5 is only an example, and should not bring any limitation to the function and the scope of the application of the embodiments of the present disclosure.
As shown in FIG. 5, server 200 is in the form of a general purpose computing device. The components of server 200 may include, but are not limited to: one or more processors or processing units 210, a memory 220, and a bus 201 that couples the various system components (including the memory 220 and the processing unit 210).
Bus 201 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Server 200 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by server 200 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 220 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 221 and/or cache memory 222. The server 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 223 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, often referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 201 by one or more data media interfaces. Memory 220 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
Program/utility 224 having a set (at least one) of program modules 2241 may be stored, for example, in memory 220, such program modules 2241 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. Program modules 2241 generally perform the functions and/or methods of the embodiments described in the embodiments of the present disclosure.
Further, the server 200 may also be communicatively connected to a display 300 for displaying the scheduling assignment result and progress of the integrated circuit layout data verification subtasks, where the display 300 may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some embodiments, the display 300 may also be a display screen with an input device or a touch screen.
Further, the server 200 may also communicate with one or more devices that enable a user to interact with the server 200, and/or with any devices (e.g., network cards, modems, etc.) that enable the server 200 to communicate with one or more other computing devices. Such communication may be through input/output (I/O) interfaces 230. Also, server 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via network adapter 240. As shown, network adapter 240 communicates with the other modules of server 200 via bus 201. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 210 executes various functional applications and data processing by running programs stored in the system memory 220, for example, implementing a task scheduling method for integrated circuit layout verification provided in the first embodiment of the present disclosure.
Embodiment four:
The fourth embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program (or referred to as computer-executable instructions) is stored, where the computer program, when executed by a processor, is configured to perform the task scheduling method for integrated circuit layout verification provided in the first embodiment of the present disclosure, where the method includes:
reading layout data by using a master processor;
dividing the total task process for verifying the layout data into a plurality of subtasks;
ranking the plurality of subtasks, and ranking the plurality of slave processors;
and scheduling the plurality of subtasks according to the rating data of the plurality of subtasks, and assigning the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks.
The computer storage media of the disclosed embodiments may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Further, in this document, the terms "include", "comprise" or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that includes the element.
Finally, it should be noted that the above examples are only intended to illustrate the present disclosure clearly and are not a limitation on the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here. Obvious variations or modifications derived from the teaching herein remain within the scope of the present disclosure.

Claims (10)

1. A task scheduling method for layout verification, applied to a master-slave multi-machine system, wherein the multi-machine system comprises a master processor and a plurality of slave processors communicatively connected to the master processor, and the task scheduling method comprises the following steps:
reading layout data by using the master processor;
dividing the total task process for verifying the layout data into a plurality of subtasks;
ranking the plurality of subtasks and ranking the plurality of slave processors;
and scheduling the plurality of subtasks according to the rating data of the plurality of subtasks, and assigning the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not been assigned a task and whose ratings match the ratings of the subtasks.
2. The task scheduling method according to claim 1, wherein the step of dividing the total task process for verifying the layout data into a plurality of subtasks comprises:
dividing the total task process for verifying the layout data into the plurality of subtasks according to the task requirements and the configuration information of the plurality of slave processors, and assigning corresponding task numbers,
wherein the division of the total task process is performed with the layout structure layer as the unit, or with a region within a layout structure layer as the unit.
3. The task scheduling method according to claim 2, further comprising:
acquiring the task numbers of the assigned subtasks in real time, and updating the information of the unassigned subtasks according to the task numbers.
4. The task scheduling method of claim 1, wherein the step of ranking the plurality of subtasks comprises:
rating each of the unassigned subtasks according to the total number of all figures in its layout structure layer, or according to the total number of figures in its corresponding region, to obtain the rating data of the plurality of subtasks,
wherein the larger the total number of figures, the higher the rating of the corresponding subtask.
5. The task scheduling method according to claim 4, wherein the step of rating the plurality of slave processors comprises:
acquiring and identifying the configuration and performance information of the plurality of slave processors, analyzing and judging the capability of each slave processor to execute subtasks according to the configuration and performance information, and rating the plurality of slave processors to obtain the rating data of the plurality of slave processors,
wherein the stronger a slave processor's capability to execute subtasks, as indicated by its configuration and performance information, the higher that slave processor is rated.
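A sketch of how the slave rating in claim 5 might be computed. The claim only refers to "configuration and performance information"; the concrete fields (cpu_cores, memory_gb, clock_ghz) and the capability score are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SlaveInfo:
    host: str
    cpu_cores: int     # illustrative fields; the claim does not enumerate them
    memory_gb: float
    clock_ghz: float

def rate_slaves(slaves: List[SlaveInfo], num_grades: int = 3) -> Dict[str, int]:
    """Grade slave processors: the more capable, the higher the grade."""
    # A simple capability score; a real tool would weigh far more information.
    scores = {s.host: s.cpu_cores * s.clock_ghz + 0.1 * s.memory_gb for s in slaves}
    ordered = sorted(scores.values())
    bounds = [ordered[max(0, int(len(ordered) * i / num_grades) - 1)]
              for i in range(1, num_grades)]
    return {host: 1 + sum(score > b for b in bounds) for host, score in scores.items()}
```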
6. The task scheduling method according to claim 3, wherein the step of scheduling the plurality of subtasks according to the rating data of the plurality of subtasks, and assigning the unassigned subtasks, one by one in order of their ratings, to those slave processors that have not yet been assigned a task and whose ratings match the corresponding subtasks, comprises:
acquiring the task information and rating data of any unassigned subtask;
monitoring the states of the plurality of slave processors, and acquiring the rating data of the slave processors to which no subtask has been assigned;
selecting, one by one in order of the ratings of the unassigned subtasks and according to their rating data, a slave processor that has not been assigned a subtask and whose rating is the same as that of the current unassigned subtask, and, if no such slave processor is available, selecting a slave processor that has not been assigned a subtask and whose rating is higher than that of the current unassigned subtask;
and completing the scheduling of the current unassigned subtask, and updating the task information of the unassigned subtasks.
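A minimal sketch of the matching loop in claim 6, under the hypothetical structures assumed in the earlier sketches: subtasks are handed out in descending rating order to idle slaves with the same rating, falling back to a higher-rated idle slave when none matches. How the master actually dispatches work over the network, and how slaves report back, is left out.

```python
from typing import Dict, List

def schedule(subtasks: List["Subtask"],
             subtask_ratings: Dict[int, int],
             slave_ratings: Dict[str, int]) -> Dict[int, str]:
    """Assign unassigned subtasks to idle slaves by rating.

    Returns {task_number: slave_host}. Subtasks that cannot be placed in this
    pass remain unassigned and would be retried as slaves become idle again.
    """
    idle = dict(slave_ratings)  # host -> grade, slaves not yet assigned a task
    assignments: Dict[int, str] = {}
    # Highest-rated (largest) subtasks are placed first.
    for task in sorted(subtasks, key=lambda s: subtask_ratings[s.task_number], reverse=True):
        want = subtask_ratings[task.task_number]
        # Prefer an idle slave with the same grade...
        host = next((h for h, g in idle.items() if g == want), None)
        if host is None:
            # ...otherwise fall back to an idle slave with a higher grade.
            host = next((h for h, g in idle.items() if g > want), None)
        if host is None:
            continue                      # no suitable idle slave right now
        assignments[task.task_number] = host
        del idle[host]                    # update bookkeeping: slave is now busy
    return assignments
```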
7. A task scheduling device for layout verification, applied to a master-slave multi-computer system, the multi-computer system comprising a master processor and a plurality of slave processors communicatively connected to the master processor, the task scheduling device being disposed on the master processor and comprising:
a layout data read-in component, configured to read layout data;
a task management component, communicatively connected to the layout data read-in component, configured to divide the total task process for verifying the layout data into a plurality of subtasks and to rate the subtasks;
a slave management component, communicatively connected to the plurality of slave processors, configured to rate the plurality of slave processors;
and a task scheduling component, configured to schedule the plurality of subtasks according to the rating data of the plurality of subtasks, and to assign the unassigned subtasks, one by one in order of their ratings, to those of the plurality of slave processors that have not yet been assigned a task and whose ratings match the corresponding subtasks.
8. The task scheduling device according to claim 7, further comprising:
a display component, communicatively connected to the master processor, configured to display the result of assigning the scheduled subtasks to the slave processors that have not been assigned tasks;
and a storage component, connected to the task management component and the slave management component respectively, configured to store the layout data, the scheme by which the total task process for verifying the layout data is divided into the plurality of subtasks together with the rating data of those subtasks, and the rating data of the plurality of slave processors.
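A structural sketch of how the components named in claims 7 and 8 might be wired together on the master processor, reusing the hypothetical helpers split_total_task, rate_subtasks, rate_slaves, and schedule from the sketches above. All class and method names are assumptions for illustration; layout parsing and network dispatch are deliberately stubbed out.

```python
class TaskSchedulingDevice:
    """Runs on the master processor; comments map methods to claims 7-8."""

    def __init__(self, slaves, layout_path):
        self.slaves = slaves            # SlaveInfo records for the slave processors
        self.layout_path = layout_path
        self.storage = {}               # storage component: layout, split scheme, ratings

    def run(self):
        layout = self.read_layout(self.layout_path)           # layout data read-in component
        subtasks = split_total_task(layout)                    # task management component
        subtask_ratings = rate_subtasks(subtasks)
        slave_ratings = rate_slaves(self.slaves)               # slave management component
        assignments = schedule(subtasks, subtask_ratings,      # task scheduling component
                               slave_ratings)
        self.storage.update(subtask_ratings=subtask_ratings,
                            slave_ratings=slave_ratings)
        self.display(assignments)                              # display component
        return assignments

    def read_layout(self, path):
        raise NotImplementedError("layout parsing is tool-specific")

    def display(self, assignments):
        for task_number, host in assignments.items():
            print(f"subtask {task_number} -> {host}")
```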
9. A server, comprising:
a processor;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the processor, cause the processor to implement the task scheduling method for layout verification according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the task scheduling method for layout verification according to any one of claims 1 to 6.
CN202110907237.8A 2021-08-09 2021-08-09 Task scheduling method and device for layout verification, server and storage medium Pending CN113608854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110907237.8A CN113608854A (en) 2021-08-09 2021-08-09 Task scheduling method and device for layout verification, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110907237.8A CN113608854A (en) 2021-08-09 2021-08-09 Task scheduling method and device for layout verification, server and storage medium

Publications (1)

Publication Number Publication Date
CN113608854A true CN113608854A (en) 2021-11-05

Family

ID=78307601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110907237.8A Pending CN113608854A (en) 2021-08-09 2021-08-09 Task scheduling method and device for layout verification, server and storage medium

Country Status (1)

Country Link
CN (1) CN113608854A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933420A (en) * 2019-04-02 2019-06-25 深圳市网心科技有限公司 Node tasks dispatching method, electronic equipment and system
CN110209496A (en) * 2019-05-20 2019-09-06 中国平安财产保险股份有限公司 Task sharding method, device and sliced service device based on data processing
CN110749814A (en) * 2018-07-24 2020-02-04 上海富瀚微电子股份有限公司 Automatic testing system and method for chip IC sample

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110749814A (en) * 2018-07-24 2020-02-04 上海富瀚微电子股份有限公司 Automatic testing system and method for chip IC sample
CN109933420A (en) * 2019-04-02 2019-06-25 深圳市网心科技有限公司 Node tasks dispatching method, electronic equipment and system
CN110209496A (en) * 2019-05-20 2019-09-06 中国平安财产保险股份有限公司 Task sharding method, device and sliced service device based on data processing

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114090269A (en) * 2022-01-21 2022-02-25 北京阿丘科技有限公司 Service scheduling balancing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Amaral et al. Topology-aware gpu scheduling for learning workloads in cloud environments
US8205208B2 (en) Scheduling grid jobs using dynamic grid scheduling policy
Cho et al. Natjam: Design and evaluation of eviction policies for supporting priorities and deadlines in mapreduce clusters
US20020156824A1 (en) Method and apparatus for allocating processor resources in a logically partitioned computer system
JP2002530780A (en) Reconfigurable programmable logic device computer system
Sudarsan et al. ReSHAPE: A framework for dynamic resizing and scheduling of homogeneous applications in a parallel environment
JP2008186136A (en) Computer system
CN111476344A (en) Multipath neural network, resource allocation method and multipath neural network analyzer
CN113157379A (en) Cluster node resource scheduling method and device
Hilbrich et al. Model-based generation of static schedules for safety critical multi-core systems in the avionics domain
CN112882828A (en) Upgrade processor management and scheduling method based on SLURM job scheduling system
US11954419B2 (en) Dynamic allocation of computing resources for electronic design automation operations
CN111177984B (en) Resource utilization of heterogeneous computing units in electronic design automation
CN113608854A (en) Task scheduling method and device for layout verification, server and storage medium
US20170083375A1 (en) Thread performance optimization
US6829765B1 (en) Job scheduling based upon availability of real and/or virtual resources
US20130298132A1 (en) Multi-core processor system and scheduling method
Burgio et al. Adaptive TDMA bus allocation and elastic scheduling: A unified approach for enhancing robustness in multi-core RT systems
Khaitan et al. Proactive task scheduling and stealing in master-slave based load balancing for parallel contingency analysis
KR100590764B1 (en) Method for mass data processing through scheduler in multi processor system
CN115102851B (en) Fusion platform for HPC and AI fusion calculation and resource management method thereof
CN112528583B (en) Multithreading comprehensive method and comprehensive system for FPGA development
Huang et al. An iterative expanding and shrinking process for processor allocation in mixed-parallel workflow scheduling
Qu et al. Improving the energy efficiency and performance of data-intensive workflows in virtualized clouds
CN112416566A (en) IMA general processing module resource scheduling analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211105