CN114489867A - Algorithm module scheduling method, algorithm module scheduling device and readable storage medium - Google Patents


Publication number
CN114489867A
Authority
CN
China
Prior art keywords: algorithm, module, algorithm module, task queue, queue
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202210412394.6A
Other languages
Chinese (zh)
Other versions: CN114489867B (en)
Inventor
殷俊
黄鹏
岑鑫
虞响
吴立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210412394.6A
Publication of CN114489867A
Priority to PCT/CN2022/114395 (WO2023201947A1)
Application granted
Publication of CN114489867B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/4451 User profiles; Roaming (under G06F 9/445 Program loading or initiating; G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files)
    • G06F 16/9024 Graphs; Linked lists (under G06F 16/901 Indexing; Data structures therefor; Storage structures)
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/48 Program initiating; Program switching)
    • G06F 9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs (under G06F 9/5061 Partitioning or combining of resources)
    • G06F 2209/5011 Indexing scheme relating to G06F 9/50: Pool
    • G06F 2209/5018 Indexing scheme relating to G06F 9/50: Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Processing (AREA)

Abstract

The application provides an algorithm module scheduling method, an algorithm module scheduling device and a computer-readable storage medium. The algorithm module scheduling method comprises the following steps: acquiring a configuration file, wherein the configuration file comprises a plurality of algorithm modules with sequential result dependencies among them; configuring the priorities of the algorithm modules by using the sequential result dependencies in the configuration file; creating a task queue according to the priorities of the algorithm modules; and scheduling the algorithm modules to process data based on the front-to-back order of queue positions in the task queue. In this way, the algorithm module scheduling method achieves adaptive task priority marking and queue ordering, improves the overall processing efficiency of the algorithm scheme, and prevents the algorithm scheme from stalling.

Description

Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
Technical Field
The present application relates to the technical field of algorithm model processing, and in particular, to an algorithm module scheduling method, an algorithm module scheduling apparatus, and a computer-readable storage medium.
Background
When an image processing algorithm program runs, task scheduling for the image processing algorithm is mostly managed through a task queue and a thread pool.
The task queue is typically managed first-in first-out: when a task is ready, it is inserted at the tail of the task queue to wait, and when an execution thread becomes idle, it fetches a task from the head of the queue and executes it.
However, the algorithm modules in an algorithm scheme depend on one another and must run in order, so a first-in first-out task queue lowers the overall efficiency of the algorithm scheme and, when threads are scarce, can stall the overall flow.
Disclosure of Invention
The application provides an algorithm module scheduling method, an algorithm module scheduling device and a computer readable storage medium.
The application provides an algorithm module scheduling method, which comprises the following steps:
acquiring a configuration file, wherein the configuration file comprises a plurality of algorithm modules with sequential result dependencies;
configuring the priorities of the algorithm modules by using the sequential result dependencies in the configuration file;
creating a task queue according to the priorities of the algorithm modules;
and scheduling the algorithm modules to process data based on the front-to-back order of queue positions in the task queue.
Wherein configuring the priorities of the algorithm modules by using the sequential result dependencies in the configuration file comprises:
generating a data flow graph of the plurality of algorithm modules from the sequential result dependencies in the configuration file;
and configuring the priorities of the algorithm modules in the data flow graph in increasing order along the input-to-output direction of the graph.
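The priority configuration described here, with priorities increasing from the graph's inputs toward its outputs, can be sketched as a topological sweep over the data flow graph. This is an illustrative sketch only; the edge format and node names are invented for the example, not taken from the patent:

```python
from collections import defaultdict, deque

def assign_priorities(edges, start_nodes):
    """Assign each node a priority equal to its depth in the DAG,
    so downstream (closer-to-output) modules get larger values."""
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set(start_nodes)
    for src, dst in edges:
        graph[src].append(dst)
        indegree[dst] += 1
        nodes.update((src, dst))

    priority = {n: 0 for n in nodes}  # source nodes start at 0
    queue = deque(start_nodes)
    while queue:                      # Kahn-style topological sweep
        node = queue.popleft()
        for nxt in graph[node]:
            # a node's priority is one more than its deepest predecessor
            priority[nxt] = max(priority[nxt], priority[node] + 1)
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return priority

edges = [("decode", "detect"), ("detect", "track"), ("track", "report")]
pri = assign_priorities(edges, ["decode"])
print(pri["report"])  # 3: the final result node gets the highest priority
```

Taking the maximum over predecessors ensures a node that merges several branches still outranks every module it depends on.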
Wherein creating the task queue according to the priorities of the algorithm modules comprises:
adding the current algorithm module at the tail of a task queue, wherein the task queue comprises a plurality of algorithm modules;
comparing the priority of the current algorithm module with that of the algorithm module immediately ahead of it in the task queue;
and when the priority of the current algorithm module is lower than that of the preceding algorithm module, leaving the position of the current algorithm module in the task queue unchanged.
After comparing the priority of the current algorithm module with that of the preceding algorithm module in the task queue, the algorithm module scheduling method further comprises:
when the priority of the current algorithm module is higher than that of the preceding algorithm module, swapping the queue positions of the two modules and updating the task queue;
and continuing to compare the priority of the current algorithm module with that of the module now ahead of it in the updated task queue, until its priority is lower than that of the preceding module or it reaches the head of the updated task queue.
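This tail-insert-and-swap procedure behaves like one pass of an insertion sort. A minimal sketch, with an assumed representation of tasks as (priority, name) pairs:

```python
def enqueue(queue, module):
    """queue: list with the head at index 0; module: (priority, name).
    Append at the tail, then swap toward the head while the module
    outranks its predecessor."""
    queue.append(module)
    i = len(queue) - 1
    while i > 0 and queue[i][0] > queue[i - 1][0]:
        queue[i], queue[i - 1] = queue[i - 1], queue[i]
        i -= 1
    return queue

q = [(3, "report"), (1, "detect")]
enqueue(q, (2, "track"))
print(q)  # [(3, 'report'), (2, 'track'), (1, 'detect')]
```

Because the swaps stop at the first higher-priority predecessor, a new task never jumps ahead of an equal-priority task that arrived earlier, preserving first-in first-out order within a priority level.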
Wherein creating the task queue according to the priorities of the algorithm modules comprises:
acquiring the priority of the current algorithm module and the priorities of all algorithm modules already in the task queue;
comparing the priority of the current algorithm module with the priorities of all queued algorithm modules;
when the priority of the current algorithm module is higher than the priorities of all queued algorithm modules, adding the current algorithm module at the head of the task queue and updating the task queue;
and when the priority of the current algorithm module is lower than that of one or more queued algorithm modules, inserting the current algorithm module immediately after the lowest-priority module among those one or more queued algorithm modules.
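This alternative insertion strategy can be sketched as a scan over the queued priorities. The (priority, name) pair representation and the descending queue order are assumptions made for the illustration:

```python
def enqueue_by_scan(queue, module):
    """queue holds (priority, name) pairs in descending priority order,
    head at index 0. Insert the new module immediately after the
    lowest-priority entry that still outranks it, or at the head
    if it outranks every queued module."""
    pri = module[0]
    higher = [m for m in queue if m[0] > pri]
    if not higher:
        queue.insert(0, module)  # outranks everyone: becomes the new head
    else:
        # position just past the lowest-priority higher-ranked module
        idx = queue.index(min(higher, key=lambda m: m[0])) + 1
        queue.insert(idx, module)
    return queue

q = [(0.98, "a"), (0.95, "b"), (0.92, "c")]
enqueue_by_scan(q, (0.90, "d"))  # d lands right after c
```

The example values 0.90, 0.92, 0.95 and 0.98 mirror the worked example given later in the detailed description.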
The data processing based on the queue position front-back relation scheduling algorithm module in the task queue comprises the following steps:
acquiring a resource lock of the task queue, and starting to schedule an algorithm module in the task queue to process data;
acquiring a new algorithm module, and adding the new algorithm module into the task queue according to the priority;
and releasing the resource lock of the task queue after the task queue is updated.
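The acquire-update-release pattern around the task queue's resource lock might look like the following sketch; the class and method names are invented for illustration:

```python
import threading

class TaskQueue:
    """Lock-guarded priority queue sketch; names are illustrative."""
    def __init__(self):
        self._lock = threading.Lock()  # the queue's resource lock
        self._items = []               # (priority, module), head at index 0

    def add(self, priority, module):
        with self._lock:               # acquire, insert by priority, release
            self._items.append((priority, module))
            # stable sort keeps FIFO order among equal priorities
            self._items.sort(key=lambda m: m[0], reverse=True)

    def pop_head(self):
        with self._lock:
            return self._items.pop(0) if self._items else None
```

The `with` blocks release the lock even if an exception occurs, so the next scheduling round can always insert and reorder tasks normally.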
Scheduling the algorithm modules to process data based on the front-to-back order of queue positions in the task queue further comprises:
acquiring device information, wherein the device information comprises a processor core count;
judging whether the processor core count in the device information is greater than a first preset threshold;
if so, creating a corresponding number of thread pools based on the processor cores, and executing the tasks in the task queue through the execution threads in those thread pools;
and if not, creating a plurality of execution threads, placing them into a single thread pool, and executing the tasks in the task queue through the execution threads in that thread pool.
Wherein creating a corresponding number of thread pools based on the processor cores comprises:
establishing a one-to-one correspondence between thread pools and processor cores;
and setting a plurality of execution threads in each thread pool, and setting the affinity of each execution thread to the processor core corresponding to its thread pool.
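One possible realization of per-core thread pools with thread affinity is sketched below. Note that `os.sched_getaffinity` and `os.sched_setaffinity` are Linux-specific, so this is an assumption about the deployment platform rather than anything stated in the patent:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_per_core_pools(threads_per_pool=2):
    """Build one thread pool per available CPU core and pin each pool's
    worker threads to that core (Linux-only sched_setaffinity)."""
    pools = []
    for core in sorted(os.sched_getaffinity(0)):
        def pin(core=core):
            # the initializer runs once in every worker thread of this pool
            os.sched_setaffinity(0, {core})
        pools.append(ThreadPoolExecutor(max_workers=threads_per_pool,
                                        initializer=pin))
    return pools
```

Pinning keeps a module's worker threads on one core, which avoids cache thrashing from thread migration; tasks are then submitted to a pool with the usual `submit(...)`/`result()` calls.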
After the device information is acquired, the algorithm module scheduling method further comprises:
judging whether the processor core count in the device information is greater than a first preset threshold and whether the number of algorithm module nodes in the configuration file is greater than a second preset threshold;
if so, creating a corresponding number of thread pools based on the processor cores, and executing the tasks in the task queue through the execution threads in those thread pools;
and if not, creating a plurality of execution threads, placing them into a single thread pool, and executing the tasks in the task queue through the execution threads in that thread pool.
The algorithm module scheduling method further comprises:
parsing the configuration file to obtain a plurality of algorithm nodes, wherein each algorithm node corresponds to at least one algorithm module;
obtaining the node input stream and node output stream of each algorithm node;
determining the pipeline order of the algorithm nodes from their node input streams and node output streams;
and determining the sequential result dependencies of the corresponding algorithm modules from the pipeline order of the algorithm nodes.
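Recovering the pipeline order by chaining node output streams to node input streams might look like the following sketch; the configuration structure shown is a simplified, assumed stand-in for the format of fig. 3:

```python
def pipeline_order(nodes, initial_stream):
    """nodes: {name: {"input": stream, "output": stream}}.
    Follow output-to-input stream links from the pipeline's initial
    input stream to recover the node execution order."""
    by_input = {spec["input"]: name for name, spec in nodes.items()}
    order, stream = [], initial_stream
    while stream in by_input:
        name = by_input[stream]
        order.append(name)
        stream = nodes[name]["output"]  # next node consumes this stream
    return order

nodes = {
    "detect": {"input": "frames", "output": "boxes"},
    "track":  {"input": "boxes",  "output": "tracks"},
    "report": {"input": "tracks", "output": "results"},
}
print(pipeline_order(nodes, "frames"))  # ['detect', 'track', 'report']
```

Consecutive positions in the recovered order are exactly the sequential result dependencies the priority configuration step consumes.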
The application also provides an algorithm module scheduling device, comprising an acquisition module, a configuration module, a creation module and a scheduling module, wherein:
the acquisition module is configured to acquire a configuration file comprising a plurality of algorithm modules with sequential result dependencies;
the configuration module is configured to configure the priorities of the algorithm modules by using the sequential result dependencies in the configuration file;
the creation module is configured to create a task queue according to the priorities of the algorithm modules;
and the scheduling module is configured to schedule the algorithm modules to process data based on the front-to-back order of queue positions in the task queue.
The application also provides another algorithm module scheduling device, comprising a processor and a memory, wherein the memory stores program data and the processor is configured to execute the program data to implement the algorithm module scheduling method described above.
The present application also provides a computer-readable storage medium for storing program data which, when executed by a processor, implements the algorithm module scheduling method described above.
The beneficial effect of this application is as follows: the algorithm module scheduling device acquires a configuration file comprising a plurality of algorithm modules with sequential result dependencies; configures the priorities of the algorithm modules by using the sequential result dependencies in the configuration file; creates a task queue according to those priorities; and schedules the algorithm modules to process data based on the front-to-back order of queue positions in the task queue. In this way, the algorithm module scheduling method achieves adaptive task priority marking and queue ordering, improves the overall processing efficiency of the algorithm scheme, and prevents the algorithm scheme from stalling.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an algorithm module scheduling method provided herein;
FIG. 2 is a detailed flow chart of the algorithm module scheduling method shown in FIG. 1;
FIG. 3 is a schematic diagram of one embodiment of a configuration file format provided herein;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of an algorithm module scheduling method provided herein;
FIG. 5 is a detailed flow chart of the algorithm module scheduling method shown in FIG. 4;
FIG. 6 is a schematic flow chart diagram illustrating a method for scheduling algorithm modules according to yet another embodiment of the present disclosure;
FIG. 7 is a detailed flow chart of the algorithm module scheduling method of FIG. 6;
FIG. 8 is a schematic structural diagram of an embodiment of an algorithm module scheduling apparatus provided in the present application;
FIG. 9 is a schematic structural diagram of another embodiment of an algorithm module scheduling device provided in the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
To solve the problems in the prior art, the present application optimizes the implementation of the algorithm task queue in an image processing algorithm scheme. It achieves adaptive task priority marking and queue ordering, which improves the overall processing efficiency of the algorithm scheme and prevents it from stalling, and it creates and schedules thread pools adaptively according to the complexity of the algorithm scheme and the differences among deployment devices, thereby improving device resource utilization and overall processing efficiency.
Referring to fig. 1 and fig. 2 in detail, fig. 1 is a schematic flowchart of an embodiment of an algorithm module scheduling method provided in the present application, and fig. 2 is a detailed flowchart of the algorithm module scheduling method shown in fig. 1.
The algorithm module scheduling method is applied to an algorithm module scheduling device, which may be a server or a system in which a server and a terminal device cooperate. Accordingly, the parts of the device, such as its units, sub-units, modules and sub-modules, may all be disposed in the server, or may be distributed between the server and the terminal device.
Further, the server may be hardware or software. As hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. As software, it may be implemented as multiple pieces of software or software modules, for example software for providing distributed services, or as a single piece of software or a single software module; no limitation is imposed here. In some possible implementations, the algorithm module scheduling method of the embodiments of the present application may be implemented by a processor calling computer-readable instructions stored in a memory.
Specifically, as shown in fig. 1, the algorithm module scheduling method in the embodiment of the present application specifically includes the following steps:
step S11: and acquiring a configuration file, wherein the configuration file comprises a plurality of algorithm modules with sequential dependency relationship of front and back results.
In the embodiment of the application, within the runtime framework of the image processing algorithm scheme, each algorithm module is scheduled and executed as a task. Different algorithm schemes generate different configuration files. The algorithm module scheduling device marks the priority of each algorithm module by parsing the configuration file; after the algorithm modules are added to the task queue as tasks, they are automatically ordered by their marked priorities, and execution threads fetch higher-priority algorithm module tasks from the queue ahead of the others.
Referring now to fig. 3, a possible configuration file format provided by the present application is described, and fig. 3 is a schematic diagram of an embodiment of a configuration file format provided by the present application.
As shown in fig. 3, the content of the configuration file can be understood as a complete image processing algorithm pipeline, represented by Pipeline, i.e. the algorithm scheme pipeline.
An image processing scheme is composed of different algorithm function modules, which are represented in the configuration file by algorithm nodes; that is, the algorithm scheme pipeline is composed of different algorithm nodes.
Sequential result dependencies exist among the algorithm function modules; that is, image processing results must be passed and processed among the modules in order. Each algorithm node in the configuration file comprises an Input stream and an Output stream, and the dependency between two modules is represented by connecting the Output stream of one algorithm node to the Input stream of the next, which forms the algorithm scheme pipeline.
The overall pipeline information in the configuration file comprises an initial input stream and a terminal output stream: the initial input stream feeds the first algorithm node, and the terminal output stream receives the output stream of the last algorithm node.
Step S12: and configuring the priorities of the algorithm modules by utilizing the sequential dependency relationship of the front and rear results in the configuration file.
In the embodiment of the present application, the algorithm module scheduling device acquires and parses the configuration file, and converts the entire algorithm pipeline into a directed acyclic graph (DAG) according to the Input stream and Output stream information configured for each algorithm node. The starting point of the DAG is the initial input stream given by the pipeline's overall information in the configuration file, and its end point is the given terminal output stream.
Specifically, a frame of image data must be processed by all algorithm function modules in the pipeline to obtain the final output and complete its analysis, so the whole algorithm scheme pipeline must be treated as a unit. To improve the efficiency of the image processing algorithm scheme, the algorithm node that outputs the final result must be guaranteed enough resources for data processing; that is, the priority of the result-processing node is set highest.
Similarly, the algorithm module scheduling device ensures that every downstream algorithm node in the pipeline has a higher priority than its upstream nodes. Accordingly, based on the directed acyclic graph produced by parsing, the device sets the priority of the initial algorithm node to 0 and increments the priority for each successive downstream node: the larger the priority value, the higher the priority.
Step S13: and creating a task queue according to the priorities of the algorithm modules.
In the embodiment of the application, when performing task scheduling, the algorithm module scheduling device treats each algorithm node as a task to be processed; once the conditions required by an algorithm node are ready, it schedules that algorithm node task into the task queue. After the task joins the queue, the device adjusts its queue position according to the order in which tasks entered the queue and their priorities.
Referring to fig. 4 and fig. 5, fig. 4 is a schematic flowchart of another embodiment of an algorithm module scheduling method provided in the present application, and fig. 5 is a specific flowchart of the algorithm module scheduling method shown in fig. 4.
Specifically, as shown in fig. 4, step S13 in the algorithm module scheduling method shown in fig. 1 specifically includes the following sub-steps:
step S131: and adding the current algorithm module into the tail part of a task queue, wherein the task queue comprises a plurality of algorithm modules.
In the embodiment of the present application, with reference to the specific flowchart shown in fig. 5, once the execution conditions of an algorithm node task are ready, the algorithm module scheduling device begins scheduling the algorithm modules in the task queue.
The algorithm module scheduling device adds the current algorithm module to the task queue as a new task, placing it at the tail of the queue.
Step S132: and comparing the priority of the current algorithm module with the priority of the previous algorithm module in the task queue.
In the embodiment of the application, the algorithm module scheduling device compares the priority of the new task with that of the task ahead of it in the task queue. The priority is determined by the scheme of step S12 and is not repeated here.
Step S133: and when the priority of the current algorithm module is lower than that of the previous algorithm module, the position of the current algorithm module in the task queue is not adjusted.
In the embodiment of the application, if the priority of the new task is lower than that of the previous task, the algorithm module scheduling device keeps the positions of the two tasks in the task queue unchanged, and exits from the sorting operation of the task queue.
Step S134: and exchanging the queue positions of the current algorithm module and the previous algorithm module in the task queue when the priority of the current algorithm module is higher than that of the previous algorithm module, and updating the task queue.
In this embodiment, if the priority of the new task is higher than the priority of the previous task, the algorithm module scheduling device continues to perform step S135 after interchanging the positions of the two tasks in the task queue.
Step S135: and continuously comparing the priority of the current algorithm module with the priority of the previous algorithm module in the updated task queue until the priority of the current algorithm module in the updated task queue is lower than the priority of the previous algorithm module or the current algorithm module is positioned at the head of the updated task queue.
In the embodiment of the application, after the current algorithm module has been added to the task queue and the ordering operation is complete, the algorithm module scheduling device releases the resource lock on the task queue, so that task insertion and ordering can proceed normally at the next scheduling.
In other embodiments, the algorithm module scheduling device may also adopt another way of adding a new task into the task queue, which is specifically as follows:
the algorithm module scheduling device acquires the priority of the current algorithm module and the priorities of all queue algorithm modules in the task queue, and then compares the priority of the current algorithm module with the priorities of all queue algorithm modules.
And when the priority of the current algorithm module is higher than the priorities of all queue algorithm modules, adding the current algorithm module into the head of the task queue by the algorithm module scheduling device, and updating the task queue.
And when the priority of the current algorithm module is lower than that of one or more queued algorithm modules, the algorithm module scheduling device inserts the current algorithm module immediately after the lowest-priority module among those queued algorithm modules. For example, if the priority of the current algorithm module is 0.90 and the task queue already contains modules with higher priorities of 0.95, 0.98 and 0.92, the device adds the current algorithm module to the task queue immediately after the module whose priority is 0.92.
Step S14: and processing data by a scheduling algorithm module based on the front-back relation of the queue positions in the task queue.
In the embodiment of the application, the algorithm module scheduling device acquires a configuration file comprising a plurality of algorithm modules with sequential result dependencies; configures the priorities of the algorithm modules by using the sequential result dependencies in the configuration file; creates a task queue according to those priorities; and schedules the algorithm modules to process data based on the front-to-back order of queue positions in the task queue. In this way, the algorithm module scheduling method achieves adaptive task priority marking and queue ordering, improves the overall processing efficiency of the algorithm scheme, and prevents it from stalling.
Referring to fig. 6 and fig. 7, fig. 6 is a schematic flowchart of another embodiment of an algorithm module scheduling method provided in the present application, and fig. 7 is a specific flowchart of the algorithm module scheduling method shown in fig. 6.
The algorithm module scheduling method can automatically acquire the device information of the program running device and, by combining the number of CPU cores in that information with the complexity of the running algorithm scheme, adaptively select a suitable thread pool strategy for creation and scheduling.
Specifically, as shown in fig. 6, the algorithm module scheduling method in the embodiment of the present application specifically includes the following steps:
Step S21: acquiring device information, wherein the device information includes the number of processor cores.
In this embodiment of the present application, the algorithm module scheduling device obtains device information of the program running device, where the device information may include a CPU core number, a GPU core number, and the like. In addition, the algorithm module scheduling device can also obtain the number of algorithm nodes in the configuration file and the like.
Step S22: and judging whether the number of the processor cores in the equipment information is larger than a first preset threshold value.
In this embodiment, the algorithm module scheduling device may set the first preset threshold to 32, or another specific value, for evaluating the processing resources of the program running device. When the number of processor cores of the program running device is greater than 32, the flow proceeds to step S23; when it is less than or equal to 32, the flow proceeds to step S24.
Step S23: and creating a corresponding number of thread pools based on the number of the processor cores, and executing the tasks in the task queue through the execution threads in the corresponding number of thread pools.
In the embodiment of the application, when the number of processor cores of the program running device is greater than a preset threshold value, the algorithm module scheduling device executes the multithreading pool strategy.
Specifically, in the initialization stage of the overall algorithm scheme, the algorithm module scheduling device creates a corresponding number of thread pools according to the number of CPU cores of the program running device and establishes a one-to-one correspondence between each thread pool and a CPU core.
The algorithm module scheduling device creates several execution threads in each thread pool and sets each execution thread's affinity to the CPU core corresponding to that thread pool. This ensures that each execution thread exclusively occupies one CPU core and avoids the resource consumption caused by thread switching.
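A minimal sketch of per-core thread pools with pinned execution threads might look like the following, using Python's `ThreadPoolExecutor` and Linux's `sched_setaffinity`. The patent itself specifies no API, so everything here is illustrative; per-thread affinity is Linux-specific (pid 0 means "the calling thread") and is skipped gracefully elsewhere.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_per_core_pools(num_cores):
    """Create one single-worker thread pool per CPU core; each pool's
    execution thread pins itself to its core when it starts, so every
    thread owns one core and no migration-induced switching occurs."""
    def pin(core_id):
        try:
            # On Linux, pid 0 means "the calling thread", so this pins
            # the pool's worker thread itself to the given core.
            os.sched_setaffinity(0, {core_id})
        except (AttributeError, OSError):
            pass  # platform without per-thread affinity, or core absent

    return [
        ThreadPoolExecutor(max_workers=1, initializer=pin, initargs=(core,))
        for core in range(num_cores)
    ]

pools = make_per_core_pools(os.cpu_count() or 1)
# submit work with pools[i].submit(fn, ...); worker i stays on core i (Linux)
```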
For example, the thread scheduling policy may adopt a work-stealing algorithm: each execution thread has its own thread execution queue, and during actual operation the task priority queue distributes all queued tasks evenly across the thread execution queues of the execution threads. When an execution thread finishes the tasks in its own queue, it randomly steals tasks from another execution thread's queue, and so on until all tasks are completed.
It should be noted that when an execution thread randomly steals a task from another execution thread, it is not restricted to execution threads within the same thread pool.
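A toy version of such a work-stealing scheme — one task deque per execution thread, with stealing from any peer when a thread's own deque runs dry — could be sketched as follows. This is illustrative only; a production work-stealing deque would use per-queue or lock-free operations rather than one coarse global lock.

```python
import random
import threading
from collections import deque

class StealingWorkers:
    """Toy work-stealing scheduler: each execution thread owns a deque
    of tasks; when its own deque is empty it steals from the tail of a
    randomly chosen peer -- and stealing is not confined to one pool."""

    def __init__(self, task_lists):
        self.queues = [deque(tasks) for tasks in task_lists]
        self.lock = threading.Lock()   # one coarse lock keeps the sketch simple
        self.done = []                 # list.append is atomic in CPython

    def _next_task(self, my_id):
        with self.lock:
            if self.queues[my_id]:
                return self.queues[my_id].popleft()   # owner takes from the head
            victims = [q for q in self.queues if q]
            if victims:
                return random.choice(victims).pop()   # thief takes from the tail
        return None

    def _run(self, my_id):
        while True:
            task = self._next_task(my_id)
            if task is None:
                return                  # nothing left anywhere: thread finishes
            self.done.append(task())

    def run_all(self):
        threads = [threading.Thread(target=self._run, args=(i,))
                   for i in range(len(self.queues))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return self.done

workers = StealingWorkers([[lambda i=i: i * i for i in range(4)], []])
print(sorted(workers.run_all()))   # the idle thread steals: [0, 1, 4, 9]
```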
When the algorithm scheme runs on a program running device with a large number of CPU cores, i.e. when the acquired device information indicates more than 32 CPU cores (typically a server device), and the pipeline of the algorithm scheme contains many algorithm nodes (for example, more than 32), the algorithm module scheduling device selects the multithread pool strategy for thread pool creation and scheduling. In this case, the multithread pool strategy maximizes the use of the device's hardware resources and accelerates the analysis and processing of the algorithm scheme.
Step S24: and creating a plurality of execution threads, putting the execution threads into one thread pool, and executing the tasks in the task queue through the execution threads in the thread pool.
In the embodiment of the application, when the number of processor cores of the program running device is less than or equal to the preset threshold, the algorithm module scheduling device executes the single thread pool strategy.
Specifically, in the initialization stage of the overall algorithm scheme, a number of execution threads are created at once and placed into one thread pool; when a task arrives, one execution thread is woken from the thread pool to process it. When an execution thread completes its current task, it queries the priority task queue; if any task remains, the thread fetches it and continues executing instead of going dormant. In this way the creation, scheduling, and destruction of all threads are managed uniformly, which saves system time and improves the overall stability of the program.
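The single-thread-pool behavior — a fixed set of execution threads created once, each of which keeps polling the priority task queue instead of going dormant between tasks — can be sketched as below. Note that Python's `queue.PriorityQueue` dequeues the smallest value first, the reverse of the "higher value is more urgent" convention used earlier, so real code following that convention would negate the priority; all names here are assumptions.

```python
import itertools
import queue
import threading

def run_single_pool(tasks, num_workers=4):
    """Single-thread-pool strategy sketch: a fixed set of execution
    threads is created once; after finishing a task each thread polls
    the priority queue again and keeps working instead of sleeping."""
    pq = queue.PriorityQueue()
    counter = itertools.count()          # tie-breaker so callables never compare
    for prio, fn in tasks:
        pq.put((prio, next(counter), fn))
    results = []

    def worker():
        while True:
            try:
                _, _, fn = pq.get_nowait()   # fetch the next task, no sleep
            except queue.Empty:
                return                        # no tasks left; thread ends
            results.append(fn())              # list.append is atomic in CPython

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_single_pool([(2, lambda: "low"), (1, lambda: "high")], num_workers=1)
print(out)   # ['high', 'low'] -- the smaller priority number dequeues first
```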
When the algorithm scheme runs on a program running device with few CPU cores, i.e. when the acquired device information indicates 32 or fewer CPU cores (typically an embedded device), the algorithm module scheduling device selects the single thread pool strategy for thread pool creation and scheduling. In this case, the single thread pool strategy uses device resources as far as the device conditions allow to meet the processing requirements of the algorithm scheme, avoiding the situation under the multithread pool strategy where too few execution threads leave the scheme's processing requirements unmet.
In other embodiments, as shown in fig. 7, the algorithm module scheduling device may further consider the number of algorithm nodes in the configuration file as a reference factor for selecting the single-thread pool policy or the multi-thread pool policy.
Specifically, the algorithm module scheduling device determines whether the number of processor cores in the device information is greater than a first preset threshold and whether the number of algorithm nodes in the configuration file is greater than a second preset threshold. For example, the algorithm module scheduling device determines whether the number of CPU cores of the program running device is greater than 32 and whether the number of algorithm nodes in the configuration file is greater than 32.
If the number of CPU cores of the program running device is greater than 32 and the number of algorithm nodes in the configuration file is greater than 32, the algorithm module scheduling device executes the multithread pool strategy. If the number of CPU cores is less than or equal to 32 and/or the number of algorithm nodes is less than or equal to 32, the algorithm module scheduling device executes the single thread pool strategy.
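The combined decision reduces to one small function; the 32/32 values are the example thresholds from the text, not fixed by the method.

```python
def choose_pool_strategy(cpu_cores, algo_nodes,
                         core_threshold=32, node_threshold=32):
    """Pick the thread pool strategy from device info and scheme size.

    Multi-pool is chosen only when BOTH the CPU core count and the
    number of algorithm nodes exceed their thresholds; otherwise the
    single thread pool strategy is used.
    """
    if cpu_cores > core_threshold and algo_nodes > node_threshold:
        return "multi_pool"    # one pool per core, work stealing
    return "single_pool"       # one pool, fixed set of execution threads

print(choose_pool_strategy(64, 40))   # multi_pool  (server-class device)
print(choose_pool_strategy(16, 40))   # single_pool (embedded-class device)
```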
In the embodiment of the application, the algorithm module scheduling device combines the adaptive priority task queue with adaptive thread pool creation and scheduling, which reduces unnecessary thread switching and destruction when the image processing algorithm scheme runs on a device and can effectively improve the overall execution efficiency of the algorithm scheme. In terms of device resource utilization, the hardware resources of the device are fully used to accelerate the analysis and processing of the image processing algorithm, improving the overall utilization of device resources.
It will be understood by those skilled in the art that in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
To implement the algorithm module scheduling method of the foregoing embodiment, the present application further provides an algorithm module scheduling apparatus, and please refer to fig. 8 specifically, where fig. 8 is a schematic structural diagram of an embodiment of the algorithm module scheduling apparatus provided in the present application.
The algorithm module scheduling apparatus 300 of the embodiment of the present application includes an obtaining module 31, a configuring module 32, a creating module 33, and a scheduling module 34.
The obtaining module 31 is configured to obtain a configuration file, where the configuration file includes a plurality of algorithm modules having a sequential dependency relationship between previous and next results.
The configuration module 32 is configured to configure the priorities of the algorithm modules by using the dependency relationship between the sequence of the previous and next results in the configuration file.
The creating module 33 is configured to create a task queue according to the priorities of the algorithm modules.
The scheduling module 34 is configured to schedule the algorithm module to process data based on the front-back relationship of the queue positions in the task queue.
To implement the algorithm module scheduling method of the foregoing embodiment, the present application further provides another algorithm module scheduling apparatus, and please refer to fig. 9 specifically, where fig. 9 is a schematic structural diagram of another embodiment of the algorithm module scheduling apparatus provided in the present application.
The algorithm module scheduling apparatus 400 of the embodiment of the present application includes a memory 41 and a processor 42, wherein the memory 41 and the processor 42 are coupled.
The memory 41 is used for storing program data and the processor 42 is used for executing the program data to implement the algorithm module scheduling method described in the above embodiments.
In the present embodiment, the processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip with signal processing capabilities. The processor 42 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 42 may be any conventional processor or the like.
To implement the algorithm module scheduling method of the above embodiment, the present application further provides a computer-readable storage medium, as shown in fig. 10, the computer-readable storage medium 500 is used for storing program data 51, and when being executed by a processor, the program data 51 is used for implementing the algorithm module scheduling method of the above embodiment.
The present application also provides a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to execute the algorithm module scheduling method according to the embodiment of the present application. The computer program product may be a software installation package.
The algorithm module scheduling method of the above embodiments, when implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a device such as a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied wholly or partly in a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (13)

1. An algorithm module scheduling method, characterized in that the algorithm module scheduling method comprises:
acquiring a configuration file, wherein the configuration file comprises a plurality of algorithm modules with sequential dependency relationship of front and back results;
configuring the priorities of the algorithm modules by utilizing the sequential dependency relationship of the front and rear results in the configuration file;
creating a task queue according to the priorities of the algorithm modules;
and scheduling the algorithm modules to process data based on the order of queue positions in the task queue.
2. The algorithmic module scheduling method of claim 1,
the configuring the priorities of the algorithm modules by using the sequential dependency relationship of the front and rear results in the configuration file comprises:
generating a data flow graph of the plurality of algorithm modules by utilizing the sequential dependency relationship of the front and back results in the configuration file;
and configuring the priorities of the plurality of algorithm modules in the data flow diagram in increasing order along the input-to-output direction of the data flow diagram.
3. The algorithmic module scheduling method of claim 2,
the creating of the task queue according to the priorities of the algorithm modules comprises the following steps:
adding a current algorithm module into the tail part of a task queue, wherein the task queue comprises a plurality of algorithm modules;
comparing the priority of the current algorithm module with the priority of the previous algorithm module in the task queue;
and when the priority of the current algorithm module is lower than that of the previous algorithm module, the position of the current algorithm module in the task queue is not adjusted.
4. The algorithmic module scheduling method of claim 3,
after comparing the priority of the current algorithm module with the priority of the previous algorithm module in the task queue, the algorithm module scheduling method further comprises:
when the priority of the current algorithm module is higher than that of the previous algorithm module, exchanging the queue positions of the current algorithm module and the previous algorithm module in the task queue, and updating the task queue;
and continuously comparing the priority of the current algorithm module with the priority of the previous algorithm module in the updated task queue until the priority of the current algorithm module in the updated task queue is lower than the priority of the previous algorithm module, or the current algorithm module is positioned at the head of the updated task queue.
5. The algorithmic module scheduling method of claim 2,
the creating of the task queue according to the priorities of the algorithm modules comprises the following steps:
acquiring the priority of the current algorithm module and the priorities of all queue algorithm modules in a task queue;
comparing the priority of the current algorithm module with the priorities of all queue algorithm modules;
when the priority of the current algorithm module is higher than the priorities of all queue algorithm modules, adding the current algorithm module into the head of the task queue, and updating the task queue;
and when the priority of the current algorithm module is lower than the priority of one or more queue algorithm modules, adding the current algorithm module to the task queue in the position immediately after the queue algorithm module with the lowest priority among the one or more queue algorithm modules.
6. The algorithmic module scheduling method of claim 1,
the scheduling of the algorithm modules to process data based on the order of queue positions in the task queue comprises:
acquiring a resource lock of the task queue, and starting to schedule an algorithm module in the task queue to process data;
acquiring a new algorithm module, and adding the new algorithm module into the task queue according to the priority;
and releasing the resource lock of the task queue after the task queue is updated.
7. The algorithmic module scheduling method of claim 1,
the scheduling of the algorithm modules to process data based on the order of queue positions in the task queue comprises:
acquiring equipment information, wherein the equipment information comprises a processor core number;
judging whether the number of processor cores in the equipment information is larger than a first preset threshold value or not;
if yes, creating a corresponding number of thread pools based on the processor cores, and executing the tasks in the task queue through executing threads in the corresponding number of thread pools;
and if not, creating a plurality of execution threads, putting the execution threads into one thread pool, and executing the tasks in the task queue through the execution threads in the thread pool.
8. The algorithmic module scheduling method of claim 7,
the creating a corresponding number of thread pools based on the processor cores comprises:
setting a one-to-one correspondence between the thread pools and the processor cores;
and setting a plurality of execution threads in each thread pool, and setting the affinity of each execution thread and the processor corresponding to the thread pool.
9. The algorithmic module scheduling method of claim 7,
after the device information is obtained, the algorithm module scheduling method further includes:
judging whether the number of processor cores in the device information is greater than a first preset threshold and whether the number of algorithm nodes in the configuration file is greater than a second preset threshold;
if yes, creating a corresponding number of thread pools based on the processor cores, and executing the tasks in the task queue through executing threads in the corresponding number of thread pools;
and if not, creating a plurality of execution threads, putting the execution threads into one thread pool, and executing the tasks in the task queue through the execution threads in the thread pool.
10. The algorithmic module scheduling method of claim 1,
the algorithm module scheduling method further comprises the following steps:
analyzing the configuration file to obtain a plurality of algorithm nodes in the configuration file, wherein each algorithm node corresponds to at least one algorithm module;
obtaining a node input stream and a node output stream of each algorithm node;
determining the pipeline sequence of the algorithm nodes according to the node input streams and the node output streams of the algorithm nodes;
and determining the sequential dependency relationship of the front and rear results of the corresponding algorithm modules by using the pipeline sequence of the algorithm nodes.
11. An algorithm module scheduling device, characterized by comprising an acquisition module, a configuration module, a creation module and a scheduling module; wherein:
the acquisition module is used for acquiring a configuration file, wherein the configuration file comprises a plurality of algorithm modules with sequential dependency relationship of front and back results;
the configuration module is used for configuring the priorities of the algorithm modules by utilizing the sequential dependency relationship of the front and rear results in the configuration file;
the creating module is used for creating a task queue according to the priorities of the algorithm modules;
and the scheduling module is used for scheduling the algorithm modules to process data based on the order of queue positions in the task queue.
12. An algorithmic module scheduling means comprising a processor and a memory, the memory having stored therein program data, the processor being adapted to execute the program data to perform the algorithmic module scheduling method of any of claims 1 to 10.
13. A computer-readable storage medium for storing program data which, when executed by a processor, is adapted to implement the algorithm module scheduling method of any of claims 1-10.
CN202210412394.6A 2022-04-19 2022-04-19 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium Active CN114489867B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210412394.6A CN114489867B (en) 2022-04-19 2022-04-19 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
PCT/CN2022/114395 WO2023201947A1 (en) 2022-04-19 2022-08-24 Methods, systems, and storage media for task dispatch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210412394.6A CN114489867B (en) 2022-04-19 2022-04-19 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Publications (2)

Publication Number Publication Date
CN114489867A true CN114489867A (en) 2022-05-13
CN114489867B CN114489867B (en) 2022-09-06

Family

ID=81489501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210412394.6A Active CN114489867B (en) 2022-04-19 2022-04-19 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Country Status (2)

Country Link
CN (1) CN114489867B (en)
WO (1) WO2023201947A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114675863A (en) * 2022-05-27 2022-06-28 浙江大华技术股份有限公司 Algorithm configuration file updating method and related method, device, equipment and medium
CN114880395A (en) * 2022-07-05 2022-08-09 浙江大华技术股份有限公司 Algorithm scheme operation method, visualization system, terminal device and storage medium
CN116149830A (en) * 2023-04-20 2023-05-23 北京邮电大学 Mass data processing method and device based on double-scale node scheduling strategy
WO2023201947A1 (en) * 2022-04-19 2023-10-26 Zhejiang Dahua Technology Co., Ltd. Methods, systems, and storage media for task dispatch

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692208A (en) * 2009-10-15 2010-04-07 北京交通大学 Task scheduling method and task scheduling system for processing real-time traffic information
US20150339168A1 (en) * 2014-05-23 2015-11-26 Osr Open Systems Resources, Inc. Work queue thread balancing
CN105511956A (en) * 2014-09-24 2016-04-20 中国电信股份有限公司 Method and system for task scheduling based on share scheduling information
CN105791254A (en) * 2014-12-26 2016-07-20 阿里巴巴集团控股有限公司 Network request processing method, device and terminal
CN105992008A (en) * 2016-03-30 2016-10-05 南京邮电大学 Multilevel multitask parallel decoding algorithm on multicore processor platform
CN106681840A (en) * 2016-12-30 2017-05-17 郑州云海信息技术有限公司 Tasking scheduling method and device for cloud operating system
CN107092962A (en) * 2016-02-17 2017-08-25 阿里巴巴集团控股有限公司 A kind of distributed machines learning method and platform
CN107168781A (en) * 2017-04-07 2017-09-15 广东银禧科技股份有限公司 A kind of 3D printing subtask scheduling method and apparatus
CN108268319A (en) * 2016-12-31 2018-07-10 ***通信集团河北有限公司 Method for scheduling task, apparatus and system
CN110837410A (en) * 2019-10-30 2020-02-25 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and computer readable storage medium
CN111209099A (en) * 2019-12-31 2020-05-29 苏州浪潮智能科技有限公司 Multi-thread pool scheduling method and scheduling terminal based on ganesha service
CN111290842A (en) * 2018-12-10 2020-06-16 北京京东尚科信息技术有限公司 Task execution method and device
CN111367785A (en) * 2018-12-26 2020-07-03 中兴通讯股份有限公司 SDN-based fault detection method and device and server
CN111382177A (en) * 2020-03-09 2020-07-07 中国邮政储蓄银行股份有限公司 Service data task processing method, device and system
CN111737075A (en) * 2020-06-19 2020-10-02 浙江大华技术股份有限公司 Execution sequence determination method and device, storage medium and electronic device
CN111949386A (en) * 2020-07-09 2020-11-17 北京齐尔布莱特科技有限公司 Task scheduling method, system, computing device and readable storage medium
US20200380011A1 (en) * 2019-05-29 2020-12-03 International Business Machines Corporation Work Assignment in Parallelized Database Synchronization
CN112988340A (en) * 2019-12-18 2021-06-18 湖南亚信软件有限公司 Task scheduling method, device and system
CN113179304A (en) * 2021-04-22 2021-07-27 平安消费金融有限公司 Message issuing method, system, device and storage medium
CN113535367A (en) * 2021-09-07 2021-10-22 北京达佳互联信息技术有限公司 Task scheduling method and related device
CN113722103A (en) * 2021-09-10 2021-11-30 奇安信科技集团股份有限公司 Encryption card calling control method and communication equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6871011B1 (en) * 2000-09-28 2005-03-22 Matsushita Electric Industrial Co., Ltd. Providing quality of service for disks I/O sub-system with simultaneous deadlines and priority
CN111104211A (en) * 2019-12-05 2020-05-05 山东师范大学 Task dependency based computation offload method, system, device and medium
CN111813554A (en) * 2020-07-17 2020-10-23 济南浪潮数据技术有限公司 Task scheduling processing method and device, electronic equipment and storage medium
CN114489867B (en) * 2022-04-19 2022-09-06 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AKRAM, N. et al., "Efficient Task Allocation for Real-Time Partitioned Scheduling on Multi-Core Systems", Proceedings of 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST). *
NAN, Yang et al., "Real-time task scheduling algorithm based on grid platform", Bulletin of Science and Technology. *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023201947A1 (en) * 2022-04-19 2023-10-26 Zhejiang Dahua Technology Co., Ltd. Methods, systems, and storage media for task dispatch
CN114675863A (en) * 2022-05-27 2022-06-28 浙江大华技术股份有限公司 Algorithm configuration file updating method and related method, device, equipment and medium
CN114675863B (en) * 2022-05-27 2022-10-04 浙江大华技术股份有限公司 Algorithm configuration file updating method and related method, device, equipment and medium
CN114880395A (en) * 2022-07-05 2022-08-09 浙江大华技术股份有限公司 Algorithm scheme operation method, visualization system, terminal device and storage medium
CN116149830A (en) * 2023-04-20 2023-05-23 北京邮电大学 Mass data processing method and device based on double-scale node scheduling strategy

Also Published As

Publication number Publication date
CN114489867B (en) 2022-09-06
WO2023201947A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
CN114489867B (en) Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
CN110769278B (en) Distributed video transcoding method and system
CN107391243B (en) Thread task processing equipment, device and method
CN110489217A (en) A kind of method for scheduling task and system
CN111949386A (en) Task scheduling method, system, computing device and readable storage medium
JP2010086128A (en) Multi-thread processor and its hardware thread scheduling method
CN109710416B (en) Resource scheduling method and device
CN111026541B (en) Rendering resource scheduling method, device, equipment and storage medium
US7310803B2 (en) Method and system for executing multiple tasks in a task set
US20070008983A1 (en) Threshold on unblocking a processing node that is blocked due to data packet passing
CN115658153A (en) Sleep lock optimization method and device, electronic equipment and storage medium
CN111767125A (en) Task execution method and device, electronic equipment and storage medium
CN114518917B (en) Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
CN110750362A (en) Method and apparatus for analyzing biological information, and storage medium
JP2000148513A (en) Method and device for controlling task
CN115767188A (en) Message request processing method, system, device and readable storage medium
CN113760494B (en) Task scheduling method and device
CN114035928A (en) Distributed task allocation processing method
CN108845794A (en) A kind of streaming operation frame, method, readable medium and storage control
CN111382983B (en) Workflow control method, workflow node and system
CN113760403A (en) State machine linkage method and device
CN114546631A (en) Task scheduling method, control method, core, electronic device and readable medium
CN115712507B (en) Method for calculating task priority of ship gateway
CN112994969B (en) Service detection method, device, equipment and storage medium
CN112988422A (en) Asynchronous message processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant