CN110688229B - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN110688229B
CN110688229B (granted publication of application CN201910966643.4A; published as CN110688229A)
Authority
CN
China
Prior art keywords
computing
cpu
task
tasks
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910966643.4A
Other languages
Chinese (zh)
Other versions
CN110688229A (en)
Inventor
张超
陈卓
辛建康
王柏生
何玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN201910966643.4A priority Critical patent/CN110688229B/en
Publication of CN110688229A publication Critical patent/CN110688229A/en
Application granted granted Critical
Publication of CN110688229B publication Critical patent/CN110688229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Multi Processors (AREA)

Abstract

The application discloses a task processing method and device, relating to autonomous driving technology. The specific implementation is as follows: a scheduling file is obtained, where the scheduling file indicates the central processing unit (CPU) corresponding to each category of computing task; the categories of computing tasks form at least one computing link, and all or some of the categories of computing tasks belonging to the same computing link correspond to the same CPU; each category of computing task is then allocated to its corresponding CPU for execution according to the scheduling file. Because all or some of the categories of computing tasks on the same computing link correspond to the same CPU, the end-to-end processing latency of task processing is short and the latency jitter is small, ensuring stable operation of the electronic device.

Description

Task processing method and device
Technical Field
Embodiments of the application relate to computer technology, and in particular to autonomous driving technology.
Background
With the development of science and technology, more and more everyday tools are becoming intelligent; unmanned vehicles in particular promise great convenience for people's lives.
An unmanned vehicle system contains a large number of complex computing tasks. For these tasks, the conventional approach is for all CPUs to share one task queue: whenever a CPU finishes executing a task, it takes the next task to execute from the queue on a first-in, first-out basis. This approach can make the end-to-end task processing latency (the time taken to process the tasks of one computing link) relatively large, which in turn affects the stable operation of the unmanned vehicle.
Disclosure of Invention
Embodiments of the application provide a task processing method and device that keep the end-to-end processing latency of task processing short and the latency jitter small.
In a first aspect, the present application provides a task processing method, including: obtaining a scheduling file, where the scheduling file indicates the central processing unit (CPU) corresponding to each category of computing task; the categories of computing tasks form at least one computing link, all or some of the categories of computing tasks belonging to the same computing link correspond to the same CPU, and for any two adjacent computing tasks on a computing link, the output of the upstream task is the input of the downstream task; and allocating each category of computing task to its corresponding CPU for execution according to the scheduling file.
In this scheme, because the scheduling file indicates the CPU corresponding to each category of computing task, the CPU that executes a given category of task is fixed, which by itself reduces the end-to-end processing latency to some extent. Moreover, because all or some of the categories of computing tasks on the same computing link correspond to the same CPU, tasks with data dependencies on the same link are executed by the same CPU as far as possible: the result of the upstream task that a downstream task needs is already in that CPU's cache and does not have to be fetched from another CPU's cache. The end-to-end latency is therefore short and the latency jitter small, and when the device executing this task processing method is an unmanned vehicle, its normal operation can be ensured.
In one possible design, obtaining the scheduling file includes: obtaining a configuration file that contains the input channel and output channel of each category of computing task; obtaining, from the configuration file, the at least one computing link formed by the categories of computing tasks; and generating the scheduling file from the at least one computing link and a preset condition.
This provides a concrete way to generate the scheduling file.
In one possible design, the preset condition includes a first preset condition, and generating the scheduling file from the at least one computing link and the preset condition includes: grouping the categories of computing tasks according to the at least one computing link and the first preset condition to obtain at least one group; and generating the scheduling file from the at least one group. The first preset condition includes: the N - n categories of computing tasks on the first computing link are allocated to the same group, where the first computing link is any one of the at least one computing link, N is the total number of categories of computing tasks on the first computing link, n is the number of categories shared between the first computing link and other computing links that have already been allocated to other groups, N is a positive integer, and n is a non-negative integer.
The first preset condition ensures that the computing tasks on the same computing link are allocated to the same CPU as far as possible.
In an optional manner, generating the scheduling file from the at least one group includes: for each group, determining the CPU corresponding to each category of computing task in the group; and generating the scheduling file from the CPUs corresponding to the categories of computing tasks in each group.
With this scheme, the computing tasks on the same computing link can be allocated to the same CPU as far as possible, and the scheduling file is generated efficiently.
In another optional manner, generating the scheduling file from the at least one group further includes: generating the scheduling file from the at least one group and a second preset condition. The second preset condition includes, for each group: if allocating all categories of computing tasks in the group to the same CPU for execution keeps that CPU's load below a preset load, then all categories in the group correspond to that CPU; if allocating all categories in the group to the same CPU for execution would make its load greater than or equal to the preset load, then the categories in the group correspond to at least two CPUs, chosen so that allocating the group's categories across those CPUs for execution keeps the load of each of them below the preset load.
With this scheme, the computing tasks on the same computing link can also be allocated to the same CPU as far as possible.
In one possible design, the second preset condition further includes: if one CPU corresponds to all or some of the categories of computing tasks in each of at least two groups, the attributes of the categories of computing tasks corresponding to that CPU are the same.
This keeps the attributes of the computing tasks corresponding to the same CPU uniform, so that the execution of computing tasks with short processing times is not slowed by computing tasks with long processing times, further reducing the end-to-end processing latency.
In one possible design, the second preset condition further includes: the attributes of the categories of computing tasks corresponding to any one of the at least two CPUs are the same.
This likewise keeps the attributes of the computing tasks corresponding to the same CPU uniform, so that the execution of computing tasks with short processing times is not slowed by computing tasks with long processing times, further reducing the end-to-end processing latency.
In one possible design, allocating each category of computing task to its corresponding CPU for execution according to the scheduling file includes, for the categories of computing tasks corresponding to a first CPU, the first CPU being any CPU of the electronic device: acquiring priority information for the categories of computing tasks corresponding to the first CPU, where within the same computing link a downstream computing task has higher priority than an upstream computing task; and allocating the categories of computing tasks corresponding to the first CPU for execution according to the priority information.
In this scheme, a downstream computing task on a computing link has higher priority than an upstream one, and whenever the CPU finishes one computing task it next starts the highest-priority pending computing task. The task that must be processed first on a computing link is therefore processed first, keeping the end-to-end latency short, and when the electronic device executing this task processing method is an unmanned vehicle, its normal operation can be ensured.
In a possible design, allocating the categories of computing tasks corresponding to the first CPU for execution according to the priority information includes: when a first category of computing task is to be allocated to the first CPU for execution, judging whether the state of the first category is a first state, the first category being any one of the categories of computing tasks corresponding to the first CPU; and if so, allocating a second category of computing task to the first CPU for execution, the second category being the highest-priority category, among those corresponding to the first CPU, whose state is a second state. For any category of computing task, the first state is any one of the following: no task of the category currently exists, the task of the category has finished executing, or the task of the category is in a non-execution state; the non-execution state of a category indicates that the corresponding CPU has suspended executing tasks of that category. The second state of a category is that a task of the category exists and is not in the non-execution state.
In this scheme, when the first category of computing task is in the non-execution state, it can be skipped promptly so that other categories of computing tasks execute, reducing the end-to-end processing latency. In addition, the task that must be processed first on a computing link is processed first, keeping the end-to-end latency short, and when the electronic device executing this task processing method is an unmanned vehicle, its normal operation can be ensured.
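The dispatch rule above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the state names, the dictionary layout, and the function name are all assumptions made for the example.

```python
# Sketch of the priority-based dispatch rule: if the category the CPU was
# about to run is in the "first state" (absent, finished, or suspended),
# run the highest-priority category in the "second state" (ready) instead.

ABSENT, FINISHED, SUSPENDED = "absent", "finished", "suspended"  # first states
READY = "ready"                                                  # second state

def pick_next_category(categories, preferred):
    """categories: dict mapping category name -> {"state": ..., "priority": int}.
    preferred: the category the first CPU was about to execute."""
    first_states = {ABSENT, FINISHED, SUSPENDED}
    if categories[preferred]["state"] not in first_states:
        return preferred
    # Skip the blocked category and take the ready category with the
    # highest priority (downstream tasks outrank upstream ones).
    ready = {c: v for c, v in categories.items() if v["state"] == READY}
    if not ready:
        return None
    return max(ready, key=lambda c: ready[c]["priority"])
```

For example, if category a has finished while downstream categories b and c are ready, the CPU proceeds to the higher-priority (more downstream) category c rather than idling on a.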
In a second aspect, an embodiment of the application provides a task processing device, including: an acquisition module, configured to obtain a scheduling file, where the scheduling file indicates the central processing unit (CPU) corresponding to each category of computing task, the categories of computing tasks form at least one computing link, all or some of the categories of computing tasks belonging to the same computing link correspond to the same CPU, and for any two adjacent computing tasks on a computing link the output of the upstream task is the input of the downstream task; and an allocation module, configured to allocate each category of computing task to its corresponding CPU for execution according to the scheduling file.
In one possible design, the obtaining module is specifically configured to: acquiring a configuration file, wherein the configuration file comprises input channels and output channels of various computing tasks; acquiring at least one calculation link composed of various calculation tasks according to the configuration file; and generating the scheduling file according to the at least one calculation link and a preset condition.
In a possible design, the preset condition includes a first preset condition, and the acquisition module is specifically configured to: group the categories of computing tasks according to the at least one computing link and the first preset condition to obtain at least one group; and generate the scheduling file from the at least one group.
In one possible design, the first preset condition includes: the N - n categories of computing tasks on the first computing link are allocated to the same group, where the first computing link is any one of the at least one computing link, N is the total number of categories of computing tasks on the first computing link, n is the number of categories shared between the first computing link and other computing links that have already been allocated to other groups, N is a positive integer, and n is a non-negative integer.
In one possible design, the obtaining module is specifically configured to: for each group, determining CPUs corresponding to various computing tasks included in the group; and generating the scheduling file according to the CPUs corresponding to the various computing tasks included in each group.
In a possible design, the preset condition further includes a second preset condition, and the obtaining module is specifically configured to: and generating the scheduling file according to the at least one group and the second preset condition.
In one possible design, the second preset condition includes, for each group: if allocating all categories of computing tasks in the group to the same CPU for execution keeps that CPU's load below a preset load, then all categories in the group correspond to that CPU; if allocating all categories in the group to the same CPU for execution would make its load greater than or equal to the preset load, then the categories in the group correspond to at least two CPUs, chosen so that allocating the group's categories across those CPUs for execution keeps the load of each of them below the preset load.
In one possible design, the second preset condition further includes: if one CPU corresponds to all or part of the categories of computing tasks included in each of the at least two groups, the attributes of the computing tasks of the categories corresponding to the CPU are the same.
In one possible design, the second preset condition further includes: the attributes of the computing tasks of the categories corresponding to any one of the at least two CPUs are the same.
In one possible design, the allocation module is specifically configured to, for the categories of computing tasks corresponding to a first CPU, the first CPU being any CPU of the electronic device: acquire priority information for the categories of computing tasks corresponding to the first CPU, where within the same computing link a downstream computing task has higher priority than an upstream computing task; and allocate the categories of computing tasks corresponding to the first CPU for execution according to the priority information.
In a possible design, allocating the categories of computing tasks corresponding to the first CPU for execution according to the priority information includes: when a first category of computing task is to be allocated to the first CPU for execution, judging whether the state of the first category is a first state, the first category being any one of the categories of computing tasks corresponding to the first CPU; and if so, allocating a second category of computing task to the first CPU for execution, the second category being the highest-priority category, among those corresponding to the first CPU, whose state is a second state. For any category of computing task, the first state is any one of the following: no task of the category currently exists, the task of the category has finished executing, or the task of the category is in a non-execution state; the non-execution state of a category indicates that the corresponding CPU has suspended executing tasks of that category. The second state of a category is that a task of the category exists and is not in the non-execution state.
In a third aspect, the present application provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the first aspect and any possible design of the first aspect.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect as well as any possible design of the first aspect.
An embodiment of the above application has the following advantages or benefits: the end-to-end processing latency of task processing is short, the latency jitter is small, and the stable operation of the electronic device (for example, an unmanned vehicle) is ensured. Because the scheduling file indicates the central processing unit (CPU) corresponding to each category of computing task, the categories of computing tasks are allocated to their corresponding CPUs for execution according to the scheduling file, and all or some of the categories of computing tasks on the same computing link correspond to the same CPU, the prior-art problems of long end-to-end processing time and large jitter are overcome, the technical effects of short end-to-end processing latency and small latency jitter are achieved, and the stable operation of the electronic device (for example, an unmanned vehicle) is ensured.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a first schematic diagram of the topology of computing links according to an embodiment of the present application;
fig. 2 is a schematic diagram of task processing in a conventional electronic device according to an embodiment of the present application;
fig. 3 is a first flowchart of a task processing method according to an embodiment of the present application;
fig. 4 is a second schematic diagram of the topology of computing links according to an embodiment of the present application;
fig. 5 is a second flowchart of a task processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a task processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details should be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple. The terms "first", "second", and the like in this application distinguish similar objects and do not necessarily describe a particular order or sequence.
First, some elements related to the present application will be described.
1. During the operation of an electronic device, multiple computing links exist in the device, and each computing link includes several categories of computing tasks. Within the same computing link, tasks whose input data come from the same source and whose processing is the same are called tasks of the same category.
Fig. 1 shows a schematic topology of computing links: computing link 1 includes computing tasks of categories a, b, and c, and computing link 2 includes computing tasks of categories e, f, and c. Computing tasks on the same computing link have data dependencies; for example, within one end-to-end processing pass, executing the category-c computing task depends on the result of the category-b computing task, and executing the category-b computing task depends on the result of the category-a computing task. In other words, the category-b computing task is a downstream task of the category-a computing task and an upstream task of the category-c computing task.
2. End-to-end processing: one end-to-end processing pass is the process of executing the computing tasks of a computing link in order, from the upstream tasks to the downstream tasks, until the last downstream task finishes. For computing link 1, for example, one end-to-end pass goes as follows: computing task 1 of category a starts executing and finishes, yielding processing result 1; computing task 2 of category b, which depends on result 1, is generated and executed (that is, it processes result 1), yielding processing result 2; after computing task 4 of category f on computing link 2 has also finished, yielding processing result 3, computing task 3 of category c, which depends on results 2 and 3, is generated and executed (that is, it processes results 2 and 3); once task 3 finishes, the end-to-end processing pass for computing link 1 is complete.
3. End-to-end processing latency: the time from when the upstream computing task on a computing link starts executing until the last downstream computing task finishes executing.
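The end-to-end pass described for computing link 1 can be sketched as plain function composition, with each task's output feeding its downstream task. The task bodies and numeric values here are invented purely for illustration; only the dependency structure follows the example above.

```python
# Illustrative sketch of one end-to-end pass over computing link 1
# (category a -> category b -> category c, with category f on link 2
# also feeding c). Function names and arithmetic are assumptions.

def task_a(sensor_input):        # upstream task of link 1
    return sensor_input * 2      # processing result 1

def task_b(result_1):            # downstream of a, upstream of c
    return result_1 + 1          # processing result 2

def task_f(other_input):         # upstream task on link 2, also feeding c
    return other_input - 1       # processing result 3

def task_c(result_2, result_3):  # last downstream task, shared by links 1 and 2
    return result_2 + result_3

def end_to_end_link_1(sensor_input, other_input):
    """One end-to-end pass: each task consumes its upstream results."""
    r1 = task_a(sensor_input)
    r2 = task_b(r1)
    r3 = task_f(other_input)
    return task_c(r2, r3)
```

The end-to-end processing latency is simply the wall-clock time from `task_a` starting until `task_c` returns.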
For better understanding of the present application, a conventional task processing method is explained below.
In a conventional task processing method, the CPUs of an electronic device share one task queue and process tasks on a first-in, first-out basis.
Referring to figs. 1 and 2, the electronic device includes three CPUs: CPU1, CPU2, and CPU3. The task queue contains the ready computing tasks 1, 2, 3, and 4; tasks 1 and 2 are of category a, task 3 is of category b, and task 4 is of category e. CPU1 is executing task 1, CPU2 is executing task 3, and CPU3 is executing task 4. When CPU2 finishes task 3, it generates a computing task of category c, computing task 5, which then enters the queue. Suppose that when CPU2 finishes task 3, CPU1 is still executing task 1 and CPU3 is still executing task 4, so the only idle CPU is CPU2. Because task 2 was already waiting in the queue before task 5 entered it, CPU2 executes task 2 first rather than task 5.
In this example, after task 3 completes, the newly generated task 5 ought to execute next to finish the end-to-end processing pass. Under the first-in, first-out rule, however, task 2 executes first and task 5 only runs after task 2, so the end-to-end latency of computing link 1 grows. Also, the CPU that executes a given computing task is random: task 3 depends on the processing result of computing task 6 (a category-a task, not shown in the figure), and if task 6 was executed by CPU1, then CPU2 must fetch task 6's result from CPU1's cache before executing task 3, which again lengthens the end-to-end latency of link 1. Finally, because the CPU executing a given category of task is not fixed, task processing is more complex, which also stretches the end-to-end processing latency to some extent.
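The first-in, first-out problem in this example can be demonstrated with a few lines of Python. This is a toy sketch of the conventional shared queue, not any real scheduler; the task names mirror the example above.

```python
from collections import deque

# Minimal sketch of the conventional shared FIFO queue: after CPU2
# finishes task 3, the newly generated task 5 (category c) joins the
# queue behind the already-waiting task 2, so FIFO order forces CPU2
# to run task 2 first, stretching the end-to-end latency of link 1.

queue = deque(["task2"])          # task 2 (category a) was already waiting
queue.append("task5")             # task 5 (category c) generated by task 3
next_for_cpu2 = queue.popleft()   # FIFO hands out task 2, not task 5
```

The scheduling-file approach below avoids this by fixing which CPU runs each category of task and by prioritizing downstream tasks.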
To solve the above technical problem, the present application provides a task processing method, which is described below through specific examples.
Fig. 3 is a flowchart of a task processing method according to an embodiment of the present application, where an execution subject of the embodiment may be an electronic device. Referring to fig. 3, the method of the present embodiment includes:
Step S301: obtain a scheduling file, where the scheduling file indicates the CPU corresponding to each category of computing task; the computing tasks of all or some of the categories belonging to the same computing link correspond to the same CPU, and for any two adjacent computing tasks on a computing link, the output of the upstream task is the input of the downstream task.
The scheduling file may include the identifier of each category of computing task and the identifier of its corresponding CPU.
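One plausible in-memory shape for such a scheduling file is a plain mapping from category identifiers to CPU identifiers. The patent only says the file pairs the two kinds of identifiers, so the dictionary layout and the particular assignments below (following the links of fig. 1) are assumptions for illustration.

```python
# Hypothetical scheduling file: categories a, b, c of link 1 share CPU 1;
# categories e and f of link 2 share CPU 2. Category c is shared by both
# links but is still pinned to a single CPU.
schedule = {
    "a": "cpu1",
    "b": "cpu1",
    "c": "cpu1",
    "e": "cpu2",
    "f": "cpu2",
}

def cpu_for(category):
    """Look up the fixed CPU for a category of computing task."""
    return schedule[category]
```

Because the mapping is fixed, every task of a given category always lands on the same CPU, which is what keeps upstream results in that CPU's cache.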
In one approach, obtaining the scheduling file can be implemented through steps a1 to a3:
a1, obtaining a configuration file, wherein the configuration file comprises input channels and output channels of various computing tasks.
The configuration file may be a DAG file loaded when the electronic device is powered on; the configuration file indicates the input channel and the output channel of each category of computing task.
a2, obtaining at least one calculation link composed of various calculation tasks according to the configuration file.
Because the configuration file indicates the input channel and the output channel of each category of computing task, the at least one computing link formed by the categories of computing tasks can be derived from the configuration file.
Exemplarily, the input channel and the output channel of the aa-class computing task are 1 and 2, and the input channel and the output channel of the bb-class computing task are 2 and 3, respectively, then the aa-class computing task and the bb-class computing task belong to the same computing link, and the aa-class computing task is an upstream computing task of the bb-class computing task.
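Step a2 can be pictured with a short sketch. The Python below assumes a configuration that maps each task class to an (input channel, output channel) pair, as in the aa/bb example above; the function name and data layout are illustrative, not taken from the patent, and it handles only linear links with a single downstream class per channel.

```python
def build_links(config):
    """config maps task class -> (input_channel, output_channel)."""
    produced = {out for (_, out) in config.values()}
    links = []
    for cls, (inp, _) in config.items():
        if inp not in produced:              # no upstream producer: link start
            link = [cls]
            while True:
                out_ch = config[link[-1]][1]
                downstream = [c for c, (i, _) in config.items() if i == out_ch]
                if not downstream:
                    break
                link.append(downstream[0])   # assumes one downstream class
            links.append(link)
    return links
```

With the example configuration `{"aa": (1, 2), "bb": (2, 3)}`, channel 2 connects the output of class aa to the input of class bb, so one link `["aa", "bb"]` is recovered.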
a3, generating a scheduling file according to at least one calculation link composed of various calculation tasks and preset conditions.
In a specific implementation, generating the scheduling file according to at least one computing link composed of various computing tasks and preset conditions may include:
a31, grouping the various computing tasks according to at least one computing link formed by the computing tasks and a first preset condition to obtain at least one group.
Wherein, the first preset condition may include: the N-n classes of computing tasks on the first computing link are distributed into the same group; the first computing link is any one of the at least one computing link composed of the various computing tasks, N is the total number of classes of computing tasks included in the first computing link, n is the number of classes of computing tasks, included in common by the first computing link and other computing links, that have already been allocated to other groups, N is a positive integer, and n is a non-negative integer.
In one approach, the N-n classes of computing tasks on the first computing link may be N-n consecutive classes of computing tasks on the first computing link.
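Step a31 under the first preset condition can be sketched as follows: walk the links in some chosen order and, for each link, place into a new group the N-n classes not yet assigned to an earlier group. This is an illustrative Python sketch, not the patent's implementation; note that the order in which links are visited decides which group a shared class lands in (visiting link b of Fig. 4 first reproduces that figure's grouping).

```python
def group_tasks(links):
    """links: list of compute links, each a list of task-class names.
    Each link contributes its not-yet-assigned (N-n) classes as one group."""
    groups, assigned = [], set()
    for link in links:
        fresh = [c for c in link if c not in assigned]   # the N-n classes
        if fresh:
            groups.append(fresh)
            assigned.update(fresh)
    return groups
```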
a32, generating a schedule file according to at least one packet.
In one approach, generating the schedule file from at least one packet includes: for each group, determining the CPU corresponding to each class of computing task included in the group; and generating the scheduling file according to the CPUs corresponding to the classes of computing tasks included in each group. That is, the CPUs are determined group by group, each class of computing task in a group being given a corresponding CPU.
In another approach, generating the schedule file from the at least one packet includes: and generating a scheduling file according to the second preset condition and at least one packet.
Wherein, the second preset condition may include, for each group: if allocating all classes of computing tasks included in the group to the same CPU would keep the load of that CPU below a preset load, then all classes of computing tasks included in the group correspond to that CPU; if allocating all classes of computing tasks included in the group to the same CPU would make the load of that CPU greater than or equal to the preset load, then the classes of computing tasks included in the group correspond to at least two CPUs, distributed among those CPUs such that the load of each of them remains below the preset load. The second preset condition may further include: where there is an idle CPU that does not correspond to any class of computing task, all or part of the classes of computing tasks included in the group may correspond to the idle CPU.
The preset conditions can enable various computing tasks on the same computing link to be distributed to the same CPU as much as possible, so that the end-to-end processing delay is reduced.
Optionally, the second preset condition further includes: if one CPU corresponds to all or part of the classes of computing tasks included in each of at least two groups, the attributes of the classes of computing tasks corresponding to that CPU are the same. This ensures that the computing tasks executed on the same CPU share the same attributes, so that tasks with short processing times are not held up by tasks with long processing times, further reducing the end-to-end processing delay.
Optionally, the second preset condition further includes: if the classes of computing tasks included in one group correspond to at least two CPUs, the attributes of the classes of computing tasks corresponding to any one of the at least two CPUs are the same. Likewise, this ensures that the computing tasks executed on the same CPU share the same attributes, so that tasks with short processing times are not held up by tasks with long processing times, further reducing the end-to-end processing delay.
Wherein, computing tasks of the same category may be generated periodically. Two categories of computing tasks having the same attributes may mean: the difference between the execution time of a category-A computing task and that of a category-B computing task is smaller than or equal to a preset time difference, and the difference between a first frequency and a second frequency is smaller than or equal to a preset frequency difference, the first frequency being the generation frequency of category-A computing tasks and the second frequency being the generation frequency of category-B computing tasks. For example, all high-frequency short-duration categories of tasks have the same attributes, and all low-frequency long-duration categories of tasks have the same attributes.
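The attribute test described here reduces to two threshold comparisons. The field names (`exec_time`, `freq`) and the dictionary layout below are assumptions made for illustration only.

```python
def same_attributes(cat_x, cat_y, max_time_diff, max_freq_diff):
    """cat_*: {'exec_time': seconds per task, 'freq': generation frequency, Hz}.
    Two categories share attributes when both the execution-time difference
    and the generation-frequency difference stay within the preset limits."""
    return (abs(cat_x["exec_time"] - cat_y["exec_time"]) <= max_time_diff and
            abs(cat_x["freq"] - cat_y["freq"]) <= max_freq_diff)
```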
In summary, the preset condition can be understood as: computing tasks of various types on the same computing link are distributed to the same CPU as much as possible to be executed, and if the load is too high, computing tasks of all the types on the same computing link can be distributed to at least two CPUs to be executed; distributing various types of computing tasks with the same attribute to the same CPU as much as possible for execution; the computing tasks on different computing links are not distributed to the same CPU to be executed as much as possible.
Fig. 4 is a schematic diagram of a topology of a computing link according to an embodiment of the present application, and a process of generating a scheduling file is exemplarily described below with reference to fig. 4.
Referring to fig. 4, computing link a includes computing tasks of category A, category B, category C, and category H, so the total number of categories of computing tasks included in computing link a is 4; computing link b includes computing tasks of category D, category E, category G, and category H, so the total number of categories included in computing link b is 4; computing link c includes computing tasks of category D, category F, category G, and category H, so the total number of categories included is likewise 4.
The computing tasks of categories A, B, C, D, E, F, G and H are grouped as follows. For computing link b, no shared category has yet been assigned elsewhere (N-n = 4-0 = 4), so the computing tasks of categories D, E, G and H form group b. For computing link a, category H is shared with computing link b and has already been allocated to group b (N-n = 4-1 = 3), so the computing tasks of categories A, B and C form group a. For computing link c, categories D, G and H are shared with computing link b and have already been allocated to group b (N-n = 4-3 = 1), so the computing tasks of category F form group c.
For group a, allocating the computing tasks of category A, category B and category C included in group a to the same CPU(a) keeps the load of CPU(a) below the preset load, so the computing tasks of categories A, B and C included in group a all correspond to CPU(a). CPU(a) is not a fixed CPU; it may be any CPU of the electronic device determined such that, when it executes the computing tasks of categories A, B and C included in group a, its load remains below the preset load.
For group b, allocating all the computing tasks included in group b (the computing tasks of categories D, E, G and H) to the same CPU(b) would make the load of CPU(b) greater than or equal to the preset load. Allocating the computing tasks of category D and category E, which have the same attributes, to CPU(b) keeps the load of CPU(b) below the preset load, and allocating the computing tasks of category G and category H, which have the same attributes, to CPU(c) keeps the load of CPU(c) below the preset load. Therefore, the computing tasks of categories D and E may correspond to CPU(b), and the computing tasks of categories G and H may correspond to CPU(c).
For group c, if the electronic device includes a CPU(d) in addition to CPU(a), CPU(b) and CPU(c), the computing task of category F included in group c may correspond to CPU(d). If the electronic device includes no CPU other than CPU(a), CPU(b) and CPU(c), the computing task of category F included in group c may correspond to one of CPU(a) to CPU(c). In particular, if the attributes of the computing tasks of categories D, E and F are the same, and allocating the computing tasks of categories D, E and F to the same CPU(b) keeps the load of CPU(b) below the preset load, the computing task of category F may also correspond to CPU(b).
In summary, the CPU corresponding to each of the computing tasks of categories A, B, C, D, E, F, G and H is obtained, and the scheduling file can be generated from the CPU corresponding to each category of computing task.
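Combining the load rule of the second preset condition with the grouping, step a32 can be sketched as a greedy assignment. The Python below is illustrative only, under assumed per-class load estimates: it keeps a whole group on one CPU when the summed load stays under the preset load and otherwise splits the group across consecutive CPUs, which with the loads chosen in the test reproduces the CPU(a)-CPU(d) assignment of the Fig. 4 example. The attribute-matching constraint is not modeled; here the task order inside each group happens to respect it.

```python
def assign_cpus(groups, loads, max_load):
    """groups: list of lists of task classes; loads: {class: estimated load}.
    Returns {class: cpu_id}, opening a fresh CPU per group (or per split)."""
    schedule, cpu_id = {}, 0
    for group in groups:
        if sum(loads[c] for c in group) < max_load:
            for c in group:                  # whole group fits on one CPU
                schedule[c] = cpu_id
        else:
            cur = 0.0                        # split: greedy bin packing
            for c in group:
                if cur + loads[c] >= max_load and cur > 0:
                    cpu_id += 1              # open a new CPU for this group
                    cur = 0.0
                schedule[c] = cpu_id
                cur += loads[c]
        cpu_id += 1
    return schedule
```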
It is to be understood that the generated schedule file may be stored in the memory of the electronic device in advance, and the schedule file is read from the memory of the electronic device during the task processing.
And step S302, distributing various computing tasks to corresponding CPUs to execute according to the scheduling files.
The various computing tasks are allocated to their corresponding CPUs for execution as indicated by the scheduling file. For example, if the scheduling file indicates that the computing tasks of categories A, B and C correspond to the same CPU(a), then the computing tasks of categories A, B and C are allocated to CPU(a) for execution; if the scheduling file indicates that the computing tasks of categories D and E correspond to the same CPU(b), then the computing tasks of categories D and E are allocated to CPU(b) for execution; if the scheduling file indicates that the computing tasks of categories G and H correspond to CPU(c), then the computing tasks of categories G and H are allocated to CPU(c) for execution; and if the scheduling file indicates that the computing task of category F corresponds to CPU(b), then the computing task of category F is allocated to CPU(b) for execution.
In this embodiment, because the scheduling file indicates the CPU corresponding to each category of computing task, the CPU executing computing tasks of a given category is fixed, which by itself reduces end-to-end processing delay to a certain extent. Meanwhile, all or part of the categories of computing tasks of the same computing link correspond to the same CPU, so tasks with data dependencies on the same computing link are executed by the same CPU as far as possible: the processing result of the upstream computing task required by a downstream computing task is already in the cache of that same CPU and need not be fetched from the caches of other CPUs. The end-to-end delay is therefore short and the delay jitter small, and when the electronic device executing the task processing method is an unmanned vehicle, normal operation of the unmanned vehicle can be ensured.
The following describes "assigning various types of computing tasks to corresponding CPUs for execution" in the embodiment shown in fig. 3 by using a specific embodiment.
Fig. 5 is a second flowchart of the task processing method provided in the embodiment of the present application, and a method for allocating various types of computation tasks corresponding to a first CPU to the first CPU for execution is taken as an example in the embodiment, where the first CPU is any one of the CPUs. Referring to fig. 5, the method of the present embodiment includes:
s501, acquiring priority information of various computing tasks corresponding to a first CPU; wherein the priority of the downstream computing task is higher than that of the upstream computing task in the same computing link.
And various computing tasks corresponding to the first CPU are obtained according to the scheduling file.
Under the condition that various computing tasks corresponding to the first CPU belong to the same computing link, the priority of the various computing tasks corresponding to the first CPU meets the condition that the priority of a downstream computing task is higher than the priority of an upstream computing task in the same computing link.
Illustratively, with continued reference to FIG. 4, for compute link a, the compute tasks for class A, the compute tasks for class B, the compute tasks for class C, and the compute tasks for class H have priorities from high to low as: a computing task of category H, a computing task of category C, a computing task of category B, and a computing task of category a. For the computing link b, the priorities of the computing tasks of category D, the computing tasks of category E, the computing tasks of category G, and the computing tasks of category H are, from high to low: a computing task of category H, a computing task of category G, a computing task of category E, and a computing task of category D. For the computing link c, the priority of the computing tasks of the category D, the computing tasks of the category F, the computing tasks of the category G and the computing tasks of the category H is from high to low as the computing tasks of the category H, the computing tasks of the category G, the computing tasks of the category F and the computing tasks of the category D.
Under the condition that various computing tasks corresponding to the first CPU belong to a plurality of computing links, the priority of a downstream computing task in the same computing link in the various computing tasks corresponding to the first CPU is higher than that of an upstream computing task, and the priorities among the various computing tasks on different computing links can be preset.
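The priority rule of step S501 can be sketched by numbering each link from upstream to downstream, so that downstream classes always outrank upstream ones; priorities across different links would be preset separately, as the text notes. Illustrative Python, not the patent's implementation.

```python
def link_priorities(link):
    """link: task classes ordered upstream -> downstream.
    Returns ascending priority numbers; a higher number runs first, so the
    most downstream class on the link gets the highest priority."""
    return {cls: rank for rank, cls in enumerate(link)}
```

For computing link a of Fig. 4, `link_priorities(["A", "B", "C", "H"])` ranks H above C above B above A, matching the order given above.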
And step S502, distributing various computing tasks corresponding to the first CPU for execution according to the priority information.
In one mode: distributing various computing tasks corresponding to the first CPU to be executed according to the priority information, wherein the method comprises the following steps:
b1, when the first category of computing task needs to be allocated to the first CPU for execution, judging whether the state of the first category of computing task is the first state, wherein the first category of computing task is any one category of computing tasks corresponding to the first CPU.
That is, when the first category of computing task needs to be allocated to the first CPU for execution, it is first determined whether the state of the first category of computing task is the first state. The first state of any category of computing task is any one of the following: no computing task of the category currently exists (i.e., no computing task of the category is currently ready), the computing task of the category has finished executing, or the computing task of the category is in a non-execution state. The non-execution state of any category of computing task instructs the corresponding CPU to suspend execution of that category of computing task.
b2, if the state of the first category of computing tasks is the first state, distributing the second category of computing tasks to the first CPU for execution, wherein the second category of computing tasks is the computing tasks with the state of the second state and the highest priority among the computing tasks of the categories corresponding to the first CPU.
The second state of any category of computing task is that a computing task of the category exists (i.e., a computing task of the category is currently ready) and is not in a non-execution state. That is, every time the CPU finishes executing one computing task, it starts again from the ready computing task of the highest-priority category.
Illustratively, with continued reference to fig. 4, the computing tasks of categories A, B and C correspond to CPU(a), and their priorities from high to low are: the computing task of category C, the computing task of category B, and the computing task of category A. That is, CPU(a) here is the first CPU, and the various computing tasks corresponding to the first CPU are the computing tasks of categories A, B and C. If the computing task of category B needs to be allocated to CPU(a) for execution, the computing task of category B is the first category of computing task.
Next, if the state of the computing task of category B is the first state, it is determined whether the state of the computing task of category C, the highest-priority category, is the first state. If the computing task of category C is not in the first state, it is determined whether its state is the second state; if so, the computing task of category C is allocated to CPU(a) for execution, and the computing task of category C is then the second category of computing task. If the state of the computing task of category C is also the first state, it is determined whether the state of the computing task of category A, the remaining category by priority, is the second state; if so, the computing task of category A is allocated to CPU(a) for execution, and the computing task of category A is then the second category of computing task.
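The selection logic of steps b1 and b2 can be sketched as follows. `FIRST` stands for the three not-runnable cases (no task of the category exists, already finished, or suspended) and `SECOND` for ready; the names and the dictionary layout are illustrative assumptions.

```python
FIRST, SECOND = "first", "second"   # task states as described above

def pick_next(states, priorities, requested):
    """states: {class: FIRST or SECOND}; priorities: {class: int}, higher
    runs first; requested: the class the CPU was about to run (step b1).
    Returns the class to execute, or None if nothing is ready."""
    if states.get(requested) == SECOND:      # requested class is runnable
        return requested
    ready = [c for c, s in states.items() if s == SECOND]
    if not ready:
        return None
    return max(ready, key=lambda c: priorities[c])   # step b2
```

With priorities C > B > A and category B requested but in the first state, the sketch picks C when C is ready, and falls back to A when C is not.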
In this embodiment, on the same computing link the downstream computing task has a higher priority than the upstream computing task, and every time the CPU finishes one computing task it starts again from the ready computing task of the highest-priority category. This ensures that the task that should be processed first on a computing link is indeed processed first, i.e., it solves the priority-inversion problem, so the end-to-end delay is short; when the electronic device executing the task processing method is an unmanned vehicle, normal operation of the unmanned vehicle can be ensured.
The task processing method provided by the present application is explained above, and the following describes a task processing device provided by the present application.
Fig. 6 is a schematic structural diagram of a task processing device according to an embodiment of the present application, and as shown in fig. 6, the device according to the embodiment may include: an acquisition module 601 and an allocation module 602.
An obtaining module 601, configured to obtain a scheduling file, where the scheduling file is used to indicate the central processing unit (CPU) corresponding to each of various computing tasks; the various computing tasks form at least one computing link, all or part of the categories of computing tasks belonging to the same computing link correspond to the same CPU, and the output of the upstream computing task in any two adjacent computing tasks on a computing link is the input of the downstream computing task; and an allocating module 602, configured to allocate each category of computing task to its corresponding CPU for execution according to the scheduling file.
Optionally, the obtaining module 601 is specifically configured to: acquiring a configuration file, wherein the configuration file comprises input channels and output channels of various computing tasks; acquiring at least one calculation link composed of various calculation tasks according to the configuration file; and generating the scheduling file according to the at least one calculation link and a preset condition.
Optionally, the preset condition includes a first preset condition, and the obtaining module 601 is specifically configured to: grouping various computing tasks according to the at least one computing link and the first preset condition to obtain at least one group; and generating the scheduling file according to the at least one packet.
Optionally, the first preset condition includes: the N-n classes of computing tasks on the first computing link are distributed into the same group; the first computing link is any one of the at least one computing link, N is the total number of classes of computing tasks included in the first computing link, n is the number of classes of computing tasks, included in common by the first computing link and other computing links, that have already been allocated to other groups, N is a positive integer, and n is a non-negative integer.
Optionally, the obtaining module 601 is specifically configured to: for each group, determining CPUs corresponding to various computing tasks included in the group; and generating the scheduling file according to the CPUs corresponding to the various computing tasks included in each group.
Optionally, the preset condition further includes a second preset condition, and the obtaining module 601 is specifically configured to: and generating the scheduling file according to the at least one group and the second preset condition.
Optionally, the second preset condition includes: for each packet: if various computing tasks included in the group are distributed to the same CPU to be executed, so that the load of the CPU is smaller than a preset load, the various computing tasks included in the group all correspond to the CPU; if the various computing tasks included in the group are distributed to the same CPU to be executed so that the load of the CPU is greater than or equal to the preset load, the computing tasks of all the categories included in the group correspond to at least two CPUs, and the computing tasks of all the categories included in the group are distributed to the at least two CPUs to be executed so that the loads of the at least two CPUs are all smaller than the preset load.
Optionally, the second preset condition further includes: if one CPU corresponds to all or part of the categories of computing tasks included in each of the at least two groups, the attributes of the computing tasks of the categories corresponding to the CPU are the same.
Optionally, the second preset condition further includes: if the classes of computing tasks included in one group correspond to at least two CPUs, the attributes of the classes of computing tasks corresponding to any one of the at least two CPUs are the same.
Optionally, the allocating module 602 is specifically configured to: for various computing tasks corresponding to a first CPU, the first CPU is any one CPU included in the electronic device: acquiring priority information of various computing tasks corresponding to the first CPU; the priority of a downstream computing task in the same computing link is higher than that of an upstream computing task; and distributing various computing tasks corresponding to the first CPU for execution according to the priority information.
Optionally, the allocating, according to the priority information, each type of computation task corresponding to the first CPU for execution includes: when a first category of computing task needs to be allocated to the first CPU for execution, judging whether the state of the first category of computing task is a first state, wherein the first category of computing task is any one category of computing tasks corresponding to the first CPU; if so, distributing a second type of computing task to the first CPU for execution, wherein the second type of computing task is a computing task with a second state and a highest priority in various computing tasks corresponding to the first CPU; wherein, the first state of any kind of calculation task is any one of the following states: the computing task of the category does not exist at present, the computing task of the category is executed completely, and the computing task of the category is in a non-execution state; the non-execution state of any one category of computing tasks indicates that the corresponding CPU suspends the execution of the category of computing tasks; the second state of any one class of computing tasks is that the class of computing tasks exists and is not in a non-executing state.
The apparatus of this embodiment may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to a task processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the task processing method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the task processing method provided by the present application.
Memory 702, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the task processing methods in the embodiments of the present application (e.g., the obtaining module 601 and the allocating module 602 shown in fig. 6). The processor 701 executes various functional applications of the server and data processing, i.e., implements the task processing method in the above-described method embodiments, by executing the non-transitory software programs, instructions, and modules stored in the memory 702.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the task processing method, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include a memory remotely located from the processor 701, and these remote memories may be connected through a network to an electronic device that implements the task processing method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the task processing method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive entered numeric or character information and generate key signal inputs related to user settings and function control of the electronic device, and may be, for example, a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, joystick, or other input device. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the method, a scheduling file is obtained which indicates the central processing units (CPUs) corresponding to the various computing tasks, and the various computing tasks are distributed to their corresponding CPUs for execution according to the scheduling file, where all or part of the computing tasks on the same computing link correspond to the same CPU and, within the same computing link, the priority of a downstream computing task is higher than that of an upstream computing task. This solves the prior-art problems of long end-to-end processing time and large jitter during task processing, thereby achieving the technical effects of short end-to-end processing delay and small delay jitter, and ensuring the stable operation of the electronic device (e.g., an unmanned vehicle).
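To make the link-affinity and priority rule concrete, the following is a minimal sketch, not taken from the patent itself: all names (the `schedule_link` helper, the perception/prediction/planning categories) are illustrative assumptions showing how a scheduling-file entry might pin every task category of one computing link to a single CPU while ranking downstream categories above upstream ones.

```python
# Illustrative sketch only: the patent does not prescribe a data format.
# A computing link is modeled as an ordered list of task categories,
# upstream first; the output of each category feeds the next one.

def schedule_link(link, cpu):
    """Map every task category on one link to the same CPU and give
    downstream categories a higher priority value than upstream ones."""
    return {category: {"cpu": cpu, "priority": idx}
            for idx, category in enumerate(link)}

# Hypothetical autonomous-driving link: perception -> prediction -> planning
entries = schedule_link(["perception", "prediction", "planning"], cpu=0)

# All three categories share CPU 0, and "planning" (most downstream)
# outranks "perception" (most upstream), matching the priority rule above.
assert all(e["cpu"] == 0 for e in entries.values())
assert entries["planning"]["priority"] > entries["perception"]["priority"]
```

Keeping a whole link on one CPU avoids cross-core handoffs between adjacent tasks, which is one plausible reason the method reduces end-to-end delay jitter.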
It should be understood that the various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A task processing method, comprising:
acquiring a configuration file, wherein the configuration file comprises input channels and output channels of various computing tasks;
acquiring at least one computing link composed of the various computing tasks according to the configuration file;
grouping various computing tasks according to the at least one computing link and a first preset condition to obtain at least one group;
generating a scheduling file according to the at least one group and a second preset condition; the scheduling file is used for indicating the central processing units (CPUs) corresponding to the various computing tasks; the various computing tasks form the at least one computing link, all or part of the computing tasks belonging to the same computing link correspond to the same CPU, and, for any two adjacent computing tasks on a computing link, the output of the upstream computing task is the input of the downstream computing task;
distributing various computing tasks to corresponding CPUs to be executed according to the scheduling files;
the second preset condition includes:
for each group:
if distributing the various computing tasks included in the group to the same CPU for execution makes the load of the CPU smaller than a preset load, the various computing tasks included in the group all correspond to that CPU;
if distributing the various computing tasks included in the group to the same CPU for execution makes the load of the CPU greater than or equal to the preset load, the computing tasks of the categories included in the group correspond to at least two CPUs, and the computing tasks of the categories included in the group are distributed to the at least two CPUs for execution so that the loads of the at least two CPUs are all smaller than the preset load.
2. The method according to claim 1, wherein the first preset condition comprises: the N-n categories of computing tasks on the first computing link are allocated to the same group;
the first computing link is any one of the at least one computing link, N is the total number of categories of computing tasks included in the first computing link, n is the number of categories of computing tasks, commonly included in the first computing link and other computing links, that have been allocated to other groups, N is a positive integer, and n is an integer.
3. The method of claim 1, wherein the generating the scheduling file according to the at least one group comprises:
for each group, determining CPUs corresponding to various computing tasks included in the group;
and generating the scheduling file according to the CPUs corresponding to the various computing tasks included in each group.
4. The method according to claim 1, wherein the second preset condition further comprises:
if one CPU corresponds to computing tasks of all or part of the categories included in each of at least two groups, the attributes of the computing tasks of the categories corresponding to the CPU are the same.
5. The method according to claim 1, wherein the second preset condition further comprises:
the attributes of the computing tasks of the categories corresponding to any one of the at least two CPUs are the same.
6. The method according to any one of claims 1 to 5, wherein distributing various computing tasks to corresponding CPUs for execution according to the scheduling file comprises:
for the various computing tasks corresponding to a first CPU, wherein the first CPU is any one CPU included in the electronic device:
acquiring priority information of various computing tasks corresponding to the first CPU; the priority of a downstream computing task in the same computing link is higher than that of an upstream computing task;
and distributing various computing tasks corresponding to the first CPU for execution according to the priority information.
7. The method according to claim 6, wherein said allocating the various categories of computing tasks corresponding to the first CPU for execution according to the priority information comprises:
when a first category of computing task needs to be allocated to the first CPU for execution, judging whether the state of the first category of computing task is a first state, wherein the first category of computing task is any one category of computing tasks corresponding to the first CPU;
if so, distributing a second category of computing task to the first CPU for execution, wherein the second category of computing task is the computing task whose state is a second state and whose priority is the highest among the various computing tasks corresponding to the first CPU;
wherein the first state of any category of computing task is any one of the following states: the computing task of the category does not currently exist, the computing task of the category has been completely executed, and the computing task of the category is in a non-execution state; the non-execution state of any category of computing task indicates that the corresponding CPU suspends execution of the computing task of the category; the second state of any category of computing task is that the computing task of the category exists and is not in the non-execution state.
8. A task processing apparatus, comprising:
the acquisition module is used for acquiring a configuration file, wherein the configuration file comprises input channels and output channels of various computing tasks; acquiring at least one computing link composed of the various computing tasks according to the configuration file; grouping the various computing tasks according to the at least one computing link and a first preset condition to obtain at least one group; and generating a scheduling file according to the at least one group and a second preset condition; the scheduling file is used for indicating the central processing units (CPUs) corresponding to the various computing tasks; the various computing tasks form the at least one computing link, all or part of the computing tasks belonging to the same computing link correspond to the same CPU, and, for any two adjacent computing tasks on a computing link, the output of the upstream computing task is the input of the downstream computing task;
the distribution module is used for distributing various computing tasks to corresponding CPUs for execution according to the scheduling files;
the second preset condition includes:
for each group:
if distributing the various computing tasks included in the group to the same CPU for execution makes the load of the CPU smaller than a preset load, the various computing tasks included in the group all correspond to that CPU;
if distributing the various computing tasks included in the group to the same CPU for execution makes the load of the CPU greater than or equal to the preset load, the computing tasks of the categories included in the group correspond to at least two CPUs, and the computing tasks of the categories included in the group are distributed to the at least two CPUs for execution so that the loads of the at least two CPUs are all smaller than the preset load.
9. The apparatus of claim 8, wherein the first preset condition comprises: the N-n categories of computing tasks on the first computing link are allocated to the same group;
the first computing link is any one of the at least one computing link, N is the total number of categories of computing tasks included in the first computing link, n is the number of categories of computing tasks, commonly included in the first computing link and other computing links, that have been allocated to other groups, N is a positive integer, and n is an integer.
10. The apparatus of claim 8, wherein the obtaining module is specifically configured to:
for each group, determining CPUs corresponding to various computing tasks included in the group;
and generating the scheduling file according to the CPUs corresponding to the various computing tasks included in each group.
11. The apparatus of claim 8, wherein the second preset condition further comprises:
if one CPU corresponds to computing tasks of all or part of the categories included in each of at least two groups, the attributes of the computing tasks of the categories corresponding to the CPU are the same.
12. The apparatus of claim 8, wherein the second preset condition further comprises:
the attributes of the computing tasks of the categories corresponding to any one of the at least two CPUs are the same.
13. The device according to any one of claims 8 to 12, wherein the distribution module is specifically configured to:
for the various computing tasks corresponding to a first CPU, wherein the first CPU is any one CPU included in the electronic device:
acquiring priority information of various computing tasks corresponding to the first CPU; the priority of a downstream computing task in the same computing link is higher than that of an upstream computing task;
and distributing various computing tasks corresponding to the first CPU for execution according to the priority information.
14. The apparatus according to claim 13, wherein, in allocating the various categories of computing tasks corresponding to the first CPU for execution according to the priority information, the distribution module is specifically configured to perform:
when a first category of computing task needs to be allocated to the first CPU for execution, judging whether the state of the first category of computing task is a first state, wherein the first category of computing task is any one category of computing tasks corresponding to the first CPU;
if so, distributing a second category of computing task to the first CPU for execution, wherein the second category of computing task is the computing task whose state is a second state and whose priority is the highest among the various computing tasks corresponding to the first CPU;
wherein the first state of any category of computing task is any one of the following states: the computing task of the category does not currently exist, the computing task of the category has been completely executed, and the computing task of the category is in a non-execution state; the non-execution state of any category of computing task indicates that the corresponding CPU suspends execution of the computing task of the category; the second state of any category of computing task is that the computing task of the category exists and is not in the non-execution state.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
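As a rough illustration of the "second preset condition" recited in claims 1 and 8, the sketch below uses a first-fit-decreasing heuristic; this is an assumption for illustration only, not the patent's prescribed algorithm, and the function name, load values, and units are hypothetical. A group stays on one CPU when its summed load is below the preset load; otherwise its categories are spread over as many CPUs as needed so that every CPU's load stays below the limit.

```python
def assign_group(category_loads, preset_load):
    """Return {category: cpu_index} for one group of task categories.

    If the whole group fits on one CPU below preset_load, keep it on CPU 0;
    otherwise split the categories first-fit-decreasing over two or more
    CPUs so that each CPU's summed load stays below preset_load.
    """
    if sum(category_loads.values()) < preset_load:
        return {name: 0 for name in category_loads}  # one CPU suffices
    assignment, cpu_loads = {}, []
    for name, load in sorted(category_loads.items(), key=lambda kv: -kv[1]):
        for cpu, used in enumerate(cpu_loads):
            if used + load < preset_load:    # reuse a CPU that stays under the limit
                assignment[name] = cpu
                cpu_loads[cpu] = used + load
                break
        else:                                # no existing CPU fits: open a new one
            assignment[name] = len(cpu_loads)
            cpu_loads.append(load)
    return assignment

# Light group: summed load 0.7 < 1.0, so everything stays on one CPU.
assert assign_group({"a": 0.3, "b": 0.4}, preset_load=1.0) == {"a": 0, "b": 0}
# Heavy group: summed load 1.2 >= 1.0, so the two categories are split
# over two CPUs, each ending up below the preset load.
assert set(assign_group({"a": 0.6, "b": 0.6}, preset_load=1.0).values()) == {0, 1}
```

First-fit-decreasing is only one plausible way to satisfy the condition; the claims require only that the resulting per-CPU loads end up below the preset load, not any particular packing strategy.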
CN201910966643.4A 2019-10-12 2019-10-12 Task processing method and device Active CN110688229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910966643.4A CN110688229B (en) 2019-10-12 2019-10-12 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910966643.4A CN110688229B (en) 2019-10-12 2019-10-12 Task processing method and device

Publications (2)

Publication Number Publication Date
CN110688229A CN110688229A (en) 2020-01-14
CN110688229B true CN110688229B (en) 2022-08-02

Family

ID=69112637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910966643.4A Active CN110688229B (en) 2019-10-12 2019-10-12 Task processing method and device

Country Status (1)

Country Link
CN (1) CN110688229B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269480B (en) * 2020-02-17 2022-06-14 百度在线网络技术(北京)有限公司 Task allocation path determining method and device and electronic equipment
CN111694647A (en) * 2020-06-08 2020-09-22 北京百度网讯科技有限公司 Task scheduling method, device and storage medium for automatic driving vehicle
CN111506413B (en) * 2020-07-02 2020-09-18 上海有孚智数云创数字科技有限公司 Intelligent task scheduling method and system based on business efficiency optimization

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001100884A (en) * 1999-09-30 2001-04-13 Fujitsu Ltd Task-managing device and computer-readable recording medium with task-managing program recorded therein
CN102467415A (en) * 2010-11-03 2012-05-23 大唐移动通信设备有限公司 Service facade task processing method and equipment
CN103631657A (en) * 2013-11-19 2014-03-12 浪潮电子信息产业股份有限公司 Task scheduling algorithm based on MapReduce
CN107943577A (en) * 2016-10-12 2018-04-20 百度在线网络技术(北京)有限公司 Method and apparatus for scheduler task
CN108694087A (en) * 2017-03-31 2018-10-23 英特尔公司 For the dynamic load leveling in the network interface card of optimal system grade performance
CN109379727A (en) * 2018-10-16 2019-02-22 重庆邮电大学 Task distribution formula unloading in car networking based on MEC carries into execution a plan with cooperating

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001100884A (en) * 1999-09-30 2001-04-13 Fujitsu Ltd Task-managing device and computer-readable recording medium with task-managing program recorded therein
CN102467415A (en) * 2010-11-03 2012-05-23 大唐移动通信设备有限公司 Service facade task processing method and equipment
CN103631657A (en) * 2013-11-19 2014-03-12 浪潮电子信息产业股份有限公司 Task scheduling algorithm based on MapReduce
CN107943577A (en) * 2016-10-12 2018-04-20 百度在线网络技术(北京)有限公司 Method and apparatus for scheduler task
CN108694087A (en) * 2017-03-31 2018-10-23 英特尔公司 For the dynamic load leveling in the network interface card of optimal system grade performance
CN109379727A (en) * 2018-10-16 2019-02-22 重庆邮电大学 Task distribution formula unloading in car networking based on MEC carries into execution a plan with cooperating

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Assessing the Impact of the CPU Power-Saving Modes on the Task-Parallel Solution of Sparse Linear Systems; José I. Aliaga et al.; Cluster Computing; Springer; 2014-09-05; pp. 1335–1348 *
Task Scheduling and Its Classification in Parallel and Distributed Computing; Chen Huaping et al.; Computer Science; 2001-01-15; Vol. 28, No. 1; pp. 45–48 *

Also Published As

Publication number Publication date
CN110688229A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110806923B (en) Parallel processing method and device for block chain tasks, electronic equipment and medium
CN110688229B (en) Task processing method and device
CN111694646B (en) Resource scheduling method, device, electronic equipment and computer readable storage medium
JP7214786B2 (en) Scheduling method, device, device and medium for deep learning inference engine
US8434085B2 (en) Scalable scheduling of tasks in heterogeneous systems
US8516487B2 (en) Dynamic job relocation in a high performance computing system
JP7282823B2 (en) MEMORY ACCESS REQUEST SCHEDULING METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER READABLE MEDIUM AND COMPUTER PROGRAM
CN111506401B (en) Automatic driving simulation task scheduling method and device, electronic equipment and storage medium
CN111459645B (en) Task scheduling method and device and electronic equipment
CN111782365A (en) Timed task processing method, device, equipment and storage medium
CN111259205A (en) Graph database traversal method, device, equipment and storage medium
CN114356547A (en) Low-priority blocking method and device based on processor virtualization environment
CN111782341A (en) Method and apparatus for managing clusters
CN110688327B (en) Video memory management method and device, electronic equipment and computer readable storage medium
US10203988B2 (en) Adaptive parallelism of task execution on machines with accelerators
US8977752B2 (en) Event-based dynamic resource provisioning
CN111176838B (en) Method and device for distributing embedded vector to node in bipartite graph
CN113760638A (en) Log service method and device based on kubernets cluster
US20140068082A1 (en) Collaborative method and system to balance workload distribution
CN111767059A (en) Deployment method and device of deep learning model, electronic equipment and storage medium
CN111416860B (en) Transaction processing method and device based on block chain, electronic equipment and medium
CN113971083A (en) Task scheduling method, device, equipment, medium and product
CN113971082A (en) Task scheduling method, device, equipment, medium and product
CN113760968A (en) Data query method, device, system, electronic equipment and storage medium
CN111292223A (en) Graph calculation processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211112

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2 / F, *** building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant