CN113961346A - Data cache management and scheduling method and device, electronic equipment and storage medium


Info

Publication number
CN113961346A
Authority
CN
China
Prior art keywords
cache
data set
scheduling
user
scheduler
Prior art date
Legal status
Pending
Application number
CN202111250995.3A
Other languages
Chinese (zh)
Inventor
吕冬冬
刘青松
梁家恩
Current Assignee
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd
Priority to CN202111250995.3A
Publication of CN113961346A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a data cache management and scheduling method and device, an electronic device, and a storage medium. The method is applied to a system comprising a cache controller and a cache scheduler and comprises the following steps: an API-server of the cluster receives a user's request to create a cache data set; the cache controller monitors the request for creating the cache data set task and calls a callback function to create the cache data set according to user-defined values; and the cache scheduler receives the request to create the cache data set and issues the cache data set according to the corresponding scheduling policy, so that the cache data set is successfully created. Because cached data is managed as a custom resource of the cluster, the cache can be monitored and controlled and its whole life cycle managed; through the cache scheduler a user can access more scheduling policies, giving richer means of cache management and adapting to more complex scenarios; resource scheduling is more efficient, and manual operation and maintenance are avoided.

Description

Data cache management and scheduling method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of data cache management, and in particular to a data cache management and scheduling method and device, an electronic device, and a storage medium.
Background
At present, caches on traditional large-scale machine learning platforms built on Kubernetes have no life cycle management; unlike containers, which Kubernetes manages automatically, caches on such platforms have no corresponding automated life-cycle management. When a deep learning task on the platform uses a cache, the task can only be manually assigned to the node that holds the cache, and when very large numbers of tasks use caches at the same time, there is no effective strategy for scheduling the matching between caches and tasks.
Disclosure of Invention
The invention provides a data cache management and scheduling method and device, an electronic device, and a storage medium, which can solve the above technical problems.
The technical scheme for solving the technical problems is as follows:
in a first aspect, an embodiment of the present invention provides a method for managing and scheduling a data cache, which is applied to a system including a cache controller and a cache scheduler, and includes:
an API-server of the cluster receives a request of a user for creating a cache data set;
the cache controller monitors a request for creating a cache data set task, and calls a callback function to create the cache data set according to a value defined by a user;
the cache scheduler receives a request for creating a cache data set, and issues the cache data set according to a corresponding scheduling strategy, and the cache data set is successfully created;
the user-defined values include: capacity of the cached data set, status of the cached data set, data path of underlying storage of the cached data set.
Further, the method for managing and scheduling data cache further includes:
the API-server of the cluster receives a task creation request with the cache data set submitted by a user;
a native controller of a cluster receives a task creation request;
the cache scheduler issues resources to the nodes according to the information of the cache data set, the creation information of the pod, the information of the nodes and the corresponding scheduling strategy information;
the agent creates a task at the node.
Further, the method for managing and scheduling data cache further includes:
the cache scheduler acquires the information of the cache data set and the information of the node from an API-server of the cluster;
wherein the information of the cache data set comprises: the capacity of the cache data set, the state of the cache data set, and the data path of the underlying storage of the cache data set; the information of the node comprises: the total memory size of each machine and the node's own allocatable memory size;
and the information of the Cache data set and the information of the nodes are both placed in the Cache.
Furthermore, in the above method for managing and scheduling a data cache,
the cache scheduler comprises a session module, the session module determines a proper node according to the scheduling strategy of the cluster and the information of the current cluster, binds the resource with the node name, and invokes the agent of the node to create the resource.
Further, in the method for managing and scheduling a data cache, the method includes:
the scheduling policy is loaded on the cache scheduler in a plug-in mode, a user specifies the name of the scheduling policy when the user uses the scheduling policy, and the cache scheduler automatically selects the corresponding scheduling policy to schedule the resources when the cache scheduler schedules the resources.
Further, the method for managing and scheduling a data cache further includes:
the cache controller receives requests from users to update and delete the cache data set, and updates and deletes the cache data set accordingly.
further, the method for managing and scheduling a data cache further includes:
the cache controller automatically maintains a life cycle of the cached data set.
In a second aspect, an embodiment of the present invention further provides a device for managing and scheduling a data cache, which is applied to a system including a cache controller and a cache scheduler, and includes:
a receiving module: an API-server for a cluster receives a request of a user for creating a cache data set;
a creation module: the cache controller monitors the request for creating the cache data set task, and calls a callback function to create the cache data set according to a value defined by a user;
a sending module: the cache scheduler receives a request for creating a cache data set, and issues the cache data set according to a corresponding scheduling strategy, and the cache data set is successfully created;
the user-defined values include: capacity of the cached data set, status of the cached data set, and data paths of underlying storage of the cached data set.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a processor and a memory;
the processor is used for executing the data cache management and scheduling method according to any one of the above by calling the program or the instruction stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a program or instructions, and the program or instructions cause a computer to execute any one of the above methods for managing and scheduling a data cache.
The invention has the following beneficial effects. The invention relates to a data cache management and scheduling method and device, an electronic device, and a storage medium. The method is applied to a system comprising a cache controller and a cache scheduler and comprises the following steps: an API-server of the cluster receives a user's request to create a cache data set; the cache controller monitors the request for creating the cache data set task and calls a callback function to create the cache data set according to user-defined values; and the cache scheduler receives the request to create the cache data set and issues the cache data set according to the corresponding scheduling policy, so that the cache data set is successfully created. Because cached data is managed as a custom resource of the cluster, the cache can be monitored and controlled and its whole life cycle managed; through the cache scheduler a user can access more scheduling policies, giving richer means of cache management and adapting to more complex scenarios; resource scheduling is more efficient, and manual operation and maintenance are avoided.
Drawings
Fig. 1 is a first diagram of a method for managing and scheduling a data cache according to an embodiment of the present invention;
fig. 2 is a second diagram of a method for managing and scheduling a data cache according to an embodiment of the present invention;
fig. 3 is a diagram of a device for managing and scheduling data caches according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
In order that the above objects, features and advantages of the present application can be more clearly understood, the present disclosure will be further described in detail with reference to the accompanying drawings and examples. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. The specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the application. All other embodiments derived by one of ordinary skill in the art from the described embodiments of the present application are within the scope of the present application.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Fig. 1 is a first diagram of a method for managing and scheduling a data cache according to an embodiment of the present invention.
In a first aspect, an embodiment of the present invention provides a method for managing and scheduling a data cache, which is applied to a system including a cache controller and a cache scheduler, and includes three steps S101 to S103:
s101: the API-server of the cluster receives a request from a user to create a cached data set.
Specifically, in the embodiment of the present application, a custom resource such as a cache data set is registered in Kubernetes, so that when a user subsequently creates a resource of this kind, the API server of the cluster can perceive the user's creation request.
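By way of illustration only, such a custom resource might be modeled as the following Go types; the type and field names (CacheDataset, Capacity, UnderlyingPath, PolicyName, Phase) are assumptions made for this sketch and are not taken from the patented system.

```go
// Package cachedataset sketches, under assumed names, the custom resource
// described above: its user-defined spec and its reported status.
package cachedataset

// CacheDatasetSpec holds the user-defined values: the capacity of the
// cache data set, the data path of its underlying storage, and the name
// of the scheduling policy to apply (all field names are assumptions).
type CacheDatasetSpec struct {
	Capacity       string `json:"capacity"`             // e.g. "200Gi"
	UnderlyingPath string `json:"underlyingPath"`       // data path of the underlying storage
	PolicyName     string `json:"policyName,omitempty"` // scheduling policy the user specifies
}

// CacheDatasetStatus tracks the states named in the description:
// cache initialization, cache synchronization, cache completion.
type CacheDatasetStatus struct {
	Phase string `json:"phase"` // "Initializing" | "Syncing" | "Completed"
}

// CacheDataset is the custom resource the cluster API server accepts once
// the resource type has been registered.
type CacheDataset struct {
	Name   string             `json:"name"`
	Spec   CacheDatasetSpec   `json:"spec"`
	Status CacheDatasetStatus `json:"status"`
}
```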
S102: and the cache controller monitors the request for creating the cache data set task and calls a callback function to create the cache data set according to the user self-defined value.
Specifically, in the embodiment of the present application, the user-defined values may be fields of the cache resource, such as the capacity of the cache data set; the state of the cache data set, such as cache initialization, cache synchronization, or cache completion; the data path of the underlying storage of the cache, and the like. The data cache controller watches for request events on the custom resource, such as a user's request to create a cache data set, and then calls a callback function inside the controller to create the cache data set; the callback function calls the API interfaces of the various cache engines to cache the fetched data from the underlying storage into the memory or SSD of some node.
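The watch-and-callback behavior described above can be sketched, again only as an assumption-laden illustration, as a small event loop: the event channel stands in for the watch mechanism, and the Provision method stands in for the API interfaces of the cache engines.

```go
package main

import "fmt"

// datasetSpec is a trimmed copy of the user-defined values from the
// custom-resource sketch above (names are assumptions).
type datasetSpec struct {
	Name, UnderlyingPath, Capacity string
}

type eventType int

const (
	added eventType = iota
	updated
	deleted
)

type event struct {
	kind eventType
	spec datasetSpec
}

// cacheEngine abstracts the cache engines whose APIs the callback calls;
// the Provision method is an assumed stand-in for those interfaces.
type cacheEngine interface {
	Provision(name, path, capacity string) error
}

// controller loops over watched events and dispatches to callbacks,
// mirroring "monitors the request ... and calls a callback function".
type controller struct {
	events chan event
	engine cacheEngine
}

func (c *controller) run() {
	for ev := range c.events {
		if ev.kind == added {
			c.onCreate(ev.spec) // update/delete handling omitted in this sketch
		}
	}
}

// onCreate asks a cache engine to pull the data from the underlying
// storage path into node memory or SSD; the status would then move
// through Initializing -> Syncing -> Completed via the API server.
func (c *controller) onCreate(s datasetSpec) {
	if err := c.engine.Provision(s.Name, s.UnderlyingPath, s.Capacity); err != nil {
		fmt.Println("provision failed:", err)
	}
}

// fakeEngine lets the sketch run without a real cache engine behind it.
type fakeEngine struct{}

func (fakeEngine) Provision(name, path, capacity string) error {
	fmt.Printf("caching %s from %s (capacity %s)\n", name, path, capacity)
	return nil
}

func main() {
	c := &controller{events: make(chan event, 1), engine: fakeEngine{}}
	c.events <- event{kind: added, spec: datasetSpec{"train-set", "s3://bucket/train", "200Gi"}}
	close(c.events)
	c.run()
}
```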
S103: and the cache scheduler receives the request for creating the cache data set, and issues the cache data set according to a corresponding scheduling strategy, so that the cache data set is successfully created.
Specifically, in the embodiment of the present application, through the extended cache scheduler a user can access more scheduling policies, which provides richer means for cache management, adapts to more complex scenarios, and allows the cache data set to be created successfully.
Fig. 2 is a second diagram of a method for managing and scheduling a data cache according to an embodiment of the present invention.
Further, with reference to fig. 2, the method for managing and scheduling data cache further includes four steps S201 to S204:
s201: and the API-server of the cluster receives a task creation request with the cache data set submitted by a user.
Specifically, in the embodiment of the present application, a user submits a training task that uses a cache data set, for example a task T, and the API server receives the task creation request.
S202: a native controller of the cluster receives a create request for a task.
Specifically, in the embodiment of the present application, the native controller of the cluster can sense the request for task creation.
S203: and the cache scheduler issues resources to the nodes according to the information of the cache data set, the creation information of the pod, the information of the nodes and the corresponding scheduling strategy information.
Specifically, in the embodiment of the present application, the cache scheduler may issue the resource according to the corresponding cache data set information held in the Cache, the creation information of the pod, and the node resource information, and deliver the resource to the corresponding node according to the corresponding policy, for example by scheduling the resource directly to the node that holds the cache data set.
S204: the agent creates a task at the node.
Specifically, in the embodiment of the present application, when the task is scheduled by the cache scheduler, the scheduler selects the most appropriate node for the resource and issues the creation request to the agent of that node; the agent receives the creation request, invokes the node's own resources, creates the corresponding task, and reads the cached data to start model training.
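A minimal sketch of how a scheduler might prefer the node that already holds the required cache data set, falling back to the roomiest node otherwise; the NodeInfo shape and the pickNodeForTask policy are illustrative assumptions, not the claimed scheduling strategy.

```go
package main

import "fmt"

// NodeInfo is a trimmed view of what the scheduler knows about each node.
type NodeInfo struct {
	Name           string
	AllocatableMem int64           // memory the node can still hand out, in bytes
	CachedDatasets map[string]bool // datasets already cached on this node
}

// pickNodeForTask prefers nodes that already hold the dataset the task
// needs, mirroring "schedule the resource directly to the node that holds
// the cache data set"; otherwise it falls back to the node with the most
// allocatable memory. Illustrative policy only.
func pickNodeForTask(nodes []NodeInfo, dataset string, memNeeded int64) (string, error) {
	best := ""
	var bestMem int64 = -1
	for _, n := range nodes {
		if n.AllocatableMem < memNeeded {
			continue
		}
		if n.CachedDatasets[dataset] {
			return n.Name, nil // data locality wins outright
		}
		if n.AllocatableMem > bestMem {
			best, bestMem = n.Name, n.AllocatableMem
		}
	}
	if best == "" {
		return "", fmt.Errorf("no node fits task needing dataset %q", dataset)
	}
	return best, nil
}

func main() {
	nodes := []NodeInfo{
		{Name: "node-a", AllocatableMem: 64 << 30, CachedDatasets: map[string]bool{}},
		{Name: "node-b", AllocatableMem: 32 << 30, CachedDatasets: map[string]bool{"train-set": true}},
	}
	node, _ := pickNodeForTask(nodes, "train-set", 16<<30)
	fmt.Println("bind task to:", node) // node-b: it already caches the dataset
}
```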
Further, the method for managing and scheduling data cache further includes:
and the cache scheduler acquires the information of the cache data set and the information of the node from the API-server of the cluster.
Wherein the information of the cache data set comprises: the capacity of the cache data set, the state of the cache data set, and the data path of the underlying storage of the cache data set; the information of the node comprises: the total memory size of each machine and the node's own allocatable memory size.
And the information of the Cache data set and the information of the nodes are both placed in the Cache.
Specifically, in the embodiment of the present application, the cache scheduler pulls the information of the custom cache data set from the API-server; the pulled cache data set information, such as the size of the cache data set and its data path, together with the node information, such as the total memory size of each machine and the size of its allocatable memory, are all placed in the Cache.
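The in-process Cache that both kinds of information are placed into might look roughly like the following; the structure and field names are assumptions made for this sketch.

```go
// Package schedcache sketches the scheduler's in-memory view of cache
// data sets and nodes; names and layout are assumptions for illustration.
package schedcache

import "sync"

// DatasetInfo is what the scheduler pulls from the API server for each
// cache data set: its capacity, state, and underlying storage data path.
type DatasetInfo struct {
	Capacity int64
	State    string // "Initializing" | "Syncing" | "Completed"
	DataPath string
}

// NodeInfo holds the node-side facts the description lists: the total
// memory of the machine and the memory the node can still allocate.
type NodeInfo struct {
	TotalMem       int64
	AllocatableMem int64
}

// SchedulerCache is the in-process "Cache" both kinds of information are
// placed into.
type SchedulerCache struct {
	mu       sync.RWMutex
	datasets map[string]DatasetInfo
	nodes    map[string]NodeInfo
}

func NewSchedulerCache() *SchedulerCache {
	return &SchedulerCache{
		datasets: make(map[string]DatasetInfo),
		nodes:    make(map[string]NodeInfo),
	}
}

// UpdateDataset and UpdateNode would be called whenever the scheduler
// re-pulls state from the API server.
func (c *SchedulerCache) UpdateDataset(name string, d DatasetInfo) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.datasets[name] = d
}

func (c *SchedulerCache) UpdateNode(name string, n NodeInfo) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.nodes[name] = n
}
```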
Furthermore, in the above method for managing and scheduling a data cache,
the cache scheduler comprises a session module, the session module determines a proper node according to the scheduling strategy of the cluster and the information of the current cluster, binds the resource with the node name, and invokes the agent of the node to create the resource.
Specifically, in the embodiment of the present application, a session module is arranged in the cache scheduler and is responsible for completing policy scheduling. Illustratively, when a user wants Binpack, the scheduling policy preferentially caches data onto machines that are nearly full, so that machines with enough free memory are reserved for scheduling other tasks. The session module selects a suitable node according to the scheduling policy of the cluster and the information of the current cluster, binds the resource to the node name, and then calls the agent of that node to create the resource.
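As an illustration of the Binpack idea only (assumed semantics: place the data set on the machine it fills up most, so emptier machines stay free), a node-selection function might look like this:

```go
package main

import "fmt"

// node is the minimal view a binpack-style policy needs.
type node struct {
	name string
	free int64 // allocatable memory still available, in bytes
}

// binpack picks the node that would be left with the least free memory
// after placing the cache data set, i.e. it fills machines up first so
// that emptier machines stay free for other tasks. Illustrative only.
func binpack(nodes []node, need int64) (string, bool) {
	bestName := ""
	var bestLeft int64 = 1 << 62
	for _, n := range nodes {
		if n.free < need {
			continue
		}
		if left := n.free - need; left < bestLeft {
			bestName, bestLeft = n.name, left
		}
	}
	return bestName, bestName != ""
}

func main() {
	nodes := []node{
		{name: "gpu-1", free: 100 << 30},
		{name: "gpu-2", free: 40 << 30},
	}
	// A 30 GiB data set goes to gpu-2: gpu-2 becomes nearly full, while
	// gpu-1 keeps a large block of memory free for other workloads.
	if name, ok := binpack(nodes, 30<<30); ok {
		fmt.Println("bind to:", name)
	}
}
```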
Further, in the method for managing and scheduling a data cache, the method includes:
the scheduling policy is loaded on the cache scheduler in a plug-in mode, a user specifies the name of the scheduling policy when the user uses the scheduling policy, and the cache scheduler automatically selects the corresponding scheduling policy to schedule the resources when the cache scheduler schedules the resources.
Specifically, in the embodiment of the present application, the scheduling policies are loaded on the cache scheduler as plug-ins; when a user wants to use a scheduling policy, the user only needs to specify the name of the policy, and when the cache scheduler schedules, it automatically selects the corresponding policy to schedule the resource, so that resource scheduling is more efficient and the problem of manual operation and maintenance is avoided.
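One plausible shape for such plug-in loading, sketched under the assumption that a policy is simply a named implementation of a small interface, is a registry keyed by the policy name the user specifies:

```go
package main

import "fmt"

// Policy is the interface every scheduling-policy plug-in implements;
// the single-method shape is an assumption made for this sketch.
type Policy interface {
	// SelectNode returns the name of the node the resource should go to.
	SelectNode(nodes []string) (string, error)
}

// registry maps a policy name (the name the user specifies on the
// resource) to its implementation.
var registry = map[string]Policy{}

// Register is called by each plug-in when it is loaded.
func Register(name string, p Policy) { registry[name] = p }

// lookup picks the policy the user asked for, falling back to a default.
func lookup(name string) Policy {
	if p, ok := registry[name]; ok {
		return p
	}
	return registry["default"]
}

// firstFit is a trivial example plug-in.
type firstFit struct{}

func (firstFit) SelectNode(nodes []string) (string, error) {
	if len(nodes) == 0 {
		return "", fmt.Errorf("no candidate nodes")
	}
	return nodes[0], nil
}

func main() {
	Register("default", firstFit{})
	Register("binpack", firstFit{}) // a real binpack plug-in would differ

	p := lookup("binpack") // the name the user put on the cache data set
	node, _ := p.SelectNode([]string{"node-a", "node-b"})
	fmt.Println("scheduled to:", node)
}
```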
Further, the method for managing and scheduling a data cache further includes:
and the cache controller receives a request of updating and deleting the cache data set from a user, and updates and deletes the cache data set.
Specifically, in the embodiment of the present application, the cache data set can be managed through operations such as update and delete: when the cache controller receives a user's update or delete request, it updates or deletes the corresponding cache data set.
Further, the method for managing and scheduling a data cache further includes:
the cache controller automatically maintains a life cycle of the cached data set.
Specifically, in the embodiment of the present application, the cache controller may automatically maintain the life cycle of the entire cache data set; for example, when the user needs to expand the cache capacity, the controller calls the corresponding API to expand the underlying cache.
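A reconcile step for the capacity-expansion example might be sketched as follows; the engine interface and its Expand method are assumptions standing in for the cache-engine API the controller would actually call.

```go
package main

import "fmt"

// engine abstracts the cache-engine API used to grow the underlying
// cache; the method name is an assumption for this sketch.
type engine interface {
	Expand(dataset string, newCapacity int64) error
}

// reconcileCapacity compares the capacity the user asked for (spec) with
// what is currently provisioned (status) and expands the underlying cache
// when the user has raised the requested capacity.
func reconcileCapacity(e engine, dataset string, specCap, statusCap int64) error {
	if specCap <= statusCap {
		return nil // nothing to do; shrinking is out of scope for this sketch
	}
	if err := e.Expand(dataset, specCap); err != nil {
		return fmt.Errorf("expand %s to %d bytes: %w", dataset, specCap, err)
	}
	return nil
}

// fakeEngine lets the sketch run without a real cache engine behind it.
type fakeEngine struct{}

func (fakeEngine) Expand(dataset string, newCapacity int64) error {
	fmt.Printf("expanding %s to %d bytes\n", dataset, newCapacity)
	return nil
}

func main() {
	// The user raised the requested capacity from 100 GiB to 200 GiB.
	_ = reconcileCapacity(fakeEngine{}, "train-set", 200<<30, 100<<30)
}
```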
Fig. 3 is a diagram of a device for managing and scheduling a data cache according to an embodiment of the present invention.
In a second aspect, an embodiment of the present invention further provides a device for managing and scheduling a data cache, which is applied in a system including a cache controller and a cache scheduler, and with reference to fig. 3, the device includes:
the receiving module 301: the API-server for the cluster receives a request from a user to create a cached data set.
Specifically, in the embodiment of the present application, a custom resource such as a cache data set is registered in Kubernetes, so that when a user subsequently creates a resource of this kind, the receiving module 301 in the API server of the cluster can perceive the user's creation request.
The creation module 302: and the cache controller monitors the request for creating the cache data set task and calls a callback function to create the cache data set according to the user self-defined value.
Specifically, in the embodiment of the present application, the user-defined values may be fields of the cache resource, such as the capacity of the cache data set; the state of the cache data set, such as cache initialization, cache synchronization, or cache completion; the data path of the underlying storage of the cache, and the like. The data cache controller watches for request events on the custom resource, such as a user's request to create a cache data set, and then the creation module 302 calls a callback function inside the controller to create the cache data set; the callback function calls the API interfaces of the various cache engines to cache the fetched data from the underlying storage into the memory or SSD of some node.
The issuing module 303: and the cache scheduler receives the request for creating the cache data set, and issues the cache data set according to a corresponding scheduling strategy, so that the cache data set is successfully created.
Specifically, in the embodiment of the present application, through the extended cache scheduler the issuing module 303 may issue the cache data set according to the corresponding scheduling policy, so that a user can access more scheduling policies, cache management has richer means, more complex scenarios can be adapted to, and the cache data set is created successfully.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a processor and a memory;
the processor is used for executing the data cache management and scheduling method according to any one of the above by calling the program or the instruction stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a program or instructions, and the program or instructions cause a computer to execute any one of the above methods for managing and scheduling a data cache.
Fig. 4 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
As shown in Fig. 4, the electronic device includes: at least one processor 401, at least one memory 402, and at least one communication interface 403. The various components in the electronic device are coupled together by a bus system 404. The communication interface 403 is used for information transmission with external devices. It is understood that the bus system 404 is used to enable communication among these components. In addition to a data bus, the bus system 404 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, the various buses are all labeled as the bus system 404 in Fig. 4.
It will be appreciated that the memory 402 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
In some embodiments, memory 402 stores the following elements, executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system and an application program.
The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, and is used to implement various basic services and to process hardware-based tasks. The application programs, including various application programs such as a Media Player and a Browser, are used to implement various application services. A program implementing any of the data cache management and scheduling methods provided in the embodiments of the present application may be included in an application program.
In this embodiment of the present application, the processor 401 calls a program or instructions stored in the memory 402, specifically a program or instructions stored in an application program, and the processor 401 is configured to execute the steps of the embodiments of the data cache management and scheduling method provided in the embodiments of the present application, for example:
An API-server of the cluster receives a request of a user for creating a cache data set;
the cache controller monitors a request for creating a cache data set task, and calls a callback function to create the cache data set according to a value defined by a user;
the cache scheduler receives a request for creating a cache data set, and issues the cache data set according to a corresponding scheduling strategy, and the cache data set is successfully created;
the user-defined values include: capacity of the cached data set, status of the cached data set, data path of underlying storage of the cached data set.
Any of the data cache management and scheduling methods provided in the embodiments of the present application may be applied to the processor 401, or implemented by the processor 401. The processor 401 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The steps of any of the data cache management and scheduling methods provided in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software units in the decoding processor. The software units may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and completes the steps of the data cache management and scheduling method in combination with its hardware.
Those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments instead of others, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments.
Those skilled in the art will appreciate that the description of each embodiment has a respective emphasis, and reference may be made to the related description of other embodiments for those parts of an embodiment that are not described in detail.
Although the embodiments of the present application have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the present application, and such modifications and variations fall within the scope defined by the appended claims.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A management and scheduling method of data cache is applied to a system comprising a cache controller and a cache scheduler, and comprises the following steps:
an API-server of the cluster receives a request of a user for creating a cache data set;
the cache controller monitors the request for creating the cache data set task, and calls a callback function to create the cache data set according to a value defined by a user;
the cache scheduler receives a request for creating the cache data set, and issues the cache data set according to a corresponding scheduling strategy, wherein the cache data set is created successfully;
wherein the user-defined values comprise: capacity of the cached data set, status of the cached data set, and data paths of underlying storage of the cached data set.
2. The method of claim 1, further comprising:
the API-server of the cluster receives a task creation request with the cache data set submitted by a user;
a native controller of the cluster receives a creation request of the task;
the cache scheduler issues resources to the nodes according to the information of the cache data set, the creation information of the pod, the information of the nodes and the corresponding scheduling strategy information;
the agent creates the task at the node.
3. The method of claim 2, further comprising:
the cache scheduler acquires the information of the cache data set and the information of the node from an API-server of the cluster;
wherein the information of the cached data set comprises: capacity of a cache data set, state of the cache data set, and data path of bottom storage of the cache data set;
the information of the node includes: the total memory size of each machine and the node's own allocatable memory size;
and the information of the Cache data set and the information of the node are both placed in the Cache.
4. The method for managing and scheduling a data cache of claim 1,
the cache scheduler comprises a session module, wherein the session module determines a proper node according to a scheduling strategy of the cluster and the information of the current cluster, binds the resource with the node name, and calls an agent of the node to create the resource.
5. The method for managing and scheduling a data cache of claim 1,
the scheduling policy is loaded on the cache scheduler in a plug-in mode, a user specifies the name of the scheduling policy when the user uses the scheduling policy, and the cache scheduler automatically selects the corresponding scheduling policy to schedule the resources when the cache scheduler schedules the resources.
6. The method of claim 1, further comprising:
the cache controller receives a request from a user to update and delete the cache data set, and updates and deletes the cache data set.
7. The method of claim 1, further comprising: the cache controller automatically maintains a life cycle of the cached data set.
8. A management and scheduling device for data cache, which is applied to a system comprising a cache controller and a cache scheduler, comprises:
a receiving module: an API-server for a cluster receives a request of a user for creating a cache data set;
a creation module: the cache controller monitors the request for creating the cache data set task, and calls a callback function to create the cache data set according to a value defined by a user;
a sending module: the cache scheduler receives the request for creating the cache data set, and issues the cache data set according to a corresponding scheduling strategy, and the cache data set is successfully created;
wherein the user-defined values comprise: capacity of the cached data set, status of the cached data set, and data paths of underlying storage of the cached data set.
9. An electronic device, comprising: a processor and a memory;
the processor is configured to execute a method of managing and scheduling a data cache according to any one of claims 1 to 7 by calling a program or instructions stored in the memory.
10. A computer-readable storage medium storing a program or instructions for causing a computer to perform a method of managing and scheduling a data cache according to any one of claims 1 to 7.
CN202111250995.3A 2021-10-26 2021-10-26 Data cache management and scheduling method and device, electronic equipment and storage medium Pending CN113961346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111250995.3A CN113961346A (en) 2021-10-26 2021-10-26 Data cache management and scheduling method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113961346A true CN113961346A (en) 2022-01-21

Family

ID=79467284


Country Status (1)

Country Link
CN (1) CN113961346A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697325A (en) * 2022-03-15 2022-07-01 浪潮云信息技术股份公司 Automatic deployment method and operation and maintenance device for cluster virtualization resource management platform cache equipment
CN115114361A (en) * 2022-06-25 2022-09-27 上海道客网络科技有限公司 Resource retrieval method and system based on container cloud platform unified interface
CN115114361B (en) * 2022-06-25 2023-07-07 上海道客网络科技有限公司 Resource retrieval method and system based on unified interface of container cloud platform
CN115964176A (en) * 2023-01-05 2023-04-14 海马云(天津)信息技术有限公司 Cloud computing cluster scheduling method, electronic device and storage medium
CN117033000A (en) * 2023-10-09 2023-11-10 合肥中科类脑智能技术有限公司 Data scheduling method and equipment
CN117033000B (en) * 2023-10-09 2024-01-05 合肥中科类脑智能技术有限公司 Data scheduling method and equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination