CN110389842B - Dynamic resource allocation method, device, storage medium and equipment - Google Patents


Info

Publication number: CN110389842B
Application number: CN201910681471.6A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN110389842A
Inventors: 杨小可, 雷赛龄, 张游, 孟少川
Current assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Original assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Legal status: Active (granted)
Prior art keywords: amount, resource amount, parallelism, resource, target job

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

The invention discloses a dynamic resource allocation method, apparatus, storage medium and device. The method comprises the following steps: acquiring the total amount of resources in a resource pool, the used and available resource amounts of a target job, and the parallelism of the target job; monitoring the execution time of the target job in the data shuffle stage; if the execution time of the shuffle stage exceeds a preset duration, adjusting the used and available resource amounts and the parallelism of the target job according to a predetermined rule; determining the remaining resource amount in the resource pool from the total resource amount and the adjusted available resource amount of the target job; judging whether the adjusted available resource amount is smaller than the remaining resource amount; and if so, continuing to monitor the execution time of the target job in the shuffle stage. By dynamically adjusting the parallelism, the method and apparatus improve the operating efficiency of a big data framework.

Description

Dynamic resource allocation method, device, storage medium and equipment
Technical Field
The present application relates to the field of big data analysis and processing, and in particular, to a dynamic resource allocation method, apparatus, storage medium, and device.
Background
With the development of informatization, the data that enterprises must process has grown explosively, with data volumes reaching the TB and PB levels. To support the analysis and processing of such large-scale data, various big data frameworks, tools and technologies have emerged, Spark among them. Spark is a big data processing framework built around speed, ease of use and sophisticated analytics; it takes the Map-Reduce model to a higher level with its data shuffle (Shuffle) mechanism in the data processing flow, and by exploiting in-memory data storage and near-real-time processing it can run many times faster than other big data processing technologies.
At present, when Spark is used for job computation, a fixed parallelism parameter is usually set for jobs that contain shuffle operations, and the parallelism cannot be adjusted dynamically. As a result, too small a resource configuration easily causes memory overflow, while too large a configuration means the job cannot actually obtain the resources it applies for. How to adjust the parallelism dynamically so as to improve the performance of a big data framework is therefore a problem to be solved in the prior art.
Disclosure of Invention
An object of the embodiments of the present application is to provide a dynamic resource allocation method, apparatus, storage medium and device that improve the operating performance of a big data framework by dynamically adjusting the parallelism.
To achieve the above object, an embodiment of the present application provides a dynamic resource allocation method, including:
acquiring the total amount of resources in a resource pool, the used and available resource amounts of a target job, and the parallelism of the target job;
monitoring the execution time of the target job in a data shuffle stage;
if the execution time of the shuffle stage exceeds a preset duration, adjusting the used resource amount, the available resource amount and the parallelism of the target job according to a predetermined rule;
determining the remaining resource amount in the resource pool according to the total resource amount in the pool and the adjusted available resource amount of the target job;
judging whether the adjusted available resource amount is smaller than the remaining resource amount;
and if the adjusted available resource amount is smaller than the remaining resource amount, continuing to monitor the execution time of the target job in the shuffle stage.
Preferably, adjusting the available resource amount allocated to the target job and the parallelism of the target job according to the predetermined rule includes:
doubling the available resource amount, setting the used resource amount to one third of the doubled available resource amount, and doubling the parallelism;
judging whether the adjusted parallelism exceeds three times the adjusted used resource amount;
and if so, setting the parallelism to three times the adjusted used resource amount.
Preferably, if the adjusted parallelism does not exceed three times the adjusted used resource amount, the doubled parallelism is retained.
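The predetermined rule above can be sketched as a short function. This is only an illustrative reading of the rule; the function and variable names are not from the patent, and resource amounts are taken to be CPU counts:

```python
def adjust_parameters(available, used, parallelism):
    """One round of the predetermined adjustment rule (illustrative sketch)."""
    available *= 2                 # double the available resource amount
    used = available // 3          # used amount becomes one third of the new available amount
    parallelism *= 2               # double the parallelism
    if parallelism > 3 * used:     # beyond three times the used resources,
        parallelism = 3 * used     # tasks queue serially, so cap the parallelism
    return available, used, parallelism
```

With the figures used later in the description (used 10, available 30, parallelism 10), one round yields (60, 20, 20): the doubled parallelism is kept because 20 does not exceed three times 20.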
Preferably, if the adjusted available resource amount is greater than or equal to the remaining resource amount, the execution time of the target job in the shuffle stage is no longer monitored and an alarm signal is generated.
Preferably, the preset duration is a preset job-completion duration of the target job.
Preferably, the total amount of resources in the resource pool and the available resource amount allocated to the target job are expressed as numbers of CPUs.
An embodiment of the present application further provides a dynamic resource allocation apparatus, including:
the data acquisition module, configured to acquire the total amount of resources in a resource pool, the used and available resource amounts of a target job, and the parallelism of the target job;
the execution-time monitoring module, configured to monitor the execution time of the target job in the data shuffle stage;
the parameter adjustment module, configured to adjust the available resource amount allocated to the target job and the parallelism of the target job according to a predetermined rule if the execution time of the shuffle stage exceeds a preset duration;
the remaining-resource determining module, configured to determine the remaining resource amount in the resource pool according to the total resource amount in the pool and the adjusted available resource amount of the target job;
the first judging module, configured to judge whether the adjusted available resource amount is smaller than the remaining resource amount;
and the loop module, configured to continue monitoring the execution time of the target job in the shuffle stage if the adjusted available resource amount is smaller than the remaining resource amount.
Preferably, the parameter adjusting module includes:
the parameter setting unit, configured to double the available resource amount, set the used resource amount to one third of the available resource amount, and double the parallelism;
the second judging module, configured to judge whether the adjusted parallelism exceeds three times the adjusted used resource amount;
and the parallelism setting unit, configured to set the parallelism to three times the adjusted used resource amount if it does.
The embodiment of the present application further provides a computer device, which includes a processor and a memory for storing processor-executable instructions, where the processor executes the instructions to implement the above steps.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon computer instructions, which when executed, implement the above steps.
As can be seen from the technical solutions provided in the embodiments above, the used resource amount, available resource amount and parallelism of the job are adjusted dynamically by monitoring the execution time of the data shuffle stage, so as to find more suitable configuration parameters and thereby improve the operating efficiency of the big data framework.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some of the embodiments described in the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a dynamic resource allocation method in an embodiment of the present application;
FIG. 2 is a data processing flow diagram of dynamic resource allocation according to an embodiment of the present application;
FIG. 3 is a flow chart of data processing of a configuration parameter adjustment module for dynamic resource allocation according to an embodiment of the present application;
FIG. 4 is a data processing flow diagram of a dynamic resource allocation determination module according to an embodiment of the present disclosure;
fig. 5 is a schematic block diagram of a dynamic resource allocation apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The embodiment of the application provides a dynamic resource allocation method, a dynamic resource allocation device, a storage medium and equipment.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application shall fall within the scope of protection of the present application.
In a big data framework, the total amount of resources a job may use is managed in a resource pool. When a big data task runs, the system allocates corresponding resources to the job: typically the number of executor processes, the number of CPUs per executor, and the memory size, though other resources may also be included, which this application does not limit. Once resources are allocated, the parallelism parameter must be set to match them so that the allocated resources are fully utilized; this greatly affects the operating efficiency of the big data framework.
Taking the big data framework Spark as an example, parallelism refers to the number of tasks in each stage of a Spark job. For example, if 100 CPUs are allocated to a job, 100 tasks can execute in parallel; the parallelism therefore needs to be at least 100 to make full, effective use of the cluster resources and ultimately improve the performance and execution speed of the whole Spark job. In the prior art, however, a fixed parallelism parameter is usually set once resources have been allocated, so dynamically adjusting the parallelism parameter becomes the key to improving the operating performance of the big data framework.
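As a back-of-envelope illustration of the point above, the fraction of allocated CPUs kept busy can be estimated by counting the "waves" of tasks, assuming equal-length tasks (the helper below is a simplification of my own, not a Spark API):

```python
import math

def average_utilization(cpus, parallelism):
    """Busy task-slots divided by total task-slots over all waves of tasks."""
    waves = math.ceil(parallelism / cpus)   # tasks run in ceil(parallelism / cpus) waves
    return parallelism / (waves * cpus)
```

With 100 CPUs, a parallelism of 50 leaves half the CPUs idle (utilization 0.5), while a parallelism of 100 uses them fully (1.0), which is why the parallelism should be at least the number of allocated CPUs.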
Referring to fig. 1 and fig. 2, a flow chart of dynamic resource allocation provided by the present application is shown. The method specifically comprises the following steps:
s101: and acquiring the total amount of resources in the resource library, the used resource amount and the available resource amount of the target operation and the parallelism of the target operation.
In some embodiments, determining the total amount of resources in the resource pool, i.e. the upper limit of resources that can be acquired, may be the total number of CPUs in the cluster; the amount of resources used for the target job may be the number of CPUs currently running the target job, and the amount of resources available for the target job refers to the maximum number of CPUs that can run the target job.
S102: monitoring an execution time of the target job in a data shuffling stage.
S103: and if the execution time of the data shuffling stage is greater than the preset time, adjusting the resource amount and the parallelism of the target operation according to a preset rule.
The data shuffle stage (shuffle stage) covers the process from the output of the map tasks to the input of the reduce tasks. Shuffle performance directly affects the performance and throughput of the whole program, so the execution time of the shuffle stage is monitored to judge how the job is running.
In some embodiments, the execution time of the shuffle stage is monitored and compared with a preset duration; if the execution time exceeds the preset duration, the resource amounts occupied by the job and its parallelism are adjusted. The preset duration may be a preset completion time of the target job.
In a specific embodiment, the configuration-parameter adjustment module in fig. 2 (whose flow is shown in fig. 3) may double the available resource amount allocated to the target job, set the currently used resource amount to one third of the new available amount, and then double the parallelism parameter. When the parallelism exceeds three times the number of executors, parallel tasks pile up and run serially, so efficiency can no longer be maximized. Therefore, after the parallelism parameter is doubled, it is judged whether the doubled value exceeds three times the current used resource amount: if it does, the parallelism is set to three times the used resource amount; otherwise the doubled value is kept.
For example, suppose the target job's used resource amount is 10 CPUs, its available resource amount is 30 CPUs, and its parallelism is 10. If the execution time of the shuffle stage is found to exceed the preset duration, the method above adjusts the available resource amount to 60 CPUs, the current used resource amount to 20 CPUs, and the parallelism to 20. The parallelism at this point is less than three times the used resource amount, so the parallelism parameter remains 20.
S104: determine the remaining resource amount in the resource pool according to the total resource amount and the adjusted available resource amount of the target job.
In some embodiments, the adjusted available resource amount allocated to the target job is submitted through the interface of the resource pool, and the remaining resource amount in the pool is determined.
S105: judge whether the adjusted available resource amount is smaller than the remaining resource amount.
Refer to the flow shown in fig. 4.
S106: if the adjusted available resource amount is smaller than the remaining resource amount, continue monitoring the execution time of the target job in the shuffle stage.
For example, suppose the total amount of resources in the resource pool is 100 CPUs and, after adjustment, the available resource amount allocated to the target job is 60 CPUs, so the remaining resource amount is 40 CPUs. As long as the adjusted available resource amount is still smaller than the remaining resource amount, the execution time of the target job in the shuffle stage continues to be monitored. Once the adjusted available resource amount is greater than or equal to the remaining resource amount (as with 60 CPUs against 40, where a further doubling would exceed the pool total), the upper limit of available resources has been reached, dynamic tuning exits, and alarm information is issued.
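Following the comparison as worded in claim 1 (the remaining amount is the pool total minus the adjusted available amount, and monitoring continues only while the adjusted available amount is smaller than that remainder, i.e. while another doubling still fits in the pool), the termination check of S104 to S106 can be sketched as follows; the names are illustrative, not from the patent:

```python
def check_after_adjustment(total, adjusted_available):
    """Decide whether tuning continues (S106) or stops with an alarm."""
    remaining = total - adjusted_available   # S104: remaining amount in the pool
    if adjusted_available < remaining:       # S105: room for further growth
        return ("continue-monitoring", remaining)
    return ("alarm", remaining)              # upper limit reached: exit tuning
```

With a 100-CPU pool, an adjusted available amount of 30 CPUs continues monitoring (70 remain), while 60 CPUs takes the alarm branch, since a further doubling to 120 would exceed the pool.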
Referring to fig. 5, the present application further provides a dynamic resource allocation apparatus, including:
a data acquisition module 411, configured to acquire the total amount of resources in a resource pool, the used and available resource amounts of a target job, and the parallelism of the target job;
an execution-time monitoring module 412, configured to monitor the execution time of the target job in the data shuffle stage;
a parameter adjustment module 413, configured to adjust the available resource amount allocated to the target job and the parallelism of the target job according to a predetermined rule if the execution time of the shuffle stage exceeds a preset duration;
a remaining-resource determining module 414, configured to determine the remaining resource amount in the resource pool according to the total resource amount in the pool and the adjusted available resource amount of the target job;
a first judging module 415, configured to judge whether the adjusted available resource amount is smaller than the remaining resource amount;
and a loop module 416, configured to continue monitoring the execution time of the target job in the shuffle stage if the adjusted available resource amount is smaller than the remaining resource amount.
As shown in fig. 6, the present application further provides a computer device, which includes a processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the steps of the method are implemented.
The present application also provides a computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the above-described method.
In situations where the data volume actually computed by a job exceeds expectations, or varies from period to period, the method and apparatus of the present application avoid the memory overflow, or the failure to apply for sufficient resources, that a fixed parallelism configuration would cause, thereby reducing the number of job failures. At the same time, the time spent on dynamic scaling of Spark resource allocation is reduced: by monitoring the execution time of the shuffle stage and adjusting the number of executors, relatively suitable minExecutors and maxExecutors values are found, which improves job efficiency and lets the job's parallelism be adjusted automatically while resource allocation scales up and down.
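In Spark's own vocabulary, the minExecutors and maxExecutors values the method converges on correspond to the standard dynamic-allocation settings. A hypothetical configuration using the example figures above (min 10, max 60, parallelism 20; the values are illustrative, not prescribed by the patent) might be:

```python
# Standard Spark configuration keys; the values mirror the worked example above.
tuned_conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "10",   # lower bound found by tuning
    "spark.dynamicAllocation.maxExecutors": "60",   # upper bound found by tuning
    "spark.default.parallelism": "20",              # tasks per stage (RDD jobs)
    "spark.sql.shuffle.partitions": "20",           # shuffle partitions (SQL jobs)
}
```

Such pairs would typically be passed via SparkConf or as --conf options to spark-submit.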
In the 1990s, an improvement in a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to circuit structures such as diodes, transistors or switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Thus it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a given logical method flow can be readily obtained simply by programming the method flow into an integrated circuit using one of the hardware description languages described above.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The apparatuses and modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations as the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general hardware platform. With this understanding, the technical solution, or the portions of it that contribute over the prior art, may be embodied in the form of a software product. A typical implementing configuration includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The computer software product may include instructions for causing a computing device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments, or portions of embodiments, of the present application. The computer software product may be stored in memory, which may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As defined herein, computer readable media does not include transitory computer readable media (transient media), such as modulated data signals and carrier waves.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the present application has been described by way of embodiments, those of ordinary skill in the art will appreciate that numerous variations and permutations of the present application are possible without departing from its spirit, and it is intended that the appended claims cover such variations and permutations.

Claims (8)

1. A method for dynamic resource allocation, comprising:
acquiring the total amount of resources in a resource pool, the used and available resource amounts of a target job, and the parallelism of the target job;
monitoring the execution time of the target job in a data shuffle stage;
if the execution time of the shuffle stage exceeds a preset duration, adjusting the used resource amount, the available resource amount and the parallelism of the target job according to a predetermined rule;
determining the remaining resource amount in the resource pool according to the total resource amount in the pool and the adjusted available resource amount of the target job;
judging whether the adjusted available resource amount is smaller than the remaining resource amount;
if the adjusted available resource amount is smaller than the remaining resource amount, continuing to monitor the execution time of the target job in the shuffle stage;
wherein adjusting the available resource amount allocated to the target job and the parallelism of the target job according to the predetermined rule comprises:
doubling the available resource amount, setting the used resource amount to one third of the available resource amount, and doubling the parallelism;
judging whether the adjusted parallelism exceeds three times the adjusted used resource amount;
and if so, setting the parallelism to three times the adjusted used resource amount.
2. The method of claim 1, wherein the parallelism is set to a doubled value if the adjusted parallelism is less than three times the adjusted amount of used resources.
3. The method of claim 1, wherein if the adjusted available resource amount is greater than or equal to the remaining resource amount, the execution time of the target job in the data shuffling stage is no longer monitored and an alarm signal is generated.
4. The method of claim 1, wherein the preset duration is a preset job completion duration for the target job.
5. The method of claim 1, wherein the total resource amount in the resource pool and the used resource amount and the available resource amount of the target job each comprise a number of CPUs.
6. A dynamic resource allocation apparatus, comprising:
a data acquisition module, configured to acquire a total resource amount in a resource pool, a used resource amount and an available resource amount of a target job, and a parallelism of the target job;
an execution time monitoring module, configured to monitor an execution time of the target job in a data shuffling stage;
a parameter adjustment module, configured to adjust the used resource amount and the available resource amount of the target job and the parallelism of the target job according to a preset rule if the execution time of the data shuffling stage is longer than a preset duration;
a remaining resource amount determining module, configured to determine a remaining resource amount in the resource pool according to the total resource amount in the resource pool and the adjusted available resource amount of the target job;
a first judging module, configured to judge whether the adjusted available resource amount is smaller than the remaining resource amount;
a loop module, configured to continue monitoring the execution time of the target job in the data shuffling stage if the adjusted available resource amount is smaller than the remaining resource amount;
wherein the parameter adjustment module comprises:
a parameter setting unit, configured to double the available resource amount, adjust the used resource amount to one third of the available resource amount, and double the parallelism;
a second judging unit, configured to judge whether the adjusted parallelism is more than three times the adjusted used resource amount;
a parallelism setting unit, configured to set the parallelism to three times the adjusted used resource amount if the adjusted parallelism is more than three times the adjusted used resource amount.
7. A computer device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 5.
8. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 5.
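The adjustment rule recited in claims 1–3 can be sketched in Python. This is an illustrative sketch only: the function names, the integer CPU-count model (per claim 5), and the "monitor"/"alarm" return codes are assumptions for illustration, not part of the claimed implementation.

```python
def adjust(available, used, parallelism):
    """Preset rule of claim 1: double the available resource amount, set the
    used amount to one third of the new available amount, double the
    parallelism, then cap it at three times the adjusted used amount."""
    available *= 2
    used = available // 3          # resource amounts counted in CPUs (claim 5)
    parallelism *= 2
    if parallelism > 3 * used:     # cap of claim 1; otherwise the doubled
        parallelism = 3 * used     # value stands (claim 2)
    return available, used, parallelism


def monitor_round(total, available, used, parallelism,
                  shuffle_time, preset_duration):
    """One monitoring round: apply the preset rule when the shuffle stage
    exceeded the preset duration, then either keep monitoring or raise an
    alarm depending on the remaining amount in the pool (claims 1 and 3)."""
    if shuffle_time <= preset_duration:
        return available, used, parallelism, "monitor"
    available, used, parallelism = adjust(available, used, parallelism)
    remaining = total - available  # remaining amount after the adjustment
    if available < remaining:
        return available, used, parallelism, "monitor"
    return available, used, parallelism, "alarm"
```

For example, a job with 12 available CPUs, 4 used and parallelism 10 is adjusted to 24 available, 8 used and parallelism 20 (20 is below the 3×8 cap, so the doubled value stands); if only 30 CPUs exist in the pool, the 24 adjusted CPUs exceed the 6 remaining and an alarm is raised.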
CN201910681471.6A 2019-07-26 2019-07-26 Dynamic resource allocation method, device, storage medium and equipment Active CN110389842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910681471.6A CN110389842B (en) 2019-07-26 2019-07-26 Dynamic resource allocation method, device, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN110389842A CN110389842A (en) 2019-10-29
CN110389842B true CN110389842B (en) 2022-09-20

Family

ID=68287614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910681471.6A Active CN110389842B (en) 2019-07-26 2019-07-26 Dynamic resource allocation method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN110389842B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597037B * 2020-04-15 2023-06-16 Zhongdian Jinxin Software Co., Ltd. Job allocation method and apparatus, electronic equipment and readable storage medium
CN116578397A * 2020-04-20 2023-08-11 Alipay (Hangzhou) Information Technology Co., Ltd. Data resource processing method, apparatus, equipment and storage medium
CN111858030B * 2020-06-17 2024-03-22 Beijing Baidu Netcom Science and Technology Co., Ltd. Resource processing method and apparatus for jobs, electronic equipment and readable storage medium
CN112463290A * 2020-11-10 2021-03-09 China Construction Bank Corp. Method, system, apparatus and storage medium for dynamically adjusting the number of computing containers
CN113010551B * 2021-03-02 2022-05-10 Beijing Sankuai Online Technology Co., Ltd. Resource caching method and device
CN113391911B * 2021-07-05 2024-03-26 Industrial and Commercial Bank of China Ltd. Dynamic scheduling method, device and equipment for big data resources
CN114371975A * 2021-12-21 2022-04-19 Inspur Communication Information System Co., Ltd. Big data component parameter adjustment method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189563A (en) * 2018-07-25 2019-01-11 Tencent Technology (Shenzhen) Co., Ltd. Resource scheduling method and apparatus, computing device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9342355B2 * 2013-06-20 2016-05-17 International Business Machines Corporation Joint optimization of multiple phases in large data processing
CN106033371B * 2015-03-13 2019-06-21 Hangzhou Hikvision Digital Technology Co., Ltd. Method and system for scheduling video analysis tasks
US20170093966A1 * 2015-09-28 2017-03-30 International Business Machines Corporation Managing a shared pool of configurable computing resources having an arrangement of a set of dynamically-assigned resources
CN105426254A * 2015-12-24 2016-03-23 Beijing Qingyuan Technology Co., Ltd. Graded cloud computing resource customizing method and system
CN109413125A * 2017-08-18 2019-03-01 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and apparatus for dynamically adjusting distributed system resources
CN109522100B * 2017-09-19 2023-03-31 Alibaba Group Holding Ltd. Real-time computing task adjustment method and device
CN109324894A * 2018-08-13 2019-02-12 Zhongxing Feiliu Information Technology Co., Ltd. Cluster computing method and apparatus, and computer readable storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Spark Load Balancing Strategy Optimization Based on Internet of Things; Suzhen Wang; 2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery; 2019-02-21; full text *
Research on Dynamic Resource Allocation Technology in the Spark Data Processing Platform; Yang Mangmang; China Master's Theses Full-text Database (Information Science and Technology); 2017-03-15; Vol. 2017, No. 3; I138-3817 *
A Spark Performance Prediction Model Based on Critical-Stage Analysis; Ge Qingbao et al.; Computer Systems & Applications; 2018-08-15; No. 8; full text *

Also Published As

Publication number Publication date
CN110389842A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN110389842B (en) Dynamic resource allocation method, device, storage medium and equipment
CN107431696B (en) Method and cloud management node for application automation deployment
US9928245B2 (en) Method and apparatus for managing memory space
CN108089856B (en) Page element monitoring method and device
CN107770088B (en) Flow control method and device
CN110401700B (en) Model loading method and system, control node and execution node
US9229775B2 (en) Dynamically adjusting global heap allocation in multi-thread environment
JP7039631B2 (en) Methods, devices, devices, and storage media for managing access requests
CN116167463B (en) Distributed model training container scheduling method and device for intelligent computing
US8769233B2 (en) Adjusting the amount of memory allocated to a call stack
US9128754B2 (en) Resource starvation management in a computer system
CN110737717A (en) database migration method and device
CN113760658A (en) Monitoring method, device and equipment
CN113590285A (en) Method, system and equipment for dynamically setting thread pool parameters
WO2023217118A1 (en) Code test method and apparatus, and test case generation method and apparatus
CN112596898A (en) Task executor scheduling method and device
CN110019497B (en) Data reading method and device
CN105740073A (en) Method and apparatus for dynamically controlling quantity of operation system processes
CN115617494A (en) Process scheduling method and device in multi-CPU environment, electronic equipment and medium
CN111435327A (en) Log record processing method, device and system
CN115033459A (en) CPU utilization monitoring method and device and storage medium
CN111459474B (en) Templated data processing method and device
CN110908870B (en) Method and device for monitoring resources of mainframe, storage medium and equipment
CN106648550B (en) Method and device for concurrently executing tasks
CN110297714B (en) Method and device for acquiring PageRank based on large-scale graph dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant