CN116320033A - Resource scheduling optimization method and device - Google Patents

Resource scheduling optimization method and device

Info

Publication number
CN116320033A
Authority
CN
China
Prior art keywords
function
preset
resource scheduling
cloud platform
scheduling optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310272020.3A
Other languages
Chinese (zh)
Inventor
彭飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202310272020.3A priority Critical patent/CN116320033A/en
Publication of CN116320033A publication Critical patent/CN116320033A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/02 - Topology update or discovery
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 - Discovery or management of network topologies
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 - Peer-to-peer [P2P] networks
    • H04L 67/1074 - Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], i.e. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention relates to the technical field of cloud functions, and discloses a resource scheduling optimization method and device applied to a functional service cloud platform. According to the method and the device, a function call network relationship topology graph is generated; the number of times each function is called is calculated using the topology graph and a preset call calculation model; and various resources, including cache, are allocated to each function according to that call frequency. This solves the technical problems in the prior art that the running environment of each function in a cloud function platform is fixed, container resources cannot be dynamically adjusted, and a caching mechanism is lacking, and achieves the technical effects of improving the resource utilization rate and running efficiency of cloud functions.

Description

Resource scheduling optimization method and device
Technical Field
The embodiment of the invention relates to the technical field of cloud functions, in particular to a resource scheduling optimization method and device.
Background
Function as a Service (FaaS) is a new way of building and deploying server-side software based on cloud functions. FaaS, also commonly referred to simply as cloud functions, allows code to be executed in response to events without building the complex infrastructure associated with launching microservice applications. Hosting software applications on the cloud typically requires configuring and managing virtual servers, operating systems, and web services, whereas with FaaS the physical hardware, virtual resources, operating system, and web services are handled automatically by the FaaS provider, so that developers can focus on developing the individual functions in their application.
Although FaaS has many advantages, it is still at a stage of popularization and development, and its related functionality remains limited, rigid, and weakly linked. When a cloud function is triggered, the server initializes the function's running environment, and the size of the resources allocated to that environment is usually defined in advance and fixed. If the function's subsequent running load becomes too large, only human intervention is possible, and container resources cannot be adjusted promptly, frequently, and dynamically. FaaS also lacks a caching mechanism, which makes running times lengthy.
Disclosure of Invention
The embodiment of the invention provides a resource scheduling optimization method and device, which solve the technical problems that the running environment of each function in cloud functions is solidified, container resources cannot be dynamically adjusted and a caching mechanism is lacked in the prior art.
In a first aspect, the present application provides a resource scheduling optimization method applied to a functional service cloud platform, where the method includes:
generating a function call network relation topological graph based on a preset routing table, wherein the preset routing table is used for recording basic information of each function in the cloud platform and call relations among the functions;
determining the called times of each function in different time periods based on the function call network relation topological graph and a preset call calculation model, wherein the preset call calculation model is a model which is obtained in advance and used for calculating the use frequency of each function in the cloud platform;
and based on the calling times, allocating corresponding resources for each function in the cloud platform by using a preset scheduling tool, wherein the allocated resources at least comprise computing resources and cache resources.
In a second aspect, the present application provides a resource scheduling optimization device configured to a functional service cloud platform, where the device includes:
the topology generation unit is used for generating a function call network relation topology diagram based on a preset routing table, wherein the preset routing table is used for recording basic information of each function in the cloud platform and call relations among the functions;
the call frequency calculation unit is used for determining the call times of each function in different time periods based on the function call network relation topological graph and a preset call calculation model, wherein the preset call calculation model is a model which is obtained through pre-training and used for calculating the use frequency of each function in the cloud platform;
and the resource allocation unit is used for allocating corresponding resources for each function in the cloud platform by utilizing a preset scheduling tool based on the calling times, wherein the allocated resources at least comprise computing power resources and cache resources.
In a third aspect, the present application provides a resource scheduling optimization apparatus, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the resource scheduling optimization method of the first aspect of the present application.
In a fourth aspect, the present application provides a computer readable storage medium storing computer instructions for causing a processor to implement the resource scheduling optimization method of the first aspect of the present application when executed.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the resource scheduling optimization method of the first aspect of the present application.
The embodiment of the invention discloses a resource scheduling optimization method and device applied to a functional service cloud platform, wherein the method comprises: generating a function call network relationship topology graph based on a preset routing table; determining the number of times each function is called in different time periods based on the topology graph and a preset call calculation model; and allocating corresponding resources, at least including computing power resources and cache resources, to each function in the cloud platform using a preset scheduling tool based on the number of calls. By generating the function call network relationship topology graph, calculating the called frequency of each function with the topology graph and the preset call calculation model, and allocating various resources including cache to each function according to that frequency, the method and device solve the technical problems in the prior art that the running environment of each function in a cloud function platform is fixed, container resources cannot be dynamically adjusted, and a caching mechanism is lacking, and achieve the technical effects of improving the resource utilization rate and running efficiency of cloud functions.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a resource scheduling optimization method provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a resource scheduling optimization device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a resource scheduling optimization device according to an embodiment of the present invention.
Detailed Description
In order to make the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a resource scheduling optimization method provided by an embodiment of the present invention, where the method is applied to a functional service cloud platform, and may be executed by a resource scheduling optimization device, where the resource scheduling optimization device may be implemented in a form of hardware and/or software, and may be generally integrated in a server. The data acquisition, storage, use, processing and the like in the technical scheme meet the relevant regulations of national laws and regulations.
As shown in fig. 1, the resource scheduling optimization method specifically includes the following steps:
s101, generating a function call network relation topological graph based on a preset routing table, wherein the preset routing table is used for recording basic information of each function in the cloud platform and call relations among the functions.
Specifically, a routing system is built in the functional service cloud platform, a preset routing table is initialized before the routing system operates, the cloud platform can generate a function call network relationship topological graph based on call relationships among functions in the preset routing table, and the topological graph can clearly indicate call relationships among parent-level routes and child-level routes of the functions.
The preset routing table stores routing relations among functions in the cloud platform in a database, specifically, the preset routing table contains basic information of the functions and calling relations among the functions, the basic information of the functions comprises function names, corresponding routing names and effective marks, the calling relations among the functions comprise father-level routes of the functions and routing weights, the routing weights represent weight values of the father-level routes corresponding to the functions, and when the functions are operated, the functions with larger routing weights can be operated preferentially according to the preset routing table.
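As a concrete illustration of the routing table and topology graph described above, the following sketch shows one possible in-memory shape. The field names (function, route, enabled, parents, weight) and route names are assumptions for illustration, not the patent's actual schema:

```python
# Hypothetical sketch of the preset routing table: each entry records a
# function's basic information and its parent routes with routing weights.
from collections import defaultdict

routing_table = [
    {"function": "query_identity", "route": "/identity", "enabled": True,
     "parents": []},  # no parent route: an entry point
    {"function": "risk_check", "route": "/risk", "enabled": True,
     "parents": [{"route": "/identity", "weight": 10}]},
    {"function": "report", "route": "/report", "enabled": True,
     "parents": [{"route": "/identity", "weight": 5},
                 {"route": "/risk", "weight": 8}]},
]

def parents_in_run_order(entry):
    """Parent routes of a function ordered by routing weight, high to low:
    higher-weight parents run first, as the text describes."""
    return [p["route"] for p in
            sorted(entry["parents"], key=lambda p: p["weight"], reverse=True)]

def build_topology(table):
    """Adjacency map (parent route -> child routes) forming the function
    call network relationship topology graph."""
    children = defaultdict(list)
    for entry in table:
        for parent in entry["parents"]:
            children[parent["route"]].append(entry["route"])
    return dict(children)

topology = build_topology(routing_table)
```

A dictionary-of-lists adjacency map is enough here because the graph only needs to answer "which functions depend on this route, and in what order do a function's parents run".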
S102, determining the called times of each function in different time periods based on a function call network relation topological graph and a preset call calculation model, wherein the preset call calculation model is a model which is obtained in advance and used for calculating the use frequency of each function in the cloud platform.
Specifically, after a function call network relation topological graph is obtained, operating data of each function is input into a preset call calculation model, and the call times of each function in different time periods are calculated, so that the call frequency among each function in the cloud platform is determined.
S103, based on the called times, corresponding resources are allocated to each function in the cloud platform by using a preset scheduling tool, wherein the allocated resources at least comprise computing resources and cache resources.
Specifically, after the called times of each function are obtained, corresponding resources are allocated to each function in the cloud platform by using a preset scheduling tool, obviously, more computing power resources and cache resources can be allocated to the function with more called times, so that the resources of the cloud platform can be effectively utilized, waste is avoided, and meanwhile, the running efficiency of the cloud platform is greatly improved due to the fact that the function with more called times is allocated with more resources. It should be noted that, because the function call network relationship topology graph is dynamically changed, the cloud platform utilizes the function call network relationship topology graph to dynamically allocate the resources of the function.
Optionally, the functional service cloud platform is based on FaaS cloud functions.
Specifically, the cloud platform formed by the FaaS cloud functions is a cloud function of a serverless execution environment, and can construct, calculate, operate and manage services in a functional form without maintaining an own infrastructure. The resource scheduling optimization method is executed on the cloud platform formed by the FaaS cloud functions, on one hand, the execution environment without server of the FaaS cloud functions is utilized, and on the other hand, the defects that the traditional FaaS cloud functions cannot be used for resource allocation and lack of a caching mechanism are overcome through dynamic scheduling of resources.
By generating the function call network relationship topology graph, calculating the called frequency of each function with the topology graph and the preset call calculation model, and allocating various resources including cache to each function according to that frequency, the method and device solve the technical problems in the prior art that the running environment of each function in a cloud function platform is fixed, container resources cannot be dynamically adjusted, and a caching mechanism is lacking, and achieve the technical effects of improving the resource utilization rate and running efficiency of cloud functions.
Based on the above technical solutions, S101 specifically includes: generating a function call network relation topological graph based on a preconfigured annotation unit in a preset routing table and a preset function operation rule, wherein the annotation unit comprises annotation information added for each function in the preset routing table, and the annotation information at least comprises a routing name, a parent level routing and a cache time length corresponding to the function.
Specifically, an annotation unit and a preset function operation rule are set in the preset routing table, annotation information of each function is preset in the annotation unit, the annotation information can indicate basic information of each function, the basic information comprises a route name, a parent route, a cache time length and the like, and the preset function operation rule is used for representing operation rules between the parent route and the child route of the function. The relation between the upper and lower routes of each function and the operation rule can be obtained through the annotation unit and the preset function operation rule, so that a function call network relation topological graph is generated.
Optionally, the preset function operation rule at least includes: one function corresponds to one route, and one route corresponds to one route name; one route corresponds to a plurality of parent routes; one route corresponds to a plurality of sub-level routes; the plurality of parent routes are configured with different routing weights; sequentially running functions corresponding to the parent routes based on the sequence from high to low of the routing weights; a function is triggered to run only after all functions corresponding to the parent routes have finished running.
Specifically, one function in the cloud platform corresponds to one route, and one route corresponds to only one route name. The relationship between routes and parent routes is many-to-many, i.e. one route can correspond to multiple parent routes and to multiple child routes. When multiple parent routes exist, the functions corresponding to those parent routes are run in sequence from the highest routing weight to the lowest, and a function is triggered to run only after the functions corresponding to all of its parent routes have finished running; likewise, the functions corresponding to its child routes can run only after it has finished.
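The rule that a function runs only after all of its parent routes have finished is, in effect, a topological ordering of the call graph. A minimal Kahn-style sketch (route names hypothetical):

```python
# Sketch: compute a run order in which every function appears only after
# all of its parent routes, matching the operation rule described above.
from collections import defaultdict, deque

def run_order(edges):
    """Kahn's algorithm over an adjacency map (parent route -> child routes).
    A route becomes ready only when all of its parents have completed."""
    indegree = defaultdict(int)
    nodes = set(edges)
    for parent, kids in edges.items():
        for child in kids:
            indegree[child] += 1
            nodes.add(child)
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for child in edges.get(node, []):
            indegree[child] -= 1
            if indegree[child] == 0:   # all parents finished: now runnable
                ready.append(child)
    return order

order = run_order({"/identity": ["/risk", "/report"], "/risk": ["/report"]})
```

Here "/report" has two parents, so it can only appear after both "/identity" and "/risk" have run.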
Optionally, the training method for presetting and calling the calculation model comprises the following steps: acquiring operation parameters of a preset number of functions when the functions are called, wherein the operation parameters comprise at least one of an operation period, an operation duration and an operation code number; and taking the operation parameters as training samples, and inputting the training samples into a linear regression model for training to obtain a preset calling calculation model.
Illustratively, since the objective is to predict the number of times a function is called in each period of the day, so as to analyze the resource usage pattern of each function, the target value is continuous and a linear regression model from supervised learning is selected. First, the feature values of a training sample are defined as (x1, x2, x3), representing respectively the operation time period (per hour), the operation duration, and the number of code lines among the operation parameters; these feature values are automatically collected each time a function is called and stored as a training sample. A model is then defined; because there are few input feature values, the Lasso model in linear regression can be used for training.
Specifically, the Lasso model is y = ω0 + ω1·x1 + ω2·x2 + ω3·x3, where ω0, ω1, ω2, and ω3 are all model parameters, and the residual term for sample i is:

e_i = y_i − (ω0 + ω1·x1_i + ω2·x2_i + ω3·x3_i)

The loss function defined over the residual terms, with the Lasso (L1) penalty, is:

L(ω) = Σ_i e_i² + λ·(|ω1| + |ω2| + |ω3|)

where λ is the regularization strength.
Finally, the algorithm is trained, used for prediction, and validated through scikit-learn, the Python 3 scientific computing toolkit, to obtain the preset call calculation model.
In summary, during the running of the cloud platform, each time a function is called its (x1, x2, x3) is collected, and after each period ends the number of times the function was called in that period, i.e. y, is counted. Sufficient sample data is thus accumulated for model training, accurate parameters (ω0, ω1, ω2, ω3) are fitted, and the preset call calculation model is obtained.
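The text names scikit-learn's Lasso; as a self-contained illustration of the same L1-penalized fit, the loss above can be minimized with a few lines of coordinate descent. The synthetic data and coefficients below are made up for demonstration:

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, iters=300):
    """Minimal Lasso fit by coordinate descent, minimizing
    sum((y - w0 - X @ w)**2) + lam * sum(|w|), i.e. the loss above.
    In practice sklearn.linear_model.Lasso would be used instead."""
    n, d = X.shape
    w = np.zeros(d)
    w0 = y.mean()
    for _ in range(iters):
        w0 = (y - X @ w).mean()                    # intercept update
        for j in range(d):
            r = y - w0 - X @ w + X[:, j] * w[j]    # residual excluding feature j
            rho = X[:, j] @ r
            # soft-thresholding: the closed-form L1-penalized update
            w[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / (X[:, j] @ X[:, j])
    return w0, w

# synthetic example: only the first feature actually drives y
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = 4.0 + 3.0 * X[:, 0]
w0, w = lasso_cd(X, y, lam=0.01)
```

The L1 penalty drives the coefficients of irrelevant features toward exactly zero, which is why Lasso suits a model with only a handful of candidate features.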
Optionally, the preset scheduling tool is a distributed system scheduling tool.
Specifically, the cloud platform predicts the number of times each function is called in each period using the preset call calculation model, and then allocates more computing power and cache resources to those functions through a distributed system scheduling tool to optimize the operating efficiency of the whole FaaS system. Kubernetes (k8s) scheduling, an open-source tool for managing containerized applications across multiple hosts in a cloud platform, can be selected as the distributed system scheduling tool.
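A hypothetical sketch of the scheduling step: turning a predicted hourly call count into container resource requests in the shape Kubernetes accepts. The base values and scaling factors are illustrative assumptions, not values from the patent:

```python
# Map a function's predicted call count to CPU/memory requests: functions
# that are called more often receive proportionally more resources.
def resources_for(predicted_calls, base_cpu_m=100, base_mem_mi=128):
    """Return a resource-request dict: CPU in millicores, memory in MiB."""
    cpu_m = base_cpu_m + 2 * predicted_calls
    mem_mi = base_mem_mi + predicted_calls // 2
    return {"requests": {"cpu": f"{cpu_m}m", "memory": f"{mem_mi}Mi"}}
```

Because the topology graph and predictions change over time, such a function would be re-evaluated each period and the resulting requests applied through the scheduler.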
Based on the above technical solutions, S103, when allocating corresponding cache resources for each function in the cloud platform by using a preset scheduling tool, the resource scheduling optimization method further includes: calculating a parameter input identifier uniquely corresponding to each function based on the input data of each function; and inquiring preset identifiers in the cache database by using the parameter input identifiers, and distributing cache resources corresponding to the preset identifiers consistent with the parameter input identifiers to corresponding functions.
In particular, since different inputs produce different outputs for the same function, caching must use different cache paths according to the different inputs, even for a single function. Because a function's input may contain a large amount of data, in order for different inputs of a function to quickly find the corresponding cache, a uniquely corresponding parameter input identifier can be calculated based on the input data of the function. A preset identifier is set in the cache database; the parameter input identifier corresponds uniquely to the preset identifier, so the corresponding cache can be obtained and the corresponding cache resources allocated to the corresponding function.
Optionally, calculating the uniquely corresponding parameter input identifier based on the input data of each function includes: calculating, based on the input data of each function, the parameter input identifier uniquely corresponding to each function by using a cryptographic hash function.
Specifically, the value of the cryptographic hash function MD5 (Message-Digest Algorithm 5) is unique to its input: if the input changes, the calculated MD5 value also changes, so each function input is assigned a uniquely corresponding parameter input identifier.
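A minimal sketch of the identifier and cache lookup described above: an MD5 digest over the function name and its serialized input, so each distinct input maps to a distinct cache key. The JSON serialization scheme and function names are assumptions for illustration:

```python
import hashlib
import json

def param_input_id(func_name, kwargs):
    """Stable, uniquely corresponding identifier for one (function, input).
    Sorting keys makes the digest independent of argument order."""
    payload = json.dumps({"fn": func_name, "args": kwargs}, sort_keys=True)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

cache = {}

def cached_call(func, **kwargs):
    """Look the identifier up in the cache; run the function only on a miss."""
    key = param_input_id(func.__name__, kwargs)
    if key not in cache:
        cache[key] = func(**kwargs)
    return cache[key]
```

With this scheme the same function called with different inputs gets separate cache entries, while a repeated call with identical inputs is answered from cache without re-running the function.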
It should be noted that, due to the existence of the preset routing table and the function call network relationship topological graph, the cloud platform can easily find out which functions are "relied on" most times, and for example, when the function of inquiring identity information is frequently called in other functions, the cloud platform can automatically increase the caching weight of the function and preferentially establish and update the caching data for the function under the condition that the user does not explicitly specify the caching strategy of the function.
Optionally, the setting modes of the caching strategy of the cloud platform are multiple, including three modes of global configuration, function annotation and intelligent mode. The global configuration is to define the caching time of the functions in the configuration file of the cloud platform in advance, and the mode is effective for all the functions; the function annotation is to define the caching time of the function in advance in an annotation unit of the function, the mode is only effective for the function, and the priority level is highest; the intelligent mode is a function which depends on a machine learning algorithm, combines collected historical trigger information of each function, preferentially caches the functions with more times of being called, slow data updating and long running time, and dynamically allocates cache resources by the cloud platform according to the running condition of each function.
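The three policy modes and their priorities can be sketched as follows. The intelligent-mode scoring formula is a made-up heuristic for the "frequently called, slowly updated, long-running" preference described above, and only "function annotation has the highest priority" is stated in the text; the rest of the ordering is an assumption:

```python
def cache_ttl(annotation_ttl=None, smart_ttl=None, global_ttl=3600):
    """Resolve a function's cache duration in seconds: function annotation
    first, then an intelligent-mode suggestion, then the global config."""
    if annotation_ttl is not None:
        return annotation_ttl
    if smart_ttl is not None:
        return smart_ttl
    return global_ttl

def cache_priority(call_count, avg_runtime_s, update_interval_s):
    """Intelligent-mode heuristic: functions called often, running long, and
    whose data changes slowly score highest and are cached first."""
    return call_count * avg_runtime_s * update_interval_s
```

The cloud platform would then sort functions by `cache_priority` and preferentially establish and refresh cache entries for the top scorers.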
In summary, the resource scheduling optimization method provided by the application has the following advantages: (1) By specifying parent routes (by default, function annotations are used to configure a parent route), the FaaS cloud platform automatically scans the parent-child dependencies of all functions, and the functions are invoked in sequence by the routing system built into the cloud platform. This ensures that the code block of each cloud function contains no calling code for other functions, fully decoupling the functions and enhancing their reusability; configuration replaces explicit call code, and driving function operation from the cloud platform is the core capability of the routing system. (2) Relying on the routing system's analysis of the upper- and lower-level configuration of functions, a function call network relationship topology graph is generated internally, and each function is assigned a routing address similar to a URL (Uniform Resource Locator) by the cloud platform, so the cloud platform can quickly locate the function to be run at runtime. Meanwhile, through the function call network relationship topology graph, the cloud platform can also identify the degree of reuse of each function, providing parameter support for subsequent function optimization.
(3) Because the routing system becomes the main driver of function calls, the device can collect detailed call, operation, and performance data for each function. The main role of the ML (Machine Learning) engine is to analyze this data with a machine learning algorithm and build a function operation fluctuation model; as the number of function calls grows, the model parameters are continuously trained and optimized to obtain the preset call calculation model, finally achieving accurate prediction of each function's peak periods and allocating more cloud resources in advance. (4) The cache duration of each function is marked by annotation, and the caching system in the cloud platform can use the parameter input identifier to cache different outputs for different inputs of a function, so that calculation results under different input conditions can be returned quickly.
Fig. 2 is a block diagram of a resource scheduling optimization device provided by an embodiment of the present invention, configured on a functional service cloud platform, where, as shown in fig. 2, the resource scheduling optimization device includes:
a topology generating unit 21, configured to generate a function call network relationship topology map based on a preset routing table, where the routing table is used to record basic information of each function in the cloud platform and call relationships between each function;
the call frequency calculation unit 22 is configured to determine the call times of each function in different time periods based on a function call network relationship topological graph and a preset call calculation model, where the preset call calculation model is a model that is obtained by training in advance and is used for calculating the use frequency of each function in the cloud platform;
the resource allocation unit 23 is configured to allocate, based on the number of called times, corresponding resources for each function in the cloud platform by using a preset scheduling tool, where the allocated resources at least include computing resources and cache resources.
Optionally, when the resource allocation unit 23 allocates corresponding cache resources for each function in the cloud platform by using a preset scheduling tool, the apparatus further includes:
the identification generating unit is used for calculating parameter input identifications uniquely corresponding to the functions based on the input data of the functions;
the resource allocation unit is further configured to query a preset identifier in the cache database by using the parameter input identifier, and allocate a cache resource corresponding to the preset identifier consistent with the parameter input identifier to the corresponding function.
Optionally, the identifier generating unit is specifically configured to:
based on the input data of each function, calculating the parameter input identifier uniquely corresponding to each function by using a cryptographic hash function.
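A minimal sketch of how such a parameter input identifier could be derived, here with SHA-256 over canonicalized JSON; the function name, key layout, and cached values are illustrative assumptions, not taken from the patent:

```python
import hashlib
import json

def parameter_input_id(function_name, input_data):
    """Derive a parameter input identifier from a function's inputs with
    a cryptographic hash, so identical inputs map to the same cache
    entry and different inputs do not collide in practice."""
    # Canonicalize: sorted keys and fixed separators make the JSON
    # encoding deterministic, so equal inputs always hash identically.
    canonical = json.dumps({"fn": function_name, "in": input_data},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Cache keyed by preset identifiers (an in-memory dict stands in for
# the cache database of the cloud platform).
cache = {}
key = parameter_input_id("interest_calc", {"principal": 1000, "rate": 0.03})
cache[key] = {"result": 30.0}  # store the computed output under its identifier

# A repeat call with the same inputs produces the same identifier,
# so the cached result is returned without recomputation.
hit = cache.get(parameter_input_id("interest_calc",
                                   {"principal": 1000, "rate": 0.03}))
print(hit)
```

The canonicalization step matters: without `sort_keys`, two semantically equal inputs could serialize differently and miss the cache.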
Optionally, the topology generation unit 21 is specifically configured to:
generate the function call network relationship topological graph based on a preset annotation unit and a preset function operation rule in the preset routing table, where the annotation unit includes annotation information added for each function in the preset routing table, and the annotation information includes at least a route name, a parent route, and a cache duration corresponding to the function.
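The annotation information (route name, parent routes with weights, cache duration) and the trigger rule of claim 3 can be sketched as follows; the routing-table entries and field names are hypothetical examples, not values from the patent:

```python
# Hypothetical preset routing table: each function is annotated with its
# parent routes (mapped to routing weights) and a cache duration.
routes = {
    "load_data": {"parents": {}, "cache_ttl_s": 60},
    "transform": {"parents": {"load_data": 10}, "cache_ttl_s": 30},
    "report":    {"parents": {"load_data": 5, "transform": 8}, "cache_ttl_s": 10},
}

def call_topology(routes):
    """Derive the function call topology as parent -> children edges."""
    edges = {name: [] for name in routes}
    for name, meta in routes.items():
        for parent in meta["parents"]:
            edges[parent].append(name)
    return edges

def run_order(routes):
    """A function is triggered only after all of its parent routes have
    run (claim 3's trigger rule); ties are broken alphabetically here."""
    done, order = set(), []
    while len(order) < len(routes):
        ready = sorted(n for n, m in routes.items()
                       if n not in done and set(m["parents"]) <= done)
        if not ready:
            raise ValueError("cycle in routing table")
        done.add(ready[0])
        order.append(ready[0])
    return order

print(call_topology(routes)["load_data"])  # children of load_data
print(run_order(routes))                   # parents always precede children
```

This is the dependency half of the rules; ordering the parents of a single route by descending routing weight would be a sort over `meta["parents"].items()` before dispatch.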
The resource scheduling optimization device provided by the embodiment of the invention can execute the resource scheduling optimization method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
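As an illustrative complement to the units above, the allocation the resource allocation unit performs from predicted call counts could look like the following sketch; the fixed compute budget, the floor value, and the function names are assumptions for illustration only:

```python
def allocate(predicted_calls, total_cpu=100.0, min_cpu=1.0):
    """Split a compute budget across functions in proportion to their
    predicted call counts, with a floor so rarely-called functions
    still receive a minimum share. Parameters are illustrative."""
    total = sum(predicted_calls.values())
    return {fn: max(min_cpu, total_cpu * count / total)
            for fn, count in predicted_calls.items()}

# Predicted per-period call counts from the preset call calculation model.
shares = allocate({"login": 800, "report": 150, "audit": 50})
print(shares)  # the most-called function receives the largest share
```

A real scheduler would also cap each share and rebalance as new predictions arrive, but proportional division captures the "more calls, more resources" principle of the allocation step.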
Fig. 3 is a schematic structural diagram of a resource scheduling optimization device according to an embodiment of the present invention. The resource scheduling optimization device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in Fig. 3, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12 and a Random Access Memory (RAM) 13, where the memory stores a computer program executable by the at least one processor. The processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the resource scheduling optimization method.
In some embodiments, the resource scheduling optimization method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the resource scheduling optimization method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the resource scheduling optimization method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of high management difficulty and weak service scalability of traditional physical hosts and VPS (Virtual Private Server) services.
Embodiments of the present invention also provide a computer program product comprising computer executable instructions for performing the resource scheduling optimization method provided by any of the embodiments of the present invention when executed by a computer processor.
In implementing the computer program product, the computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Of course, the computer-executable instructions of the computer program product provided by the embodiments of the present application are not limited to the method operations described above, and may also perform related operations in the methods provided by any embodiment of the present application.
It should be appreciated that in the various flows shown above, steps may be reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved; the present invention is not limited in this respect.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (14)

1. A resource scheduling optimization method, characterized by being applied to a functional service cloud platform, the method comprising:
generating a function call network relation topological graph based on a preset routing table, wherein the preset routing table is used for recording basic information of each function in the cloud platform and call relations among the functions;
determining the number of times each function is called in different time periods based on the function call network relation topological graph and a preset call calculation model, wherein the preset call calculation model is a model which is obtained by training in advance and is used for calculating the use frequency of each function in the cloud platform;
and based on the calling times, allocating corresponding resources for each function in the cloud platform by using a preset scheduling tool, wherein the allocated resources at least comprise computing resources and cache resources.
2. The resource scheduling optimization method of claim 1, wherein generating a function call network relationship topology based on a preset routing table comprises:
generating the function call network relation topological graph based on a preset annotation unit and a preset function operation rule in the preset routing table, wherein the annotation unit comprises annotation information added for each function in the preset routing table, and the annotation information at least comprises a route name, a parent route, and a cache duration corresponding to the function.
3. The resource scheduling optimization method according to claim 2, wherein the preset function operation rule at least includes:
one function corresponds to one route, one route corresponds to one said route name;
one of the routes corresponds to a plurality of the parent routes;
one of the routes corresponds to a plurality of sub-level routes;
the plurality of parent routes are configured with different routing weights;
sequentially running functions corresponding to the parent routes based on the sequence from high to low of the routing weights;
a function is triggered to run only after all functions corresponding to the parent routes are run.
4. The resource scheduling optimization method according to claim 1, wherein the training method of the preset invocation calculation model includes:
acquiring operation parameters of a preset number of functions when the functions are called, wherein the operation parameters comprise at least one of an operation period, an operation duration and an operation code number;
and taking the operation parameters as training samples, and inputting the training samples into a linear regression model for training to obtain the preset calling calculation model.
5. The resource scheduling optimization method according to claim 1, wherein the preset scheduling tool is a distributed system scheduling tool.
6. The resource scheduling optimization method according to claim 1, wherein when allocating corresponding cache resources for each function in the cloud platform by using a preset scheduling tool, the method further comprises:
calculating a parameter input identifier uniquely corresponding to each function based on the input data of each function;
and querying preset identifiers in a cache database by using the parameter input identifier, and allocating the cache resource corresponding to the preset identifier that matches the parameter input identifier to the corresponding function.
7. The resource scheduling optimization method of claim 6, wherein calculating a unique corresponding parameter input identification based on input data of each function comprises:
based on the input data of each function, calculating the parameter input identifier uniquely corresponding to each function by using a cryptographic hash function.
8. The resource scheduling optimization method of claim 1, wherein the functional service cloud platform is a FaaS (Function as a Service) based cloud platform.
9. A resource scheduling optimization device, configured on a functional service cloud platform, the device comprising:
the topology generation unit is used for generating a function call network relation topology diagram based on a preset routing table, wherein the preset routing table is used for recording basic information of each function in the cloud platform and call relations among the functions;
the call frequency calculation unit is used for determining the call times of each function in different time periods based on the function call network relation topological graph and a preset call calculation model, wherein the preset call calculation model is a model which is obtained through pre-training and used for calculating the use frequency of each function in the cloud platform;
and the resource allocation unit is used for allocating, based on the number of calls, corresponding resources to each function in the cloud platform by using a preset scheduling tool, wherein the allocated resources at least comprise computing resources and cache resources.
10. The resource scheduling optimization device according to claim 9, wherein when the resource allocation unit allocates corresponding cache resources for each function in the cloud platform using a preset scheduling tool, the device further comprises:
an identifier generating unit, configured to calculate a parameter input identifier uniquely corresponding to each function based on the input data of each function;
the resource allocation unit is further configured to query preset identifiers in a cache database by using the parameter input identifier, and to allocate the cache resource corresponding to the preset identifier that matches the parameter input identifier to the corresponding function.
11. The resource scheduling optimization device according to claim 10, wherein the identifier generating unit is specifically configured to:
based on the input data of each function, calculating the parameter input identifier uniquely corresponding to each function by using a cryptographic hash function.
12. A resource scheduling optimization device, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the resource scheduling optimization method of any one of claims 1-8.
13. A computer readable storage medium storing computer instructions for causing a processor to implement the resource scheduling optimization method of any one of claims 1-8 when executed.
14. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the resource scheduling optimization method according to any one of claims 1-8.
CN202310272020.3A 2023-03-16 2023-03-16 Resource scheduling optimization method and device Pending CN116320033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310272020.3A CN116320033A (en) 2023-03-16 2023-03-16 Resource scheduling optimization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310272020.3A CN116320033A (en) 2023-03-16 2023-03-16 Resource scheduling optimization method and device

Publications (1)

Publication Number Publication Date
CN116320033A true CN116320033A (en) 2023-06-23

Family

ID=86802795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310272020.3A Pending CN116320033A (en) 2023-03-16 2023-03-16 Resource scheduling optimization method and device

Country Status (1)

Country Link
CN (1) CN116320033A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118152304A (en) * 2024-05-10 2024-06-07 中国电信股份有限公司 Function cache allocation method and related equipment


Similar Documents

Publication Publication Date Title
CN112559007B (en) Parameter updating method and device of multitask model and electronic equipment
US11704123B2 (en) Automated orchestration of containers by assessing microservices
CN111158613B (en) Data block storage method and device based on access heat and storage equipment
US10521738B2 (en) Automated collaboration workflow generation in thing-sourcing environments
CN111143039B (en) Scheduling method and device of virtual machine and computer storage medium
CN115335821B (en) Offloading statistics collection
CN116320033A (en) Resource scheduling optimization method and device
CN115794341A (en) Task scheduling method, device, equipment and storage medium based on artificial intelligence
CN114466005B (en) Internet of things equipment arrangement
CN115840738A (en) Data migration method and device, electronic equipment and storage medium
CN114997414A (en) Data processing method and device, electronic equipment and storage medium
CN111340404A (en) Method and device for constructing index system and computer storage medium
WO2023089350A1 (en) An architecture for a self-adaptive computation management in edge cloud
CN114861039A (en) Parameter configuration method, device, equipment and storage medium of search engine
CN111198745A (en) Scheduling method, device, medium and electronic equipment for container creation
Fan et al. Knative autoscaler optimize based on double exponential smoothing
CN114610719B (en) Cross-cluster data processing method and device, electronic equipment and storage medium
CN116737370A (en) Multi-resource scheduling method, system, storage medium and terminal
CN116050159A (en) Simulation scene set generation method, device, equipment and medium
CN115543423A (en) Method, device and equipment for generating benchmarks and storage medium
CN112948461B (en) Method, apparatus, storage medium and program product for calendar data processing
CN115277568B (en) Data transmission method, device, equipment and storage medium
CN117170821B (en) Service processing method, device, electronic equipment and computer readable medium
CN116822259B (en) Evaluation information generation method and device based on scene simulation and electronic equipment
CN117992212A (en) Unified quantized cross-domain computing power scheduling method and device in multi-center environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination