CN117294715A - End-edge cloud scheduling optimization method, system and device - Google Patents

Publication number: CN117294715A
Application number: CN202311229342.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 张长浩, 申书恒, 傅欣艺, 傅幸
Current and original assignee: Alipay Hangzhou Information Technology Co Ltd
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: data, decision result, result data, flow request, execution decision
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311229342.6A
Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1012: Server selection for load balancing based on compliance of requirements or conditions with available server resources


Abstract

One or more embodiments of the present specification disclose an end-edge cloud scheduling optimization method, system, and apparatus. The method includes: acquiring historical execution decision result data and running state data of a terminal under at least one traffic request preceding the current traffic request for a target service; determining constraint parameters for end-edge cloud scheduling optimization based on the running state data; providing the constraint parameters to the cloud, and acquiring global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period; determining the execution decision result data of the current traffic request according to the acquired global parameters; and performing computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.

Description

End-edge cloud scheduling optimization method, system and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an end-edge cloud scheduling optimization method, system, and apparatus.
Background
Currently, with the continuous development of computer technology, the scale of scene algorithm models for target services is growing rapidly, and the demand of service scenarios for computing power resources is increasing by orders of magnitude. Continually increasing the machine budget and upgrading hardware alone cannot meet the requirements of traffic requests for the target service.
In the related art, edge servers and cloud servers can be combined to implement resource management. However, the scheduling capability of such edge-server and cloud-server management systems is not exposed to the target service. It is therefore necessary to provide an end-edge cloud scheduling optimization method that couples scheduling with the target service.
Disclosure of Invention
In one aspect, one or more embodiments of the present disclosure provide an end-edge cloud scheduling optimization method, including: acquiring historical execution decision result data and running state data of a terminal under at least one traffic request preceding the current traffic request for a target service, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests; determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data; providing the constraint parameters to the cloud, and acquiring global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period; determining the execution decision result data of the current traffic request according to the acquired global parameters; and performing computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.
In another aspect, one or more embodiments of the present disclosure provide an end-edge cloud scheduling optimization system, including a terminal, an edge server, and a cloud server, where: the terminal acquires historical execution decision result data and running state data under at least one traffic request preceding the current traffic request for the target service, and uploads them to the cloud server, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests; the cloud server determines constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data, obtains global parameters for end-edge cloud scheduling optimization by optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period, and sends the global parameters to the terminal; the terminal determines the execution decision result data of the current traffic request according to the acquired global parameters, and performs computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request; and the edge server performs computation offloading according to the execution decision result data of the current traffic request.
In still another aspect, one or more embodiments of the present disclosure provide an end-edge cloud scheduling optimization apparatus, including: a historical data acquisition module, configured to acquire historical execution decision result data and running state data of the terminal under at least one traffic request preceding the current traffic request for the target service, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests; a running state data quantization module, configured to determine constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data; a global parameter solving module, configured to provide the constraint parameters to the cloud and acquire global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period; an execution decision result data solving module, configured to determine the execution decision result data of the current traffic request according to the acquired global parameters; and a computing power allocation module, configured to perform computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.
In yet another aspect, one or more embodiments of the present specification provide an electronic device, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, enable the processor to: acquire historical execution decision result data and running state data of a terminal under at least one traffic request preceding the current traffic request for a target service, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests; determine constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data; provide the constraint parameters to the cloud, and acquire global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period; determine the execution decision result data of the current traffic request according to the acquired global parameters; and perform computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.
In yet another aspect, one or more embodiments of the present specification further provide a storage medium for storing computer-executable instructions that, when executed by a processor, implement the following: acquiring historical execution decision result data and running state data of a terminal under at least one traffic request preceding the current traffic request for a target service, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests; determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data; providing the constraint parameters to the cloud, and acquiring global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period; determining the execution decision result data of the current traffic request according to the acquired global parameters; and performing computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.
Drawings
To describe the technical solutions in one or more embodiments of the present specification or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely some of the embodiments described in this specification; a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method of end-edge cloud scheduling optimization according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method of end-edge cloud scheduling optimization according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a method of end-edge cloud scheduling optimization according to another embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a method of end-edge cloud scheduling optimization according to another embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of an end-edge cloud scheduling optimization system according to an embodiment of the present disclosure;
FIG. 6 is a schematic block diagram of an end-edge cloud scheduling optimization apparatus according to an embodiment of the present disclosure;
Fig. 7 is a schematic block diagram of an electronic device in accordance with an embodiment of the present description.
Detailed Description
One or more embodiments of the present disclosure provide an end-edge cloud scheduling optimization method, system, and apparatus, to solve the problem that current edge-server and cloud-server management systems do not expose their scheduling capability to the target service.
To enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, those technical solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are merely some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on one or more embodiments of this specification without inventive effort shall fall within the protection scope of this document.
Existing edge-server and cloud-server management systems can allocate and manage computing power resources between the edge servers and cloud servers in a distributed system, so that both can make optimal use of computing power and storage resources. However, when such a management system receives a traffic request for the target service, it does not consider historical traffic requests for that service when scheduling computing power resources between the edge server and the cloud server; that is, its computing power allocation and scheduling capability is not exposed to the service system. Likewise, an edge-server scheduling management system, built on an edge computing architecture, can identify the computing power resources of each edge server and allocate computing power among them accordingly, but it too ignores historical traffic requests for the target service, so its scheduling capability is not exposed to the service system either. Based on this, the embodiments of the present disclosure provide an end-edge cloud scheduling optimization method, system, and apparatus in which, when a traffic request for the target service is received, the computing power allocation data of historical traffic requests, the state data of the executing devices, and the computing power resources of the executing devices are used as constraint conditions for scheduling the computing power resources of the current traffic request. This is described in detail below.
FIG. 1 is a schematic flow chart of an end-edge cloud scheduling optimization method according to an embodiment of the present disclosure, which may include the following steps:
S102, acquiring historical execution decision result data and running state data of the terminal under at least one traffic request preceding the current traffic request for the target service, where the historical execution decision result data is used to determine the decision result of executing a historical traffic request, and the running state data records the states of the terminal and the edge server under historical traffic requests.
When the terminal receives the current traffic request for the target service, it reads the historical execution decision result data stored on the terminal. This data is the execution device information under at least one traffic request received before the current one; how many historical traffic requests are read is determined by program settings, for example by a set duration or a set number of requests. A different value is assigned for each traffic request depending on whether the executing device was the terminal or the edge server.
The terminal also reads the running state data stored on the terminal, i.e., the running state data of the executing device under at least one traffic request preceding the current one. Each traffic request is executed either on the terminal or on the edge server: when a request is executed on the terminal, the terminal stores its own running state data; when a request is executed on the edge server, the edge server sends its running state data to the terminal for storage, so the terminal can collect all running state data. After reading the historical execution decision result data and running state data of the preceding traffic requests, the terminal sends them to the cloud server for processing.
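The collection step above can be sketched as a small terminal-side log. All class and field names here are illustrative assumptions; the patent only specifies that decision results and running state data are stored on the terminal and uploaded in bulk.

```python
from dataclasses import dataclass, asdict, field

# Hypothetical per-request scheduling record kept on the terminal;
# field names are illustrative, not taken from the patent.
@dataclass
class SchedulingRecord:
    request_id: str
    executed_on: str                      # "terminal" or "edge" (the decision result)
    value_features: dict = field(default_factory=dict)  # scene data for value estimation
    power_features: dict = field(default_factory=dict)  # running state for power estimation

class TerminalLog:
    def __init__(self, window: int = 100):
        self.window = window              # how many historical requests to keep
        self.records: list[SchedulingRecord] = []

    def append(self, rec: SchedulingRecord) -> None:
        self.records.append(rec)
        # retain only the most recent `window` requests (set duration/count)
        self.records = self.records[-self.window:]

    def upload_payload(self) -> list[dict]:
        # serialized batch the terminal sends to the cloud server
        return [asdict(r) for r in self.records]

log = TerminalLog(window=3)
for i in range(5):
    log.append(SchedulingRecord(f"req{i}", "edge" if i % 2 else "terminal"))
print(len(log.records))  # 3
```

Capping the log at a fixed window mirrors reading "a set duration or a set number" of historical requests before each upload.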
S104, determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data.
The traffic value estimation data quantifies the traffic value of a traffic request for the target service; the traffic value is the value that the user's current traffic request brings to the service scenario. The computing power estimation data quantifies the computing power resource data of the terminal and the edge server.
S106, providing the constraint parameters to the cloud, and acquiring global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with a preset end-edge cloud scheduling optimization algorithm within a set period.
The cloud server quantizes the running state data to obtain the constraint parameters for scheduling optimization, and optimizes the execution decision result data and the constraint parameters with the preset end-edge cloud scheduling optimization algorithm to obtain the global parameters for end-edge cloud scheduling optimization. The core of the end-edge cloud scheduling optimization algorithm is to maximize traffic value under the computing power constraint. Linear optimization or another optimization algorithm may be selected for the specific solution.
S108, determining the execution decision result data of the current traffic request according to the acquired global parameters.
When the terminal receives the current traffic request for the target service, deciding whether to execute it on the terminal or on the edge server requires considering both the traffic value of the current request and the computing power resource data of the executing devices. Because these two parameters have different dimensions, they must be standardized: the global parameter is a dimensional standardization coefficient, and substituting it into a weighted solution algorithm yields the execution decision result data of the current traffic request.
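A minimal sketch of this weighted decision rule, assuming the global parameter enters as a single linear trade-off coefficient between traffic value and computing power cost; the patent does not specify the exact functional form, so this linear score is an assumption.

```python
# Online decision step on the terminal: the cloud-solved global parameter
# puts value and computing power cost on a common scale. A positive score
# means the request's value justifies spending edge computing power on it.
def decide(value_est: float, power_est: float, global_lambda: float) -> str:
    score = value_est - global_lambda * power_est
    return "edge" if score > 0 else "terminal"

print(decide(value_est=8.0, power_est=3.0, global_lambda=2.0))  # edge (8 - 6 > 0)
print(decide(value_est=2.0, power_est=3.0, global_lambda=2.0))  # terminal (2 - 6 < 0)
```

Raising `global_lambda` (e.g. when edge capacity tightens) pushes more low-value traffic back to the terminal without re-solving per request.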
S110, performing computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain the device that executes the current traffic request.
With the technical solution of one or more embodiments of this specification, each time the terminal receives a traffic request for the target service, it uses the historical execution decision result data and running state data preceding the current request. The terminal uploads these data to the cloud server, and the cloud server solves for the global parameters used to determine the current execution decision according to the preset end-edge cloud scheduling optimization algorithm. The terminal then determines the execution decision result data of the current traffic request from the solved global parameters and performs computing power allocation between the terminal and the edge server accordingly, obtaining the device that executes the current request. Because computing power allocation is always performed in combination with historical traffic request data, the computing power resource scheduling capability is exposed to the service system. In addition, the cloud server quantizes the running state data into constraint parameters and optimizes them together with the execution decision result data, with the core objective of maximizing traffic value under the computing power constraint.
The method provided by the technical solution of one or more embodiments of this disclosure can thus allocate computing power differentially to traffic of different value under the computing power resource constraint, yielding a better global service effect.
In one embodiment, as shown in FIG. 2, determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include traffic value estimation data and computing power estimation data (i.e., S104), may be performed as the following S1042–S1044:
S1042, extracting the scene data of the target service contained in the running state data, and quantizing the scene data to obtain the traffic value estimation data.
In one embodiment, the traffic value estimation data quantifies the traffic value of a traffic request for the target service, where the traffic value is the value the user's current traffic request brings to the service scenario; in this embodiment, the traffic value may be click-through rate data and/or conversion rate data of the target service.
In one embodiment, the scene data of the target service includes one or more of: traffic request scale data, size data of the traffic request operation model, scene category data corresponding to the target service, and recommended refresh frequency information.
Substituting the traffic request scale data, the size data of the traffic request operation model, the scene category data corresponding to the target service, and the recommended refresh frequency information into a traffic value quantization algorithm yields the traffic value estimation data.
In one embodiment, the traffic value quantization algorithm is: traffic value = (traffic request scale data × x1 + size data of the traffic request operation model × x2) × scene category data corresponding to the target service × recommended refresh frequency, where x1 and x2 are weighting coefficients.
Different traffic request scales must be computed with different traffic request operation models, and the two have a mapping relationship: in general, the larger the traffic request scale data, the larger the traffic request operation model that must be used.
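The quantization formula above can be transcribed directly as follows; the weighting coefficients x1 and x2 and all sample inputs are illustrative placeholders, not values from the patent.

```python
# Traffic value quantization:
#   value = (scale * x1 + model_size * x2) * scene_category * refresh_freq
# x1/x2 weight the request scale against the operation model size.
def traffic_value(scale, model_size, scene_category, refresh_freq,
                  x1=0.6, x2=0.4):
    return (scale * x1 + model_size * x2) * scene_category * refresh_freq

v = traffic_value(scale=100.0, model_size=50.0,
                  scene_category=1.5, refresh_freq=2.0)
print(round(v, 6))  # 240.0
```

Scene category and refresh frequency enter multiplicatively, so a high-refresh, high-value scene scales up the whole weighted sum rather than just adding to it.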
S1044, extracting the executing device computing power data contained in the running state data, and quantizing it to obtain the computing power estimation data, where the executing device computing power data describes the computing power of the device executing the traffic request and the time consumed for transmission.
In one embodiment, the executing device computing power data includes one or more of: edge computing power data of the edge server, time consumed by the edge server to execute the target service, time consumed by network transmission between the terminal and the edge server, and time consumed by the terminal to execute the target service.
Quantizing the executing device computing power data to obtain the computing power estimation data includes: substituting the edge computing power data of the edge server, the time consumed by the edge server to execute the target service, the time consumed by network transmission between the terminal and the edge server, and the time consumed by the terminal to execute the target service into a computing power quantization algorithm to obtain the computing power estimation data.
Optionally, the edge computing power data of the edge server is the single-machine current-service QPS upper-limit data or the single-machine current-service actual QPS data.
Optionally, the time consumed by the edge server to execute the target service is the time consumed by the edge server to execute the current traffic request according to the scene code.
Optionally, the network transmission time between the terminal and the edge server includes the uplink and downlink network transmission times for executing the current traffic request according to the scene code.
Optionally, the time consumed by the terminal to execute the target service is the script execution time on the terminal; since terminals differ, each terminal periodically samples its script execution time over a period and returns the average as the parameter value.
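A hedged sketch combining the optional inputs above into computing power estimation data. The patent names the inputs (QPS limit/actual, edge execution time, uplink/downlink network times, terminal script time) but not the combination rule, so the headroom and latency aggregation below are assumptions.

```python
# Computing power quantization: spare edge capacity plus the latency of the
# offloaded path versus the local path. All combination logic is illustrative.
def power_estimate(edge_qps_limit, edge_qps_actual,
                   edge_exec_ms, net_up_ms, net_down_ms, terminal_exec_ms):
    headroom = max(edge_qps_limit - edge_qps_actual, 0) / edge_qps_limit
    edge_latency_ms = edge_exec_ms + net_up_ms + net_down_ms
    return {
        "edge_headroom": headroom,                # spare single-machine capacity
        "edge_latency_ms": edge_latency_ms,       # offloaded path: exec + up + down
        "terminal_latency_ms": terminal_exec_ms,  # local script path
        # offloading only pays off if the edge path is faster and has headroom
        "edge_faster": edge_latency_ms < terminal_exec_ms and headroom > 0,
    }

est = power_estimate(edge_qps_limit=200, edge_qps_actual=150,
                     edge_exec_ms=20, net_up_ms=15, net_down_ms=15,
                     terminal_exec_ms=80)
print(est["edge_faster"])  # True: 50 ms via edge vs 80 ms locally
```

Using averaged terminal script times (per the sampling rule above) keeps `terminal_exec_ms` stable across requests on the same device.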
The cloud server quantizes the running state data to obtain the constraint parameters for scheduling optimization, and optimizes the execution decision result data and the constraint parameters with the preset end-edge cloud scheduling optimization algorithm to obtain the global parameters, whose core objective is to maximize traffic value under the computing power constraint. Under the computing power resource constraint, computing power can thus be allocated differentially to traffic of different value, yielding a better global service effect.
In addition, the cloud server solves the constraint parameters from the acquired running state data, solves the global parameters from the constraint parameters and the execution decision result data, and delivers the global parameters to the terminal; the edge server likewise transmits its running data used for computing power allocation to the terminal. The terminal stores the historical execution decision result data preceding the current traffic request for the target service, and these data are aggregated on the terminal: each time a traffic request for the target service is received, the terminal collects the data into a stable data structure, so stable scheduling record data can be formed regardless of whether the current request is executed on the terminal or on the edge server. This unified data format and recovery mechanism is the basis and key for the whole system to remain compatible with the scheduling algorithm; these data are the key data for offline solving and online application.
In one embodiment, as shown in FIG. 3, providing the constraint parameters to the cloud and acquiring the global parameters for end-edge cloud scheduling optimization, obtained by the cloud through optimizing the execution decision result data and the constraint parameters with the preset end-edge cloud scheduling optimization algorithm within a set period (i.e., S106), may be performed as the following S1062–S1066:
S1062, based on the execution decision result data, the flow value estimated data and a pre-trained linear optimization model, performing a linear operation on the execution decision result data and the flow value estimated data to obtain flow constraint parameters.
S1064, based on the execution decision result data, the computing power estimated data and the pre-trained linear optimization model, performing a linear operation on the execution decision result data and the computing power estimated data to obtain computing power constraint parameters.
S1066, determining the global parameters based on the flow constraint parameters and the computing power constraint parameters through the end-edge cloud scheduling optimization algorithm.
The cloud server stores the acquired execution decision result data, flow value estimated data and computing power estimated data in an ODPS table; when solving the global parameters, it periodically reads the data stored in the ODPS table and solves the global parameters based on the end-edge cloud scheduling optimization algorithm.
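The optimization objective underlying S1062-S1066 — maximize total flow value subject to a computing power budget — admits a fractional-knapsack relaxation whose greedy solution yields a single density threshold. The sketch below is a simplification under that assumption; the patent does not disclose its actual linear optimization model, and the numbers are made up:

```python
def solve_global_threshold(flows, compute_budget):
    """Greedy solution of: max sum(v_i * x_i) s.t. sum(c_i * x_i) <= budget,
    with 0 <= x_i <= 1.  Returns a value-density threshold, playing the
    role of the 'global parameter' sent back to terminals: a flow whose
    estimated value/compute ratio exceeds the threshold earns compute.
    """
    # flows: list of (estimated_value, estimated_compute) pairs
    ranked = sorted(flows, key=lambda f: f[0] / f[1], reverse=True)
    used = 0.0
    threshold = 0.0
    for value, compute in ranked:
        if used + compute > compute_budget:
            threshold = value / compute  # density of the marginal flow
            break
        used += compute
        threshold = value / compute      # density of the last admitted flow
    return threshold

flows = [(0.9, 1.0), (0.5, 1.0), (0.8, 2.0), (0.2, 1.0)]
theta = solve_global_threshold(flows, compute_budget=2.0)  # -> 0.4
```

The appeal of such a formulation is that the offline solve compresses the whole history into one scalar (or small vector) that terminals can apply online without re-running the optimization per request.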
In one embodiment, as shown in fig. 4, determining the execution decision result data of the current flow request according to the acquired global parameter (S108) may be performed as S1082-S1084:
S1082, determining terminal execution result data and edge server execution result data respectively, based on the global parameters, through a preset execution decision weighting algorithm.
S1084, determining the execution decision result data of the current flow request based on the terminal execution result data and the edge server execution result data.
In one embodiment, determining execution decision result data of the current flow request based on the terminal execution result data and the edge server execution result data includes:
The better of the terminal execution result data and the edge server execution result data is taken as the execution decision result data of the current flow request.
Optionally, the larger of the terminal execution result data and the edge server execution result data is used as the execution decision result data of the current flow request. If the terminal execution result data is assigned as the output value, the flow request is executed on the terminal; if the edge server execution result data is assigned as the output value, the flow request is executed on the edge server.
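The decision rule above can be sketched in a few lines: score each candidate device under the global parameters and let the larger score win, with the winner determining where the request runs. The linear scoring form is an assumption — the patent names an "execution decision weighting algorithm" but does not disclose it:

```python
def decide(global_param, terminal_features, edge_features):
    """Hypothetical execution decision weighting: score each candidate
    device as a dot product with the global parameter vector, then take
    the larger score as the execution decision result data."""
    terminal_score = sum(w * x for w, x in zip(global_param, terminal_features))
    edge_score = sum(w * x for w, x in zip(global_param, edge_features))
    if terminal_score >= edge_score:   # ties favor local execution
        return "terminal", terminal_score
    return "edge", edge_score

device, score = decide([0.6, 0.4],
                       terminal_features=[0.5, 0.9],
                       edge_features=[0.8, 0.7])
```

Assigning the winning score as the execution decision result data means the same value both routes the request and feeds the next offline solve.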
By adopting the technical solution of one or more embodiments of the present specification, each time the terminal receives a flow request for the target service, it makes use of the historical execution decision result data and running state data preceding the current flow request. The terminal uploads these data to the cloud server, which solves, according to the preset end-edge cloud scheduling optimization algorithm, the global parameters used to determine the execution decision result of the current flow request. The terminal then determines the execution decision result data of the current flow request from the solved global parameters, and allocates computing power between the terminal and the edge server accordingly, obtaining the device that executes the current flow request. In this way, every flow request for the target service triggers a computing power allocation informed by historical flow request data. The method provided by the technical scheme of one or more embodiments of the present disclosure can thus open the computing power resource allocation and scheduling capability to the service system. Through coupled scheduling with the service system and the combination of offline solving and online decision, the end-edge cloud optimization method has better robustness.
In summary, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
Based on the same idea as the end-edge cloud scheduling optimization method provided in one or more embodiments of the present specification, one or more embodiments of the present specification further provide an end-edge cloud scheduling optimization system and apparatus.
FIG. 5 is a schematic block diagram of an end-edge cloud scheduling optimization system according to an embodiment of the present disclosure. Referring to fig. 5, the end-edge cloud scheduling optimization system may include a terminal, an edge server and a cloud server, wherein:
the terminal acquires historical execution decision result data and running state data under at least one flow request before the current flow request for the target service, and uploads the historical execution decision result data and the running state data to the cloud server, wherein the historical execution decision result data is used for determining a decision result of executing the historical flow request, and the running state data is used for recording the states of the terminal and the edge server under the historical flow request;
the cloud server determines constraint parameters for end-edge cloud scheduling optimization based on the running state data, wherein the constraint parameters comprise flow value estimated data and computing power estimated data, acquires global parameters for end-edge cloud scheduling optimization by optimizing the execution decision result data and the constraint parameters through a preset end-edge cloud scheduling optimization algorithm within a set period, and sends the global parameters to the terminal;
the terminal determines the execution decision result data of the current flow request according to the acquired global parameters, and allocates computing power between the terminal and the edge server according to the execution decision result data of the current flow request to obtain the device that executes the current flow request;
and the edge server performs computation offloading according to the execution decision result data of the current flow request.
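The interaction among the three components above can be summarized as a minimal control loop. Everything in this sketch — the class and method names, the fixed threshold, the scalar "value" — is an illustrative assumption; the patent specifies only which data moves between which parties:

```python
class Cloud:
    def solve(self, history):
        # Placeholder for the end-edge cloud scheduling optimization
        # algorithm: here a fixed threshold; the real system solves it
        # periodically from execution decision result data and
        # constraint parameters.
        return {"threshold": 0.5}

class Edge:
    def offload(self, request):
        # Computation offloading on the edge server.
        return request["value"]

class Terminal:
    def __init__(self):
        self.history = []  # scheduling records before the current request

    def handle_request(self, cloud, edge, request):
        global_params = cloud.solve(self.history)  # S106: solved parameters
        decision = "edge" if request["value"] > global_params["threshold"] else "terminal"
        result = edge.offload(request) if decision == "edge" else request["value"]
        self.history.append({"request": request, "decision": decision})
        return decision, result

terminal, cloud, edge = Terminal(), Cloud(), Edge()
d1, _ = terminal.handle_request(cloud, edge, {"value": 0.9})  # high-value flow
d2, _ = terminal.handle_request(cloud, edge, {"value": 0.2})  # low-value flow
```

The loop makes the division of labor visible: the cloud solves offline, the terminal decides online per request, and the edge server only executes offloaded work.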
By adopting the technical solution of one or more embodiments of the present specification, each time the terminal receives a flow request for the target service, it makes use of the historical execution decision result data and running state data preceding the current flow request. The terminal uploads these data to the cloud server, which solves, according to the preset end-edge cloud scheduling optimization algorithm, the global parameters used to determine the execution decision result of the current flow request. The terminal then determines the execution decision result data of the current flow request from the solved global parameters, and allocates computing power between the terminal and the edge server accordingly, obtaining the device that executes the current flow request. In this way, every flow request for the target service triggers a computing power allocation informed by historical flow request data, and the current computing power resource allocation and scheduling capability is opened to the service system. In addition, the cloud obtains constraint parameters for scheduling optimization by quantizing the running state data, and obtains global parameters for end-edge cloud scheduling optimization by optimizing the execution decision result data and the constraint parameters through the preset end-edge cloud scheduling optimization algorithm. The core of the end-edge cloud scheduling optimization algorithm is to maximize the flow value under the constraint of computing power.
The method provided by the technical scheme of one or more embodiments of the present disclosure can therefore allocate computing power differentially to flows of different value under limited computing power resources, so that the global service effect is better. Fig. 6 is a schematic block diagram of an end-edge cloud scheduling optimization apparatus according to an embodiment of the present disclosure. Referring to fig. 6, the end-edge cloud scheduling optimization apparatus includes:
The historical data obtaining module 610 obtains historical execution decision result data and running state data of at least one time of flow request before the current flow request of the target service, where the historical execution decision result data is used to determine a decision result of executing the historical flow request, and the running state data is used to record states of the terminal and the edge server under the historical flow request.
The running state data quantization module 620 determines constraint parameters for end-edge cloud scheduling optimization based on the running state data, where the constraint parameters include flow value estimated data and computing power estimated data.
The global parameter solving module 630 provides the constraint parameters to the cloud end, and obtains global parameters for performing end-edge cloud scheduling optimization, which are obtained by the cloud end through a preset end-edge cloud scheduling optimization algorithm in a set period and are obtained by performing optimization processing on the execution decision result data and the constraint parameters.
And the execution decision result data solving module 640 determines execution decision result data of the current flow request according to the acquired global parameters.
And the computing power allocation module 650 allocates computing power between the terminal and the edge server according to the execution decision result data of the current flow request, so as to obtain the device that executes the current flow request.
In one embodiment, the operational state data quantization module 620 includes:
the scene data quantization unit is used for extracting scene data of the target service contained in the running state data, and performing quantization processing on the scene data to obtain flow value estimated data;
and the executing-device computing power data quantization unit, which extracts the executing-device computing power data contained in the running state data and quantizes it to obtain computing power estimated data, where the executing-device computing power data describes the computing power of the devices that execute the flow request and the time consumed by transmission.
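A hedged sketch of what the two quantization units might compute — mapping raw scene data to a flow value estimate, and raw device data (capacity plus transmission time) to a computing power estimate. The scales, feature names and formulas are assumptions; the patent does not specify the quantization functions:

```python
def quantize_scene(scene):
    """Map scene data of the target service to flow value estimated data.
    Hypothetical rule: weight scene importance by user-activity level."""
    return round(scene["importance"] * scene["activity"], 3)

def quantize_device(device):
    """Map executing-device data (compute capacity and transmission time)
    to computing power estimated data.  Hypothetical rule: effective
    throughput discounted by transmission latency."""
    return round(device["flops"] / (1.0 + device["latency_ms"] / 100.0), 3)

flow_value = quantize_scene({"importance": 0.8, "activity": 0.5})
compute_est = quantize_device({"flops": 2.0, "latency_ms": 50.0})
```

Whatever the concrete formulas, the point of quantization is that both heterogeneous inputs end up as comparable scalars the linear optimization model can consume.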
In one embodiment, global parameter solving module 630 includes:
the flow constraint parameter calculation unit is used for carrying out linear operation on the execution decision result data and the flow value estimated data based on the execution decision result data, the flow value estimated data and a pre-trained linear optimization model to obtain flow constraint parameters;
the computing power constraint parameter calculation unit, which performs a linear operation on the execution decision result data and the computing power estimated data, based on the execution decision result data, the computing power estimated data and a pre-trained linear optimization model, to obtain computing power constraint parameters;
And the global parameter calculation unit, which determines the global parameters based on the flow constraint parameters and the computing power constraint parameters through the end-edge cloud scheduling optimization algorithm.
In one embodiment, the execution of decision result data solving module 640 includes:
the execution result data calculation unit is used for respectively determining terminal execution result data and edge server execution result data based on the global parameter through a preset execution decision weighting algorithm;
and the execution decision result data assignment unit is used for determining the execution decision result data of the current flow request based on the terminal execution result data and the edge server execution result data.
By adopting the technical solution of one or more embodiments of the present specification, each time the terminal receives a flow request for the target service, it makes use of the historical execution decision result data and running state data preceding the current flow request. The terminal uploads these data to the cloud server, which solves, according to the preset end-edge cloud scheduling optimization algorithm, the global parameters used to determine the execution decision result of the current flow request. The terminal then determines the execution decision result data of the current flow request from the solved global parameters, and allocates computing power between the terminal and the edge server accordingly, obtaining the device that executes the current flow request. In this way, every flow request for the target service triggers a computing power allocation informed by historical flow request data, and the computing power resource allocation and scheduling capability is opened to the service system. In addition, the cloud obtains constraint parameters for scheduling optimization by quantizing the running state data, and obtains global parameters for end-edge cloud scheduling optimization by optimizing the execution decision result data and the constraint parameters through the preset end-edge cloud scheduling optimization algorithm. The core of the end-edge cloud scheduling optimization algorithm is to maximize the flow value under the constraint of computing power.
The method provided by the technical scheme of one or more embodiments of the present disclosure can implement differential distribution of computing power for different value flows under the constraint of computing power resources, so that the global service effect is better.
It should be understood by those skilled in the art that the above end-edge cloud scheduling optimization device can be used to implement the end-edge cloud scheduling optimization method described above, and the detailed description thereof should be similar to that of the method section described above, so that the details are not repeated here for avoiding complexity.
Fig. 7 is a schematic block diagram of an electronic device in accordance with an embodiment of the present description. Referring to fig. 7, the electronic device may vary considerably in configuration or performance, and may include one or more processors 1001 and a memory 1002, where one or more application programs or data may be stored in the memory 1002. The memory 1002 may be transient storage or persistent storage. The application programs stored in the memory 1002 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the electronic device. Further, the processor 1001 may be configured to communicate with the memory 1002 and execute the series of computer-executable instructions in the memory 1002 on the electronic device. The electronic device may also include one or more power supplies 1003, one or more wired or wireless network interfaces 1004, one or more input/output interfaces 1005, and one or more keyboards 1006.
In particular, in this embodiment, an electronic device includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the electronic device, and the one or more programs configured to be executed by one or more processors include instructions for:
acquiring historical execution decision result data and running state data of a terminal under at least one flow request before the current flow request of a target service, wherein the historical execution decision result data is used for determining a decision result of executing the historical flow request, and the running state data is used for recording states of the terminal and an edge server under the historical flow request; based on the running state data, determining constraint parameters for end-edge cloud scheduling optimization, wherein the constraint parameters comprise flow value estimated data and calculation force estimated data; providing constraint parameters to the cloud end, and acquiring global parameters for performing end-edge cloud scheduling optimization, which are obtained by the cloud end through a preset end-edge cloud scheduling optimization algorithm to optimize execution decision result data and constraint parameters in a set period; determining execution decision result data of the current flow request according to the acquired global parameters; and performing calculation distribution on the terminal and the edge server according to the execution decision result data of the current flow request to obtain equipment for executing the current flow request.
Further, based on the method shown in fig. 1 to 5, one or more embodiments of the present disclosure further provide a storage medium, which is used to store computer executable instruction information, and in a specific embodiment, the storage medium may be a U disc, an optical disc, a hard disk, etc., where the computer executable instruction information stored in the storage medium can implement the following flow when executed by a processor:
acquiring historical execution decision result data and running state data of a terminal under at least one flow request before the current flow request of a target service, wherein the historical execution decision result data is used for determining a decision result of executing the historical flow request, and the running state data is used for recording states of the terminal and an edge server under the historical flow request; based on the running state data, determining constraint parameters for end-edge cloud scheduling optimization, wherein the constraint parameters comprise flow value estimated data and calculation force estimated data; providing constraint parameters to the cloud end, and acquiring global parameters for performing end-edge cloud scheduling optimization, which are obtained by the cloud end through a preset end-edge cloud scheduling optimization algorithm to optimize execution decision result data and constraint parameters in a set period; determining execution decision result data of the current flow request according to the acquired global parameters; and performing calculation distribution on the terminal and the edge server according to the execution decision result data of the current flow request to obtain equipment for executing the current flow request.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for one of the above-described storage medium embodiments, since it is substantially similar to the method embodiment, the description is relatively simple, and reference is made to the description of the method embodiment for relevant points.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements of method flows can now be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components. The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. 
In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing one or more embodiments of the present description.
One skilled in the art will appreciate that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is merely of one or more embodiments of the present specification and is not intended to limit them. Various modifications and variations of the one or more embodiments of this specification will be apparent to those skilled in the art. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the one or more embodiments of the present specification shall fall within the scope of the claims thereof.

Claims (10)

1. An end-edge cloud scheduling optimization method, comprising the following steps:
acquiring historical execution decision result data and running state data of a terminal under at least one traffic request prior to a current traffic request of a target service, wherein the historical execution decision result data is used for determining a decision result of executing a historical traffic request, and the running state data is used for recording states of the terminal and an edge server under the historical traffic request;
determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, wherein the constraint parameters comprise traffic value prediction data and computing power prediction data;
providing the constraint parameters to a cloud, and acquiring global parameters for end-edge cloud scheduling optimization, which are obtained by the cloud optimizing the execution decision result data and the constraint parameters within a set period through a preset end-edge cloud scheduling optimization algorithm;
determining execution decision result data of the current traffic request according to the acquired global parameters; and
performing computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain a device for executing the current traffic request.
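Purely as an illustrative sketch, and not part of the claims, the five steps of claim 1 can be outlined in code. All names (`RunningState`, `schedule`), the averaging used for the prediction data, and the score comparison are hypothetical placeholders for the claimed quantization and cloud-side optimization:

```python
from dataclasses import dataclass

@dataclass
class RunningState:
    """Recorded terminal/edge-server state under one historical traffic request."""
    traffic_value: float       # quantified value of serving the request
    edge_latency_ms: float     # edge execution + transmission time
    terminal_latency_ms: float # on-device execution time

def schedule(history_decisions, states, cloud_solver):
    """Sketch of the claimed method: derive constraint parameters from the
    running-state data, obtain global parameters from the cloud, and decide
    which device executes the current traffic request."""
    # Steps 1-2: fold running-state data into constraint parameters.
    traffic_pred = sum(s.traffic_value for s in states) / len(states)
    compute_pred = sum(min(s.edge_latency_ms, s.terminal_latency_ms)
                       for s in states) / len(states)
    # Step 3: the cloud optimizes past decisions + constraints into global params.
    global_params = cloud_solver(history_decisions, traffic_pred, compute_pred)
    # Steps 4-5: pick the executing device for the current request.
    if global_params["edge_score"] >= global_params["terminal_score"]:
        return "edge"
    return "terminal"
```

The cloud solver is deliberately left abstract here; the claims only require that it optimizes the decision result data and constraint parameters within a set period.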
2. The method of claim 1, wherein the determining constraint parameters for end-edge cloud scheduling optimization based on the running state data, the constraint parameters comprising traffic value prediction data and computing power prediction data, comprises:
extracting scene data of the target service contained in the running state data, and quantizing the scene data to obtain the traffic value prediction data; and
extracting execution-device computing power data contained in the running state data, and quantizing the execution-device computing power data to obtain the computing power prediction data, wherein the execution-device computing power data describes the computing power of a device executing a traffic request and time-consumption data for transmission.
3. The method of claim 2, wherein the scene data of the target service comprises one or more of traffic request scale data, size data of the model run by the traffic request, scene category data corresponding to the target service, and recommended refresh frequency information.
4. The method of claim 2, wherein the execution-device computing power data comprises one or more of edge computing power data of an edge server, time-consumption data of the edge server for executing the target service, time-consumption data of network transmission from the terminal to the edge server, and time-consumption data of the terminal for executing the target service.
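The quantization in claims 2-4 can be sketched as two small functions. The particular formulas below (value proportional to request scale and refresh frequency, cost as the cheaper of edge-plus-network versus on-device latency) are hypothetical; the claims specify only which inputs are quantized, not how:

```python
def quantize_scene(request_scale, model_size_mb, refresh_hz):
    """Hypothetical quantization of scene data (claim 3) into a single
    traffic-value prediction: larger, more frequently refreshed traffic
    on a smaller model is scored as more valuable."""
    return request_scale * refresh_hz / (1.0 + model_size_mb)

def quantize_compute(edge_exec_ms, net_ms, terminal_exec_ms):
    """Hypothetical quantization of execution-device computing power data
    (claim 4) into a computing-power prediction: the cheaper total
    latency of the two candidate devices (lower is better)."""
    edge_cost_ms = edge_exec_ms + net_ms  # edge execution + transmission
    return min(edge_cost_ms, terminal_exec_ms)
```

For example, an edge path costing 5 ms of execution plus 3 ms of transmission beats a 12 ms on-device run, so the prediction is 8 ms.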
5. The method of claim 1, wherein the providing the constraint parameters to the cloud and acquiring global parameters for end-edge cloud scheduling optimization, which are obtained by the cloud optimizing the execution decision result data and the constraint parameters within a set period through a preset end-edge cloud scheduling optimization algorithm, comprises:
performing a linear operation on the execution decision result data and the traffic value prediction data, based on the execution decision result data, the traffic value prediction data, and a pre-trained linear optimization model, to obtain traffic constraint parameters;
performing a linear operation on the execution decision result data and the computing power prediction data, based on the execution decision result data, the computing power prediction data, and the pre-trained linear optimization model, to obtain computing power constraint parameters; and
determining the global parameters based on the traffic constraint parameters and the computing power constraint parameters through the end-edge cloud scheduling optimization algorithm.
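The two linear operations of claim 5 can be illustrated as weighted dot products. The encoding of decision results as 0/1 and the scalar weights standing in for the "pre-trained linear optimization model" are assumptions for illustration only:

```python
def constraint_params(decisions, traffic_pred, compute_pred, w_traffic, w_compute):
    """Sketch of claim 5's linear operations: combine historical execution
    decision results (encoded 0 = terminal, 1 = edge) with per-request
    prediction data via pre-trained weights (here, hypothetical scalars)."""
    # Linear operation over decision results and traffic value predictions.
    dot_traffic = sum(d * t for d, t in zip(decisions, traffic_pred))
    # Linear operation over decision results and computing power predictions.
    dot_compute = sum(d * c for d, c in zip(decisions, compute_pred))
    return w_traffic * dot_traffic, w_compute * dot_compute
```

The cloud's scheduling optimization algorithm would then solve for global parameters subject to these two constraint values.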
6. The method of claim 1, wherein the determining execution decision result data of the current traffic request according to the acquired global parameters comprises:
determining terminal execution result data and edge server execution result data respectively based on the global parameters through a preset execution-decision weighting algorithm; and
determining the execution decision result data of the current traffic request based on the terminal execution result data and the edge server execution result data.
7. The method of claim 6, wherein the determining the execution decision result data of the current traffic request based on the terminal execution result data and the edge server execution result data comprises:
taking the better of the terminal execution result data and the edge server execution result data as the execution decision result data of the current traffic request.
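The weighting-then-selection of claims 6 and 7 can be sketched as scoring each device from the global parameters and keeping the better result. The value/cost fields and the single mixing weight `alpha` are hypothetical stand-ins for the preset execution-decision weighting algorithm:

```python
def decide(global_params, alpha=0.5):
    """Sketch of claims 6-7: compute terminal and edge execution result
    scores from the global parameters via a weighted value-minus-cost
    trade-off (weights are illustrative), then take the better one."""
    terminal_score = (alpha * global_params["terminal_value"]
                      - (1 - alpha) * global_params["terminal_cost"])
    edge_score = (alpha * global_params["edge_value"]
                  - (1 - alpha) * global_params["edge_cost"])
    if terminal_score >= edge_score:
        return ("terminal", terminal_score)
    return ("edge", edge_score)
```

The returned device name is the execution decision result; the system of claim 8 would then allocate computing power and, where the edge is selected, offload the computation accordingly.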
8. An end-edge cloud scheduling optimization system, comprising a terminal, an edge server, and a cloud server, wherein:
the terminal acquires historical execution decision result data and running state data under at least one traffic request prior to a current traffic request of a target service, and uploads the historical execution decision result data and the running state data to the cloud server, wherein the historical execution decision result data is used for determining a decision result of executing a historical traffic request, and the running state data is used for recording states of the terminal and the edge server under the historical traffic request;
the cloud server determines constraint parameters for end-edge cloud scheduling optimization based on the running state data, the constraint parameters comprising traffic value prediction data and computing power prediction data, obtains global parameters for end-edge cloud scheduling optimization by optimizing the execution decision result data and the constraint parameters within a set period through a preset end-edge cloud scheduling optimization algorithm, and sends the global parameters to the terminal;
the terminal determines execution decision result data of the current traffic request according to the acquired global parameters, and performs computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain a device for executing the current traffic request; and
the edge server performs computation offloading according to the execution decision result data of the current traffic request.
9. An end-edge cloud scheduling optimization apparatus, comprising:
a historical data acquisition module, configured to acquire historical execution decision result data and running state data of a terminal under at least one traffic request prior to a current traffic request of a target service, wherein the historical execution decision result data is used for determining a decision result of executing a historical traffic request, and the running state data is used for recording states of the terminal and an edge server under the historical traffic request;
a running-state-data quantization module, configured to determine constraint parameters for end-edge cloud scheduling optimization based on the running state data, the constraint parameters comprising traffic value prediction data and computing power prediction data;
a global parameter solving module, configured to provide the constraint parameters to a cloud and acquire global parameters for end-edge cloud scheduling optimization, which are obtained by the cloud optimizing the execution decision result data and the constraint parameters within a set period through a preset end-edge cloud scheduling optimization algorithm;
an execution-decision-result-data solving module, configured to determine execution decision result data of the current traffic request according to the acquired global parameters; and
a computing power allocation module, configured to perform computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain a device for executing the current traffic request.
10. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
acquire historical execution decision result data and running state data of a terminal under at least one traffic request prior to a current traffic request of a target service, wherein the historical execution decision result data is used for determining a decision result of executing a historical traffic request, and the running state data is used for recording states of the terminal and an edge server under the historical traffic request;
determine constraint parameters for end-edge cloud scheduling optimization based on the running state data, the constraint parameters comprising traffic value prediction data and computing power prediction data;
provide the constraint parameters to a cloud, and acquire global parameters for end-edge cloud scheduling optimization, which are obtained by the cloud optimizing the execution decision result data and the constraint parameters within a set period through a preset end-edge cloud scheduling optimization algorithm;
determine execution decision result data of the current traffic request according to the acquired global parameters; and
perform computing power allocation between the terminal and the edge server according to the execution decision result data of the current traffic request, to obtain a device for executing the current traffic request.
CN202311229342.6A 2023-09-21 2023-09-21 End-edge cloud scheduling optimization method, system and device Pending CN117294715A (en)

Publications (1)

Publication Number Publication Date
CN117294715A true CN117294715A (en) 2023-12-26

Family

ID=89243770



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination