CN115086331B - Cloud equipment scheduling method, device and system, electronic equipment and storage medium - Google Patents

Cloud equipment scheduling method, device and system, electronic equipment and storage medium

Info

Publication number
CN115086331B
CN115086331B (application CN202210861612.4A)
Authority
CN
China
Prior art keywords
information
node
terminal
cloud
resource
Prior art date
Legal status
Active
Application number
CN202210861612.4A
Other languages
Chinese (zh)
Other versions
CN115086331A (en)
Inventor
杜凯
庄坤
付哲
许文郁
王广芳
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210861612.4A priority Critical patent/CN115086331B/en
Publication of CN115086331A publication Critical patent/CN115086331A/en
Application granted granted Critical
Publication of CN115086331B publication Critical patent/CN115086331B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1012: Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H04L 67/1014: Server selection for load balancing based on the content of a request

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The cloud device scheduling method, apparatus, system, electronic device, and storage medium disclosed herein can, according to the resource capability information of edge nodes, accurately screen the edge nodes that match a terminal's clouding demand information, effectively improving the accuracy and timeliness of cloud resource scheduling. In addition, because the resource capability information of different edge nodes may differ, matching the clouding demand information against the resource capability information enables fast and accurate scheduling of edge nodes with heterogeneous resources and heterogeneous capabilities.

Description

Cloud equipment scheduling method, device and system, electronic equipment and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a cloud device scheduling method, device and system, an electronic device, and a storage medium.
Background
Cloud resource scheduling allocates cloud resources to requesting parties, so that the resource demands of more parties can be met flexibly, demand- and task-processing efficiency is improved, and cloud resource utilization is raised.
Cloud resource scheduling schemes in the related art suffer from poor scheduling accuracy; in particular, they cannot achieve fast and accurate scheduling of cloud resources with heterogeneous resources and heterogeneous capabilities.
Disclosure of Invention
Embodiments of the present application provide a cloud device scheduling method, apparatus, system, electronic device, and storage medium to achieve fast and accurate scheduling of cloud resources with heterogeneous resources and heterogeneous capabilities.
In a first aspect, an embodiment of the present application provides a cloud device scheduling method applied to a cloud scheduling server, where the cloud scheduling server is communicatively connected to at least one edge node and at least one terminal, each edge node includes at least one service resource deployed in the cloud, and the terminal implements function clouding by using the service resources in the edge node. The method includes:
in response to a node access request received from a terminal, determining clouding demand information of the terminal;
screening, according to resource capability information of the at least one edge node, a first target node that satisfies the clouding demand information from the at least one edge node, where the resource capability information of different edge nodes differs; and
sending first node information of the first target node to the terminal, the first node information instructing the terminal to access the first target node based on the first node information.
In a second aspect, an embodiment of the present application provides a cloud device scheduling apparatus applied to a cloud scheduling server, including: a demand determining module configured to determine clouding demand information of a terminal in response to a node access request received from the terminal;
a node screening module configured to screen, according to resource capability information of at least one edge node, a first target node that satisfies the clouding demand information from the at least one edge node, where the resource capability information of different edge nodes differs;
and an information transmission module configured to send first node information of the first target node to the terminal, the first node information instructing the terminal to access the first target node based on the first node information.
In a third aspect, an embodiment of the present application provides a cloud device scheduling system, including any one of the cloud scheduling servers, any one of the terminals, and any one of the edge nodes.
In a fourth aspect, an embodiment of the present application provides an electronic device including a memory, a processor, and a computer program stored on the memory, where the processor implements any one of the methods described above when executing the computer program.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements a method as in any of the above.
Compared with the related art, the application has the following advantages:
According to the embodiments of the present application, clouding demand information of a terminal is first determined in response to a node access request received from the terminal; then, according to the resource capability information of at least one edge node, a first target node that satisfies the clouding demand information is screened from the at least one edge node, where the resource capability information of different edge nodes differs; finally, node information of the first target node is sent to the terminal so that the terminal accesses the first target node based on that information. In this way, the edge node matching the terminal's clouding demand information, namely the first target node, can be accurately screened according to the resource capability information of the edge nodes, effectively improving the accuracy and timeliness of cloud resource scheduling. In addition, because the resource capability information of different edge nodes may differ, matching the clouding demand information against the resource capability information enables fast and accurate scheduling of edge nodes with heterogeneous resources and heterogeneous capabilities.
The above statements are merely for the purpose of summarizing the specification and are not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily appreciated by reference to the accompanying drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
Fig. 1 is a schematic view of a scenario of a cloud device scheduling method provided by the present application;
FIG. 2 is a flowchart of a cloud device scheduling method according to an embodiment of the present application;
FIG. 3 is a flow chart of a cloud device scheduling method according to another embodiment of the present application;
FIG. 4 is a block diagram of a cloud device scheduling apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing an embodiment of the application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the following description describes related technologies of the embodiments of the present application, and the following related technologies may be optionally combined with the technical solutions of the embodiments of the present application as alternatives, which all belong to the protection scope of the embodiments of the present application.
Some technical concepts to which the present application may relate will be described first.
Function clouding of the terminal: capabilities or functions such as computing and storage that would otherwise run on the terminal are moved from the terminal to the cloud, for example to an edge cloud. Function clouding combines the virtualization capability of the cloud with a low-delay network and low-cost hardware to provide high-quality service to users.
Edge cloud: the cloud computing architecture is built on an edge infrastructure based on the core technology and the edge computing capability in the cloud computing technology, has all-round service capabilities of computing, storing, safety and the like, and can form an end-to-end service architecture of 'cloud edge end three-body coordination' with a central cloud and an Internet of things terminal. By processing the network forwarding, storing, calculating, intelligent data analysis and other works on the edge cloud, the response time delay can be effectively reduced, cloud pressure can be lightened, bandwidth cost can be reduced, and the operations such as whole network scheduling, calculation power distribution and the like can be realized.
Edge node: aspects include, but are not limited to, one or more devices in an internet data center (INTERNET DATA CENTER, IDC) deployed on an edge cloud.
The cloud device scheduling method of the present application schedules a matching edge node for a terminal, thereby scheduling cloud resources. The method may be executed on a cloud scheduling server which, as shown in Fig. 1, is communicatively connected to at least one terminal and at least one edge node. The cloud scheduling server may include a signaling service component, a scheduling component, a preset database, a monitoring component, and an operation and maintenance component.
As shown in Fig. 1, the cloud scheduling server receives a node access request sent by a terminal through the signaling service component and parses the request with the scheduling component to determine the terminal's clouding demand information. The scheduling component then obtains the resource capability information of at least one edge node from the preset database, matches the clouding demand information against the resource capability information of each edge node, and determines from the matching result the edge node the terminal can access, namely the first target node described below. In addition, the cloud scheduling server can collect the resource capability information of each edge node through the monitoring component and the operation and maintenance component, which send the collected information to the preset database for storage.
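The flow just described can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation; all names (`NODE_DB`, `parse_request`, `schedule`) and fields are hypothetical stand-ins for the signaling, scheduling, and database components.

```python
# Minimal sketch of the Fig. 1 flow: a node access request is parsed into
# clouding-demand information, per-node resource capability is looked up in
# a stand-in for the preset database, and the matching node's information
# is returned. All names and fields are illustrative assumptions.

NODE_DB = {  # stands in for the preset database of resource capability
    "edge-01": {"cpu_arch": "ARM", "delay_ms": 10, "services": {"render"}},
    "edge-02": {"cpu_arch": "X86", "delay_ms": 40, "services": {"store"}},
}

def parse_request(request: dict) -> dict:
    """Derive clouding-demand info from the request (task type, limits)."""
    return {"cpu_arch": request.get("cpu_arch"),
            "max_delay_ms": request.get("max_delay_ms", 50),
            "service": request["service"]}

def schedule(request: dict):
    """Return first-node information for a node satisfying the demand."""
    demand = parse_request(request)
    for name, cap in NODE_DB.items():
        if (cap["delay_ms"] <= demand["max_delay_ms"]
                and demand["service"] in cap["services"]
                and demand["cpu_arch"] in (None, cap["cpu_arch"])):
            return {"node_name": name, **cap}  # first node information
    return None  # no edge node satisfies the clouding demand

target = schedule({"service": "render", "max_delay_ms": 20})
```

A request demanding the `"store"` service with `max_delay_ms=20` would return `None` here, since the only storing node has 40 ms delay.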
The above resource capability information may include, but is not limited to, at least one of: resource distribution information, network information, and service capability deployment information of the edge node. Illustratively, the resource distribution information may include, but is not limited to, the geographic location of the edge node, the name of the edge node, the central processing unit (CPU) of the edge node and its architecture information, the graphics processing unit (GPU) of the edge node, the memory capacity of the edge node, the block storage capacity of the edge node, the object storage capacity of the edge node, the file storage capacity, the number of files currently stored, digital information of the edge node, computing-power load information, and the like. The architecture information characterizes the specific architecture of the CPU, which may be an ARM architecture, an architecture based on the X86 instruction set, and so on. The CPUs of different edge nodes can adopt different architectures, forming edge nodes with heterogeneous resources.
The resource distribution information may also include available resource information, which may include, but is not limited to, remaining memory capacity, number of remaining storable files, input/output resources available for storage, block storage remaining capacity, remaining object storage capacity, health of the cloud scheduling server, and the like.
The network information may include, but is not limited to: network bandwidth, network delay, packet loss information, code rate, bandwidth type, operators, available network resource information, etc. Wherein the available network resource information may include, but is not limited to: residual bandwidth information, etc.
The service capability deployment information may include, but is not limited to, services that the edge node has deployed, types of terminals that can service, services that can be provided to the terminals, and the like. Wherein the terminal types that can be serviced can be associated with streaming protocol types supported by the edge node. The service capability deployment information can reflect the service status of the edge node.
In addition, the resource capability information may further include bandwidth-cost-related information, computing-power-cost-related information, storage-cost-related information, and the like, so that the edge node with the lowest bandwidth, computing-power, or storage cost can be matched according to that information.
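The capability categories listed above can be pictured as one record per edge node. The sketch below is an assumption for illustration only; the patent names the categories (resource distribution, network, service deployment, cost) but not these field names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeNodeCapability:
    """Illustrative per-node resource-capability record.

    Field names are hypothetical; the patent specifies only the
    categories of information, not a concrete schema.
    """
    node_name: str
    cpu_arch: str               # heterogeneous resources: "ARM", "X86", ...
    remaining_memory_gb: float  # available-resource information
    bandwidth_mbps: float       # network information
    delay_ms: float             # network information
    operator: str               # network information
    deployed_services: frozenset  # service-capability deployment information
    bandwidth_cost: float       # cost-related information

cap = EdgeNodeCapability(
    node_name="edge-01", cpu_arch="ARM", remaining_memory_gb=16.0,
    bandwidth_mbps=500.0, delay_ms=12.0, operator="carrier-a",
    deployed_services=frozenset({"transcode"}), bandwidth_cost=0.02,
)
```

Such records would be what the monitoring and operation-and-maintenance components write into the preset database.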
The operation and maintenance information in the resource capability information can be acquired by the operation and maintenance component, and the available network resource information in the resource capability information and the like can be acquired by the monitoring component. As shown in fig. 1, the operation and maintenance component is in communication connection with an operation and maintenance agent in the edge node, and the operation and maintenance agent collects operation and maintenance information of the edge node and sends the operation and maintenance information to the operation and maintenance component. The monitoring component is in communication connection with a monitoring agent in the edge node, and the monitoring agent collects information such as available network resource information of the edge node and sends the information to the monitoring component.
Illustratively, the operation and maintenance information in the resource capability information may include, but is not limited to, the geographic location of the edge node, the name of the edge node, network bandwidth, bandwidth cost, the CPU of the edge node and its architecture information, the GPU of the edge node, the memory capacity of the edge node, the block storage capacity of the edge node, the object storage capacity of the edge node, the file storage capacity, service capability deployment information, and the like. The operation and maintenance information may be stored in the preset database through a resource onboarding flow, for example.
By way of example, the information collected by the monitoring agent may include, but is not limited to, remaining bandwidth information, computing-power load information, remaining memory capacity, the number of remaining storable files, input/output resources available for storage, the health of the cloud scheduling server, and the like.
In addition, log data of the cloud scheduling server can also be stored in the preset database. For example, log data can be collected at a preset time interval, then cleaned, summarized, and computed according to task or function requirements, and synchronized to the preset database for use in cloud device scheduling.
As shown in Fig. 1, the CPU in an edge node may adopt an ARM architecture, an X86 architecture, or the like, and the edge node may further include service resources such as a gateway. As also shown in Fig. 1, other resources of the node may include graphics processing units (GPUs) and the like. These service resources are combined to realize functions or capabilities such as computation, storage, and device simulation.
In summary, the cloud scheduling server executes the cloud device scheduling method and is communicatively connected to at least one edge node and at least one terminal, where the terminal implements function clouding by using the service resources in an edge node. The edge node includes at least one service resource deployed in the cloud; to reduce delay and improve scheduling effectiveness, these may be service resources deployed on an edge cloud, for example an ARM server, a graphics processing unit (GPU), or a NET gateway deployed on the edge cloud.
The cloud device scheduling method of the present application is described below.
Fig. 2 is a flowchart of a cloud device scheduling method according to an embodiment of the present application, which is applied to the cloud scheduling server, and may include the following steps:
s201, in response to a node access request received from a terminal, cloud requirement information of the terminal is determined.
When the terminal needs to use an edge node to realize a certain function or execute a certain task, it generates a node access request according to information such as the task type and the terminal's IP (Internet Protocol) address, and sends the node access request to the cloud scheduling server.
After receiving the node access request, the cloud scheduling server parses it to obtain information such as the task type and the terminal's IP address, and then determines the clouding demand information from that information. The clouding demand information describes the capability and/or resource information an edge node must possess to execute the task successfully; specifically, it may include, but is not limited to, at least one of resource demand information, network demand information, and service capability demand information.
Illustratively, the resource demand information may include, but is not limited to, at least one of: requirement information on the CPU architecture, requirement information on remaining memory capacity, requirement information on the GPU, requirement information on remaining block storage capacity, requirement information on the number of remaining storable files, and requirement information on remaining object storage capacity.
Illustratively, the network demand information may include, but is not limited to, at least one of: bandwidth type requirement information, code rate requirement information, network delay requirement information, geographic position requirement information and operator requirement information. The bandwidth types include a type corresponding to an uplink bandwidth and a type corresponding to a downlink bandwidth. Because tasks of different task types have different demands on network delay, the demand information of the network delay can be determined through the task types. Since the geographic location of the terminal and the operator located at the geographic location can be determined according to the IP address, the requirement information of the geographic location and the requirement information of the operator can be determined according to the IP address of the terminal.
Illustratively, the service capability requirement information includes, but is not limited to, at least one of: requirement information on the services to be deployed and requirement information on the terminal types that can be served.
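The three demand categories above (resource, network, service capability) can be grouped into one clouding-demand structure. The shape and keys below are illustrative assumptions, not a schema defined by the patent.

```python
# Hypothetical shape of clouding-demand information, grouping the three
# demand categories described above; all keys are illustrative.
clouding_demand = {
    "resource": {"cpu_arch": "ARM", "min_free_memory_gb": 4},
    "network": {"bandwidth_type": "downlink", "max_delay_ms": 30,
                "region": "cn-hangzhou", "operator": "carrier-a"},
    "service": {"required_service": "device-emulation",
                "terminal_type": "mobile"},
}

def flatten(demand: dict) -> dict:
    """Flatten the grouped demand for simple key-by-key matching."""
    return {k: v for group in demand.values() for k, v in group.items()}
```

Flattening is just one convenient way to compare a demand against a node's capability record field by field.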
S202, screening a first target node meeting the clouding requirement information from at least one edge node according to the resource capability information of the at least one edge node; wherein the resource capability information of different edge nodes is different.
The resource capability information of each edge node is stored in the preset database. It can first be retrieved from the preset database and then matched against the clouding demand information; according to the matching result, an edge node that satisfies the clouding demand information is taken as the first target node, i.e., an edge node the terminal can access.
For example, according to the network bandwidth, network delay, code rate and other resource capability information of each edge node, an edge node capable of meeting the cloud requirement information such as the requirement information of bandwidth type, the requirement information of code rate, the requirement information of network delay and the like can be screened from each edge node to serve as the first target node.
Because resources such as the CPU architecture can differ between edge nodes, and their service capability deployment information can also differ, the resource capability information of different edge nodes can differ, forming a set of edge nodes with heterogeneous resources and heterogeneous capabilities. By matching each node's resource capability information against the clouding demand information, the edge nodes that can satisfy the clouding demand information can be screened from all such heterogeneous edge nodes, improving the accuracy of cloud resource scheduling and achieving accurate scheduling of cloud resources with heterogeneous resources and heterogeneous capabilities.
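Step S202 can be sketched as a filter over heterogeneous nodes; note that several nodes may satisfy the demand at once. The functions and fields below are illustrative assumptions, not the patented implementation.

```python
# Sketch of step S202: screen every edge node whose resource capability
# satisfies the clouding demand. Field names are illustrative. Several
# nodes may qualify, yielding a plurality of first target nodes.

def satisfies(cap: dict, demand: dict) -> bool:
    """Match one node's capability against the clouding demand."""
    return (cap["cpu_arch"] == demand["cpu_arch"]
            and cap["delay_ms"] <= demand["max_delay_ms"]
            and demand["service"] in cap["services"])

def screen_first_targets(nodes: dict, demand: dict) -> list:
    """Return the names of all edge nodes satisfying the demand."""
    return [name for name, cap in nodes.items() if satisfies(cap, demand)]

nodes = {
    "edge-arm-1": {"cpu_arch": "ARM", "delay_ms": 10, "services": {"render"}},
    "edge-arm-2": {"cpu_arch": "ARM", "delay_ms": 25, "services": {"render"}},
    "edge-x86-1": {"cpu_arch": "X86", "delay_ms": 8,  "services": {"render"}},
}
demand = {"cpu_arch": "ARM", "max_delay_ms": 30, "service": "render"}
targets = screen_first_targets(nodes, demand)  # both ARM nodes qualify
```

The X86 node is excluded despite its low delay, illustrating how heterogeneous architectures are screened by capability matching rather than by a single metric.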
S203, first node information of the first target node is sent to the terminal, the first node information instructing the terminal to access the first target node based on the first node information.
The first node information includes information required for the terminal to access the first target node, for example, the first node information may include, but is not limited to, information such as a name of the first target node, an identifier of the first target node, a network address of the first target node, and the like.
The cloud scheduling server for executing the cloud equipment scheduling method can be deployed in a high-availability data center, and can simultaneously support the scheduling demands of a plurality of edge nodes.
In some embodiments, if a plurality of first target nodes are determined according to the cloud device scheduling method of the above embodiments, one first target node may be selected at random as the node the terminal finally accesses, or an edge node with lower cost, stronger stability, and/or better user experience may be selected according to a certain policy.
Illustratively, the node to which the terminal ultimately accesses may be selected from a plurality of first target nodes using the steps of:
First, index scores of each first target node are determined for at least one preset index; the preset indexes may include, but are not limited to, a cost index, a stability index, and an experience index.
Then, for each of at least some of the first target nodes, a scheduling score of the first target node is determined from its index scores and the preset weight corresponding to each preset index; the scheduling score may be computed by weighted summation.
Next, the first target node whose scheduling score satisfies a preset condition is taken as the second target node. If a higher scheduling score indicates a better first target node, for example better cost, stability, and user experience, the first target node with the highest scheduling score may be taken as the second target node, the node the terminal finally accesses.
Finally, second node information of the second target node is sent to the terminal so that the terminal accesses the second target node based on it; the second node information includes the information the terminal needs to access the second target node, for example the name, identifier, and network address of the second target node.
Illustratively, the cost index may include, but is not limited to, at least one of a bandwidth cost sub-index, a computational cost sub-index, and a storage cost sub-index. Wherein, the index score of the bandwidth cost sub-index can be calculated or determined according to the associated information of the bandwidth cost of the edge node included in the resource capability information; the index score of the computational effort cost sub-index may be calculated or determined from the associated information of the computational effort cost of the edge node included in the resource capability information; the index score of the storage cost sub-index may be calculated or determined from the associated information of the storage cost of the edge node included in the resource capability information.
The stability index may include, but is not limited to, at least one of a computing-power balance sub-index, a resource dispersion sub-index, and a distribution link stability sub-index. The index score of the computing-power balance sub-index can be determined from, in the resource capability information, the services that can be provided to the terminal, the geographic location of the edge node, the edge node's CPU and its architecture information, and the like; illustratively, each type of information is given a preset weight, and the index score is obtained by weighted summation of each type of information's value and weight. The index score of the resource dispersion sub-index can be determined from information such as the geographic location of the edge node in the resource capability information. The index score of the distribution link stability sub-index can be determined from information such as network bandwidth and packet-loss information in the resource capability information.
The experience index described above may include, but is not limited to, at least one of: a geographic distance sub-index, an operator sub-index, and a network delay sub-index. The closer the geographic distance, the higher the task-processing efficiency, the faster the information response, and the better the user experience; the geographic distance sub-index is therefore one of the experience indexes. Specifically, the geographic distance between the terminal and the edge node can be determined from the terminal's location and the geographic location information of the edge node in the resource capability information, and the index score of the geographic distance sub-index can be determined from that distance. Illustratively, the farther the geographic distance, the lower the index score of the geographic distance sub-index; the closer the distance, the higher the score.
The network service quality of different operators differs; using the network of an operator with good service quality for information transmission during task processing can effectively improve the speed and accuracy of task processing, and thus the user experience. Therefore, the operator sub-index is used as one of the experience indexes. Illustratively, the better the operator's network service quality, the higher the index score of the operator sub-index; the worse the service quality, the lower the index score.
The longer the network delay, the lower the task processing efficiency and the worse the user experience; therefore, the network delay sub-index is used as one of the experience indexes. Illustratively, the longer the network delay, the lower the index score of the network delay sub-index; the shorter the network delay, the higher the index score.
By means of the above steps, a second target node with lower cost, better stability, and higher experience can be selected from the plurality of first target nodes, thereby saving resources and effectively improving the user experience and the accuracy of cloud resource scheduling.
It should be noted that the specific indexes and sub-indexes included in the preset indexes may be flexibly set according to the requirements of the actual scenario, for example by adding or removing indexes; for instance, a performance index may be added to the preset indexes. Likewise, the information specifically included in the clouding demand information and the resource capability information may be flexibly set according to the requirements of the actual scenario, for example by adding or removing some information.
Fig. 3 is a flowchart of a cloud device scheduling method according to another embodiment of the present application.
Step one, receiving a node access request sent by a terminal, and parsing the node access request to obtain information such as the task type and the IP address of the terminal; then determining the clouding demand information according to the parsed task type, the IP address of the terminal, and other information.
Wherein the clouding demand information includes, but is not limited to, resource demand information, network demand information, service capability demand information.
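Step one above can be sketched as follows. All field names and the task-type-to-demand mapping are hypothetical, introduced only to illustrate how a parsed request could be turned into resource, network, and service capability demand information.

```python
def parse_access_request(request: dict) -> dict:
    """Parse a node access request into clouding demand information."""
    task_type = request["task_type"]
    terminal_ip = request["terminal_ip"]

    # Illustrative mapping from task type to demand profiles.
    demand_profiles = {
        "cloud_game": {"cpu_cores": 8, "bandwidth_mbps": 50, "services": {"gpu_render"}},
        "cloud_desktop": {"cpu_cores": 4, "bandwidth_mbps": 20, "services": {"remote_display"}},
    }
    profile = demand_profiles.get(
        task_type, {"cpu_cores": 2, "bandwidth_mbps": 10, "services": set()}
    )
    return {
        "resource_demand": {"cpu_cores": profile["cpu_cores"]},
        "network_demand": {"bandwidth_mbps": profile["bandwidth_mbps"]},
        "service_demand": profile["services"],
        "terminal_ip": terminal_ip,  # may also be used to infer location/operator
    }
```

In practice the IP address could additionally be resolved to a region and operator, which feed the experience indexes described earlier.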
Step two, screening all the edge nodes according to the resource capability information of each edge node to obtain first target nodes meeting the clouding demand information; and, where there are a plurality of first target nodes, performing a multi-objective optimization operation to screen out the second target node finally accessed by the terminal.
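The screening part of step two amounts to a capability-covers-demand filter. The sketch below assumes simplified per-node fields (free CPU cores, free bandwidth, a set of deployed services); the real resource capability information described in this embodiment is richer.

```python
def screen_first_targets(nodes: list, demand: dict) -> list:
    """Return the edge nodes whose resource capability covers every demand item."""
    matches = []
    for node in nodes:
        if (node["free_cpu_cores"] >= demand["cpu_cores"]
                and node["free_bandwidth_mbps"] >= demand["bandwidth_mbps"]
                and demand["services"] <= node["deployed_services"]):  # set inclusion
            matches.append(node)
    return matches
```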
In the multi-objective optimization process, the index score of each preset index, such as the cost index, the stability index, and the experience index, is calculated for each first target node; a weighted summation is then performed according to each index score of the first target node and the preset weight corresponding to each preset index to obtain the scheduling score of the first target node. The first target node with the highest scheduling score is selected as the second target node.
Wherein the cost index may include, but is not limited to: a bandwidth cost sub-index, a computing cost sub-index, and a storage cost sub-index; the stability index may include, but is not limited to: a computing power balance sub-index, a resource dispersion sub-index, and a distribution link stability sub-index; the experience index may include, but is not limited to: a geographic distance sub-index, an operator sub-index, and a network delay sub-index.
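The weighted-summation selection described above can be sketched as follows. The weights and score values are illustrative, not prescribed by the embodiment; the only requirement stated is that the node with the highest scheduling score becomes the second target node.

```python
def schedule_score(index_scores: dict, weights: dict) -> float:
    """Weighted sum of per-index scores for one candidate node."""
    return sum(weights[name] * score for name, score in index_scores.items())

def pick_second_target(candidates: dict, weights: dict) -> str:
    """Select the first target node with the highest scheduling score."""
    return max(candidates, key=lambda node: schedule_score(candidates[node], weights))
```

For example, with weights 0.3 / 0.4 / 0.3 for cost, stability, and experience, a node that is slightly more expensive but much more stable can outscore a cheaper, less stable one.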
In some embodiments, the cloud scheduling server acquires information such as available network resource information of the edge node from a monitoring agent of the edge node by using a monitoring component; the cloud scheduling server acquires the operation and maintenance information of the edge node from the operation and maintenance agent of the edge node by using the operation and maintenance component. And the cloud scheduling server stores the acquired resource capacity information into a preset database. The monitoring agent and the operation and maintenance agent are information acquisition agent components built in the edge nodes.
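A minimal sketch of this collection path, under the assumption that each agent's report is a flat dictionary and the "preset database" is modeled as an in-memory mapping; real deployments would use a persistent store and defined report schemas.

```python
class CloudSchedulingServer:
    """Merges monitoring and operation-and-maintenance agent reports per node."""

    def __init__(self):
        self.db = {}  # node_id -> resource capability information

    def collect(self, node_id: str, monitor_report: dict, ops_report: dict) -> dict:
        """Merge both agent reports into one capability record and store it."""
        record = {**monitor_report, **ops_report}
        self.db[node_id] = record
        return record
```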
After cloud resource scheduling has run for a period of time, the cloud scheduling server has received a plurality of node access requests, referred to as historical node access requests. According to these historical node access requests, historical clouding demand information can be determined; this is statistical information about demand and can generally represent the clouding demand of the terminals. According to the resource capability information of each current edge node and the historical clouding demand information, it can be determined whether the currently deployed edge nodes can meet the clouding demand of the terminals. A node operation for the edge nodes is then determined according to that judgment result, where the node operation includes at least one of creating an edge node, revoking an edge node, and updating the service capability of an edge node. Finally, the corresponding node operation is executed on the edge node.
By way of example, whether each currently deployed edge node can meet the clouding requirement of the terminal may be determined according to available resource information, available network resource information, service capability deployment information, and historical clouding requirement information in the resource capability information of each currently deployed edge node.
When the resource capacity of the currently deployed edge nodes considerably exceeds the clouding demand of the terminals, each currently deployed edge node can meet the clouding demand, but some edge nodes are never called and sit idle, wasting edge node resources; the node operation performed may therefore be revoking edge nodes. By removing some edge nodes, the waste of cloud resources can be effectively reduced.
When the clouding demand of the terminals exceeds the resource capacity of all currently deployed edge nodes, the currently deployed edge nodes cannot meet the clouding demand, and some terminals cannot use an edge node to execute tasks or realize certain functions, resulting in poor timeliness of cloud resource scheduling; the node operation performed at this time may therefore be creating edge nodes. Using more edge nodes to execute tasks or realize functions for the terminals can effectively improve the timeliness of cloud resource scheduling.
When the clouding demand of the terminals does not match the available services or capabilities of the currently deployed edge nodes, the currently deployed edge nodes cannot meet the clouding demand, resulting in poor timeliness and inaccurate cloud resource scheduling; the node operation performed may therefore be updating the service capability of the edge nodes. The updated services or capabilities of the edge nodes better match those required by the terminals, and using such edge nodes to execute tasks or realize functions for the terminals can effectively improve the timeliness and accuracy of cloud resource scheduling.
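The three cases above can be summarized as a capacity-planning decision. The sketch below compares aggregate demand against aggregate capacity along a single dimension and uses an idle-ratio threshold for revocation; both simplifications are assumptions made for illustration, since the embodiment compares the full resource capability information against the historical clouding demand.

```python
def decide_node_operation(total_capacity: float, total_demand: float,
                          services_match: bool, idle_ratio: float = 0.5) -> str:
    """Choose a node operation: 'create', 'update', 'revoke', or 'none'."""
    if total_demand > total_capacity:
        return "create"    # demand exceeds capacity: deploy more edge nodes
    if not services_match:
        return "update"    # capability mismatch: update edge node services
    if total_demand < total_capacity * idle_ratio:
        return "revoke"    # large idle surplus: withdraw some edge nodes
    return "none"          # deployment matches demand: no operation needed
```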
For example, the above-mentioned historical node access request for determining the historical clouding requirement information may be a node access request received by the cloud scheduling server in a historical period of a preset duration.
For example, the cloud scheduling server may issue operation information of the node operation to the operation and maintenance component, and the operation and maintenance component performs an operation corresponding to the operation information.
By means of the cloud device scheduling method of this embodiment, matching the resource capability information of the edge nodes against the clouding demand information enables traffic access scheduling as well as scheduling in resource-heterogeneous and capability-heterogeneous scenarios. The method can also accommodate the protocols used by different terminal types and thus support cloud resource scheduling for various kinds of terminals, including but not limited to cloud mobile phones, cloud desktops, cloud games, cloud set-top boxes, video monitoring, and other scenarios. It realizes multi-level, multi-strategy, complex-scenario cloud resource scheduling in terminal function clouding scenarios, such as scheduling of audio and video streaming capability, gateway capability, computing processing capability, and fused capabilities. Because data collection and data matching cover various kinds of information, such as computing, storage, and network, at a fine granularity close to the specific task or function dimension, scheduling accuracy is improved, and scheduling with performance and cost advantages can be provided for terminal function clouding scenarios.
Corresponding to the application scenario and the method provided by the embodiments of the present application, an embodiment of the present application further provides a cloud device scheduling apparatus. Fig. 4 is a block diagram of a cloud device scheduling apparatus according to an embodiment of the present application; the apparatus is applied to a cloud scheduling server and may include:
The requirement determining module 401 is configured to determine clouding requirement information of a terminal in response to a node access request received from the terminal.
A node screening module 402, configured to screen, according to resource capability information of at least one edge node, a first target node that meets the clouding requirement information from the at least one edge node; wherein the resource capability information of different edge nodes is different.
An information transmitting module 403, configured to send first node information of the first target node to the terminal, where the first node information is used to instruct the terminal to access the first target node based on the first node information.
In some embodiments, the clouding requirement information includes at least one of: resource demand information, network demand information, and service capability demand information; and/or the resource capability information includes at least one of: resource distribution information, network information, and service capability deployment information of the edge node.
In some embodiments, the node screening module 402 is further configured to:
under the condition that a plurality of first target nodes are provided, respectively determining index scores of each first target node aiming at least one preset index; determining a scheduling score of each first target node in at least part of the first target nodes according to at least one index score of the first target node and preset weights corresponding to preset indexes; taking the first target node with the scheduling score meeting the preset condition as a second target node;
The information transmitting module 403 is further configured to send second node information of the second target node to the terminal, where the second node information is used to instruct the terminal to access the second target node based on the second node information.
In some embodiments, the preset indicators include at least one of: cost index, stability index and experience index;
The cost indicator includes at least one of: bandwidth cost sub-index, calculation cost sub-index, storage cost sub-index;
The stability index comprises at least one of the following: a computing power balance sub-index and a resource dispersion sub-index;
the experience index comprises at least one of the following: a geographic distance sub-index, an operator sub-index, and a network delay sub-index.
In some embodiments, the requirement determining module 401, when determining clouding requirement information of the terminal in response to a node access request received from the terminal, is configured to:
responding to a node access request received from a terminal, and analyzing the node access request to obtain a task type and/or an IP address of the terminal;
And determining the clouding requirement information of the terminal according to the task type and/or the IP address of the terminal.
In some embodiments, further comprising a storage module 404 to:
receiving resource capacity information sent by the edge node;
storing the resource capacity information into a preset database;
the edge node collects the resource capacity information through a built-in information collection agent component.
In some embodiments, the node operation module 405 is further configured to:
acquiring at least one history node access request received in a history time period of a preset duration;
according to the at least one history node access request, determining history clouding demand information;
Determining operation information aiming at the edge nodes according to the resource capacity information of each current edge node and the historical clouding demand information;
executing corresponding node operation on the edge node according to the operation information; wherein the node operation includes at least one of an operation of creating an edge node, an operation of revoking an edge node, and an operation of updating a service capability of an edge node.
In some embodiments, the edge node includes at least one service resource deployed on an edge cloud.
For the functions of each module in each apparatus of the embodiments of the present application, reference may be made to the corresponding descriptions in the above methods; they have corresponding beneficial effects, which are not repeated here.
Corresponding to the application scenario and the technical solution of the method or apparatus provided by the embodiments of the present application, an embodiment of the present application further provides a cloud device scheduling system, as shown in fig. 1, including the cloud scheduling server, the terminal, and the edge node of the foregoing embodiments. Specific functions or operations of the cloud scheduling server, the terminal, and the edge node are not described in detail here.
Fig. 5 is a block diagram of an electronic device for implementing an embodiment of the application. As shown in fig. 5, the electronic device includes: memory 510 and processor 520, memory 510 stores a computer program executable on processor 520. The processor 520, when executing the computer program, implements the methods of the above-described embodiments. The number of memories 510 and processors 520 may be one or more.
The electronic device further includes:
a communication interface 530, configured to communicate with external devices and perform interactive data transmission.
If the memory 510, the processor 520, and the communication interface 530 are implemented independently, they may be connected to each other and communicate with each other through a bus. The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may communicate with each other through internal interfaces.
The embodiment of the application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the method provided in the embodiment of the application.
An embodiment of the present application also provides a chip comprising a processor, configured to call and run instructions stored in a memory, so that a communication device provided with the chip executes the method provided by the embodiments of the present application.
An embodiment of the present application also provides another chip, comprising: an input interface, an output interface, a processor, and a memory, connected through an internal connection path; the processor is configured to execute code in the memory, and when the code is executed, the processor executes the method provided by the embodiments of the present application.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the advanced RISC machines (Advanced RISC Machines, ARM) architecture.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be volatile or nonvolatile, or may include both volatile and nonvolatile memory. The nonvolatile memory may include read-only memory (Read-Only Memory, ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory, among others. The volatile memory may include random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, for example static RAM (Static RAM, SRAM), dynamic RAM (Dynamic Random Access Memory, DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (Double Data Rate SDRAM, DDR SDRAM), enhanced SDRAM (Enhanced SDRAM, ESDRAM), synchlink DRAM (Synchlink DRAM, SLDRAM), and direct Rambus RAM (Direct Rambus RAM, DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are fully or partially produced. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order, depending on the functions involved.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be performed by hardware instructed by a program; the program, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (11)

1. The cloud equipment scheduling method is characterized by being applied to a cloud scheduling server, wherein the cloud scheduling server is in communication connection with at least one edge node and at least one terminal, the edge node comprises at least one service resource deployed in a cloud, and the terminal realizes function clouding by utilizing the service resource in the edge node; comprising the following steps:
responding to a node access request received from a terminal, and determining cloud demand information of the terminal;
Screening a first target node meeting the clouding demand information from at least one edge node according to the resource capacity information of the at least one edge node; wherein, the resource capability information of different edge nodes is different;
Transmitting first node information of the first target node to the terminal, wherein the first node information is used for indicating the terminal to access the first target node based on the first node information;
the clouding demand information comprises at least one of the following: resource demand information, network demand information, service capability demand information;
Further comprises:
acquiring at least one history node access request received in a history time period of a preset duration;
according to the at least one history node access request, determining history clouding demand information;
Determining operation information aiming at the edge nodes according to the resource capacity information of each current edge node and the historical clouding demand information;
executing corresponding node operation on the edge node according to the operation information;
the determining cloud requirement information of the terminal in response to a node access request received from the terminal comprises the following steps:
Responding to a node access request received from a terminal, and analyzing the node access request to obtain a task type and/or an Internet Protocol (IP) address of the terminal;
And determining the clouding requirement information of the terminal according to the task type and/or the IP address of the terminal.
2. The method of claim 1, wherein
the resource capability information includes at least one of: resource distribution information, network information, and service capability deployment information of the edge node.
3. The method according to claim 1 or 2, further comprising:
under the condition that a plurality of first target nodes are provided, respectively determining index scores of each first target node aiming at least one preset index;
Determining a scheduling score of each first target node in at least part of the first target nodes according to at least one index score of the first target node and preset weights corresponding to preset indexes;
taking the first target node with the scheduling score meeting the preset condition as a second target node;
and sending second node information of the second target node to the terminal, wherein the second node information is used for indicating the terminal to access the second target node based on the second node information.
4. The method according to claim 3, wherein the preset indexes comprise at least one of: a cost index, a stability index, and an experience index;
The cost indicator includes at least one of: bandwidth cost sub-index, calculation cost sub-index, storage cost sub-index;
The stability index comprises at least one of the following: a computing power balance sub-index and a resource dispersion sub-index;
the experience index comprises at least one of the following: a geographic distance sub-index, an operator sub-index, and a network delay sub-index.
5. The method according to claim 1 or 2, further comprising:
receiving resource capacity information sent by the edge node;
storing the resource capacity information into a preset database;
the edge node collects the resource capacity information through a built-in information collection agent component.
6. The method of claim 1 or 2, wherein the node operation comprises at least one of an operation to create an edge node, an operation to revoke an edge node, and an operation to update a service capability of an edge node.
7. The method according to claim 1 or 2, wherein the edge node comprises at least one service resource deployed on an edge cloud.
8. The cloud equipment scheduling device is characterized by being applied to a cloud scheduling server and comprising:
the demand determining module is used for responding to a node access request received from a terminal and determining cloud demand information of the terminal;
The node screening module is used for screening a first target node meeting the clouding requirement information from at least one edge node according to the resource capacity information of the at least one edge node; wherein, the resource capability information of different edge nodes is different;
An information transmission module, configured to send first node information of the first target node to the terminal, where the first node information is used to instruct the terminal to access the first target node based on the first node information;
the clouding demand information comprises at least one of the following: resource demand information, network demand information, service capability demand information;
The device is also for:
acquiring at least one history node access request received in a history time period of a preset duration;
according to the at least one history node access request, determining history clouding demand information;
Determining operation information aiming at the edge nodes according to the resource capacity information of each current edge node and the historical clouding demand information;
executing corresponding node operation on the edge node according to the operation information;
a demand determination module for:
Responding to a node access request received from a terminal, and analyzing the node access request to obtain a task type and/or an Internet Protocol (IP) address of the terminal;
And determining the clouding requirement information of the terminal according to the task type and/or the IP address of the terminal.
9. A cloud device scheduling system, characterized by comprising a cloud scheduling server, a terminal and an edge node performing the method of any of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing the method of any one of claims 1-7 when the computer program is executed.
11. A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202210861612.4A 2022-07-20 2022-07-20 Cloud equipment scheduling method, device and system, electronic equipment and storage medium Active CN115086331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210861612.4A CN115086331B (en) 2022-07-20 2022-07-20 Cloud equipment scheduling method, device and system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210861612.4A CN115086331B (en) 2022-07-20 2022-07-20 Cloud equipment scheduling method, device and system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115086331A CN115086331A (en) 2022-09-20
CN115086331B true CN115086331B (en) 2024-06-07

Family

ID=83243090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210861612.4A Active CN115086331B (en) 2022-07-20 2022-07-20 Cloud equipment scheduling method, device and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115086331B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118158458A (en) * 2022-12-06 2024-06-07 中兴通讯股份有限公司 Virtual reality display method, set top box, server, terminal, device, system and storage medium
CN117201504B (en) * 2023-11-08 2024-02-27 福州高新区熠云科技有限公司 Edge node network data flow direction control method, system, equipment and medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107404523A (en) * 2017-07-21 2017-11-28 中国石油大学(华东) Cloud platform adaptive resource dispatches system and method
CN109831796A (en) * 2019-02-03 2019-05-31 北京邮电大学 Resource allocation methods in wireless network virtualization
CN110138883A (en) * 2019-06-10 2019-08-16 北京贝斯平云科技有限公司 Mixed cloud resource allocation methods and device
CN111224806A (en) * 2018-11-27 2020-06-02 华为技术有限公司 Resource allocation method and server
CN111800284A (en) * 2019-04-08 2020-10-20 阿里巴巴集团控股有限公司 Method and device for selecting edge cloud node set and electronic equipment
CN111988412A (en) * 2020-08-25 2020-11-24 东北大学 Intelligent prediction system and method for multi-tenant service resource demand
CN112035268A (en) * 2020-11-04 2020-12-04 网络通信与安全紫金山实验室 Method and device for scheduling computing resources, computer equipment and storage medium
CN112328318A (en) * 2020-09-27 2021-02-05 北京华胜天成科技股份有限公司 Method and device for automatic planning of proprietary cloud platform and storage medium
CN112887228A (en) * 2019-11-29 2021-06-01 阿里巴巴集团控股有限公司 Cloud resource management method and device, electronic equipment and computer readable storage medium
CN113676512A (en) * 2021-07-14 2021-11-19 阿里巴巴新加坡控股有限公司 Network system, resource processing method and equipment
CN113726846A (en) * 2021-07-14 2021-11-30 阿里巴巴新加坡控股有限公司 Edge cloud system, resource scheduling method, equipment and storage medium
CN114363414A (en) * 2020-09-29 2022-04-15 华为云计算技术有限公司 Method, device and system for scheduling calculation examples

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150142494A1 (en) * 2013-11-21 2015-05-21 Descisys Ltd. System and method for scheduling temporary resources
US20180191859A1 (en) * 2016-12-29 2018-07-05 Ranjan Sharma Network resource schedulers and scheduling methods for cloud deployment
US10887176B2 (en) * 2017-03-30 2021-01-05 Hewlett Packard Enterprise Development Lp Predicting resource demand in computing environments

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cloud Computing, Edge Computing and Computing Power Networks; Jiang Lintao; Information and Communications Technologies (04); full text *

Also Published As

Publication number Publication date
CN115086331A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN115086331B (en) Cloud equipment scheduling method, device and system, electronic equipment and storage medium
CN110198307B (en) Method, device and system for selecting mobile edge computing node
CN110198363B (en) Method, device and system for selecting mobile edge computing node
CN109428749B (en) Network management method and related equipment
CN111431758B (en) Cloud network equipment testing method and device, storage medium and computer equipment
CN110166526B (en) Multi-CDN access management method and device, computer equipment and storage medium
CN107613528B (en) Method and system for controlling service flow
US10897421B2 (en) Method of processing a data packet relating to a service
KR101773593B1 (en) Mobile fog computing system for performing multi-agent based code offloading and method thereof
CN110198332B (en) Scheduling method and device for content distribution network node and storage medium
CN113472900B (en) Message processing method, device, storage medium and computer program product
CN110719273A (en) Method for determining back source node, server and computer readable storage medium
CN104954431A (en) Network selection method, device and system
CN113301079B (en) Data acquisition method, system, computing device and storage medium
CN112784992A (en) Network data analysis method, functional entity and electronic equipment
CN105554125B (en) A kind of method and its system for realizing webpage fit using CDN
CN111212087A (en) Method, device, equipment and storage medium for determining login server
CN108347465B (en) Method and device for selecting network data center
CN110611937A (en) Data distribution method and device, edge data center and readable storage medium
CN108770014B (en) Calculation evaluation method, system and device of network server and readable storage medium
CN113746851B (en) Proxy system and method supporting real-time analysis of GRPC request
CN109347766A (en) A kind of method and device of scheduling of resource
CN113010314B (en) Load balancing method and device and electronic equipment
CN114726796A (en) Flow control method, gateway and switch
CN115515171A (en) Load prediction method and device of SA network and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant