CN113596152A - Load balancing implementation method, system and device - Google Patents

Load balancing implementation method, system and device

Info

Publication number
CN113596152A
CN113596152A (application CN202110859164.XA)
Authority
CN
China
Prior art keywords
load balancing
target
target service
service
master
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110859164.XA
Other languages
Chinese (zh)
Other versions
CN113596152B (en)
Inventor
张新丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Software Co Ltd filed Critical Hangzhou Ezviz Software Co Ltd
Priority to CN202110859164.XA priority Critical patent/CN113596152B/en
Publication of CN113596152A publication Critical patent/CN113596152A/en
Application granted granted Critical
Publication of CN113596152B publication Critical patent/CN113596152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/10: Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1023: Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/1029: Server selection using data related to the state of servers by a load balancer
    • H04L 67/104: Peer-to-peer [P2P] networks
    • H04L 67/1044: Group management mechanisms
    • H04L 67/1046: Joining mechanisms
    • H04L 67/14: Session management
    • H04L 67/143: Termination or inactivation of sessions, e.g. event-controlled end of session
    • H04L 67/145: Termination or inactivation of sessions avoiding end of session, e.g. keep-alive, heartbeats, resumption message or wake-up for inactive or interrupted session

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Hardware Redundancy (AREA)

Abstract

The application provides a method, a system, and a device for implementing load balancing. The method introduces a load balancing node group: for a service of the load balancing type, a main load balancing node (Master) and standby load balancing nodes (Slaves) of the service are selected from the load balancing node group. When the Master detects external access to the service, it shares the access among the service instances running the service in a load balancing manner; the Master also maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the service and takes over from the failed Master to continue executing the service. This enables failover on the order of seconds and high availability of the load balancing nodes in the K8s cluster.

Description

Load balancing implementation method, system and device
Technical Field
The present application relates to data processing technologies, and in particular, to a method, a system, and an apparatus for implementing load balancing.
Background
Kubernetes (K8s for short) is an open-source container orchestration engine that supports automatic deployment, containerized application management, and the like. In K8s, multiple containers can be created, each running an application instance, and the group of application instances is then managed, discovered, and accessed through a load balancing strategy, without operation and maintenance personnel having to perform complicated manual configuration and processing.
In practice, a K8s cluster may provide a Service of the LoadBalancer type through a load balancing node. A Service is an important concept in a K8s cluster: it is exposed so that clients can discover and access it, either from inside the cluster (via a virtual IP) or from outside the cluster (via NodePort or LoadBalancer). However, how to ensure high availability of the load balancing node when implementing load balancing is a technical problem that urgently needs to be solved.
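For illustration only, the following Go sketch shows how a Service of the LoadBalancer type might be created with the standard client-go library; the service name, namespace, selector labels, and ports are hypothetical examples and are not taken from the patent.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the code runs inside the cluster
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A Service of type LoadBalancer; name, selector and ports are examples only.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-svc", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports: []corev1.ServicePort{
				{Port: 80, TargetPort: intstr.FromInt(8080)},
			},
		},
	}
	if _, err := clientset.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```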
Disclosure of Invention
The application provides a method, a system and a device for realizing load balancing, so as to ensure high availability of a load balancing node in the process of realizing load balancing.
The application provides a load balancing implementation method, applied to a control device that manages and controls services of the load balancing type in a K8s cluster. The method comprises the following steps:
when it is monitored that a target service of the load balancing type is newly created in the K8s cluster, determining a target load balancing node group for mounting the target service and mounting the target service to the target load balancing node group; the target load balancing node group comprises two or more load balancing nodes in the K8s cluster;
selecting a target access IP address for the target service from the virtual IP address pool allocated to the target load balancing node group, and selecting a main load balancing node (Master) and standby load balancing nodes (Slaves) of the target service from the target load balancing node group, so that when the Master detects external access to the target service via the target access IP address, it shares the access among the service instances running the target service in a load balancing manner and maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service.
A load balancing implementation method applied to a load balancing node in a K8s cluster comprises the following steps:
when the node learns that it is the main load balancing node (Master) of a target service of the load balancing type newly created in the K8s cluster, maintaining a heartbeat with the standby load balancing nodes (Slaves) of the target service in unicast mode, and, when external access to the target service via its target access IP address is detected, sharing the access among the service instances running the target service in a load balancing manner;
when the node learns that it is a standby load balancing node (Slave) of a target service of the load balancing type newly created in the K8s cluster, if a Master failure is detected from the heartbeat with the main load balancing node (Master) of the target service, re-electing, together with the other Slaves of the target service, a Slave in a normal state as the new Master of the target service; and, when the node is elected as the new Master, taking over from the failed Master to execute the target service and maintaining a heartbeat with the Slaves of the target service in unicast mode.
A system for implementing load balancing comprises a control device and at least one load balancing node group, where a load balancing node group comprises two or more load balancing nodes in the K8s cluster;
the control device is used for managing and controlling the load balancing nodes in the K8s cluster and executes the first method described above;
the load balancing node executes the second method described above.
The embodiment of the application also provides the electronic equipment. The electronic device includes: a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to implement the steps of the above-disclosed method.
It can be seen from the above technical solutions that the embodiments of the present application introduce a load balancing node group. For a service of the load balancing type, a Master and Slaves of the service are selected from the load balancing node group. When the Master detects external access to the service, it shares the access among the service instances running the service in a load balancing manner; the Master also maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service. This achieves second-level failover and high availability of the load balancing nodes in the K8s cluster;
further, by introducing the load balancing node group, this embodiment allows the load balancing service to be carried by active/standby switching among the load balancing nodes inside the group. The load balancing service is thereby decoupled from any controller in the K8s cluster: active/standby switching between load balancing nodes does not depend on the controller, and whether or not the controller is down does not affect the load balancing services already in operation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a system configuration diagram provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another method provided by an embodiment of the present application;
FIG. 4 is a block diagram of an apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of another apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make those skilled in the art better understand the technical solutions provided by the embodiments of the present application and make the above objects, features and advantages of the embodiments of the present application more obvious and understandable, the following provides a detailed description of a load balancing implementation system provided by the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a system structure diagram provided in an embodiment of the present application. The system is deployed in a K8s cluster. As shown in fig. 1, the system may include a control device and at least one load balancing node group, where each load balancing node group comprises two or more nodes in the K8s cluster that provide load balancing services. For convenience of description, such a node is simply referred to as a load balancing node. Optionally, a load balancing node may be a layer-4 load balancer based on the K8s cluster.
As an embodiment, the control device is used for managing and controlling services of the LoadBalancer type in the K8s cluster. For example, when a service of the load balancing type is newly created, the control device allocates an access IP address to the service, mounts the service to a corresponding load balancing node group, and selects a load balancing node from that group as the main load balancing node (Master) of the service, which is responsible for sharing access to the service among the service instances running the service in a load balancing manner; when a service of the load balancing type is deleted, the control device reclaims the access IP address allocated to the service, unmounts the service from the corresponding load balancing node group, and releases the node serving as the Master of the service in that group, and so on.
It should be noted that, in this embodiment, a specific implementation form of the control device is not specifically limited, and the control device may be a load balancing controller provided by a current manufacturer, and the like.
In this embodiment, for each load balancing node group, the Master of a service in the group and the other, standby load balancing nodes (Slaves) of that service in the group maintain a heartbeat with one another. Once a Master failure is detected based on the heartbeat, the Slaves of the service in the group re-elect a new Master of the service to take over the service, achieving second-level failover and thus high availability of the load balancing nodes in the K8s cluster.
It should be noted that, in this embodiment, the load balancing node group may be configured in advance according to actual requirements, for example, the load balancing nodes in the same network segment may be configured in the same load balancing node group, and so on. The present embodiment does not specifically limit how to plan and configure the load balancing node group.
The method provided by the embodiments of the present application is described below:
referring to fig. 2, fig. 2 is a flowchart of a method provided by an embodiment of the present application. This flow is applied to the control apparatus described above. As shown in fig. 2, the process may include the following steps:
step 201, when monitoring that a target service of which the type is a load balancing type is newly created in the K8s cluster, the control device determines a target load balancing node group for mounting the target service and mounts the target service to the target load balancing node group.
In this embodiment, a user creates a new Service in the K8s cluster according to actual requirements. Optionally, as an embodiment, the control device may monitor newly created services in the K8s cluster in real time by interacting with the Rest API interface of the K8s API Server in the K8s cluster. The K8s API Server provides HTTP REST interfaces for adding, deleting, modifying, querying, and watching the various resource objects of K8s (Pod, RC, Service, etc.), and acts as the data bus and data center of the entire system. Based on this, the control device can easily learn whether a service has been newly created in the K8s cluster by interacting with the Rest API interface of the K8s API Server. It should be noted that once a service is newly created in the K8s cluster, the newly created service is recorded in the resource list of the K8s cluster.
In this embodiment, a newly created service has a corresponding service type. If the type of the newly created service is the load balancing type, this indicates that two or more service instances running the service are needed and that the service should be run among them in a load balancing manner. The present embodiment focuses on services of the load balancing type; other service types are not involved for the time being. For convenience of description, a service of the load balancing type is referred to as a target service.
In this embodiment, in order for two or more service instances to run the target service in a load balancing manner, the load balancing nodes in the K8s cluster need to be controlled. As for how to control them, as described in step 201, a target load balancing node group for mounting the target service is determined, the target service is mounted to that group, and then the following step 202 is performed. The target load balancing node group is as described above and comprises at least two load balancing nodes in the K8s cluster. How to determine the target load balancing node group for mounting the target service in step 201 is described in a specific embodiment below and is not repeated here.
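As a rough illustration of the monitoring described above, the following Go sketch watches Service objects through the K8s API Server using a client-go shared informer and reacts to newly created services of the LoadBalancer type. The handler body is a placeholder, not the patent's actual controller code.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Shared informer factory that watches Service objects, resyncing every 30s.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc, ok := obj.(*corev1.Service)
			if !ok || svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
				return
			}
			// A target service (LoadBalancer type) was newly created: here the
			// control device would pick a node group, an access IP, and a
			// Master/Slave set (steps 201 and 202).
			fmt.Printf("new LoadBalancer service: %s/%s\n", svc.Namespace, svc.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep running
}
```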
Step 202, the control device selects a target access IP address for the target service from the virtual IP address pool allocated to the target load balancing node group, and selects the main load balancing node (Master) and standby load balancing nodes (Slaves) of the target service from the target load balancing node group.
Once the Master and the Slaves of the target service have been selected, the Master, upon detecting external access to the target service via the target access IP address, shares the access among the service instances running the target service in a load balancing manner; the Master also maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service. This enables failover on the order of seconds and high availability of the load balancing nodes in the K8s cluster. How the target access IP address is selected from the virtual IP address pool allocated to the target load balancing node group, and how the Master and Slaves of the target service are selected from the target load balancing node group, are described by example below and not repeated here.
Thus, the flow shown in fig. 2 is completed.
As can be seen from the flow shown in fig. 2, this embodiment introduces a load balancing node group. For a service of the load balancing type, a Master and Slaves of the service are selected from the load balancing node group. When the Master detects external access to the service, it shares the access among the service instances running the service in a load balancing manner; the Master also maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service. This achieves second-level failover and high availability of the load balancing nodes in the K8s cluster;
further, by introducing the load balancing node group, this embodiment allows the load balancing service to be carried by active/standby switching among the load balancing nodes inside the group, so that the load balancing service is decoupled from any controller in the K8s cluster: the active/standby switching between load balancing nodes does not depend on the controller, and whether or not the controller is down does not affect the load balancing services already in operation.
How to determine the target load balancing node group for mounting the target service in step 201 is described as follows:
in this embodiment, when a service is newly created in the K8s cluster, corresponding annotation (annotation) information is configured for the service. Based on this, the present embodiment may determine, based on the annotation information of the target service, a target load balancing node group for mounting the target service:
for an embodiment, the determining, in the step 201, a target load balancing node group for mounting a target service may include:
step a1, if a load balancing node group identifier exists in the annotation information of the target service, determining a load balancing node group corresponding to the load balancing node group identifier as the target load balancing node group; if the load balancing node identification exists in the annotation information of the target service, determining a load balancing node group where the load balancing node corresponding to the load balancing node identification is located at present as the target load balancing node group; and if the annotation information of the target service does not have the load balancing node group identifier or the load balancing node identifier, selecting one load balancing node group with an idle load from the current existing load balancing node groups as the target load balancing node group.
In this embodiment, each load balancing node group has its own unique identifier for characterizing the load balancing node group. Different load balancing node groups have different identities. Based on this, when the annotation information of the target service includes the load balancing node group identifier, as described in step a1, the load balancing node group corresponding to the load balancing node group identifier may be directly determined as the target load balancing node group.
Of course, if the annotation information of the target service does not include the load balancing node group identifier but includes the load balancing node identifier, optionally, in this embodiment, the load balancing node group where the load balancing node corresponding to the load balancing node identifier is currently located may be determined as the target load balancing node group.
If the annotation information of the target service includes neither the load balancing node group identifier nor the load balancing node identifier, optionally, in this embodiment, a group with an idle load may be selected from the currently existing load balancing node groups as the target load balancing node group. Optionally, to facilitate this selection, the load balancing node group information may be obtained beforehand by interacting with the Rest API interface of the K8s API Server in the K8s cluster. The load balancing node group information may at least include the load of each load balancing node in the load balancing node group (optionally, for each load balancing node group, the sum of the current loads of its load balancing nodes is taken as the load of the group). Based on this, selecting a group with an idle load from the currently existing load balancing node groups as the target load balancing node group may include: determining the load balancing node groups that meet an idle condition among the currently existing load balancing node groups, and selecting one group that meets the idle condition as the target load balancing node group. The idle condition is, for example, that the number of load balancing nodes whose loads are lower than a first set load threshold is greater than a set value, or that the load of the load balancing node group is smaller than a second set load threshold; this embodiment is not particularly limited in this regard. The load balancing node group information further includes a load balancing node group control switch, which indicates whether the load balancing function is enabled. Based on this switch, in this embodiment the selected target load balancing node group is one whose load balancing function is enabled.
It should be noted that the load balancing node group information further includes a load balancing node group identifier and a load balancing node group node list (identifiers of load balancing nodes in the load balancing node group are recorded). Based on this, the load balancing node group identifier or the load balancing node identifier in the annotation information definitely exists in the obtained load balancing node group information.
Finally, how to determine the target load balancing node group for mounting the target service is realized through the step a 1. It should be noted that step a1 is only an example and is not intended to be limiting.
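The selection logic of step a1 can be summarized in the following Go sketch. The annotation keys, the NodeGroup type, and the helper functions are all hypothetical names introduced for illustration; they are not defined by the patent.

```go
package lb

// NodeGroup is a hypothetical in-memory view of a load balancing node group.
type NodeGroup struct {
	ID      string
	NodeIDs []string
	Load    int
}

// Hypothetical annotation keys; the patent does not name concrete keys.
const (
	annoGroupID = "loadbalancer/node-group-id"
	annoNodeID  = "loadbalancer/node-id"
)

// selectTargetGroup implements the three branches of step a1.
func selectTargetGroup(annotations map[string]string, groups []NodeGroup) *NodeGroup {
	// Branch 1: the annotation names a node group directly.
	if gid, ok := annotations[annoGroupID]; ok {
		return groupByID(groups, gid)
	}
	// Branch 2: the annotation names a node; use the group that node belongs to.
	if nid, ok := annotations[annoNodeID]; ok {
		return groupContainingNode(groups, nid)
	}
	// Branch 3: no hint in the annotations; pick a group whose load is idle.
	return mostIdleGroup(groups)
}

func groupByID(groups []NodeGroup, id string) *NodeGroup {
	for i := range groups {
		if groups[i].ID == id {
			return &groups[i]
		}
	}
	return nil
}

func groupContainingNode(groups []NodeGroup, nodeID string) *NodeGroup {
	for i := range groups {
		for _, n := range groups[i].NodeIDs {
			if n == nodeID {
				return &groups[i]
			}
		}
	}
	return nil
}

func mostIdleGroup(groups []NodeGroup) *NodeGroup {
	var best *NodeGroup
	for i := range groups {
		if best == nil || groups[i].Load < best.Load {
			best = &groups[i]
		}
	}
	return best
}
```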
How to select the target IP address of the target service from the virtual IP address pool to which the target load balancing node group has been allocated in step 202 is described as follows:
optionally, in this embodiment, selecting a target IP address of a target service from the virtual IP address pool to which the target load balancing node group has been allocated includes the following step a 2:
step a2, if there is service related information in the annotation information of the target service, determining the created history service having an association relation with the target service according to the service related information (the access IP address allocated to the history service is one address in the virtual IP address pool), and determining the IP address allocated to the history service as the target access IP address (that is, realizing that a plurality of services share one IP address); if the comment information of the target service has a designated IP address (the designated IP address is one address in the virtual IP address pool), determining the designated IP address as the target access IP address; and if no information associated with the determined target IP address exists in the annotation information of the target service, selecting an unused IP address from the virtual IP address pool as the target access IP address.
That is, it is finally realized through step a2 how to select the target IP address of the target service from the virtual IP address pool to which the target load balancing node group has been allocated. It should be noted that step a2 is only an example and is not intended to be limiting.
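Step a2 can likewise be sketched as a small Go function. The annotation keys and the way the virtual IP pool is represented are assumptions made for illustration only.

```go
package lb

// Hypothetical annotation keys for step a2.
const (
	annoSharedWith = "loadbalancer/share-ip-with" // name of an associated historical service
	annoStaticIP   = "loadbalancer/static-ip"     // a specific address from the pool
)

// selectAccessIP picks the target access IP address from the group's virtual IP pool.
// allocated maps existing service names to the IPs already assigned to them, and
// used is the set of IPs currently in use.
func selectAccessIP(annotations map[string]string, pool []string,
	allocated map[string]string, used map[string]bool) (string, bool) {

	// Branch 1: share the IP of an associated, already-created historical service.
	if svcName, ok := annotations[annoSharedWith]; ok {
		if ip, ok := allocated[svcName]; ok {
			return ip, true
		}
	}
	// Branch 2: a specific IP from the pool was requested.
	if ip, ok := annotations[annoStaticIP]; ok && inPool(pool, ip) {
		return ip, true
	}
	// Branch 3: no hint; pick any unused address from the pool.
	for _, ip := range pool {
		if !used[ip] {
			return ip, true
		}
	}
	return "", false // pool exhausted
}

func inPool(pool []string, ip string) bool {
	for _, p := range pool {
		if p == ip {
			return true
		}
	}
	return false
}
```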
How to select the Master and the Slave of the target service from the target load balancing node group in the step 202 is described as follows:
optionally, the selecting the Master and the Slave of the target service from the target load balancing node group may include the following step a 3:
step a3, if the annotation information of the target service has a load balancing node identifier, determining the load balancing node corresponding to the load balancing node identifier as a Master of the target service; otherwise, selecting an idle load balancing node from the target load balancing node group as a Master of the target service; and using other load balancing nodes except the Master in the target load balancing node group as Slave of the target service.
To this end, the selection of the Master and the Slave of the target service from the target load balancing node group is realized through step a 3. It should be noted that, the step a3 is only an example and is not used to limit how to select the Master and the Slave of the target service from the target load balancing node group.
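Step a3 then reduces to splitting the group into one Master and the remaining Slaves, as in the sketch below (again with a hypothetical annotation key, reusing the hypothetical NodeGroup type from the earlier sketch).

```go
package lb

// selectMasterAndSlaves implements step a3: the annotated node (if any) becomes
// the Master, otherwise an idle node is chosen; every other node in the group
// becomes a Slave of the target service.
func selectMasterAndSlaves(annotations map[string]string, group NodeGroup,
	loadOf func(nodeID string) int) (master string, slaves []string) {

	if nid, ok := annotations["loadbalancer/node-id"]; ok && contains(group.NodeIDs, nid) {
		master = nid
	} else {
		// Pick the node with the lowest current load as the "idle" node.
		for _, n := range group.NodeIDs {
			if master == "" || loadOf(n) < loadOf(master) {
				master = n
			}
		}
	}
	for _, n := range group.NodeIDs {
		if n != master {
			slaves = append(slaves, n)
		}
	}
	return master, slaves
}

func contains(ids []string, id string) bool {
	for _, v := range ids {
		if v == id {
			return true
		}
	}
	return false
}
```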
In this embodiment, after the process shown in fig. 2 is executed, the service configuration information of the target service may be further recorded in a resource list corresponding to the target service in the K8s cluster through a Rest API interface with the K8s API Server in the K8s cluster. The service configuration information at least includes the target access IP address, the identifier of the target load balancing node group, the node identifier of the Master serving as the target service in the target load balancing node group, and the node identifier of the Slave serving as the target service (it should be noted that, if all the nodes except the Master in the target load balancing node group are Slave, the service configuration information may not include the node identifier of the Slave serving as the target service).
Once the service configuration information has been recorded in the resource list corresponding to the target service in the K8s cluster, the load balancing node serving as the Master reads the service configuration information and, upon finding that the recorded Master node identifier is its own, regards itself as the Master of the target service. Similarly, a load balancing node serving as a Slave reads the service configuration information and regards itself as a Slave of the target service if it finds its own identifier recorded as a Slave, or if no Slave node identifier exists in the service configuration information while the recorded Master node identifier is not its own. Then, as described above, when the Master of the target service detects external access to the target service via the target access IP address, it shares the access among the service instances running the target service in a load balancing manner and maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service.
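The role determination each node performs after reading the service configuration can be summarized as follows; the ServiceConfig struct and its field names are assumptions for illustration, not the patent's data model.

```go
package lb

// ServiceConfig mirrors the service configuration information recorded in the
// resource list: access IP, group identifier, Master node ID and Slave node IDs.
type ServiceConfig struct {
	AccessIP string
	GroupID  string
	MasterID string
	SlaveIDs []string // may be empty: then "everyone except the Master" is a Slave
}

type Role int

const (
	RoleNone Role = iota
	RoleMaster
	RoleSlave
)

// roleOf decides whether this node is the Master, a Slave, or uninvolved.
func roleOf(cfg ServiceConfig, myID string, myGroupID string) Role {
	if cfg.GroupID != myGroupID {
		return RoleNone // the service is mounted on a different node group
	}
	if cfg.MasterID == myID {
		return RoleMaster
	}
	if len(cfg.SlaveIDs) == 0 {
		return RoleSlave // no explicit Slave list: all non-Master nodes are Slaves
	}
	for _, id := range cfg.SlaveIDs {
		if id == myID {
			return RoleSlave
		}
	}
	return RoleNone
}
```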
In this embodiment, the target service of the load balancing type is not fixed; it may be updated or deleted.
As an embodiment, when the control device monitors that the target service has been deleted, it may reclaim the target access IP address, delete the resource list related to the target service from the K8s cluster, and unmount the target service from the target load balancing node group, so that no load balancing node in the target load balancing node group executes the target service any more.
As another embodiment, when the change of the annotation information of the target service is monitored, the service configuration information is updated according to the changed annotation information.
Optionally, when the changed annotation information requires to reallocate a target access IP address for the target service, reselecting a new IP address from the virtual IP address pool to allocate to the target service, and updating the target access IP address in the service configuration information to the newly allocated IP address;
and when the changed annotation information requires to select the mounted target load balancing node group for the target service again, selecting a new load balancing node group from all the currently existing load balancing node groups again, and updating the identifier of the target load balancing node group in the service configuration information to the identifier of the newly selected load balancing node group.
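For illustration, the deletion and annotation-change handling described above might look roughly like the following sketch, reusing the hypothetical ServiceConfig, NodeGroup and mostIdleGroup from the earlier sketches; the IPPool type and the annotation keys are further assumptions.

```go
package lb

// IPPool is a hypothetical pool of virtual IP addresses for a node group.
type IPPool struct {
	free []string
	used map[string]bool
}

func (p *IPPool) Release(ip string) {
	if p.used[ip] {
		delete(p.used, ip)
		p.free = append(p.free, ip)
	}
}

func (p *IPPool) Allocate() (string, bool) {
	if len(p.free) == 0 {
		return "", false
	}
	ip := p.free[0]
	p.free = p.free[1:]
	if p.used == nil {
		p.used = map[string]bool{}
	}
	p.used[ip] = true
	return ip, true
}

// onServiceDeleted handles deletion of a target service: the access IP is
// reclaimed and the group no longer carries the service. Writing the change
// back to the K8s resource list is omitted here.
func onServiceDeleted(cfg *ServiceConfig, pool *IPPool) {
	pool.Release(cfg.AccessIP)
	cfg.AccessIP = ""
	cfg.GroupID = "" // the service is unmounted from the target node group
}

// onAnnotationsChanged re-applies IP or node-group selection when the
// service's annotations change (the keys below are hypothetical).
func onAnnotationsChanged(cfg *ServiceConfig, annotations map[string]string,
	pool *IPPool, groups []NodeGroup) {
	if _, ok := annotations["loadbalancer/reassign-ip"]; ok {
		pool.Release(cfg.AccessIP)
		cfg.AccessIP, _ = pool.Allocate()
	}
	if _, ok := annotations["loadbalancer/reassign-group"]; ok {
		if g := mostIdleGroup(groups); g != nil {
			cfg.GroupID = g.ID
		}
	}
}
```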
The above is how to implement load balancing from the perspective of the station at the control device, and the following is how to implement load balancing from the perspective of the station at the load balancing node:
referring to fig. 3, fig. 3 is a flow chart of another method provided by the embodiment of the present application. This flow applies to the load balancing nodes in the K8s cluster as described above. In this embodiment, when the load balancing node is started, the node information (including, for example, the node identifier, the node IP address, and the like) of the node is reported through the reset API interface with the K8s API Server. The reported node information can be used as a basis for dividing the load balancing node group. Once the load balancing node cluster is partitioned, load balancing node cluster information (described above, not referred to herein) may be recorded.
As shown in fig. 3, the process may include the following steps:
step 301, when it is known that the node is a Master of a target service, which is newly created in a K8s cluster and of which the type is a load balancing type, a standby load balancing node Slave of the target service performs heartbeat maintenance in a unicast manner, and when it is detected that an external target access IP address based on the target service accesses the target service, the access to the target service is shared to a service instance running the target service in a load balancing manner.
As described above, the control device records the service configuration information of the target service into the resource list corresponding to the target service in the K8s cluster through the reset API interface with the K8s API Server in the K8s cluster. Based on this, any load balancing node will read the service configuration information. And after reading the service configuration information, the load balancing node serving as the Master finds that the node identifier of the Master is the node identifier of the Master, and then the load balancing node is regarded as the Master of the target service. That is, the Master load balancing node Master that learns the target service with the type of load balancing newly created in the K8s cluster as the node in step 301 is realized.
For the Master of the target service, as described in step 301, it performs heartbeat maintenance with the standby load balancing node Slave of the target service in a unicast manner. In this embodiment, the Master of the target service may send a heartbeat message to each Slave in a unicast manner based on a Virtual Router Redundancy Protocol (VRRP) to inform the Master that the Master is online.
Meanwhile, when the Master receives an access request (carrying the target access IP address), the Master considers that the external initiates access to the target service, and at the moment, the Master controls the target service instance running the target service to share the access to the target service to the service instance running the target service according to a load balancing mode. For example, the Master forwards the access to the target service to the service instances running the target service in a load balancing manner, so as to realize load balancing of the target service among the service instances.
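The unicast keep-alive between Master and Slaves is performed by VRRP (typically via keepalived) in this embodiment; the Go sketch below only illustrates the idea with a plain UDP unicast heartbeat and a timeout-based failure detector, not the actual VRRP packet format. Addresses and the port are placeholders.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// sendHeartbeats lets a Master announce "I am online" to each Slave by unicast.
func sendHeartbeats(slaveAddrs []string, interval time.Duration) {
	for {
		for _, addr := range slaveAddrs {
			conn, err := net.Dial("udp", addr) // e.g. "10.0.0.12:5800" (example address)
			if err != nil {
				continue
			}
			conn.Write([]byte("MASTER-ALIVE"))
			conn.Close()
		}
		time.Sleep(interval)
	}
}

// watchMaster lets a Slave detect a Master failure when no heartbeat arrives
// within the timeout; onMasterDown would trigger the re-election of step 302.
func watchMaster(listenAddr string, timeout time.Duration, onMasterDown func()) error {
	pc, err := net.ListenPacket("udp", listenAddr)
	if err != nil {
		return err
	}
	defer pc.Close()
	buf := make([]byte, 64)
	for {
		pc.SetReadDeadline(time.Now().Add(timeout))
		if _, _, err := pc.ReadFrom(buf); err != nil {
			fmt.Println("no heartbeat within timeout: Master considered failed")
			onMasterDown()
			return nil
		}
	}
}
```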
Step 302: when the node learns that it is a standby load balancing node (Slave) of a target service of the load balancing type newly created in the K8s cluster, if a Master failure is detected from the heartbeat with the main load balancing node (Master) of the target service, the node, together with the other Slaves of the target service, re-elects a Slave in a normal state as the new Master of the target service; when the node is elected as the new Master, it takes over from the failed Master to execute the target service and maintains a heartbeat with the Slaves of the target service in unicast mode.
It should be noted that, in this embodiment, step 301 and step 302 do not have a fixed time sequence.
In this embodiment, a load balancing node serving as a Slave of the target service also reads the service configuration information and regards itself as a Slave of the target service if it finds its own identifier recorded as a Slave, or if no Slave node identifier exists in the service configuration information while the recorded Master node identifier is not its own. This is how the node learns that it is a standby load balancing node (Slave) of the target service of the load balancing type newly created in the K8s cluster.
As described in step 301, the Master of the target service maintains a heartbeat with the Slaves of the target service in unicast mode; once a Master failure is detected from that heartbeat, a Slave in a normal state is re-elected, together with the other Slaves of the target service, as the new Master of the target service. There are many ways to elect a new Master among the Slaves, for example the Master election mechanism of VRRP, and this embodiment is not particularly limited in this regard.
If this load balancing node is elected as the new Master, it can directly take over from the failed Master to execute the target service and maintain a heartbeat with the Slaves of the target service in unicast mode. In this embodiment, whether it is the newly elected Master or the former one, the node announces itself to the external router as the Master of the target service, so that the external router forwards access to the target service directly to the current Master.
It should be noted that, in this embodiment, the Master of the target service may be determined based on node priority, for example the node whose priority is higher than that of the other nodes becomes the Master. In addition, when the Master fails, the new Master takes over the target service in place of the failed Master; when the failed Master recovers, however, the recovered Master can take the target service back, so as to keep the cluster balanced as a whole.
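A priority-based election with preemption on recovery, as described above, can be sketched as follows; the Node struct and the health flag are illustrative assumptions (VRRP/keepalived provides the real mechanism).

```go
package lb

// Node is a hypothetical view of a load balancing node taking part in election.
type Node struct {
	ID       string
	Priority int  // higher priority wins, as in VRRP
	Healthy  bool // whether the node currently answers heartbeats
}

// electMaster picks the healthy node with the highest priority as the Master.
// On a tie, the lexically smallest ID wins, just to make the choice deterministic.
func electMaster(nodes []Node) (string, bool) {
	best := -1
	for i, n := range nodes {
		if !n.Healthy {
			continue
		}
		if best == -1 ||
			n.Priority > nodes[best].Priority ||
			(n.Priority == nodes[best].Priority && n.ID < nodes[best].ID) {
			best = i
		}
	}
	if best == -1 {
		return "", false // no healthy node left
	}
	return nodes[best].ID, true
}

// With preemption: if the original, higher-priority Master recovers, calling
// electMaster again returns it, so the recovered node takes the service back.
```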
The flow shown in fig. 3 is completed.
As can be seen from the flow shown in fig. 3, in this embodiment the Master of a service in a load balancing node group, upon detecting external access to the service, shares the access among the service instances running the service in a load balancing manner; the Master also maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service. This achieves second-level failover and high availability of the load balancing nodes in the K8s cluster;
further, in this embodiment, the load balancing node group can implement the load balancing service by active/standby switching between the local load balancing nodes, so that the load balancing service is decoupled from any controller in the K8s cluster, the active/standby switching between the load balancing nodes does not depend on the controller, and whether the controller is down or not does not affect the running load balancing service.
In this embodiment, each load balancing node has a keepalived process keep-alive function, which is mainly responsible for keepalived process lifecycle management, including starting the keepalived process, stopping the keepalived process, restarting the keepalived process, and dynamically loading its configuration. Further, if an error occurs when keepalived is started or its configuration is dynamically loaded, the error is converted into a K8s event so as to alert the cluster administrator.
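A minimal sketch of such lifecycle management is shown below. It assumes keepalived is installed on the node and reloads its configuration on SIGHUP; the config path and the event-reporting hook are placeholders, and the conversion of errors into K8s events is only indicated by a callback.

```go
package lb

import (
	"os/exec"
	"syscall"
)

// KeepalivedManager starts, stops and reloads a keepalived process on the node.
type KeepalivedManager struct {
	ConfPath    string           // e.g. "/etc/keepalived/keepalived.conf" (placeholder)
	ReportEvent func(msg string) // hook that would create a K8s warning event
	cmd         *exec.Cmd
}

// Start launches keepalived in the foreground with the given configuration file.
func (m *KeepalivedManager) Start() error {
	m.cmd = exec.Command("keepalived", "--dont-fork", "--use-file", m.ConfPath)
	if err := m.cmd.Start(); err != nil {
		m.ReportEvent("keepalived start failed: " + err.Error())
		return err
	}
	return nil
}

// Reload asks the running keepalived to re-read its configuration (SIGHUP).
func (m *KeepalivedManager) Reload() error {
	if err := m.cmd.Process.Signal(syscall.SIGHUP); err != nil {
		m.ReportEvent("keepalived config reload failed: " + err.Error())
		return err
	}
	return nil
}

// Stop terminates the keepalived process.
func (m *KeepalivedManager) Stop() error {
	return m.cmd.Process.Signal(syscall.SIGTERM)
}
```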
The method provided by the embodiment of the present application is described above, and the apparatus provided by the embodiment of the present application is described below:
referring to fig. 4, fig. 4 is a structural diagram of an apparatus provided in an embodiment of the present application. The device is applied to a control device for managing and controlling services of load balancing type in a K8s cluster. The apparatus corresponds to the flow shown in fig. 2.
As shown in fig. 4, the apparatus may include:
the determining unit is used for determining a target load balancing node group for mounting the target service and mounting the target service to the target load balancing node group when monitoring that the target service of which the type is the load balancing type is newly created in the K8s cluster; the target load balancing node group comprises more than two load balancing nodes in the K8s cluster;
and the processing unit is used for selecting a target access IP address for the target service from the virtual IP address pool allocated to the target load balancing node group, and selecting the main load balancing node (Master) and standby load balancing nodes (Slaves) of the target service from the target load balancing node group, so that when the Master detects external access to the target service via the target access IP address, it shares the access among the service instances running the target service in a load balancing manner and maintains a heartbeat with each Slave in unicast mode, so that when the Slaves detect a Master failure based on the heartbeat, a Slave in a normal state is re-elected as the new Master of the target service and takes over from the failed Master to continue executing the target service.
Optionally, the determining, by the determining unit, a target load balancing node group for mounting a target service includes:
if the load balancing node group identifier exists in the annotation information of the target service, determining a load balancing node group corresponding to the load balancing node group identifier as the target load balancing node group;
if the annotation information of the target service has a load balancing node identifier, determining a load balancing node group where a load balancing node corresponding to the load balancing node identifier is currently located as the target load balancing node group;
and if neither the load balancing node group identifier nor the load balancing node identifier exists in the annotation information of the target service, selecting one with an idle load from the current existing load balancing node group as the target load balancing node group.
Optionally, the selecting, by the processing unit, a target access IP address of a target service from a virtual IP address pool to which a target load balancing node group has been allocated includes:
if service association information exists in the annotation information of the target service, determining the created historical service having an association relation with the target service according to the service association information, wherein the IP address allocated to the historical service is one address in the virtual IP address pool, and the IP address allocated to the historical service is determined as the target access IP address;
if a specified IP address exists in the annotation information of the target service, and the specified IP address is an address in the virtual IP address pool, determining the specified IP address as the target access IP address;
and if no information related to determining the target IP address exists in the annotation information of the target service, selecting an unused IP address from the virtual IP address pool as the target access IP address.
Optionally, the selecting, by the processing unit, of the main load balancing node (Master) and standby load balancing nodes (Slaves) of the target service from the target load balancing node group includes:
if the annotation information of the target service has a load balancing node identifier, determining a load balancing node corresponding to the load balancing node identifier as a Master of the target service; otherwise, selecting an idle load balancing node from the target load balancing node group as the Master of the target service;
and taking other load balancing nodes except the Master in the target load balancing node group as the Slave of the target service.
In this embodiment, the processing unit further records the service configuration information of the target service into a resource list corresponding to the target service in the K8s cluster; the service configuration information at least comprises the target access IP address, the identifier of the target load balancing node group, and the node identifier of the Master of the target service in the target load balancing node group; and
when it is monitored that the target service has been deleted, reclaims the target access IP address, deletes the resource list related to the target service from the K8s cluster, and unmounts the target service from the target load balancing node group, so that no load balancing node in the target load balancing node group executes the target service any more; and
and when the change of the annotation information of the target service is monitored, updating the service configuration information according to the changed annotation information. Optionally, the updating the service configuration information according to the changed annotation information includes:
when the changed annotation information requires to re-allocate a target access IP address for the target service, re-selecting a new IP address from a virtual IP address pool to allocate to the target service, and updating the target access IP address in the service configuration information to the newly allocated IP address;
and when the changed annotation information requires to select the mounted target load balancing node group for the target service again, selecting a new load balancing node group from all the currently existing load balancing node groups again, and updating the identifier of the target load balancing node group in the service configuration information to the identifier of the newly selected load balancing node group.
Thus, the structure of the apparatus shown in FIG. 4 is completed.
Referring to fig. 5, fig. 5 is a structural diagram of another apparatus provided in the embodiment of the present application. The device is applied to a load balancing node in a K8s cluster, and corresponds to the flow shown in FIG. 3.
As shown in fig. 5, the apparatus may include:
a first learning unit, configured to: when the node learns that it is the main load balancing node (Master) of a target service of the load balancing type newly created in the K8s cluster, maintain a heartbeat with the standby load balancing nodes (Slaves) of the target service in unicast mode, and share access to the target service among the service instances running the target service in a load balancing manner when external access to the target service via its target access IP address is detected; and
a second learning unit, configured to: when the node learns that it is a standby load balancing node (Slave) of a target service of the load balancing type newly created in the K8s cluster, if a Master failure is detected from the heartbeat with the Master of the target service, re-elect, together with the other Slaves of the target service, a Slave in a normal state as the new Master of the target service, and, when the node is elected as the new Master, take over from the failed Master to execute the target service and maintain a heartbeat with the Slaves of the target service in unicast mode.
Optionally, while the node serves as the Master of the target service, the first learning unit stops executing the target service upon learning that the target service has been deleted from the K8s cluster or is no longer mounted on the load balancing node group where the node is located; and, upon learning that the node has been switched from the Master of the target service to a Slave of the target service, it triggers the second learning unit to perform the operations of the node as a Slave of the target service.
Correspondingly, while the node serves as a Slave of the target service, the second learning unit, upon learning that the node has been switched from a Slave of the target service to the Master of the target service, returns to performing the operations of the node as the Master of the target service.
Thus, the apparatus shown in FIG. 5 is completed.
The embodiment of the application also provides a hardware structure of the device shown in fig. 4 or fig. 5. Referring to fig. 6, fig. 6 is a structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A load balancing implementation method, applied to a control device, wherein the control device is configured to manage and control services of a load balancing type in a K8s cluster, the method comprising:
when it is monitored that a target service of the load balancing type is newly created in the K8s cluster, determining a target load balancing node group for mounting the target service, and mounting the target service to the target load balancing node group; wherein the target load balancing node group comprises two or more load balancing nodes in the K8s cluster;
selecting a target access IP address for the target service from a virtual IP address pool allocated to the target load balancing node group, and selecting, from the target load balancing node group, a main load balancing node (Master) and standby load balancing nodes (Slaves) for the target service, so that when the Master detects external access to the target service via the target access IP address, the Master distributes the access to the service instances running the target service in a load balancing manner and performs heartbeat keep-alive with each Slave in a unicast manner, and so that when a Slave detects a failure of the Master based on the heartbeat keep-alive, the Slaves in a normal state re-elect a new Master for the target service, which takes over from the failed Master to continue executing the target service.
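Claim 1 describes the control-device flow: detect a newly created service of the load balancing type, mount it to a node group, assign a virtual IP from that group's pool, and appoint one Master plus the remaining nodes as Slaves. The Go sketch below is only an illustration of that flow under simplified, hypothetical types; the type names, selection rules, and error cases are assumptions, not the actual implementation or a real Kubernetes controller.

```go
// Hypothetical sketch of the control-device flow in claim 1; the types and
// selection rules are placeholders, not the actual implementation.
package lbsketch

import "errors"

// Service stands in for a K8s Service of the load balancing type.
type Service struct {
	Name        string
	Annotations map[string]string
}

// NodeGroup stands in for a load balancing node group and its virtual IP pool.
type NodeGroup struct {
	ID     string
	Nodes  []string        // load balancing node identifiers, two or more per group
	VIPool []string        // virtual IP address pool allocated to this group
	InUse  map[string]bool // virtual IPs already handed out
}

// Assignment is what the control device could record for the service (cf. claim 5).
type Assignment struct {
	ServiceName string
	GroupID     string
	AccessIP    string
	Master      string
	Slaves      []string
}

// OnServiceCreated mirrors the claim 1 steps in their simplest form: mount the
// service to a group, pick an unused virtual IP as the target access IP, and
// appoint one Master with the remaining nodes as Slaves. Annotation-driven
// refinements of each step are sketched after claims 2-4 below.
func OnServiceCreated(svc Service, groups []*NodeGroup) (*Assignment, error) {
	if len(groups) == 0 {
		return nil, errors.New("no load balancing node group available")
	}
	group := groups[0] // placeholder choice; claim 2 refines it via annotations
	if len(group.Nodes) < 2 {
		return nil, errors.New("group needs at least a Master and one Slave")
	}
	if group.InUse == nil {
		group.InUse = make(map[string]bool)
	}
	var ip string
	for _, candidate := range group.VIPool {
		if !group.InUse[candidate] {
			ip = candidate
			break
		}
	}
	if ip == "" {
		return nil, errors.New("virtual IP address pool exhausted")
	}
	group.InUse[ip] = true
	return &Assignment{
		ServiceName: svc.Name,
		GroupID:     group.ID,
		AccessIP:    ip,
		Master:      group.Nodes[0],  // placeholder; claim 4 refines this choice
		Slaves:      group.Nodes[1:], // every other node backs the Master up
	}, nil
}
```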
2. The method of claim 1, wherein determining the target load balancing node group for mounting the target service comprises:
if a load balancing node group identifier exists in the annotation information of the target service, determining the load balancing node group corresponding to the load balancing node group identifier as the target load balancing node group;
if a load balancing node identifier exists in the annotation information of the target service, determining the load balancing node group in which the load balancing node corresponding to the load balancing node identifier is currently located as the target load balancing node group;
and if neither a load balancing node group identifier nor a load balancing node identifier exists in the annotation information of the target service, selecting a load balancing node group with an idle load from the currently existing load balancing node groups as the target load balancing node group.
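A minimal, self-contained Go sketch of the three-way choice in claim 2. The annotation keys (`lb-group-id`, `lb-node-id`) and the use of a per-group service count as the measure of "idle load" are assumptions for illustration, not names taken from the patent.

```go
// Hypothetical sketch of the node-group selection in claim 2.
package groupselect

// Group is a simplified load balancing node group.
type Group struct {
	ID           string
	Nodes        []string // node identifiers belonging to this group
	ServiceCount int      // services currently mounted (proxy for load)
}

// Assumed annotation keys; the real keys are not specified here.
const (
	annoGroupID = "lb-group-id"
	annoNodeID  = "lb-node-id"
)

// PickGroup applies the claim 2 order of preference:
// 1) a group explicitly named by annotation,
// 2) the group that contains an explicitly named node,
// 3) otherwise the group with the most idle load.
func PickGroup(annotations map[string]string, groups []*Group) *Group {
	if id, ok := annotations[annoGroupID]; ok {
		for _, g := range groups {
			if g.ID == id {
				return g
			}
		}
	}
	if nodeID, ok := annotations[annoNodeID]; ok {
		for _, g := range groups {
			for _, n := range g.Nodes {
				if n == nodeID {
					return g // the group the named node currently belongs to
				}
			}
		}
	}
	var idlest *Group
	for _, g := range groups {
		if idlest == nil || g.ServiceCount < idlest.ServiceCount {
			idlest = g
		}
	}
	return idlest // nil only if no group exists at all
}
```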
3. The method of claim 1, wherein selecting the target access IP address for the target service from the virtual IP address pool allocated to the target load balancing node group comprises:
if service association information exists in the annotation information of the target service, determining, according to the service association information, a previously created historical service associated with the target service, wherein the IP address allocated to the historical service is an address in the virtual IP address pool, and determining the IP address allocated to the historical service as the target access IP address;
if a specified IP address exists in the annotation information of the target service and the specified IP address is an address in the virtual IP address pool, determining the specified IP address as the target access IP address;
and if no information for determining the target access IP address exists in the annotation information of the target service, selecting an unused IP address from the virtual IP address pool as the target access IP address.
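The IP selection in claim 3 can be read as three fall-through cases. Below is a hedged Go sketch; the annotation keys (`lb-associate-service`, `lb-ip`) and the lookup table from historical service name to its allocated IP are illustrative assumptions.

```go
// Hypothetical sketch of the target access IP selection in claim 3.
package ipselect

import "errors"

const (
	annoAssociatedSvc = "lb-associate-service" // assumed key naming an associated historical service
	annoRequestedIP   = "lb-ip"                // assumed key carrying a user-specified IP
)

// PickAccessIP chooses the target access IP for a service: reuse the IP of an
// associated historical service if one is named, honour an explicitly
// requested IP if it belongs to the pool, otherwise hand out any unused
// address from the pool.
func PickAccessIP(
	annotations map[string]string,
	pool []string, // virtual IP address pool allocated to the target node group
	inUse map[string]bool, // addresses already allocated
	historicalIP map[string]string, // historical service name -> allocated IP
) (string, error) {
	inPool := func(ip string) bool {
		for _, p := range pool {
			if p == ip {
				return true
			}
		}
		return false
	}
	if svc, ok := annotations[annoAssociatedSvc]; ok {
		if ip, ok := historicalIP[svc]; ok && inPool(ip) {
			return ip, nil // share the associated service's address
		}
	}
	if ip, ok := annotations[annoRequestedIP]; ok && inPool(ip) {
		return ip, nil // the caller pinned an address inside the pool
	}
	for _, ip := range pool {
		if !inUse[ip] {
			return ip, nil // first unused address in the pool
		}
	}
	return "", errors.New("virtual IP address pool exhausted")
}
```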
4. The method according to claim 1, wherein selecting the main load balancing node (Master) and the standby load balancing nodes (Slaves) of the target service from the target load balancing node group comprises:
if a load balancing node identifier exists in the annotation information of the target service, determining the load balancing node corresponding to the load balancing node identifier as the Master of the target service; otherwise, selecting an idle load balancing node from the target load balancing node group as the Master of the target service;
and taking the load balancing nodes in the target load balancing node group other than the Master as the Slaves of the target service.
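One way to read claim 4 in code, again with hypothetical names: the `lb-node-id` annotation pins a preferred Master, and a per-node service count stands in for "idle".

```go
// Hypothetical sketch of Master/Slave selection in claim 4.
package roleselect

// Node is a simplified load balancing node inside the target group.
type Node struct {
	ID           string
	ServiceCount int // services this node is already Master for (proxy for load)
}

const annoNodeID = "lb-node-id" // assumed annotation key naming a preferred Master

// PickRoles returns the Master plus the remaining nodes of the group as Slaves.
func PickRoles(annotations map[string]string, group []Node) (master string, slaves []string) {
	if want, ok := annotations[annoNodeID]; ok {
		for _, n := range group {
			if n.ID == want {
				master = n.ID // annotation pins the Master
			}
		}
	}
	if master == "" {
		// Otherwise prefer the most idle node in the group.
		best := -1
		for i, n := range group {
			if best < 0 || n.ServiceCount < group[best].ServiceCount {
				best = i
			}
		}
		if best >= 0 {
			master = group[best].ID
		}
	}
	for _, n := range group {
		if n.ID != master {
			slaves = append(slaves, n.ID) // every non-Master node backs the service up
		}
	}
	return master, slaves
}
```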
5. The method of claim 1, further comprising:
recording service configuration information of the target service into a resource list corresponding to the target service in the K8s cluster; wherein the service configuration information at least comprises the target access IP address, the identifier of the target load balancing node group, and the node identifier of the load balancing node in the target load balancing node group that serves as the Master of the target service;
when it is monitored that the target service is deleted, reclaiming the target access IP address, controlling the resource list related to the target service to be deleted from the K8s cluster, and unmounting the target service from the target load balancing node group, so that no load balancing node in the target load balancing node group continues to execute the target service;
and when a change of the annotation information of the target service is monitored, updating the service configuration information according to the changed annotation information.
6. The method of claim 5, wherein the updating the service configuration information according to the changed annotation information comprises:
when the changed annotation information requires that a target access IP address be re-allocated to the target service, selecting a new IP address from the virtual IP address pool to allocate to the target service, and updating the target access IP address in the service configuration information to the newly allocated IP address;
and when the changed annotation information requires that the load balancing node group to which the target service is mounted be re-selected, selecting a new load balancing node group from all the currently existing load balancing node groups, and updating the identifier of the target load balancing node group in the service configuration information to the identifier of the newly selected load balancing node group.
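Claims 5 and 6 together describe the lifecycle of the recorded service configuration: write it when the service is created, reclaim and clean up on deletion, and patch it when annotations change. The compact Go sketch below is a hypothetical rendering of that bookkeeping; the `Record` and `Store` types, the callback fields, and the decision to return the old IP to the pool on re-allocation are assumptions.

```go
// Hypothetical sketch of the configuration bookkeeping in claims 5 and 6.
package configrecord

// Record mirrors the service configuration information of claim 5.
type Record struct {
	ServiceName string
	AccessIP    string // target access IP address
	GroupID     string // identifier of the target load balancing node group
	MasterID    string // node identifier of the Master within that group
}

// Store is an in-memory stand-in for the resource list kept in the K8s cluster.
type Store struct {
	Records map[string]*Record
	FreeIP  func(ip string)               // returns an address to the virtual IP pool
	Unmount func(service, groupID string) // detaches the service from its node group
}

// OnServiceDeleted reclaims the IP, unmounts the service, and drops the record (claim 5).
func (s *Store) OnServiceDeleted(service string) {
	rec, ok := s.Records[service]
	if !ok {
		return
	}
	s.FreeIP(rec.AccessIP)
	s.Unmount(service, rec.GroupID)
	delete(s.Records, service)
}

// OnAnnotationsChanged patches the record according to claim 6: a new IP if the
// changed annotations ask for one, and a new group if they ask for re-mounting.
func (s *Store) OnAnnotationsChanged(service string, wantNewIP bool, newIP string,
	wantNewGroup bool, newGroupID string) {
	rec, ok := s.Records[service]
	if !ok {
		return
	}
	if wantNewIP {
		s.FreeIP(rec.AccessIP) // the old address goes back to the pool
		rec.AccessIP = newIP
	}
	if wantNewGroup {
		rec.GroupID = newGroupID
	}
}
```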
7. A load balancing implementation method, applied to a load balancing node in a K8s cluster, the method comprising:
when learning that the node is the main load balancing node (Master) of a target service that is newly created in the K8s cluster and is of the load balancing type, performing heartbeat keep-alive with the standby load balancing nodes (Slaves) of the target service in a unicast manner, and, upon detecting external access to the target service via the target access IP address of the target service, distributing the access to the service instances running the target service in a load balancing manner;
and when learning that the node is a standby load balancing node (Slave) of a target service that is newly created in the K8s cluster and is of the load balancing type, if a failure of the Master is detected based on heartbeat keep-alive with the main load balancing node (Master) of the target service, re-electing, together with the other Slaves of the target service, a Slave in a normal state as the new Master of the target service, and, when the node itself is elected as the new Master, taking over from the failed Master to execute the target service and performing heartbeat keep-alive with the Slaves of the target service in a unicast manner.
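Claim 7 is the node-side counterpart: the Master answers traffic on the target access IP and unicasts heartbeats to every Slave, while each Slave watches those heartbeats and, if the Master goes silent, the healthy Slaves elect a replacement. The sketch below illustrates only the heartbeat timeout and a deterministic "lowest node ID wins" election; both the timing and the election rule are assumptions, not the patent's protocol.

```go
// Hypothetical sketch of Slave-side failure detection and re-election in claim 7.
package failover

import (
	"sort"
	"time"
)

// PeerState is what a Slave tracks about the current Master's unicast heartbeats.
type PeerState struct {
	LastHeartbeat time.Time
	Timeout       time.Duration // e.g. a few missed intervals; the value is an assumption
}

// MasterFailed reports whether the Master should be considered down.
func (p PeerState) MasterFailed(now time.Time) bool {
	return now.Sub(p.LastHeartbeat) > p.Timeout
}

// ElectNewMaster picks a replacement among the Slaves that are still healthy.
// Choosing the lexicographically smallest ID lets every healthy Slave reach
// the same answer without extra coordination; the real election rule may differ.
func ElectNewMaster(healthySlaves []string) (string, bool) {
	if len(healthySlaves) == 0 {
		return "", false
	}
	sorted := append([]string(nil), healthySlaves...)
	sort.Strings(sorted)
	return sorted[0], true
}

// SlaveTick is one iteration of a Slave's control loop: detect the failure,
// run the election, and report whether this node should take over as Master.
func SlaveTick(self string, master PeerState, healthySlaves []string, now time.Time) bool {
	if !master.MasterFailed(now) {
		return false // Master is alive; keep serving as a Slave
	}
	winner, ok := ElectNewMaster(healthySlaves)
	return ok && winner == self // the winner takes over the target service
}
```

A deterministic rule of this kind is one way for all surviving Slaves to agree on a single new Master without a separate coordinator; a priority- or weight-based rule would work equally well as long as it is consistent across nodes.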
8. The method of claim 7, further comprising:
when the node serves as the Master of the target service, stopping executing the target service upon learning that the target service has been deleted from the K8s cluster or that the target service is no longer mounted on the load balancing node group in which the node is located, and, upon learning that the node has been switched from the Master of the target service to a Slave of the target service, returning to perform the operations of the node as a Slave of the target service;
and when the node serves as a Slave of the target service, upon learning that the node has been switched from a Slave of the target service to the Master of the target service, returning to perform the operations of the node as the Master of the target service.
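Claim 8 adds the role transitions on top of claim 7: a Master stops serving when the service disappears or is unmounted from its group, and a node that changes role simply re-enters the other role's loop. The small state-machine sketch below is a hypothetical reading of those transitions.

```go
// Hypothetical sketch of the role switching described in claim 8.
package roleswitch

// Role of this node for one particular target service.
type Role int

const (
	RoleSlave Role = iota
	RoleMaster
	RoleNone // the node no longer executes the target service
)

// NextRole decides the next role from the events a node can observe:
// the service being deleted or unmounted, or a role switch announced by
// the control device / the election (cf. claim 7).
func NextRole(current Role, serviceDeleted, serviceUnmounted, switchedToMaster, switchedToSlave bool) Role {
	switch current {
	case RoleMaster:
		if serviceDeleted || serviceUnmounted {
			return RoleNone // stop executing the target service
		}
		if switchedToSlave {
			return RoleSlave // fall back to the Slave loop
		}
	case RoleSlave:
		if switchedToMaster {
			return RoleMaster // take over and start the Master loop
		}
	}
	return current
}
```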
9. A load balancing implementation system, comprising: a control device and at least one load balancing node group; wherein each load balancing node group comprises two or more load balancing nodes in a K8s cluster;
the control device is configured to manage and control the load balancing nodes in the K8s cluster and to perform the method of any one of claims 1 to 6;
and each load balancing node is configured to perform the method of any one of claims 7 to 8.
10. An electronic device, comprising: a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the method of any one of claims 1 to 8.
CN202110859164.XA 2021-07-28 2021-07-28 Load balancing realization method, system and device Active CN113596152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110859164.XA CN113596152B (en) 2021-07-28 2021-07-28 Load balancing realization method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110859164.XA CN113596152B (en) 2021-07-28 2021-07-28 Load balancing realization method, system and device

Publications (2)

Publication Number Publication Date
CN113596152A true CN113596152A (en) 2021-11-02
CN113596152B (en) 2024-03-26

Family

ID=78251291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110859164.XA Active CN113596152B (en) 2021-07-28 2021-07-28 Load balancing realization method, system and device

Country Status (1)

Country Link
CN (1) CN113596152B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7389332B1 (en) * 2001-09-07 2008-06-17 Cisco Technology, Inc. Method and apparatus for supporting communications between nodes operating in a master-slave configuration
US20200412651A1 (en) * 2019-06-27 2020-12-31 Citrix Systems, Inc. Securing communications between services in a cluster using load balancing systems and methods
WO2021120633A1 (en) * 2019-12-19 2021-06-24 华为技术有限公司 Load balancing method and related device
CN111176697A (en) * 2020-01-02 2020-05-19 广州虎牙科技有限公司 Service instance deployment method, data processing method and cluster federation
CN112015544A (en) * 2020-06-30 2020-12-01 苏州浪潮智能科技有限公司 Load balancing method, device and equipment of k8s cluster and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114553823A (en) * 2022-02-28 2022-05-27 联想(北京)有限公司 Access control method and electronic equipment
CN115242793A (en) * 2022-07-05 2022-10-25 杭州萤石软件有限公司 Streaming media load balancing method, device and system
CN115242793B (en) * 2022-07-05 2023-08-25 杭州萤石软件有限公司 Streaming media load balancing method, device and system

Also Published As

Publication number Publication date
CN113596152B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US10609159B2 (en) Providing higher workload resiliency in clustered systems based on health heuristics
EP1770508B1 (en) Blade-based distributed computing system
US7225356B2 (en) System for managing operational failure occurrences in processing devices
JP4659062B2 (en) Failover method, program, management server, and failover system
US8825904B2 (en) Method, apparatus, system for address management
US20110178985A1 (en) Master monitoring mechanism for a geographical distributed database
CN105515812A (en) Fault processing method of resources and device
CN113596152B (en) Load balancing realization method, system and device
CN104137085A (en) Method for controlling access of clients to a service in a cluster environment
CN111176888B (en) Disaster recovery method, device and system for cloud storage
US10880367B2 (en) Load balancing stretched clusters in a distributed network
EP3915224A1 (en) State controller running in a kubernetes system and method for operating same
CN111935244B (en) Service request processing system and super-integration all-in-one machine
WO2013048750A1 (en) Live module diagnostic testing
CN112199176B (en) Service processing method, device and related equipment
CN114844912A (en) Data link distribution method and device and distributed block storage system
CN108509296B (en) Method and system for processing equipment fault
US10637748B2 (en) Method and apparatus for establishing interface between VNFMS, and system
CN109587218B (en) Cluster election method and device
US20170141950A1 (en) Rescheduling a service on a node
US9015518B1 (en) Method for hierarchical cluster voting in a cluster spreading more than one site
CN114553900B (en) Distributed block storage management system, method and electronic equipment
JP2017027166A (en) Operation management unit, operation management program, and information processing system
US10855521B2 (en) Efficient replacement of clients running large scale applications
CN114168261A (en) OpenStack-based high availability method and device for managing bare metal instances

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant