CN110971662A - Two-node high-availability implementation method and device based on Ceph - Google Patents

Two-node high-availability implementation method and device based on Ceph

Info

Publication number
CN110971662A
CN110971662A (application CN201911006892.5A)
Authority
CN
China
Prior art keywords
node
monitor
service
master node
nodes
Prior art date
Legal status
Pending
Application number
CN201911006892.5A
Other languages
Chinese (zh)
Inventor
王振宇
Current Assignee
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Fiberhome Telecommunication Technologies Co Ltd filed Critical Fiberhome Telecommunication Technologies Co Ltd
Priority to CN201911006892.5A
Publication of CN110971662A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services

Abstract

The invention discloses a Ceph-based two-node high-availability implementation method and device. The method comprises: deploying a high-availability service on two nodes that each run a first monitor service, the high-availability service being used to elect a master node from the two nodes; and deploying a second monitor service on the master node and, when the master node is detected to have failed, migrating the second monitor service from the master node to the slave node. A master node is thus elected from the two nodes that each run a monitor service, and the additional monitor service (the third in the cluster) is deployed on it and dynamically migrates as the node roles change; after the master node fails, the slave node becomes the new master node and runs two monitor services, so the whole cluster keeps operating normally. The invention reduces the minimum number of nodes in a Ceph-based distributed storage cluster from three to two, while the storage system remains highly available as a cluster in the two-node mode.

Description

Two-node high-availability implementation method and device based on Ceph
Technical Field
The invention belongs to the technical field of distributed storage, and particularly relates to a two-node high-availability implementation method and device based on Ceph.
Background
Ceph is open-source distributed storage software that combines standard servers and their local hard disks into a unified storage resource pool, providing a highly reliable and highly scalable storage system. Ceph has several core components, including monitor, osd, rbd, mds and rgw, of which the most fundamental are the monitor and the osd. The monitor component is responsible for managing the whole storage cluster: the monitors on multiple nodes form a small cluster in which every node stores the same data, a leader is elected through the Paxos protocol, and the other nodes act as followers. The osd is mainly responsible for storing data and can keep multiple copies of the same data to ensure data reliability.
Paxos is an important protocol in distributed systems; it solves the problem of how multiple nodes agree on a piece of data. Distributed systems are subject to abnormal conditions such as node failures and network delays, and the Paxos protocol keeps the data consistent even when such conditions occur. Paxos works by voting: each node casts one vote, and when the number of votes exceeds half of the total number of nodes, agreement has been reached in the distributed system and the proposal takes effect. When the number of votes is equal to or less than half of the total number of nodes, no agreement is reached and the proposal does not take effect.
According to the Paxos principle, an odd number of monitor services should be deployed in Ceph; considering reliability and other factors, 3, 5 or 7 services are usually chosen. With 3 monitor services, any one node may fail and the remaining two can still reach agreement through Paxos and provide normal service; with 5 monitor services, any two nodes may fail and the remaining 3 can still do so. If an even number of monitor services is deployed, for example 2, then after any one node fails the single remaining node can never hold more than half of the votes, so the service becomes unusable. If 4 monitor services are deployed, a majority requires 3 normal nodes, so at most one node may fail, and the reliability is no better than with 3 nodes.
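As a concrete illustration of this voting arithmetic (not part of the patent text; the function names are ours), the short Python sketch below computes the quorum size and the number of tolerable monitor failures for a given number of monitor services:

```python
def quorum_size(num_monitors: int) -> int:
    """Smallest number of votes that is strictly more than half."""
    return num_monitors // 2 + 1


def tolerable_failures(num_monitors: int) -> int:
    """How many monitors may fail while a majority can still be formed."""
    return num_monitors - quorum_size(num_monitors)


for n in (2, 3, 4, 5):
    print(f"{n} monitors: quorum={quorum_size(n)}, "
          f"tolerable failures={tolerable_failures(n)}")
# 2 monitors: quorum=2, tolerable failures=0  -> a single failure stops the cluster
# 3 monitors: quorum=2, tolerable failures=1
# 4 monitors: quorum=3, tolerable failures=1  -> no better than 3 monitors
# 5 monitors: quorum=3, tolerable failures=2
```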
That is, according to the Paxos principle, a distributed storage system generally needs at least 3 nodes to ensure reliability and achieve high availability. If a distributed storage system has only two nodes, then after one of them fails the monitor service can no longer be used, and the reliability of the system drops sharply.
Disclosure of Invention
In view of at least one defect or need for improvement in the prior art, the invention provides a Ceph-based two-node high-availability implementation method and device that keep the storage system operating normally when either of the two nodes fails, thereby solving the problem that an existing two-node distributed system cannot achieve high availability with the monitor service.
To achieve the above object, according to an aspect of the present invention, there is provided a two-node high-availability implementation method based on Ceph, the method including:
S1: deploying a high-availability service on two nodes that each run a first monitor service, wherein the high-availability service is used to elect a master node from the two nodes;
S2: deploying a second monitor service on the master node and, when the master node is detected to have failed, migrating the second monitor service from the master node to the slave node so as to configure the slave node as the new master node.
Preferably, in the two-node high-availability implementation method, deploying the second monitor service on the master node specifically includes:
configuring an external service interface on the elected master node;
deploying the second monitor service on the external service interface.
Preferably, in the two-node high-availability implementation method, migrating the second monitor service on the master node to the slave node specifically includes:
migrating the external service interface on the master node to the slave node to configure the slave node as the new master node;
migrating the second monitor service to the new master node.
Preferably, in the two-node high-availability implementation method, the process of detecting a master node failure includes:
the master node multicasts data to the slave node through the external service interface, and if the slave node does not receive the multicast data, the master node is determined to have failed.
Preferably, the two-node high availability implementation method further includes:
the new master node detects the number of monitor services running on the two nodes;
when the number of the monitor services is two, deploying a second monitor service on the external service interface;
and when the number of monitor services is three, deleting the stale second monitor service and redeploying it on the external service interface now held by the new master node (the former slave node).
Preferably, in the two-node high-availability implementation method, the first monitor services each use the local IP and a fixed port of their respective node, and the second monitor service uses the external service interface of the master node.
According to another aspect of the present invention, there is also provided a two-node high-availability implementation apparatus based on Ceph, including:
a high-availability component deployed on the two nodes running a first monitor service and used to elect a master node from the two nodes;
and a monitoring component deployed on the two nodes running a first monitor service and used to deploy a second monitor service on the master node; when the master node fails, the second monitor service on the master node is migrated to the slave node to configure the slave node as the new master node.
Preferably, in the two-node high-availability implementation apparatus, the high-availability component is further configured to configure an external service interface on the master node;
the monitoring component deploys the second monitor service on the external service interface on the master node.
Preferably, in the two-node high-availability implementation apparatus, the high-availability component is further configured to migrate the external service interface on the master node to the slave node when the master node fails, so as to configure the slave node as the new master node;
and the monitoring component is configured to migrate the second monitor service to the new master node after detecting the external service interface on it.
Preferably, in the two-node high-availability implementation apparatus, the high-availability component is further configured to detect whether the master node fails:
the master node multicasts data to the slave node through the external service interface, and if the slave node does not receive the multicast data, the high-availability component on the slave node determines that the master node has failed.
Preferably, in the above two-node high availability implementation apparatus, the process of migrating the second monitor service to a new master node by the monitoring component includes:
the monitoring component detects the number of monitor services running on the two nodes;
when the number of the monitor services is two, deploying a second monitor service on the external service interface;
and when the number of monitor services is three, deleting the stale second monitor service and redeploying it on the external service interface now held by the new master node (the former slave node).
Preferably, in the two-node high-availability implementation apparatus, the first monitor services each use the local IP and a fixed port of their respective node, and the second monitor service uses the external service interface of the master node.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) Each of the two nodes runs a first monitor service, a master node is elected from the two nodes by the high-availability service, and a second monitor service is deployed on the elected master node and dynamically migrates as the master and slave roles of the nodes change, so that three monitor services are deployed across the two nodes. When the master node fails, the slave node becomes the new master node and runs both a first monitor service and the second monitor service; therefore, after one node fails, two monitor services are still available on the remaining node, ensuring that the whole cluster keeps operating normally.
(2) The Ceph-based two-node high-availability implementation method and device reduce the minimum number of nodes of a Ceph-based distributed storage cluster from three to two, while the storage system remains highly available as a cluster in the two-node mode.
Drawings
Fig. 1 is a flowchart of a two-node high availability implementation method based on Ceph according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the deployment of the high-availability component provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the deployment of three monitors on two nodes according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a monitor migration process provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In view of the conventional requirement that a distributed storage system have at least 3 nodes, the invention provides a Ceph-based two-node high-availability implementation scheme that keeps the storage system operating normally when either node fails, even though only two nodes exist.
The invention will be further explained in detail with reference to the following examples and the accompanying drawings.
Fig. 1 is a flowchart of a two-node high-availability implementation method based on Ceph according to this embodiment, and referring to fig. 1, the two-node high-availability implementation method includes the following steps:
S1: deploying a high-availability service on two nodes that each run a first monitor service, wherein the high-availability service is used to elect a master node from the two nodes and to configure an external service interface on the master node;
S2: deploying a second monitor service on the external service interface of the master node, and migrating the second monitor service from the master node to the slave node when the master node is detected to have failed.
Migrating the second monitor service from the master node to the slave node specifically comprises: first migrating the external service interface on the master node to the slave node to configure the slave node as the new master node, and then deploying the second monitor service on the external service interface on the new master node. The external service interface serves as the criterion for the node role: when a node holds the external service interface, its current role is master node; otherwise, its current role is slave node.
in this embodiment, the process of detecting that the master node fails includes: the master node multicasts data to the slave node through the external service interface, and if the slave node does not receive the multicast data of the master node, the master node is judged to have a fault.
Deploying the second monitor service on the external service interface on the new master node specifically includes:
the new master node first detects the number of monitor services registered on the two nodes; when the number of monitor services is two, the second monitor service is deployed on the external service interface; and when the number of monitor services is three, the stale second monitor service is deleted from the cluster and redeployed on the external service interface now held by the new master node.
In this embodiment, the first monitor services each use the local IP and a fixed port of their respective node, and the second monitor service uses the external service interface of the master node. The three monitor services thus all use different IPs, which avoids IP conflicts.
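Purely as an illustration of this address layout (the IP addresses and the dictionary form are assumptions of this sketch, not values given in the patent), the three monitors could be pictured as follows in Python:

```python
# Hypothetical address plan for the three monitor services (illustrative only).
MON_PORT = 6789  # default Ceph monitor port

monitors = {
    "monitor1": ("192.168.1.11", MON_PORT),   # fixed: local IP of server1
    "monitor2": ("192.168.1.12", MON_PORT),   # fixed: local IP of server2
    "monitor3": ("192.168.1.100", MON_PORT),  # floating: VIP held by the current master
}
```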
This embodiment also provides a device for implementing the Ceph-based two-node high-availability implementation method, which can be implemented in software and/or hardware and can be integrated in an electronic device; the device comprises a high-availability component and a monitoring component, wherein:
the high-availability component is deployed on the two nodes running a first monitor service and is used to elect a master node from the two nodes;
the monitoring component is deployed on the two nodes running the first monitor service and is used to deploy a second monitor service on the master node; when the master node fails, the second monitor service on the master node is migrated to the slave node to configure the slave node as the new master node.
In this embodiment, each of the two nodes runs a first monitor service, and the second monitor service is deployed on the master node of the two and migrates dynamically as the master and slave roles change, so that three monitor services are deployed across the two nodes. After the master node fails, the slave node becomes the new master node and runs both a first monitor service and the second monitor service; therefore, after one node fails, two monitor services are still available on the remaining node, ensuring normal operation of the whole cluster.
As a preferred example of this embodiment, after the high-availability component elects a master node from the two nodes, an external service interface is configured on the master node; the monitoring component then deploys the second monitor service on the external service interface on the master node.
When the master node fails, the high-availability component migrates the external service interface from the master node to the slave node to configure the slave node as the new master node; after detecting the external service interface, the monitoring component migrates the second monitor service to the new master node.
In this embodiment, a monitoring component runs on each of the two nodes and monitors the role of its own node in real time; when it detects that an external service interface is configured on its node, indicating that the node's current role is master, it deploys the second monitor service on that external service interface.
As a preferred example of this embodiment, the process by which the monitoring component migrates the second monitor service to the new master node includes:
after detecting that the current role of its own node is master, the monitoring component checks the number of monitor services registered on the two nodes; when the number of monitor services is two, it directly deploys the second monitor service on the external service interface of its node; when the number of monitor services is three, it first deletes the stale second monitor service and then redeploys the second monitor service on the external service interface of its node.
As a preferred example of this embodiment, the high-availability component is further configured to detect whether the master node has failed, specifically: the master node multicasts data to the slave node through the external service interface, and if the slave node does not receive the master node's multicast data, the high-availability component on the slave node determines that the master node has failed.
The Paxos protocol is implemented inside the Ceph monitor service, and at least three monitor services are needed for fault tolerance: if only one or two monitor services are deployed, then under node failure, network failure and similar conditions the votes cannot exceed half and the service becomes unavailable. Therefore, the two-node high-availability implementation device provided by this embodiment still needs three monitor services. To run three monitor services on two physical servers, this embodiment has one of the physical servers run two monitor services and the other run one, so that three monitor services run simultaneously. To decide which of the two servers should run the two monitor services, this embodiment introduces a new component: the high-availability component Keepalived.
Keepalived is high-availability software that runs on several servers and organizes them into a server group: one server in the group is the master, the rest are backups, and a VIP (virtual IP) for external service is configured on the master. The master multicasts to the other backups in the group; when the backups stop receiving the master's messages, the master is considered down, a backup is re-elected as the new master, and the VIP migrates to the new master.
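The failover behaviour described above can be pictured with the following simplified Python sketch of a backup node's heartbeat watchdog. It only illustrates the idea of taking over when the master's advertisements stop; it is not Keepalived's actual implementation (Keepalived speaks the VRRP protocol rather than plain UDP), and the port number, timeout and function names are assumptions:

```python
import socket

MULTICAST_GROUP = "224.0.0.18"  # multicast group used by VRRP/Keepalived
PORT = 5405                     # illustrative port, not a real Keepalived setting
TIMEOUT_S = 3.0                 # assumed failover timeout


def watch_master(become_master):
    """Listen for the master's advertisements; take over when they stop."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    mreq = socket.inet_aton(MULTICAST_GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(TIMEOUT_S)
    while True:
        try:
            sock.recv(1024)   # advertisement received: the master is alive
        except socket.timeout:
            become_master()   # no advertisement: assume the master is down,
            return            # claim the VIP and start monitor3
```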
FIG. 2 is a schematic diagram of the deployment of the high-availability component provided in this embodiment; FIG. 3 is a schematic diagram of three monitors deployed on two nodes in this embodiment. Referring to FIGs. 2 and 3, in this embodiment the high-availability component Keepalived is deployed on two nodes running monitor services, and the monitor services on the two nodes are denoted monitor1 and monitor2 respectively. After Keepalived is deployed, a master node is elected from the two nodes and the external service interface VIP is configured on it; the other node automatically becomes the slave node. In this embodiment the master node is denoted master and the slave node is denoted backup.
With only two servers, Keepalived elects a master node from the two nodes; the master node then runs two monitor services, monitor1 and monitor3, while the backup node runs one monitor service, monitor2, so that three monitor services run on two physical servers.
FIG. 4 is a schematic diagram of the monitor migration process provided in this embodiment. When the master node fails, neither monitor1 nor monitor3 on the master node is available, and monitor2 alone on the backup node cannot provide high availability. Therefore, in this embodiment the VIP on the master node is migrated to the original backup node, the backup node holding the VIP becomes the new master node, and monitor3 is then deployed on the new master node, so the remaining node still serves two monitors, monitor2 and monitor3, ensuring normal operation of the whole monitor cluster.
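In a Ceph deployment, dropping the stale monitor3 from the monitor map and re-registering it on the VIP would typically involve the standard `ceph mon remove` and `ceph mon add` commands. The Python wrapper below is an illustrative sketch only; the VIP address and monitor name are assumptions, and preparing and starting the ceph-mon daemon itself on the new master is omitted:

```python
import subprocess

VIP = "192.168.1.100"   # assumed virtual IP now held by the new master node
MON_NAME = "monitor3"   # the dynamically migrating monitor

def migrate_monitor3_to_vip():
    """Remove the stale monitor3 from the monitor map and re-add it on the VIP."""
    # Drop the unreachable monitor3 entry left behind by the failed master.
    subprocess.run(["ceph", "mon", "remove", MON_NAME], check=True)
    # Register monitor3 again, now bound to the VIP on the new master node.
    subprocess.run(["ceph", "mon", "add", MON_NAME, f"{VIP}:6789"], check=True)
```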
Several failure scenarios are listed below:
(1) When any one of the three monitor services becomes abnormal, two monitor services remain, the number of normal services exceeds half, and the whole monitor cluster can still operate normally.
(2) When the backup node (server2) fails, the monitor2 service running on it goes offline from the cluster, but the master node is normal and keeps running monitor1 and monitor3; the normal services are more than half, and the whole monitor cluster can still provide service normally.
(3) When the master node (server1) fails, the two monitor services running on it become unavailable. Keepalived detects the master node failure and first migrates the master role (and the VIP) to server2; when the monitoring component detects that its role has changed from backup to master, it automatically starts the monitor3 service on the new master node (server2). That is, of the three monitors, monitor1 and monitor2 run fixedly on the two server nodes, while monitor3 migrates dynamically with the master role: when the master role migrates, monitor3 migrates to the new master. Therefore, after the master node fails, two monitor services are still available on the remaining node, and normal operation of the whole cluster is ensured.
The following is a specific execution flow of the Ceph-based two-node high-availability implementation method provided in this embodiment:
Step 1: each monitor service specifies its IP and port at startup; monitor1 and monitor2 use the local IP and a fixed port of their respective physical servers, while monitor3 migrates dynamically with the node role and is started using the VIP of the physical server currently acting as the master node.
Step 2: a monitoring program is started on each physical server and periodically checks whether the VIP is configured on its node; if the VIP is configured, the node's role is master, and if not, the role is backup.
Step 3: if the monitoring program finds that its node is a backup node, monitor3 does not need to be started there, and the current role is recorded.
Step 4: if the monitoring program finds that its node is the master node, it checks the current number of monitors; if there are two, only two monitors are currently deployed and the third monitor (monitor3) has not yet joined the cluster, so monitor3 is deployed on the VIP and the role is recorded.
Step 5: if the monitoring program finds that its node has changed from backup to master and detects that the number of monitors is three, the other node has failed; the unavailable monitor3 is deleted from the monitor cluster, reconfigured on the local VIP, and then added back into the monitor cluster.
Step 6: the monitoring program repeats steps 2 to 5 periodically (a code sketch of this loop follows).
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A two-node high-availability implementation method based on Ceph is characterized by comprising the following steps:
S1: deploying a high-availability service on two nodes that each run a first monitor service, wherein the high-availability service is used to elect a master node from the two nodes;
S2: deploying a second monitor service on the master node, and migrating the second monitor service from the master node to the slave node when the master node is detected to have failed.
2. The two-node high availability implementation method of claim 1, wherein deploying a second monitor service on the master node specifically comprises:
configuring an external service interface on the elected master node;
deploying the second monitor service on the external service interface.
3. The two-node high availability implementation method of claim 2, wherein migrating the second monitor service on the master node to the slave node specifically comprises:
migrating the external service interface on the master node to the slave node to configure the slave node as the new master node;
and deploying the second monitor service on the external service interface of the new master node.
4. The two-node high-availability implementation method according to claim 1 or 3, characterized in that the process of detecting the failure of the master node comprises:
the master node multicasts data to the slave node through the external service interface, and if the slave node does not receive the multicast data, the master node is determined to have failed.
5. The two-node high availability implementation method of claim 3, further comprising:
the new master node detects the number of monitor services running on the two nodes;
when the number of the monitor services is two, deploying a second monitor service on the external service interface;
and when the number of monitor services is three, deleting the stale second monitor service and redeploying it on the external service interface now held by the new master node (the former slave node).
6. The two-node high-availability implementation method of claim 2, wherein the first monitor services each use the local IP and a fixed port of their respective node, and the second monitor service uses the external service interface of the master node.
7. A two-node high-availability implementation device based on Ceph is characterized by comprising:
a high availability component deployed on two nodes running a first monitor service for electing a master node from the two nodes;
a monitoring component deployed on the two nodes running a first monitor service and used to deploy a second monitor service on the master node; when the master node fails, the second monitor service on the master node is migrated to the slave node.
8. The two-node high-availability implementation apparatus of claim 7, wherein the high-availability component is further configured to configure an external service interface on the master node;
the monitoring component deploys the second monitor service on the external service interface on the master node.
9. The two-node high-availability implementation apparatus of claim 8, wherein the high-availability component is further configured to migrate the external service interface on the master node to the slave node when the master node fails, so as to configure the slave node as the new master node;
the monitoring component is configured to deploy the second monitor service on the external service interface after detecting the external service interface on the new master node.
10. The two-node high-availability implementation apparatus of claim 8, wherein the process by which the monitoring component migrates the second monitor service onto the new master node comprises:
the monitoring component detects the number of monitor services running on the two nodes;
when the number of the monitor services is two, deploying a second monitor service on the external service interface;
and when the number of monitor services is three, deleting the stale second monitor service and redeploying it on the external service interface now held by the new master node (the former slave node).
CN201911006892.5A 2019-10-22 2019-10-22 Two-node high-availability implementation method and device based on Ceph Pending CN110971662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911006892.5A CN110971662A (en) 2019-10-22 2019-10-22 Two-node high-availability implementation method and device based on Ceph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911006892.5A CN110971662A (en) 2019-10-22 2019-10-22 Two-node high-availability implementation method and device based on Ceph

Publications (1)

Publication Number Publication Date
CN110971662A true CN110971662A (en) 2020-04-07

Family

ID=70029766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911006892.5A Pending CN110971662A (en) 2019-10-22 2019-10-22 Two-node high-availability implementation method and device based on Ceph

Country Status (1)

Country Link
CN (1) CN110971662A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160335166A1 (en) * 2015-05-14 2016-11-17 Cisco Technology, Inc. Smart storage recovery in a distributed storage system
CN108494585A (en) * 2018-02-28 2018-09-04 新华三技术有限公司 Elect control method and device
CN108628717A (en) * 2018-03-02 2018-10-09 北京辰森世纪科技股份有限公司 A kind of Database Systems and monitoring method
CN109474465A (en) * 2018-11-13 2019-03-15 上海英方软件股份有限公司 A kind of method and system of the high availability that can dynamically circulate based on server cluster

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111901415A (en) * 2020-07-27 2020-11-06 星辰天合(北京)数据科技有限公司 Data processing method and system, computer readable storage medium and processor
CN111901415B (en) * 2020-07-27 2023-07-14 北京星辰天合科技股份有限公司 Data processing method and system, computer readable storage medium and processor
CN112019601A (en) * 2020-08-07 2020-12-01 烽火通信科技股份有限公司 Two-node implementation method and system based on distributed storage Ceph
CN112019601B (en) * 2020-08-07 2022-08-02 烽火通信科技股份有限公司 Two-node implementation method and system based on distributed storage Ceph
CN112202601A (en) * 2020-09-23 2021-01-08 湖南麒麟信安科技股份有限公司 Application method of two physical node mongo clusters operated in duplicate set mode
CN112202601B (en) * 2020-09-23 2023-03-24 湖南麒麟信安科技股份有限公司 Application method of two physical node mongo clusters operated in duplicate set mode
CN114064414A (en) * 2021-11-25 2022-02-18 北京志凌海纳科技有限公司 High-availability cluster state monitoring method and system

Similar Documents

Publication Publication Date Title
CN110971662A (en) Two-node high-availability implementation method and device based on Ceph
US11307943B2 (en) Disaster recovery deployment method, apparatus, and system
CN111290834B (en) Method, device and equipment for realizing high service availability based on cloud management platform
US7225356B2 (en) System for managing operational failure occurrences in processing devices
US20160036924A1 (en) Providing Higher Workload Resiliency in Clustered Systems Based on Health Heuristics
EP1697843B1 (en) System and method for managing protocol network failures in a cluster system
CN102394914A (en) Cluster brain-split processing method and device
CN109474465A (en) A kind of method and system of the high availability that can dynamically circulate based on server cluster
CN107508694B (en) Node management method and node equipment in cluster
EP3291487B1 (en) Method for processing virtual machine cluster and computer system
CN103312809A (en) Distributed management method for service in cloud platform
US10652100B2 (en) Computer system and method for dynamically adapting a software-defined network
US20090164565A1 (en) Redundant systems management frameworks for network environments
CN108173971A (en) A kind of MooseFS high availability methods and system based on active-standby switch
US9807051B1 (en) Systems and methods for detecting and resolving split-controller or split-stack conditions in port-extended networks
CN111935244B (en) Service request processing system and super-integration all-in-one machine
CN110333986B (en) Method for guaranteeing availability of redis cluster
CN105959145B (en) A kind of method and system for the concurrent management server being applicable in high availability cluster
CN104125079A (en) Method and device for determining double-device hot-backup configuration information
CN105490847A (en) Real-time detecting and processing method of node failure in private cloud storage system
CN111953808A (en) Data transmission switching method of dual-machine dual-active architecture and architecture construction system
CN115794769B (en) Method for managing high-availability database, electronic equipment and storage medium
CN114338670B (en) Edge cloud platform and network-connected traffic three-level cloud control platform with same
CN114124803B (en) Device management method and device, electronic device and storage medium
US9015518B1 (en) Method for hierarchical cluster voting in a cluster spreading more than one site

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200407