CN113726899B - Construction method of available micro data center for colleges and universities based on OpenStack - Google Patents


Info

Publication number
CN113726899B
CN113726899B (application CN202111022871.XA)
Authority
CN
China
Prior art keywords: node, steps, network, availability, following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111022871.XA
Other languages
Chinese (zh)
Other versions
CN113726899A (en)
Inventor
李雷孝
李杰
高昊昱
康泽锋
马志强
万剑雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Technology
Original Assignee
Inner Mongolia University of Technology
Application filed by Inner Mongolia University of Technology
Priority: CN202111022871.XA
Published as CN113726899A; granted as CN113726899B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1034 Reaction to server failures by a load balancer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45587 Isolation or security of virtual machine instances
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention belongs to the technical field of micro data center construction, and particularly relates to an OpenStack-based construction method of an available micro data center for colleges and universities, which comprises the following steps: S1, constructing a cloud platform super-fusion framework, wherein the cloud platform super-fusion framework comprises a hardware facility foundation, fusion framework resource pooling, and resource scheduling management automation; S2, respectively constructing high availability for the database persistence layer, the message queue, the storage layer and the network layer; S3, constructing a reliability evaluation model; and S4, verifying high availability. The invention provides a scheme suitable for constructing a decentralized, highly available micro data center cloud platform for colleges and universities based on OpenStack, so that the service availability of the OpenStack cloud platform reaches the high-availability standard and environmental faults of the cloud platform can be identified in real time. The reliability evaluation model and the test cases further verify the fault tolerance, reliability and high availability of the cluster, and the availability of the service can reach 99.99%.

Description

Construction method of available micro data center for colleges and universities based on OpenStack
Technical Field
The invention belongs to the technical field of micro data center construction, and particularly relates to a construction method of an available micro data center for colleges and universities based on OpenStack.
Background
With the exception of HP's Helion scheme, existing OpenStack HA schemes centralize the control or management nodes, which is clearly unsuitable for building a micro data center at this scale. The control nodes are the core through which the whole platform provides services to the outside; once a control node fails, the availability of the cloud platform drops sharply, and the reliability of a small number of control nodes becomes the high-availability weak point of the whole cloud platform. The root cause is that these solutions separate the control nodes from the computing and storage nodes: one physical server installs only the control components (as a control node) or only the computing or storage components (as a computing or storage node), so the whole cluster's resistance to single-point failures cannot improve as nodes are added. Although HP's Helion scheme does merge the control and computing roles on each node, it is not combined with a distributed storage scheme and cannot perform redundant backup of image storage, i.e., it cannot achieve high storage availability.
Disclosure of Invention
Aiming at the technical problem that the cloud service of a college data center is insufficiently reliable, the invention provides an OpenStack-based construction method for a college and university available micro data center with high fault tolerance, high availability, and strong reliability.
In order to solve the technical problems, the invention adopts the technical scheme that:
an OpenStack-based construction method for a college and university available micro data center comprises the following steps:
s1, constructing a cloud platform super-fusion framework, wherein the cloud platform super-fusion framework comprises a hardware facility foundation, fusion framework resource pooling and resource scheduling management automation;
s2, respectively constructing a database with high availability of a persistent layer, a message queue, a storage layer and a network layer;
s3, constructing a reliability evaluation model;
and S4, verifying high availability.
A switch is arranged in the hardware facility foundation and adopts a port stacking mode; network virtualization is arranged in the fusion framework resource pooling, and the network virtualization adopts super-fusion node configuration on a network card; and elastic expansion is arranged in the resource scheduling management automation, and the expandability of an OpenStack cloud platform is increased by the elastic expansion.
The method for constructing the database persistence layer in the S2 comprises the following steps: the state of the database in each node and the data in the database must be consistent with the contents of the databases of other nodes in the cluster, and a Galera cluster is adopted to realize a MariaDB multi-master mode.
The Galera cluster includes three MariaDB nodes that are peers and serve as master nodes to one another.
The message queue high availability in S2 is achieved by adopting the RabbitMQ component to implement the Advanced Message Queuing Protocol (AMQP).
The method for constructing the storage layer high availability in S2 comprises the following steps: the back-end storage adopts the OpenStack component Cinder interfaced with Ceph, wherein Ceph is an extensible, software-defined open-source storage system; the back-end storage adopts the FileStore storage mode in Ceph, in which a journal is written before the data is written, so that the journal turns one write request into two write operations.
The method for constructing the network layer high availability in S2 comprises the following steps: the network layer adopts Keepalived + Haproxy; Keepalived provides server health check and fault-node isolation functions by implementing a TCP/IP-layer health check mechanism of the OSI seven-layer protocol; Keepalived is installed on each super fusion node in the cloud platform super fusion framework to form a Keepalived node, and the Keepalived nodes communicate using the VRRP protocol defined by Keepalived. Haproxy is free, open-source software written in the C language and is used for load balancing.
The reliability evaluation model in S3 comprises the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, the identity authentication Keystone, the volume management Cinder, the network management Neutron, the mirror image management Glance, the calculation management Nova, and SDEP blocks indicating the dependency relationships among the components. The physical server hardware HW is connected with the operating system OS; the operating system OS is connected with the RabbitMQ component through the storage system Ceph and the MariaDB node respectively; the RabbitMQ component is connected with the network management Neutron through the identity authentication Keystone and the volume management Cinder respectively; the network management Neutron is connected with the calculation management Nova through the mirror image management Glance; the physical server hardware HW points to the operating system OS through an SDEP block; the MariaDB node points to the identity authentication Keystone through an SDEP block; and the identity authentication Keystone points to the network management Neutron and the mirror image management Glance through SDEP blocks respectively. The component pointed to by an SDEP block is the component that must be repaired first in the event of a failure, and whether that component can operate depends on the state of the preceding component.
The verification high availability in the S4 comprises a test software level error, a test hardware level error and a test network level error;
the method for testing the software level errors comprises the following steps: comprises the following steps:
step one, simulating the occurrence of errors, and closing Nova service;
secondly, an administrator logs in OpenStack;
step three, enumerating and calculating Nova instances and calculating response time;
fourthly, recovering Nova service;
the method for testing hardware level errors comprises the following steps: comprises the following steps:
step one, simulating the occurrence of errors, and restarting any node server;
step two, calculating response time;
the method for testing the network level error comprises the following steps: comprises the following steps:
step one, simulating the occurrence of errors, and unplugging a network cable of any node server;
and step two, calculating response time by adopting a ping command.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a scheme suitable for constructing a micro data center decentralized high-availability cloud platform in colleges and universities based on OpenStack, so that the service availability of the OpenStack cloud platform reaches a high-availability standard, and the environmental fault of the cloud platform can be identified in real time. The reliability evaluation model and the test cases are utilized to further verify the fault tolerance, reliability and high availability of the cluster, and the availability of the service can reach 99.99%.
Drawings
FIG. 1 is a schematic diagram of a cloud platform hyper-convergence architecture of the present invention;
FIG. 2 is a schematic diagram of a Galera cluster according to the present invention;
FIG. 3 is a schematic diagram of the response to a user request under Keepalived + Haproxy in accordance with the present invention;
fig. 4 is a schematic diagram of the reliability evaluation model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "connected" and "coupled" are to be interpreted broadly, e.g., as fixed, detachable, or integral connections; as mechanical or electrical connections; or as direct connections or indirect connections through intervening media or intermediate elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
An OpenStack-based construction method for an available micro data center for colleges and universities comprises the following steps:
s1, constructing a cloud platform super-fusion framework, wherein the cloud platform super-fusion framework comprises a hardware facility foundation, fusion framework resource pooling and resource scheduling management automation, as shown in figure 1;
s2, respectively constructing a database with high availability of a persistent layer, a message queue, a storage layer and a network layer;
s3, constructing a reliability evaluation model;
and S4, verifying high availability.
Furthermore, a switch is arranged in the hardware facility foundation, and the switch adopts a port stacking mode; network virtualization is arranged in the fusion framework resource pooling, and the network virtualization adopts super-fusion node configuration on a network card; and elastic expansion is arranged in the resource scheduling management automation, and the expandability of the OpenStack cloud platform is increased by the elastic expansion.
Further, the method for constructing the database persistence layer in the S2 comprises the following steps: the state of the database in each node and the data in the database must be consistent with the contents of the databases of other nodes in the cluster, and a Galera cluster is adopted to realize a MariaDB multi-master mode.
Further, the Galera cluster includes three MariaDB nodes that are peers and serve as master nodes to one another. A client can connect to any one of the MariaDB nodes to read and write data. In a read operation, the data read from each node is the same. In a write operation, when data is written to one node, the cluster synchronizes it to the other nodes.
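The Galera multi-master setup described above can be sketched as a configuration fragment for one of the three MariaDB nodes. The file path, node names, IP addresses, and cluster name below are hypothetical placeholders, not values from the patent:

```ini
# /etc/my.cnf.d/galera.cnf: sketch for node1 (hypothetical addresses)
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
# All three peer nodes; any of them can accept both reads and writes
wsrep_cluster_address    = "gcomm://192.168.1.11,192.168.1.12,192.168.1.13"
wsrep_cluster_name       = openstack_db_cluster
wsrep_node_name          = node1
wsrep_node_address       = 192.168.1.11
# Galera requires row-based replication and InnoDB
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

Each node carries the same fragment with its own `wsrep_node_name` and `wsrep_node_address`, which is what makes the three nodes peers and masters to one another.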
Further, the message queue in S2 achieves high availability by adopting the RabbitMQ component to implement the Advanced Message Queuing Protocol (AMQP). The main function of the RabbitMQ component is to synchronize the state information of operations and services on the cloud platform among the components, and it is responsible for communication among the OpenStack components. RabbitMQ supports high availability and strong scalability: a node newly added to the cluster can join dynamically simply by specifying some cluster information (such as the cluster name, port, and master node IP address), and the existing nodes need neither a restart nor configuration file changes.
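The dynamic-join behavior described above can be sketched as a RabbitMQ configuration fragment using classic peer discovery; the node names are hypothetical placeholders, and a node can equivalently be joined at runtime with `rabbitmqctl join_cluster`:

```ini
# /etc/rabbitmq/rabbitmq.conf: sketch (hypothetical node names)
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@node1
cluster_formation.classic_config.nodes.2 = rabbit@node2
cluster_formation.classic_config.nodes.3 = rabbit@node3
```

For high availability of the queues themselves, classic queues can additionally be mirrored across the cluster nodes via a policy (e.g. `rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}'`).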
Further, the method for constructing the storage layer high availability in S2 comprises the following steps: the back-end storage adopts the OpenStack component Cinder interfaced with Ceph, wherein Ceph is an extensible, software-defined open-source storage system; the back-end storage adopts the FileStore storage mode in Ceph, in which a journal is written before the data is written, so that the journal turns one write request into two write operations.
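The FileStore write path can be illustrated with a toy model (illustrative only, not Ceph code): every request is first appended to the write-ahead journal and then applied to the backing store, so one request produces two write operations:

```python
# Toy model of FileStore write-ahead journaling (illustrative, not Ceph code).
def filestore_write(requests):
    """Apply write requests; each costs one journal write plus one data write."""
    journal, data, writes = [], {}, 0
    for key, value in requests:
        journal.append((key, value))  # write-ahead: append to the journal first
        writes += 1
        data[key] = value             # then apply the write to the data store
        writes += 1
    return data, writes

data, writes = filestore_write([("obj1", b"a"), ("obj2", b"b"), ("obj3", b"c")])
print(writes)  # 6 write operations for 3 requests: 2x write amplification
```

The journal makes the write durable before it is applied, at the cost of the doubled write count shown above.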
Further, the method for constructing the network layer high availability in S2 is as follows: the network layer adopts Keepalived + Haproxy. Keepalived provides server health check and fault-node isolation functions by implementing a TCP/IP-layer health check mechanism of the OSI seven-layer protocol. Keepalived is installed on each super fusion node in the cloud platform super fusion framework to form a Keepalived node, and the Keepalived nodes communicate using the VRRP protocol defined by Keepalived. The cluster elects a master node from the nodes according to the election algorithm provided by Keepalived, and the virtual IP is placed on that node. The other nodes send heartbeat check packets to the master node at intervals and request a response; if the master node does not acknowledge the heartbeat check within a certain time, the other nodes consider the master node failed, and the virtual IP is then migrated to a new master node.
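The election-and-migration behavior just described can be sketched with a toy simulation (illustrative only; real Keepalived elects the master by VRRP priority and advertisements, and the node names and priorities below are hypothetical):

```python
# Toy simulation of VRRP-style master election and virtual-IP migration.
def elect_master(nodes):
    """Return the alive node with the highest priority; it holds the virtual IP."""
    alive = [n for n in nodes if n["alive"]]
    return max(alive, key=lambda n: n["priority"])["name"] if alive else None

nodes = [
    {"name": "node1", "priority": 100, "alive": True},
    {"name": "node2", "priority": 90,  "alive": True},
    {"name": "node3", "priority": 80,  "alive": True},
]
print(elect_master(nodes))  # node1 holds the virtual IP
nodes[0]["alive"] = False   # node1 stops answering heartbeat checks
print(elect_master(nodes))  # the virtual IP migrates to node2
```

Because every super-fusion node runs Keepalived, any surviving node can take over the virtual IP, which is what removes the single point of failure.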
Keepalived supports several routing modes, and the choice of routing mode affects how the virtual IP forwards client requests and how the real servers respond. Keepalived supports the following two routing modes: NAT routing and DR routing. NAT routing carries a risk of single-point failure: if the load balancer or forwarder fails, the cluster service may become unavailable, so it is not suitable for a cluster with a super-fusion architecture. This scheme therefore adopts DR routing. The difference from NAT is that the back-end server returns the result of processing the user's request directly to the user, no longer passing through the forwarder or load balancer. This weakens the role of the forwarder and can even eliminate a dedicated forwarder. Because no forwarding is performed, the response time to user requests is shortened, latency is reduced, and the user experience is better. Moreover, since Keepalived is installed on every super-fusion server, the single-point-failure problem is avoided, the high-availability property is guaranteed, and the network response delay can be shortened.
Haproxy is free, open-source software written in the C language and is used for load balancing. Combined with Keepalived, it achieves cluster high availability. Which back-end server a user request is sent to is determined by the load balancing algorithm in Haproxy; Haproxy only needs to specify, in its configuration file, the virtual IP provided by Keepalived. The common Haproxy policies are as follows. Roundrobin receives requests in turn in the order of the server list (server1, server2, server3); this may cause the instance address accessed to differ from the address the user requested. Leastconn selects the server with the fewest active connections; since the servers of the highly available cloud platform carry similar loads, no single server can be consistently elected. The Source policy hashes the source IP, so a fixed IP is always bound to one of the servers unless the bound server goes down and the IP address is migrated. After comparison, the Source policy is finally selected to ensure that a user's requests to the cloud platform are always sent to the same server, avoiding the problem that the address of the server where the virtual machine instance is located is inconsistent with the user's request address. The response to a user request when Keepalived is combined with Haproxy is shown in fig. 3. As shown in FIG. 3, all three servers run Keepalived + Haproxy but appear externally as a single virtual IP. In normal use, a virtual IP bound to one of the servers is created for the user, and the servers use VRRP for liveness detection. If that server fails to respond, another server takes over the virtual IP without the user noticing and continues to provide the corresponding services, thereby ensuring high availability.
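The Source policy's key property, that a fixed client IP always maps to the same back-end server, can be sketched with a toy hash (illustrative only; HAProxy's actual source hashing also accounts for server weights, and the server names and IP below are hypothetical):

```python
# Toy version of HAProxy's "source" balancing: hash the client IP so that a
# fixed IP is always bound to the same back-end server.
import hashlib

SERVERS = ["server1", "server2", "server3"]

def pick_server(client_ip, servers=SERVERS):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client lands on the same server across repeated requests, so the
# instance address the user reaches stays consistent between requests.
assert all(pick_server("10.0.0.7") == pick_server("10.0.0.7") for _ in range(5))
print(pick_server("10.0.0.7"))
```

Note the trade-off the patent accepts: stickiness per client IP instead of per-request load spreading, because address consistency matters more here than even distribution.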
Further, the reliability evaluation model in S3 includes the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, the identity authentication Keystone, the volume management Cinder, the network management Neutron, the mirror image management Glance, the calculation management Nova, and SDEP blocks indicating the dependency relationships among the components. The physical server hardware HW is connected with the operating system OS; the operating system OS is connected with the RabbitMQ component through the storage system Ceph and the MariaDB node respectively; the RabbitMQ component is connected with the network management Neutron through the identity authentication Keystone and the volume management Cinder respectively; the network management Neutron is connected with the calculation management Nova through the mirror image management Glance; the physical server hardware HW points to the operating system OS through an SDEP block; the MariaDB node points to the identity authentication Keystone through an SDEP block; and the identity authentication Keystone points to the network management Neutron and the mirror image management Glance through SDEP blocks respectively. The component pointed to by an SDEP block is the component that must be repaired first in the event of a failure, and whether that component can operate depends on the state of the preceding component. As can be seen from fig. 4, the operating system OS depends on the physical server hardware HW, while all the OpenStack components above it depend on the operating system OS. The database MariaDB component is the basis of the identity authentication Keystone, the volume management Cinder, and the network, mirror image, and calculation components; Ceph provides the underlying data storage service for the other components; and the mirror image management component Glance provides the images used by the calculation management Nova when an end user creates an instance.
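The SDEP dependency chain of fig. 4 can be encoded as a small dependency graph (a sketch of one plausible reading of the figure, not the patent's exact model): a component operates only if everything it depends on is operational, so the failed component deepest in the chain must be repaired first.

```python
# Toy encoding of the SDEP dependency relationships (illustrative reading of Fig. 4).
SDEP = {                       # component -> components it depends on
    "OS": ["HW"],
    "Ceph": ["OS"], "MariaDB": ["OS"],
    "RabbitMQ": ["Ceph", "MariaDB"],
    "Keystone": ["RabbitMQ", "MariaDB"],
    "Cinder": ["RabbitMQ"],
    "Neutron": ["Keystone", "Cinder"],
    "Glance": ["Neutron", "Keystone"],
    "Nova": ["Glance"],
}

def operational(component, failed):
    """A component works only if it and all of its dependencies are healthy."""
    if component in failed:
        return False
    return all(operational(dep, failed) for dep in SDEP.get(component, []))

# If MariaDB fails, Keystone and everything above it stop working, so MariaDB
# must be repaired before the higher-level services can be restored.
print(operational("Nova", failed={"MariaDB"}))   # False
print(operational("Ceph", failed={"MariaDB"}))   # True
```

Walking the graph this way is what lets the model rank which component to repair first after a fault.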
Further, the invention implements the high availability scheme proposed herein on three machines with 16 CPUs, 251 GB of memory, and 64 TB of storage, and performs high availability verification of the scheme. The invention designs test cases from the following three angles: testing software-level errors, testing hardware-level errors, and testing network-level errors.
Further, the method for testing software-level errors, as shown in Table 1, comprises the following steps:
step one, simulating the occurrence of an error by shutting down the Nova service;
step two, an administrator logs in to OpenStack;
step three, enumerating the Nova compute instances and calculating the response time;
and step four, restoring the Nova service.
TABLE 1
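The response-time measurement used across these test cases can be sketched as a polling loop: probe the service until it answers again and report the elapsed time. `check_service` is a stand-in for a real probe such as listing Nova instances or pinging a node (a hypothetical helper, not OpenStack API code):

```python
# Sketch of the recovery-time measurement in the test cases (illustrative).
import time

def measure_recovery(check_service, timeout=60.0, interval=0.5):
    """Poll check_service() until it succeeds; return elapsed seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if check_service():
            return time.monotonic() - start  # measured response time
        time.sleep(interval)
    raise TimeoutError("service did not recover within the timeout")

# Demo with a stub probe that "recovers" on the third poll.
probes = iter([False, False, True])
elapsed = measure_recovery(lambda: next(probes), interval=0.01)
print(elapsed >= 0)  # True
```

In the hardware and network test cases the probe would be a ping instead of a Nova API call, but the timing loop is the same.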
Further, the method for testing hardware-level errors, as shown in Table 2, comprises the following steps:
step one, simulating the occurrence of errors, and restarting any node server;
and step two, calculating response time.
TABLE 2
Further, the method for testing network-level errors, as shown in Table 3, comprises the following steps:
step one, simulating the occurrence of errors, and unplugging a network cable of any node server;
and step two, calculating response time by adopting a ping command.
TABLE 3
Testing with these test cases shows that the Nova component can recover to normal within 5 seconds and can switch seamlessly to other nodes when a fault occurs. The Service Level Agreement (SLA) standard that a provider guarantees can be used to measure the availability of a cloud platform. Substituting the test results of the experiment into the SLA calculation formula shows that the service availability of the cloud platform reaches 99.99%, satisfying Service Level Agreement grade II. This further verifies the high availability of the present solution.
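The arithmetic behind the 99.99% ("four nines") figure can be made explicit: availability is uptime divided by total time, so four nines allows roughly 52.6 minutes of downtime per year. This is an illustrative calculation of the standard availability formula, not the patent's exact SLA formula:

```python
# Availability arithmetic behind the 99.99% claim (illustrative).
SECONDS_PER_YEAR = 365 * 24 * 3600

def availability(downtime_seconds, period_seconds=SECONDS_PER_YEAR):
    """Fraction of the period during which the service was up."""
    return 1.0 - downtime_seconds / period_seconds

# 99.99% allows at most about 52.6 minutes of downtime per year:
max_downtime = SECONDS_PER_YEAR * (1 - 0.9999)
print(round(max_downtime / 60, 1))      # 52.6 (minutes per year)

# 100 failures per year, each recovered within the measured 5 s, stays
# comfortably above the 99.99% threshold:
print(availability(100 * 5) >= 0.9999)  # True
```

At a 5-second recovery per failure, the four-nines budget is exhausted only after roughly 630 failures in a year, which is why the fast Nova failover supports the 99.99% claim.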
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (2)

1. A construction method of an OpenStack-based available micro data center for colleges and universities, characterized by comprising the following contents:
s1, constructing a cloud platform super-fusion framework, wherein the cloud platform super-fusion framework comprises a hardware facility foundation, fusion framework resource pooling and resource scheduling management automation;
s2, respectively constructing high availability for the database persistence layer, the message queue, the storage layer and the network layer; the method for constructing the database persistence layer in S2 comprises the following steps: the state of the database in each node and the data in the database must be consistent with the contents of the databases of the other nodes in the cluster, and a Galera cluster is adopted to realize the MariaDB multi-master mode; the Galera cluster comprises three MariaDB nodes which are peers and serve as master nodes to one another; the message queue high availability in S2 is achieved by adopting the RabbitMQ component to implement the Advanced Message Queuing Protocol; the method for constructing the storage layer high availability in S2 comprises the following steps: the back-end storage adopts the OpenStack component Cinder interfaced with Ceph, the Ceph is an extensible, software-defined open-source storage system, the back-end storage adopts the FileStore storage mode in Ceph, in which a journal is written before the data is written, so that the journal turns one write request into two write operations; the method for constructing the network layer high availability in S2 comprises the following steps: the network layer adopts Keepalived + Haproxy, the Keepalived provides server health check and fault-node isolation functions by implementing a TCP/IP-layer health check mechanism of the OSI seven-layer protocol, each super fusion node in the cloud platform super fusion framework is provided with Keepalived to become a Keepalived node, and the Keepalived nodes communicate by adopting the VRRP protocol defined by Keepalived; the Haproxy is free, open-source software written in the C language and is used for load balancing;
s3, constructing a reliability evaluation model; the reliability evaluation model in S3 comprises the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, the identity authentication Keystone, the volume management Cinder, the network management Neutron, the mirror image management Glance, the calculation management Nova, and SDEP blocks representing the dependency relationships between the components, wherein the physical server hardware HW is connected with the operating system OS, the operating system OS is respectively connected with the RabbitMQ component through the storage system Ceph and the MariaDB node, the RabbitMQ component is respectively connected with the network management Neutron through the identity authentication Keystone and the volume management Cinder, the network management Neutron is connected with the calculation management Nova through the mirror image management Glance, the physical server hardware HW points to the operating system OS through an SDEP block, the MariaDB node points to the identity authentication Keystone through an SDEP block, and the identity authentication Keystone respectively points to the network management Neutron and the mirror image management Glance through SDEP blocks; the component pointed to by an SDEP block indicates the component which must be repaired first when a fault occurs, and whether the component pointed to by the SDEP block can operate depends on the state of the preceding component;
S4, verifying high availability;
the verification of high availability in S4 comprises testing software-level errors, testing hardware-level errors and testing network-level errors;
the method for testing software-level errors comprises the following steps:
step one, simulating the occurrence of an error by shutting down the Nova service;
step two, an administrator logs in to OpenStack;
step three, enumerating the Nova compute instances and calculating the response time;
step four, restoring the Nova service;
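The "calculate response time" step above can be sketched with a small timing helper. This is an illustration only: the stubbed instance listing stands in for a real OpenStack SDK or CLI call (e.g. an `openstack server list` issued after the simulated Nova outage), which is an assumption, not part of the claim.

```python
import time

def timed(call):
    """Return (result, elapsed_seconds) for any callable -- the
    'calculate response time' measurement used in the tests above."""
    t0 = time.monotonic()
    result = call()
    return result, time.monotonic() - t0

# Hypothetical stand-in for step three ('enumerate the Nova compute
# instances'); a real run would query the cloud's API endpoint instead.
def list_instances_stub():
    return ["vm-1", "vm-2"]

instances, elapsed = timed(list_instances_stub)
print(instances, round(elapsed, 3))
```

The same helper serves the hardware-level test below: wrap whatever request is issued while a node server reboots and record how long the surviving nodes take to answer.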
the method for testing hardware-level errors comprises the following steps:
step one, simulating the occurrence of an error by restarting any node server;
step two, calculating the response time;
the method for testing network-level errors comprises the following steps:
step one, simulating the occurrence of an error by unplugging the network cable of any node server;
step two, calculating the response time using a ping command.
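For the ping-based measurement above, the response time can be extracted from standard ping output. The sketch below parses the `time=... ms` fields; the sample output and the 10.0.0.10 address are hypothetical, and a real test would feed it the output of `ping -c N <VIP>` instead.

```python
import re

# Hypothetical two-line excerpt of ping output against a cluster VIP.
SAMPLE = """64 bytes from 10.0.0.10: icmp_seq=1 ttl=64 time=0.482 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=64 time=0.391 ms"""

def mean_rtt_ms(ping_output):
    """Average the 'time=X ms' fields of ping output -- the 'calculate
    response time with a ping command' step above. Returns None when
    no replies were received (total outage)."""
    times = [float(m) for m in re.findall(r"time=([\d.]+) ms", ping_output)]
    return sum(times) / len(times) if times else None

print(mean_rtt_ms(SAMPLE))
```

Comparing this average before, during and after the cable is unplugged shows how quickly VRRP moves the virtual IP to a healthy node.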
2. The OpenStack-based construction method for a high-availability miniature data center in colleges and universities according to claim 1, characterized in that: a switch in the hardware facility foundation adopts a port-stacking mode; network virtualization is provided in the converged-architecture resource pooling, the network virtualization adopting hyper-converged node configuration of the network cards; and elastic scaling is provided in the resource scheduling and management automation, the elastic scaling increasing the scalability of the OpenStack cloud platform.
CN202111022871.XA 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack Active CN113726899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111022871.XA CN113726899B (en) 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack


Publications (2)

Publication Number Publication Date
CN113726899A CN113726899A (en) 2021-11-30
CN113726899B true CN113726899B (en) 2022-10-04

Family

ID=78680754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111022871.XA Active CN113726899B (en) 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack

Country Status (1)

Country Link
CN (1) CN113726899B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827148B (en) * 2022-04-28 2023-01-03 北京交通大学 Cloud security computing method and device based on cloud fault-tolerant technology and storage medium
CN116049136B (en) * 2022-12-21 2023-07-28 广东天耘科技有限公司 Cloud computing platform-based MySQL cluster deployment method and system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 Container deployment method and framework based on OpenStack
CN110750334A (en) * 2019-10-25 2020-02-04 北京计算机技术及应用研究所 Ceph-based cyber range back-end storage system design method
CN111444020A (en) * 2020-03-31 2020-07-24 中国科学院计算机网络信息中心 Hyper-converged computing system architecture and converged service platform

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
EP2926501A4 (en) * 2012-12-03 2016-07-13 Hewlett Packard Development Co Asynchronous framework for management of iaas
US9336091B2 (en) * 2014-03-06 2016-05-10 International Business Machines Corporation Reliability enhancement in a distributed storage system
CN111290839A (en) * 2020-05-09 2020-06-16 南京江北新区生物医药公共服务平台有限公司 IAAS cloud platform system based on openstack
CN112615666B (en) * 2020-12-19 2022-07-15 河南方达空间信息技术有限公司 Micro-service high-availability deployment method based on RabbitMQ and HAproxy


Non-Patent Citations (2)

Title
Research on private cloud construction and high availability for small and medium-sized enterprises based on OpenStack; Xu Peng; China Master's Theses Full-text Database (Information Science and Technology); 20170515; full text *
An implementation case of a highly available private cloud based on OpenStack; Tang Feixiong; Computer Systems & Applications; 20150615; sections 2-4 *


Similar Documents

Publication Publication Date Title
CN110750334B (en) Ceph-based network target range rear end storage system design method
US20200257593A1 (en) Storage cluster configuration change method, storage cluster, and computer system
CN114787781B (en) System and method for enabling high availability managed failover services
CN107734026B (en) Method, device and equipment for designing network additional storage cluster
US20220091771A1 (en) Moving Data Between Tiers In A Multi-Tiered, Cloud-Based Storage System
US9128626B2 (en) Distributed virtual storage cloud architecture and a method thereof
CN113726899B (en) Construction method of available micro data center for colleges and universities based on OpenStack
US20200142788A1 (en) Fault tolerant distributed system to monitor, recover and scale load balancers
CN108200124B (en) High-availability application program architecture and construction method
US20120079090A1 (en) Stateful subnet manager failover in a middleware machine environment
US8938604B2 (en) Data backup using distributed hash tables
US8671218B2 (en) Method and system for a weak membership tie-break
US20160057009A1 (en) Configuration of peered cluster storage environment organized as disaster recovery group
CN112000635A (en) Data request method, device and medium
CN104811476A (en) Highly-available disposition method facing application service
US20180205612A1 (en) Clustered containerized applications
US11733874B2 (en) Managing replication journal in a distributed replication system
US6804819B1 (en) Method, system, and computer program product for a data propagation platform and applications of same
CN108512753B (en) Method and device for transmitting messages in cluster file system
CN113849136B (en) Automatic FC block storage processing method and system based on domestic platform
CN111818188B (en) Load balancing availability improving method and device for Kubernetes cluster
US9077665B1 (en) Transferring virtual machines and resource localization in a distributed fault-tolerant system
CN112104729A (en) Storage system and caching method thereof
US11704289B2 (en) Role reversal of primary and secondary sites with minimal replication delay
CN112882771A (en) Server switching method and device of application system, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant