CN111371606A - Method for specifying the monitor IP when using Rook to deploy a Ceph cluster - Google Patents

Method for specifying the monitor IP when using Rook to deploy a Ceph cluster

Info

Publication number
CN111371606A
CN111371606A CN202010126230.8A
Authority
CN
China
Prior art keywords
rook
monitor
ceph
cluster
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010126230.8A
Other languages
Chinese (zh)
Inventor
赵磊
蔡卫卫
谢涛涛
宋伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Huimao Electronic Port Co Ltd
Original Assignee
Shandong Huimao Electronic Port Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Huimao Electronic Port Co Ltd filed Critical Shandong Huimao Electronic Port Co Ltd
Priority to CN202010126230.8A priority Critical patent/CN111371606A/en
Publication of CN111371606A publication Critical patent/CN111371606A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/04 Network management architectures or arrangements
    • H04L 41/042 Network management architectures or arrangements comprising distributed management centres cooperatively managing the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0209 Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • H04L 63/0218 Distributed architectures, e.g. distributed firewalls
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/30 Profiles

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for specifying the monitor IP when Rook is used to deploy a Ceph cluster, relating to the technical field of Kubernetes environments. The method lets the user define the monitor IP when Rook deploys a Ceph cluster in a Kubernetes environment, improving deployment flexibility and reducing network-side limitations.

Description

Method for specifying the monitor IP when using Rook to deploy a Ceph cluster
Technical Field
The invention relates to Kubernetes environments, and in particular to a method for specifying the monitor IP when Rook is used to deploy a Ceph cluster.
Background
Currently, storage technologies can be divided into three types:
1) Block storage
Block storage attaches a raw hard disk through a protocol such as SAS, SCSI, SAN or iSCSI, after which the disk can be partitioned, formatted, and given a file system; alternatively, the raw disk can be used directly to store data (for example, by a database). A logical volume carved out by LVM is also block storage, because the operating system cannot tell whether the device is logical or physical: five logical volumes are seen as five raw physical hard disks.
2) File system storage
File system storage is a remote file system mounted over a protocol such as NFS or CIFS. An ordinary server or laptop can provide it: as long as a suitable operating system and software are installed, FTP or NFS services can be set up, and a server running such services is a file storage server, for example a NAS or an NFS server.
3) Object store
Most object stores are implemented essentially as key-value storage systems: a key looks up a value, and the value can be anything, from small files (small binary fragments) to large files, as in a network-drive or object storage service.
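The key-value model described above can be illustrated with a minimal sketch (illustrative Python; a real object store such as Ceph's RADOS adds replication, placement, and metadata on top of this flat namespace):

```python
# Minimal sketch of the object-store model: a flat key -> value
# namespace where the value may be any blob of bytes, small or large.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ObjectStore()
store.put("photos/cat.jpg", b"\x89PNG...")   # a small binary fragment
print(store.get("photos/cat.jpg"))           # looked up purely by key
```

Looking a value up by key, with no directory hierarchy or block layout in between, is what distinguishes this model from file and block storage.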
A Ceph cluster is a storage cluster that can provide interfaces for all three storage modes, and its lowest layer is an object store. Ceph provides an infinitely scalable storage cluster based on RADOS. RADOS consists of a large group of storage device nodes, each of which has its own hardware resources (CPU, memory, hard disk, network) and runs an operating system and a file system. The Ceph cluster offers users the base library librados and the high-level application interfaces RADOS GW, RBD and CephFS, ultimately meeting users' needs for different storage types. By building a Kubernetes environment, an entire computer cluster can work as a whole, allowing containerized applications to be deployed to the cluster without binding them to a single machine. As a classic representative of object storage, Ceph, with its reliability, automation and distribution, can be deployed in a Kubernetes environment by means of Rook, meeting users' needs for containerized object storage applications. Rook is an open source cloud-native storage orchestrator for Kubernetes, providing a platform, framework and support for various storage solutions to integrate natively with the cloud-native environment. Rook turns storage software into self-managing, self-scaling and self-healing storage services. It does this by automating deployment, bootstrapping, provisioning, configuration, scaling, upgrading, migration, disaster recovery, monitoring and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties, is deeply integrated into the cloud-native environment, and uses extension points to provide a seamless experience for scheduling, lifecycle management, resource management, security, monitoring and user experience.
At present, deploying a large cluster often involves many networks. For example, when deploying an OpenStack cluster, the user usually needs to consider a management network, a tunnel network and an external network; if the Ceph cluster must also be integrated with the OpenStack services, Ceph's public network and cluster network need to be considered as well. In practice there may not be enough network cards, so one network has to serve several roles, and the administrator or user must be able to allocate and use each network sensibly; this places higher demands on network deployment flexibility. Users therefore often need to customize the network to their actual situation, but in the current process of deploying Ceph with Rook, the user cannot specify the IP of the Ceph monitor, which limits users who need a custom monitor network. In fact, according to the HostNetwork setting in the cluster yaml file (of the CephCluster kind): when the user selects HostNetwork to deploy the pod, the monitor's public network is finally set to the pod's IP; when the user does not select HostNetwork, the Kubernetes network scheme (Flannel, Calico, Canal, Weave Net, etc.) determines the pod IP, and the monitor's public network is set to the IP of the Service. The upstream rationale is that the IP of a Pod is not fixed, and the Service provides an abstraction layer for accessing the Pod: whatever happens to the backend Pods, the Service serves the outside as a stable front end.
In both of these modes the monitor's public network cannot be customized by the user, which limits the flexibility with which users consume the storage function. Developing a feature that lets Rook deploy a Ceph cluster with a specified monitor IP is therefore significant.
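The stock behavior described above can be summarized in a short sketch (illustrative Python, not Rook's actual Go code): with HostNetwork the monitor's public address becomes the pod IP, otherwise it becomes the Service IP, and in neither case does the user choose it.

```python
def monitor_public_ip(host_network: bool, pod_ip: str, service_ip: str) -> str:
    """Stock behavior as described in the background section: the
    monitor's public address is either the pod IP (HostNetwork) or
    the stable Service IP fronting the pod; never user-chosen."""
    return pod_ip if host_network else service_ip

# With HostNetwork the pod shares the host's network namespace,
# so the monitor is reachable on the host address:
print(monitor_public_ip(True, "192.168.8.11", "10.96.0.20"))   # 192.168.8.11
# Without it, the Service provides a stable front end for a pod
# whose own IP can change:
print(monitor_public_ip(False, "10.244.1.7", "10.96.0.20"))    # 10.96.0.20
```

The IP values here are illustrative; the point is only that neither branch consults a user preference, which is the limitation the invention addresses.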
Disclosure of Invention
Aiming at the needs and shortcomings of current technical development, the invention provides a method for specifying the monitor IP when Rook is used to deploy a Ceph cluster, reducing the network limitations imposed on users when deploying a Ceph cluster with Rook and improving deployment flexibility.
To solve the technical problem, the method disclosed by the invention for specifying the monitor IP when Rook is used to deploy a Ceph cluster adopts the following technical scheme:
the method requires the user to select HostNetwork to deploy the pod; a pub_net parameter is set under the mon field of the spec field in the Ceph cluster yaml, where pub_net denotes the network segment the user wants for the monitor's public network; when the monitor is created, all networks on the server are scanned, the networks matching the pub_net parameter are screened out, and the building of the Ceph cluster continues.
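As a concrete illustration, the scheme above can be sketched as a CephCluster yaml fragment. Note that pub_net is the custom parameter proposed by this method, not a field in stock Rook; the surrounding field layout follows the older v1.0-era CephCluster spec and may differ between Rook versions, and the CIDR value is illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    # the method requires HostNetwork for the pods
    hostNetwork: true
  mon:
    count: 3
    # proposed custom parameter: the segment from which the
    # monitor's public network is chosen (illustrative value)
    pub_net: 192.168.8.0/24
```

With this fragment, the modified operator scans the host's networks and keeps only addresses inside 192.168.8.0/24 for the monitor's public network.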
Specifically, the implementation steps of the method for specifying the monitor IP when Rook deploys a Ceph cluster are as follows:
step 1, deploy a Kubernetes environment of at least three nodes; the environment adopts the Calico network scheme;
step 2, download the Rook source code, modify it, and rebuild the rook/ceph image;
step 3, configure the Ceph cluster yaml file, specify the image rebuilt in step 2, and set a pub_net parameter under the mon field of the spec field in the cluster yaml, where pub_net denotes the network segment the user wants for the monitor's public network;
step 4, deploy the Ceph cluster with Rook;
step 5, check the monitor IP and the health of the Ceph cluster with the official ceph toolbox.
More specifically, the at least three nodes in step 1 serve simultaneously as the monitor and OSD nodes of the Ceph cluster. In a production environment, to meet HA requirements, the OSDs need to be spread across different OSD nodes, and each node has at least one disk serving as a Ceph OSD.
Preferably, when two or more OSDs end up on one OSD node, a status of "active+undersized+degraded" appears.
Preferably, the monitor's public network and the OSDs' public network are in the same network segment.
More specifically, in step 2, modifying the Rook source code means modifying the monitor-IP processing logic, with the following operations:
step 2.1, obtain the IP of the pod;
step 2.2, query all network interfaces on the pod, filter out those belonging to the network segment specified by the pub_net parameter in step 3, and use them as the monitor's public network.
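The interface-filtering step 2.2 can be sketched as follows. Rook's real implementation is in Go (under pkg/operator/ceph/mon); this Python version only illustrates the selection logic, and the interface addresses and pub_net value below are illustrative assumptions:

```python
import ipaddress

def pick_public_ips(interface_addrs, pub_net):
    """Return the interface addresses that fall inside the pub_net
    segment; these become the monitor's public network (step 2.2)."""
    net = ipaddress.ip_network(pub_net)
    return [a for a in interface_addrs
            if ipaddress.ip_address(a) in net]

# A pod deployed with HostNetwork sees all of the host's NICs;
# only the address on the user-specified segment is kept.
addrs = ["10.244.1.7", "192.168.8.11", "172.17.0.1"]
print(pick_public_ips(addrs, "192.168.8.0/24"))  # ['192.168.8.11']
```

Because the pod runs with HostNetwork, scanning the pod's interfaces is equivalent to scanning the server's networks, which is why the method requires HostNetwork in the first place.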
Preferably, after the Rook source code is modified, the rook/ceph image is built directly with docker build.
More specifically, in the implementation of step 4, the yaml files of concern are not exactly the same across Rook versions when deploying a Ceph cluster, and the yaml files that need attention can be obtained via the official ceph toolbox.
Compared with the prior art, the method for specifying the monitor IP when using Rook to deploy a Ceph cluster has the following beneficial effects:
1) the invention realizes user-defined monitor IPs when Rook deploys a Ceph cluster in a Kubernetes environment, improving deployment flexibility and reducing network-side limitations;
2) when multiple networks exist in the deployment environment, a user can define a dedicated network as the public network of the Ceph monitor, isolating it from the other networks and thereby keeping the Ceph cluster secure.
Drawings
FIG. 1 shows the Rook architecture, as proposed by the invention, after the monitor IP is specified.
Detailed Description
To make the technical scheme, the technical problems to be solved and the technical effects of the invention clearer, the technical scheme of the invention is described below clearly and completely with reference to specific embodiments.
The first embodiment is as follows:
This embodiment provides a method for specifying the monitor IP when Rook is used to deploy a Ceph cluster. The method requires the user to select HostNetwork to deploy the pod; a pub_net parameter is set under the mon field of the spec field in the Ceph cluster yaml, where pub_net denotes the network segment the user wants for the monitor's public network; when the monitor is created, all networks on the server are scanned, the network matching the pub_net parameter is screened out, and the building of the Ceph cluster continues.
In this embodiment, the method for specifying the monitor IP when Rook deploys a Ceph cluster includes:
step 1, deploy a Kubernetes environment of three nodes; the environment adopts the Calico network scheme;
step 2, download the Rook source code, modify it, and rebuild the rook/ceph image;
step 3, configure the Ceph cluster yaml file, specify the image rebuilt in step 2, and set a pub_net parameter under the mon field of the spec field in the cluster yaml, where pub_net denotes the network segment the user wants for the monitor's public network;
step 4, deploy the Ceph cluster with Rook;
step 5, check the monitor IP and the health of the Ceph cluster with the official ceph toolbox.
In the steps above, the three nodes of step 1 serve simultaneously as the monitor and OSD nodes of the Ceph cluster. In a production environment, to meet HA requirements, the OSDs need to be spread across different OSD nodes, and each node has at least one disk serving as a Ceph OSD. When two or more OSDs end up on one OSD node, a status of "active+undersized+degraded" appears.
It should be noted that in this embodiment the monitor's public network and the OSDs' public network are in the same network segment, such as 192.168.8.0/24.
In step 2, modifying the Rook source code means modifying the monitor-IP processing logic. In the official code, the initmonaps function in rook/pkg/operator/ceph/mon/mon.go is responsible for handling the monitor's public network: when the user selects HostNetwork to deploy the pod, the monitor's public network is finally set to the pod's IP; when the user does not select HostNetwork, the Kubernetes network scheme determines the pod IP, and the monitor's public network is set to the Service's IP. Based on this, the specific changes to the Rook source code in this embodiment are:
step 2.1, obtain the IP of the pod;
step 2.2, query all network interfaces on the pod, filter out those belonging to the network segment specified by the pub_net parameter in step 3, and use them as the monitor's public network.
Furthermore, upstream provides a way of building a custom image with a Dockerfile, so after the Rook source code is modified, the rook/ceph image is built directly with docker build.
In this embodiment, during the implementation of step 4, the yaml files of concern are not exactly the same across Rook versions when deploying a Ceph cluster, and the yaml files that need attention can be obtained via the official ceph toolbox. For example, for rook/ceph v1.0.3 the user is concerned with the common, operator and cluster yaml files; for details please consult the official documentation.
Based on the first embodiment, and with reference to FIG. 1, the Rook architecture after the public network of the Ceph cluster's monitor is specified is shown. FIG. 1 takes a three-node Kubernetes cluster as an example: each node has one disk serving as an OSD, one monitor pod runs on the osd-1 node, and a rook-ceph-agent pod runs on every node. As can be seen from FIG. 1, with the first embodiment the user can customize the monitor's public network; combined with the rook-config-override configmap that upstream provides by default, from which the cluster network and public network of the OSDs can be defined, the user truly achieves customization of the Ceph cluster network. After the Rook cluster is deployed successfully, storage services can be provided externally.
In summary, the method for specifying the monitor IP when Rook is used to deploy a Ceph cluster realizes user-defined monitor IPs when Rook deploys a Ceph cluster in a Kubernetes environment, improving deployment flexibility and reducing network limitations.
The principles and embodiments of the invention have been described in detail using specific examples, which are provided only to aid understanding of its core technical content. Based on the above embodiments, any improvements and modifications made by those skilled in the art without departing from the principle of the invention shall fall within its scope of protection.

Claims (8)

1. A method for specifying the monitor IP when using Rook to deploy a Ceph cluster, characterized in that the method requires the user to select HostNetwork to deploy the pod; a pub_net parameter is set under the mon field of the spec field in the Ceph cluster yaml, the pub_net parameter denoting the network segment the user wants for the monitor's public network; when the monitor is created, all networks on the server are scanned, the networks matching the pub_net parameter are screened out, and the building of the Ceph cluster continues.
2. The method for specifying the monitor IP when a Ceph cluster is deployed using Rook as claimed in claim 1, wherein the implementation steps of the method comprise:
step 1, deploying a Kubernetes environment of at least three nodes, the environment adopting the Calico network scheme;
step 2, downloading the Rook source code, modifying it, and rebuilding the rook/ceph image;
step 3, configuring the Ceph cluster yaml file, specifying the image rebuilt in step 2, and setting a pub_net parameter under the mon field of the spec field in the cluster yaml, the pub_net parameter denoting the network segment the user wants for the monitor's public network;
step 4, deploying the Ceph cluster with Rook;
step 5, checking the monitor IP and the health of the Ceph cluster with the official ceph toolbox.
3. The method as claimed in claim 2, wherein the at least three nodes in step 1 serve simultaneously as the monitor and OSD nodes of the Ceph cluster; in a production environment, to meet HA requirements, the OSDs are spread across different OSD nodes, and each node has at least one disk serving as a Ceph OSD.
4. The method for specifying the monitor IP when deploying a Ceph cluster with Rook as claimed in claim 3, wherein when two or more OSDs are placed on one OSD node, a status of "active+undersized+degraded" appears.
5. The method as claimed in claim 3, wherein the monitor's public network and the OSDs' public network are in the same network segment.
6. The method as claimed in claim 2, wherein in step 2, modifying the Rook source code, namely modifying the monitor-IP processing logic, comprises:
step 2.1, acquiring the IP of the pod;
step 2.2, querying all network interfaces on the pod, filtering out those belonging to the network segment specified by the pub_net parameter in step 3, and using them as the monitor's public network.
7. The method for specifying the monitor IP when using Rook to deploy a Ceph cluster as claimed in claim 6, wherein after the Rook source code is modified, the rook/ceph image is built directly with docker build.
8. The method for specifying the monitor IP when deploying a Ceph cluster with Rook as claimed in claim 2, wherein in the implementation of step 4, the yaml files of concern are not identical across Rook versions when deploying the Ceph cluster, and the yaml files that need attention can be obtained via the official ceph toolbox.
CN202010126230.8A 2020-02-26 2020-02-26 Method for specifying the monitor IP when using Rook to deploy a Ceph cluster Pending CN111371606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010126230.8A 2020-02-26 2020-02-26 Method for specifying the monitor IP when using Rook to deploy a Ceph cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010126230.8A 2020-02-26 2020-02-26 Method for specifying the monitor IP when using Rook to deploy a Ceph cluster

Publications (1)

Publication Number Publication Date
CN111371606A true CN111371606A (en) 2020-07-03

Family

ID=71211575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010126230.8A Pending CN111371606A (en) 2020-02-26 2020-02-26 Method for specifying monitor ip when using look to deploy ceph cluster

Country Status (1)

Country Link
CN (1) CN111371606A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268313A (en) * 2021-06-08 2021-08-17 北京汇钧科技有限公司 Task execution method and device based on cloud container

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050010926A1 (en) * 2003-07-11 2005-01-13 Sreedhara Narayanaswamy System and method for cluster deployment
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters
CN108989419A (en) * 2018-07-11 2018-12-11 郑州云海信息技术有限公司 A kind of memory node dispositions method based on cloud storage


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CGTXSGEOO421515302: "Multi-network configuration of a ceph cluster (public, cluster, network, addr)", CSDN *
YGQYGQ2: "Deploying rook-ceph storage *** on kubernetes", HTTPS://BLOG.51CTO.COM/YGQYGQ2/2449524 *


Similar Documents

Publication Publication Date Title
US9971823B2 (en) Dynamic replica failure detection and healing
CN111338854B (en) Kubernetes cluster-based method and system for quickly recovering data
CN113296792B (en) Storage method, device, equipment, storage medium and system
CN106713493B (en) System and method for constructing distributed file in computer cluster environment
CN108959385B (en) Database deployment method, device, computer equipment and storage medium
CN106657167B (en) Management server, server cluster, and management method
US11991094B2 (en) Metadata driven static determination of controller availability
CN111225064A (en) Ceph cluster deployment method, system, device and computer-readable storage medium
CN113626286A (en) Multi-cluster instance processing method and device, electronic equipment and storage medium
US20210279073A1 (en) Systems and methods for automated and distributed configuration of computing devices
CN111684437A (en) Chronologically ordered staggered updated key-value storage system
CN113254156A (en) Container group deployment method and device, electronic equipment and storage medium
CN114510464A (en) Management method and management system of high-availability database
CN111274191A (en) Method for managing ceph cluster and cloud local storage coordinator
CN111371606A (en) Method for specifying the monitor IP when using Rook to deploy a Ceph cluster
CN117544507A (en) Multi-region distributed configuration method and system based on cloud object storage service
CN115357198B (en) Mounting method and device of storage volume, storage medium and electronic equipment
CN115344273B (en) Method and system for running application software based on shelf system
CN114661420B (en) Application protection method, device and system based on Kubernetes container platform
CN111767345B (en) Modeling data synchronization method, modeling data synchronization device, computer equipment and readable storage medium
US20230108778A1 (en) Automated Generation of Objects for Kubernetes Services
CN113468182B (en) Data storage method and system
AU2021268828B2 (en) Secure data replication in distributed data storage environments
CN115618409A (en) Database cloud service generation method, device, equipment and readable storage medium
CN114077501A (en) Method, electronic device, and computer-readable medium for managing a microservice architecture system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200703)