WO2020062131A1 - A container cloud management system based on blockchain technology - Google Patents


Info

Publication number
WO2020062131A1
WO2020062131A1 (PCT/CN2018/108575)
Authority
WO
WIPO (PCT)
Prior art keywords
node
application
management system
container cloud
cloud management
Prior art date
Application number
PCT/CN2018/108575
Other languages
English (en)
French (fr)
Inventor
韦小强
田江波
刘广德
陈奇
姚鑫
张鹏
王与实
Original Assignee
北京连云决科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京连云决科技有限公司 filed Critical 北京连云决科技有限公司
Priority to CN201880097738.0A (CN113169952B)
Priority to PCT/CN2018/108575 (WO2020062131A1)
Publication of WO2020062131A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 - Network security protocols
    • H04L9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords

Definitions

  • the invention belongs to the technical field of computer cloud computing, and particularly relates to a container cloud management system based on blockchain technology.
  • Cloud computing technology provides usable, convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, application software, etc.); these resources can be provisioned quickly, with minimal administrative effort or interaction with the service provider.
  • public cloud is considered the main form of cloud computing.
  • Public cloud usually refers to a cloud provided by a third-party provider that can be used for free or at low cost, and is generally available through the Internet.
  • the core attribute of public cloud is shared resource services.
  • cloud computing platforms can be divided into three types according to different functions: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).
  • scheme 1: the application management platform (PaaS service) provided by IaaS service providers to their users, whose purpose is to let users manage their own applications more easily and to provide value-added services.
  • scheme 2: PaaS public cloud services, such as Google's GAE (Google App Engine) and Sina's SAE (Sina App Engine), which allow users to complete the entire application life cycle (development, build, test, and deployment) on the PaaS platform, but support only a limited set of development languages, and all applications and data reside on the PaaS public cloud platform.
  • scheme 3: PaaS private cloud services. Some users, in order to protect the security of their developed applications and their data, are unwilling to use PaaS public cloud platforms or the application management platforms provided by IaaS service providers, and instead build their own PaaS private cloud platforms. Such a platform can be developed by the users themselves, or implemented as a project by a chosen service provider.
  • schemes 1 and 2 tightly couple resources, applications, data, and the platform; therefore, schemes 1 and 2 will not be adopted by users who are concerned about application and data security.
  • scheme 3 is the preferred option for users who are concerned about application and data security, but in the process of building the cloud platform, a self-built platform requires high personnel and server costs, while choosing an external provider requires expensive project implementation costs; both its cost and its implementation cycle exceed those of schemes 1 and 2.
  • because of their flexibility, low cost, and pay-per-use advantages, PaaS public cloud platforms will be accepted by more and more users; as long as the security issues of existing PaaS public cloud platforms can be solved, the PaaS public cloud platform is expected to become the first choice for users deploying Internet applications.
  • the present invention is intended to provide a container cloud management system based on blockchain technology, which decouples resources, applications, data, and the platform, reducing users' costs while solving the security issues of resources, applications, and data.
  • the security issues can involve: 1. in the container cloud management platform, how to give the user management rights over his own resources, so that the platform cannot operate on the user's resources; 2. in the container cloud management platform, how to give the user control over operation permissions on applications and data, so that the platform cannot operate the user's applications or read the user's operation records and data beyond minimum permissions; 3.
  • the invention discloses a container cloud management system based on a blockchain technology.
  • the container cloud management system includes a plurality of management node Masters, each of which can communicate with the other management node Masters through a network. Each management node Master can receive and process an access request from a working node Node, the access request being the working node Node's request to join the container cloud management system. Before the working node Node sends the access request to a management node Master, the user generates a public/private key account for the working node Node.
  • the access request includes the working node Node registering its own information with the management node Master;
  • the self information includes the host name, kernel version, operating system information, and docker version of the working node Node;
  • the processing of the access request by the management node Master includes the Master including the working node Node in cluster scheduling, and the Master monitoring in real time the status of the working nodes Node included in cluster scheduling.
  • the real-time monitoring is specifically: the working node Node periodically sends its status information to the management node Master, and the Master writes the received status information into its cluster distributed storage database etcd while analyzing and processing that status information.
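The heartbeat just described (the Node periodically reports its status; the Master writes it to its store and analyzes it) can be sketched as a minimal model. This is an illustrative stand-in, not the actual kubelet/kube-apiserver implementation; `MasterStub` and `report_status` are hypothetical names.

```python
import time

class MasterStub:
    """Stands in for a management node Master's etcd-backed status store."""
    def __init__(self):
        self.etcd = {}  # node name -> latest status record

    def receive_status(self, status):
        # Write the received status into the cluster store, then "analyze" it
        # (here trivially: report whether the node claims to be healthy).
        self.etcd[status["node"]] = status
        return status["ready"]

def report_status(master, node_name, ready=True):
    """One heartbeat: the working node Node sends its status to the Master."""
    status = {
        "node": node_name,
        "ready": ready,
        "timestamp": time.time(),
    }
    return master.receive_status(status)
```

A real Node would call `report_status` on a timer, once per reporting interval.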
  • each of the plurality of management node Masters provides a cluster distributed storage database etcd, and the processing of the access request by the management node Master includes the Master writing the working node Node's self information into its cluster distributed storage database etcd.
  • the management node master synchronizes the self information and status information of the accessed working node Node to all other management node masters in the container cloud management system.
  • the accessed working node Node can communicate through a network with each of the plurality of management node Masters, and this mutual communication includes the working node Node listening for application operations on the management node Master.
  • the application operations include creating an application, modifying an application, and deleting an application. When the operation is creating an application, the working node Node creates the application according to the creation requirements; when it is modifying an application, the working node Node modifies the application according to the modification requirements; when it is deleting an application, the working node Node deletes the application according to the deletion requirements. The application is based on the Pod, so the application operations include creating a Pod, modifying a Pod, and deleting a Pod.
  • a user can quickly connect his own device to the container cloud management system as a working node Node, and through that accessed working node Node send an application creation request to any one of the multiple management node Masters; the application creation request carries the user's public key, the user's signature information signed with the private key, and the original template data used to create the application.
  • the management node Master can receive and respond to the application creation request; responding includes using the user's public key to verify the user signature on the request. If verification fails, creation of the application is refused and the user is notified that application creation failed.
  • the management node Master performs a write operation that writes the application creation request to its cluster distributed storage database etcd, and the write operation includes distributing the application creation request to all other management node Masters.
  • the write operation further includes using a blockchain consensus algorithm to verify, based on the application creation request, the data consistency and signature data of all cluster distributed storage databases etcd in the container cloud management system. Specifically: if the number of etcd databases that pass verification is greater than half, the application creation request is written to every etcd in the container cloud management system; otherwise every etcd refuses to write the request and the user is notified that application creation failed.
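The majority rule above (commit the write only when more than half of the etcd replicas pass verification) can be sketched as a simple quorum check. This is an illustrative model with hypothetical names (`consensus_write`, plain dicts as replicas), not etcd's actual Raft-based replication protocol.

```python
def consensus_write(replicas, request, verify):
    """Write `request` to every replica only if more than half of the
    replicas verify its consistency/signature; otherwise refuse the write.

    `replicas` is a list of dicts standing in for etcd stores; `verify` is a
    callable (replica, request) -> bool modelling each replica's local check.
    """
    passed = sum(1 for r in replicas if verify(r, request))
    if passed > len(replicas) / 2:
        for r in replicas:
            r.setdefault("log", []).append(request)  # commit everywhere
        return True   # request written to every replica
    return False      # no majority: the write is refused
```

With three Masters, two passing verifications suffice to commit; one is not enough.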
  • the management node Master screens qualified working nodes Node to create the application; specifically, the Master screens nodes that meet the conditions according to scheduling rules.
  • management node Master writes the screening result into its cluster distributed storage database etcd.
  • the working node Node obtains and processes the application creation through a Watch mechanism; obtaining it includes the working node Node obtaining the user public key, the signature information signed with the user's private key, and the original template data carried by the application creation request.
  • processing the application creation includes: the working node Node verifies the user signature with the user public key based on the application creation request; if verification succeeds, it creates the application and notifies the user of success; if verification fails, it refuses to create the application and notifies the user of failure.
  • the working node Node periodically sends the creation result status and subsequent operation status to the management node master.
  • the management node Master can monitor the status of system resources in real time, judge and process based on the current status, and restore the resource status to the expected state.
  • the user can also send an application deletion request to any one of the multiple management nodes Master through the accessing working node Node, where the application deletion request carries the user's public key and the user's signature information signed with the private key.
  • the management node Master can receive and respond to the application deletion request; responding includes using the user's public key to verify the user signature on the request. If verification fails, deletion of the application is refused and the user is notified that application deletion failed.
  • the management node Master queries the target application resource object in all cluster distributed storage databases etcd in the container cloud management system.
  • the management node master distributes the application deletion request to all other management node masters.
  • all the cluster distributed storage databases etcd in the container cloud management system use a blockchain consensus algorithm to verify data consistency and signature data based on the application deletion request. Specifically: if the number of etcd databases that pass verification is greater than half, the management node Master sends the application deletion request to the corresponding working node Node; otherwise the Master rejects the deletion request and notifies the user that application deletion failed.
  • the corresponding working node Node receives and processes the application deletion request, specifically: the working node Node verifies the user signature with the user public key based on the request; if verification fails, deletion is refused and the user is notified of failure; if verification succeeds, the application is deleted.
  • the corresponding worker node Node sends the deletion result to the management node Master, and the management node Master performs a write operation that writes the deletion result to the cluster distributed storage database etcd.
  • the write operation further includes all cluster distributed storage databases etcd in the container cloud management system using a blockchain consensus algorithm to verify the data consistency and signature data of the deletion result. Specifically: if the number of etcd databases that pass verification is greater than half, the deletion result is written to every etcd in the system and the user is notified that the application was deleted successfully; otherwise every etcd refuses to write the deletion result and the user is notified that application deletion failed.
  • the application according to the invention is based on a Pod.
  • the above blockchain-based container cloud management system gives users sole management rights over their own resources; neither the platform nor any third party can operate the users' applications or read the users' operation records and data. Users can therefore conveniently use a third-party independent container cloud management platform without worrying about application and data security, while also effectively reducing their cost of use.
  • FIG. 1 is a schematic diagram of a cluster architecture of a container cloud management system in the prior art;
  • FIG. 2 is a business flowchart of a container cloud management system in the prior art;
  • FIG. 3 is a business flowchart of a container cloud management system based on a blockchain technology according to an embodiment of the present invention.
  • container technology is increasingly used in cloud computing.
  • the container described here is essentially a virtualization technology; it differs from a virtual machine in that a virtual machine virtualizes hardware, whereas a container virtualizes the operating system.
  • the container packages the application together with its execution environment, and deploying the application deploys the entire container. Because the container carries its own application execution environment, environmental changes cannot cause application deployment exceptions during deployment, achieving "build once, execute everywhere".
  • the existing container cloud management system is based on a container management platform that manages containers.
  • the existing container management platforms include Kubernetes container management platform, Mesos container management platform, and Swarm container management platform.
  • Kubernetes container management platform is currently the most popular leading solution for distributed architecture based on container technology. It uses a distributed architecture to divide the machines in the cluster into a management node Master and a group of working nodes Node.
  • the main functions of the Kubernetes container management platform include: packaging, instantiating, and running applications using Docker; running and managing containers across hosts in a cluster; and solving communication problems between containers running on different hosts.
  • the Kubernetes container management platform will be used as the basis to describe the improved container cloud management system based on blockchain technology provided by embodiments of the present invention, but those skilled in the art will understand that the container cloud management system based on blockchain technology can also be based on other types of container management platforms.
  • FIG. 1 illustrates a cluster architecture of a prior art container cloud management system.
  • the cluster architecture of the Kubernetes container management platform is mainly composed of a management node Master, a working node Node, and a storage node Storage.
  • the management node Master provides: (1) an API server via the kube-apiserver process, which is the cluster's API interface and the only entry point for add, delete, modify, and query operations on all resources; (2) a scheduler via the kube-scheduler process, which is responsible for scheduling cluster resources, such as binding a Pod (the smallest management element in Kubernetes) to a working node Node; (3) a controller manager via the kube-controller-manager process, which is the cluster's automated management and control center.
  • the management node Master performs management functions for the entire cluster, such as resource management, Pod scheduling, elastic scaling, security control, and system monitoring and error correction, all fully automated.
  • the working node Node provides: (1) a kubelet process, which manages Pods and their containers, images, volumes, etc., realizing management of the node itself; (2) a kube-proxy process, which provides network proxying and load balancing and implements communication with the kube-apiserver process on the management node Master; (3) a docker engine process, which is responsible for container management on the node.
  • FIG. 2 illustrates a business process of a container cloud management system in the prior art.
  • the business processes of the existing Kubernetes container management platform are mainly:
  • the working node Node to be accessed starts a kubelet process service; through the automatic registration mechanism of the kubelet process service, it actively registers the working node Node with a management node Master in the container cloud management system.
  • after receiving the registration information of the working node Node, the management node Master writes the registration information to etcd in the Master, and the registered working node Node is included in cluster scheduling.
  • the kube-controller-manager process in the management node Master will monitor the status of the registered working node Node in real time.
  • after the kubelet process completes registration, it periodically reports the status information of its working node Node to the management node Master.
  • the management node Master writes the received status information to its etcd and performs corresponding analysis and processing based on the status information of the working node Node.
  • the kubelet process of the working node Node also monitors, through the management node Master's API using the Watch mechanism (observe-and-notify mechanism), the /registry/pods/$NodeName and /registry/pods directories in etcd on the management node Master; all operations on Pods are thus observed by the kubelet process.
  • the working node Node responds to the above monitoring as follows: (1) if a new Pod bound to this working node Node is found, it creates the Pod according to the requirements of the Pod list; (2) if a Pod already created on this working node Node is found to require deletion, it performs the corresponding Pod deletion operation.
  • Step S1: using a client on the working node Node, the user submits a Pod creation request (data in JSON or YAML format is supported) to the management node Master through kubectl (the client provided by Kubernetes, which can directly operate Kubernetes) or the RESTful API interface;
  • Step S2 The kube-apiserver process in the management node Master receives and processes the Pod creation request submitted by the user, and stores the original template data into etcd in the management node Master;
  • Step S3: the kube-scheduler process in the management node Master discovers that a new unbound Pod has been generated, and then attempts to assign a working node Node to the Pod;
  • Step S4: the kube-scheduler process filters the working nodes Node according to scheduling rules (e.g., the CPU and memory required by the Pod), scores the candidates based on their current running status, selects the highest-scoring working node Node, binds the Pod to that working node Node, and then writes the binding result to etcd in the management node Master;
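The filter-and-score scheduling in step S4 can be sketched as follows. This is a minimal illustrative model, not the real kube-scheduler's predicate/priority pipeline; the field names (`cpu_free`, `mem_free`, `pods`) are assumptions made for the sketch.

```python
def schedule_pod(pod, nodes):
    """Filter nodes that satisfy the Pod's CPU/memory request, score the
    survivors by remaining headroom, and bind the Pod to the top scorer."""
    feasible = [n for n in nodes
                if n["cpu_free"] >= pod["cpu"] and n["mem_free"] >= pod["mem"]]
    if not feasible:
        return None  # no node can host this Pod
    # Score by free capacity left after placement (higher is better).
    best = max(feasible, key=lambda n: (n["cpu_free"] - pod["cpu"])
                                       + (n["mem_free"] - pod["mem"]))
    best.setdefault("pods", []).append(pod["name"])  # the "binding"
    return best["name"]
```

The binding result (which node the Pod landed on) is what step S4 writes to etcd.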
  • Step S5 The kubelet process in the worker node Node discovers and acquires the newly created Pod task through the Watch mechanism (observation notification mechanism), and then calls the Docker API (Docker API is an external operation interface provided by the docker engine process) to create and start the Pod;
  • Step S6 The kubelet process will periodically report the created result status and subsequent running status to the management node Master;
  • Step S7: the kube-controller-manager process in the management node Master simultaneously monitors the status of resources in the cluster in real time, and judges and processes according to the current status, attempting to restore the resource status to the desired state.
  • Table 1 shows the specific format of data sent by kubectl in the prior art:
  • Step S1 The user uses the client to submit a Pod deletion request to the management node Master through a kubectl or RESTful API interface;
  • Step S2 The kube-apiserver process in the management node Master receives and processes the Pod delete request submitted by the user, and queries the matching resource object in etcd in the management node Master, generates a delete task, and sends it to the working node Node.
  • Step S3 The kubelet process in the working node Node calls the Docker API (Docker API is an external operation interface provided by the docker engine process) to delete the Pod-related containers, clean up the Pod-related data, and release resources;
  • Step S4 The kubelet process reports the deletion result to the kube-apiserver process in the management node Master;
  • Step S5 The kube-apiserver process updates the result to etcd in the management node Master, and cleans up the resource object.
  • the entire process requires close coupling between resources, applications, data, and platforms, and there are hidden security risks.
  • the user has no management rights over his own resources and no control over operation permissions on applications and data; the platform can operate the user's resources, operate the user's applications, and read the user's operation records and data. Furthermore, if one or some management node Masters in the container cloud management system crash or are compromised, the use and security of the entire system are affected, and in turn the security of the applications and services of all users.
  • Blockchain technology uses a chained-block data structure to verify and store data, a distributed-node consensus algorithm to generate and update data, cryptography to ensure the security of data transmission and access, and smart contracts composed of automated script code to program and operate on data.
  • Blockchain technology has significant decentralization characteristics: it is open and autonomous, and its information cannot be tampered with.
  • Blockchain technology uses a consensus algorithm to achieve trust establishment and rights acquisition between different nodes, and guarantees that the data in all nodes in a cluster in a distributed system is exactly the same and can reach agreement on a proposal.
  • common blockchain consensus algorithms include Raft consensus algorithm, Paxos consensus algorithm, Proof-of-Work (POW), Proof-of-Stake (POS), and Delegated Proof-of-Stake (DPOS).
  • the embodiments of the present invention aim to provide an improved container cloud management system based on blockchain technology.
  • the container cloud management system includes: at least three management node Masters, each of which may provide a kube-apiserver process, a kube-scheduler process, a kube-controller-manager process, and a cluster distributed storage database etcd; and at least one working node Node, which can provide a kubelet process, a kube-proxy process, and a docker engine process.
  • the at least three management node Masters in the container cloud management system based on the blockchain technology in the embodiment of the present invention can communicate with each other through the network, and each working node Node can communicate through the network with any one of the management node Masters.
  • FIG. 3 illustrates a business process of a container cloud management system according to an embodiment of the present invention.
  • the process specifically includes:
  • before users connect their equipment as a working node Node to the container cloud management system, each user needs to generate his own public/private key account in advance; the public key can be disclosed to everyone, while the private key is kept by the user himself and must not be released externally, since once it is leaked, security cannot be guaranteed.
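The account-generation step can be sketched with a toy scheme. The Python standard library provides no asymmetric signatures, so this sketch substitutes an HMAC-based tag: unlike a real public-key scheme (e.g. Ed25519 or RSA, which an actual deployment would use), verification here also requires the private key. All names and the scheme itself are illustrative, not what the patent specifies.

```python
import hashlib
import hmac
import secrets

def generate_account():
    """Toy public/private key account: the 'public key' is a hash of the
    secret, so it can be disclosed without revealing the private key."""
    private_key = secrets.token_hex(32)  # kept by the user, never released
    public_key = hashlib.sha256(private_key.encode()).hexdigest()
    return public_key, private_key

def sign(private_key, message: bytes) -> str:
    # Tag the message with the secret (stand-in for a private-key signature).
    return hmac.new(private_key.encode(), message, hashlib.sha256).hexdigest()

def verify(public_key, private_key, message: bytes, signature: str) -> bool:
    # In this toy scheme the verifier also needs the private key; a real
    # public-key scheme would verify with the public key alone.
    if hashlib.sha256(private_key.encode()).hexdigest() != public_key:
        return False  # key pair does not match
    return hmac.compare_digest(sign(private_key, message), signature)
```

A request would carry the public key plus the signature over its payload, as the steps below describe.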
  • each working node Node that wants to access the system starts its own kubelet process; through the automatic registration mechanism of the kubelet process, it actively registers the working node Node with one of the management node Masters in the container cloud management system, carrying the host name, kernel version, operating system information, docker version, and other data of the working node Node during registration.
  • after receiving the registration information of the working node Node, the management node Master writes the registration information to etcd in the Master, and the successfully registered working node Node is included in cluster scheduling.
  • the kube-controller-manager process in the management node Master will monitor the status of the registered working node Node in real time.
  • after the kubelet process completes registration, it periodically reports the status information of its corresponding working node Node to the management node Master.
  • the management node Master also writes the received status information to etcd and performs corresponding analysis and processing based on the status information of the working node Node.
  • the kubelet process in the working node Node also monitors, through the management node Master's API using the Watch mechanism (observe-and-notify mechanism), the /registry/pods/$NodeName and /registry/pods directories in etcd on the management node Master; all operations on Pods are thus observed by the kubelet process.
  • the working node Node responds to the above monitoring as follows: (1) if a new Pod bound to this working node Node is found, it creates the Pod according to the requirements of the Pod list; (2) if a Pod already created on this working node Node is found to require deletion, it performs the corresponding Pod deletion operation to delete the Pod.
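The kubelet's response to watched Pod events listed above can be modeled as a small dispatcher. This is an illustrative sketch: a plain dict stands in for an etcd watch notification, and `handle_pod_event` is a hypothetical name, not a kubelet API.

```python
def handle_pod_event(event, local_pods):
    """Dispatch one watched Pod event: create newly bound Pods, and delete
    Pods marked for removal. `event` is a dict like
    {"op": "create" | "delete", "pod": <pod name>}."""
    op, pod = event["op"], event["pod"]
    if op == "create" and pod not in local_pods:
        local_pods.append(pod)   # (1) create per the Pod list requirements
    elif op == "delete" and pod in local_pods:
        local_pods.remove(pod)   # (2) corresponding Pod deletion operation
    return local_pods
```

A real kubelet would drive this from the Watch stream on the /registry/pods directories and call the container runtime instead of mutating a list.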
  • after any working node Node in the container cloud management system connects itself to one of the management node Masters through the above access process, its registration information is synchronized through that Master's kube-apiserver process and etcd to all other management node Masters in the system, so a working node Node does not have to correspond to a specific management node Master; it can flexibly access any management node Master in the network.
  • although the working node Node is accessed through one particular management node Master, after it joins the container cloud management system it can communicate with all other management node Masters in the system through the network, and from this working node Node the user can deploy applications in the system through the other management node Masters.
  • Step S1: when application deployment is required, the user, using a client on a working node Node that has accessed the container cloud management system, submits a Pod creation request (data in JSON or YAML format is supported) through kubectl (the client that comes with Kubernetes, which can directly operate Kubernetes) or a RESTful API interface to one of the management node Masters in the container cloud management system. The Pod creation request carries the user's public key, the user's signature information signed with the private key, and the original template data used to create the Pod.
  • Step S2 The kube-apiserver process in the management node Master receives, processes, and parses the Pod creation request submitted by the user;
  • Step S3: The kube-apiserver process verifies the user signature on the Pod creation request with the user's public key. If verification fails, it refuses to create the corresponding Pod and notifies the user that Pod creation failed;
  • Step S4: If the kube-apiserver process verifies the user signature successfully, it attempts to write the Pod creation request into etcd in the management node Master; correspondingly, etcd distributes the request to etcd in all other management node Masters according to its internal consensus algorithm;
  • Step S5: If the above write succeeds, the kube-scheduler process in the management node Master finds that a new unbound Pod has been generated and tries to allocate the required worker node Node for it;
  • Step S6: The kube-scheduler process filters worker nodes Node that meet the conditions (e.g. the CPU and memory resources the Pod requires) according to the scheduling rules, ranks the filtered worker nodes by their current running state, selects the highest-scoring worker node Node, binds the Pod to it, and writes the binding result into etcd in the management node Master;
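The filter-then-score selection in Step S6 can be sketched as follows. The node fields and the scoring formula are made up for illustration; the patent does not specify how nodes are scored.

```python
def schedule(pod_req, nodes):
    """Filter nodes that satisfy the Pod's resource request, then pick the
    highest-scoring candidate (more remaining headroom gives a higher score)."""
    cpu, mem = pod_req["cpu"], pod_req["mem"]
    fit = [n for n in nodes if n["free_cpu"] >= cpu and n["free_mem"] >= mem]
    if not fit:
        return None  # no node qualifies; the Pod stays unbound

    def score(n):
        # Normalised free capacity left after placing the Pod.
        return (n["free_cpu"] - cpu) / n["cap_cpu"] + (n["free_mem"] - mem) / n["cap_mem"]

    return max(fit, key=score)["name"]

nodes = [
    {"name": "node-a", "cap_cpu": 4, "free_cpu": 1, "cap_mem": 8, "free_mem": 2},
    {"name": "node-b", "cap_cpu": 4, "free_cpu": 3, "cap_mem": 8, "free_mem": 6},
]
assert schedule({"cpu": 1, "mem": 2}, nodes) == "node-b"
```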
  • Step S7 The kubelet process in the working node Node will discover and obtain a new Pod creation task through the Watch mechanism (observation notification mechanism);
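The Watch mechanism used in Step S7 can be approximated by revision-based polling, as in this sketch. The class and field names are hypothetical; real Kubernetes watches stream events over HTTP from the API Server rather than polling a local list.

```python
class PodRegistry:
    """Toy /registry/pods store with a monotonically increasing revision,
    so a kubelet-like watcher can ask 'what changed since revision N?'."""

    def __init__(self):
        self.revision = 0
        self.events = []  # (revision, operation, pod_name)

    def record(self, op, pod_name):
        self.revision += 1
        self.events.append((self.revision, op, pod_name))

    def watch(self, since):
        """Return every event newer than the given revision."""
        return [e for e in self.events if e[0] > since]

reg = PodRegistry()
reg.record("create", "web-1")
seen = reg.watch(since=0)       # the watcher's first poll sees the new task
reg.record("delete", "web-1")
assert seen == [(1, "create", "web-1")]
assert reg.watch(since=1) == [(2, "delete", "web-1")]
```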
  • Step S8 The kubelet process receives the Pod creation task and uses the user's public key to perform user signature verification. If the user signature verification fails, it will refuse to create the Pod and prompt the user that the Pod creation failed;
  • Step S9 If the kubelet process successfully checks the user's signature, it will call the Docker API (Docker API is an external operation interface provided by the docker engine process) to create and start the Pod, and prompt the user that the Pod was successfully created;
  • Step S10 The kubelet process will periodically report the created result status and subsequent running status to the management node Master;
  • Step S11: The kube-controller-manager process in the management node Master monitors the state of cluster resources in real time, acts on the current state, and attempts to restore resource state to the desired state.
  • in step S4, when the kube-apiserver process attempts to write the Pod creation request into etcd in the management node Master, etcd performs, via the Raft consensus algorithm, a secondary verification of data consistency and signature data across etcd in all other management nodes in the system.
  • the etcd in all the management node Masters of the container cloud management system in the embodiment of the present invention is regarded as a single etcd cluster, and the secondary verification is specifically: (1) only when more than 1/2 of the etcd members in the cluster pass the secondary verification can the Pod creation request be written into every etcd in the cluster, after which etcd in the management node Master returns a write-success message to the kube-apiserver process and step S5 continues; (2) if 1/2 or more of the etcd members fail the secondary verification, the data-and-operation consensus has failed; the etcd cluster refuses to write the Pod creation request sent by the kube-apiserver process, etcd in the management node Master returns a write-failure message to the kube-apiserver process, and the user is notified that Pod creation failed.
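The strict-majority rule in this secondary verification reduces to a one-line check; a sketch (the helper name is hypothetical):

```python
def quorum_write(verdicts):
    """Strict-majority rule described above: the write is accepted only when
    MORE than half of the etcd members pass the secondary verification."""
    passed = sum(1 for ok in verdicts if ok)
    return passed > len(verdicts) / 2

assert quorum_write([True, True, False])             # 2 of 3 pass: commit
assert not quorum_write([True, False, True, False])  # exactly 1/2: reject
```

Note that "greater than or equal to 1/2 failing" and "more than 1/2 passing" are complementary only because a tie counts as rejection, which the strict `>` comparison captures.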
  • in the above step S3, if the kube-apiserver process fails to verify the user signature, it directly refuses to accept the Pod creation request; in the above step S4, if a write-failure message is returned to the kube-apiserver process, it means the process has no permission to write the Pod creation request into the etcd cluster, and it likewise directly refuses to accept the request.
  • although the Raft consensus algorithm is used in the above step S4, those skilled in the art will understand that other applicable blockchain consensus algorithms can equally be applied in step S4.
  • Table 2 shows the specific format of data sent by kubectl in the embodiment of the present invention:
  • Step S1 The user uses the client to submit a Pod delete request to one of the management node masters in the container cloud management system through the kubectl or RESTful API interface.
  • the Pod delete request carries the user's public key and the signature information signed by the user using the private key.
  • Step S2 The kube-apiserver process in the management node Master receives, processes, and parses the Pod delete request submitted by the user;
  • Step S3 The kube-apiserver process uses the user public key to perform user signature verification on the Pod deletion request. If the user signature verification fails, it will refuse to delete the Pod and prompt the user that the Pod deletion failed;
  • Step S4 If the kube-apiserver process successfully checks the user signature, it will query the matching resource object in etcd in the container cloud management system and generate a Pod deletion task.
  • the etcd cluster likewise uses the Raft consensus algorithm to verify data consistency and signature data for the kube-apiserver's Pod deletion task. The verification is specifically: (1) when more than 1/2 of the etcd members in the cluster pass the verification, verification success is returned to the kube-apiserver process, which dispatches the Pod deletion task to the kubelet process on the corresponding worker node Node; (2) if 1/2 or more of the etcd members fail, the Pod deletion task is directly rejected and the user is notified that Pod deletion failed;
  • Step S5 The kubelet process in the corresponding worker node receives the Pod deletion task and uses the user public key to perform user signature verification again. If the user signature verification fails, the kubelet process will refuse to delete the corresponding Pod and prompt the user Pod deletion failed;
  • Step S6 If the kubelet process successfully checks the user signature, it will call the Docker API (Docker API is an external operation interface provided by the docker engine process) to delete the corresponding Pod, clean up the relevant data of the Pod, and release resources;
  • Step S7 The kubelet process reports the deletion result to the kube-apiserver process
  • Step S8: The kube-apiserver process writes the Pod deletion result into the etcd cluster, which again uses the Raft consensus algorithm to verify data consistency and signature data. The verification is specifically: (1) when more than 1/2 of the etcd members pass the verification, the etcd cluster accepts the write of the Pod deletion result, responds to the kube-apiserver process that the result was written successfully, and the user is notified that Pod deletion succeeded; (2) if 1/2 or more of the etcd members fail, the write of the Pod deletion result is directly rejected and the user is notified that Pod deletion failed.
  • although the Raft consensus algorithm is used in the above step S8, other applicable blockchain consensus algorithms can equally be applied in step S8.
  • the operation logs in etcd in the container cloud management system are append-only and do not support log deletion, so personal operation logs can be viewed and traced at any time to complete related log-audit operations.
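The append-only, tamper-evident property this audit function relies on can be sketched with a hash chain. This is a simplification of what etcd plus a blockchain consensus layer would actually provide; the class and method names are hypothetical.

```python
import hashlib
import json

class AuditLog:
    """Append-only operation log: each entry commits to the previous one via a
    hash chain, so retroactive edits are detectable during an audit."""

    def __init__(self):
        self.entries = []  # (payload_json, chained_hash)

    def append(self, payload: dict):
        prev = self.entries[-1][1] if self.entries else "genesis"
        body = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append((body, digest))

    def verify(self) -> bool:
        """Recompute the whole chain; any rewritten entry breaks it."""
        prev = "genesis"
        for body, digest in self.entries:
            if hashlib.sha256((prev + body).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append({"user": "alice", "op": "create-pod", "pod": "web-1"})
log.append({"user": "alice", "op": "delete-pod", "pod": "web-1"})
assert log.verify()
log.entries[0] = ('{"op": "forged"}', log.entries[0][1])  # tamper with history
assert not log.verify()
```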
  • the container cloud management system provided by the present invention is improved based on blockchain technology and consensus algorithms, which solves the security problems of resources, applications, and data, so that users have management rights over their own resources.


Abstract

The present invention discloses a blockchain-based container cloud management system. The system comprises multiple management nodes Master and multiple worker nodes Node, with each Master providing a clustered distributed storage database etcd. When a user deploys or deletes an application in the system through a worker node Node, the Masters perform user-signature verification and/or data-consistency validation based on a blockchain consensus algorithm. The system effectively improves security and gives users management authority over their own resources: neither the platform operator nor any third party can operate a user's applications or read the user's operation records and data. Users can therefore conveniently adopt an independent third-party public cloud platform without worrying about application and data security, effectively reducing their cost of use.

Description

A Blockchain-Based Container Cloud Management System
Technical Field
The present invention belongs to the technical field of computer cloud computing, and in particular relates to a container cloud management system based on blockchain technology.
Background Art
Cloud computing technology provides convenient, on-demand network access to a shared pool of configurable computing resources (including networks, servers, storage, and application software). These resources can be provisioned rapidly with minimal management effort or interaction with the service provider.
Currently, cloud computing takes the forms of public cloud, private cloud, and hybrid cloud, of which public cloud is considered the dominant form. A public cloud generally refers to a cloud, provided by a third-party provider, that users can use for free or at low cost, typically over the Internet; its core attribute is shared resource services.
In the cloud computing field, platforms can be divided by function into three types: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In the PaaS field, three service schemes currently dominate:
1. Application-management PaaS services offered by IaaS providers to their users, intended to let users manage their own applications more conveniently and to provide value-added services.
2. Public PaaS cloud services, such as Google's GAE (Google App Engine) and Sina's SAE (Sina App Engine), which let users complete the entire application lifecycle (development, build, test, deployment) on the PaaS platform, but which support only a limited set of development languages, and on which all applications and data reside on the shared PaaS public cloud.
3. Private PaaS cloud services. Some users, in order to protect the security of their applications and data, are unwilling to use public PaaS platforms or the application-management platforms of IaaS providers, and instead build their own private PaaS platform, either developing it themselves or commissioning a service provider as an implementation project.
Schemes 1-3 all require users to bind resources, applications, data, and platform together for installation, deployment, and operation. Schemes 1 and 2 in particular tightly couple these four elements, and are therefore not adopted by users who care about application and data security.
Although scheme 3 is the preferred option for users who care about application and data security, building the cloud platform requires not only substantial staffing and server costs for a self-built platform but also expensive project-implementation costs when an external provider is chosen; both the cost and the implementation period exceed those of schemes 1 and 2.
It is foreseeable, however, that as cloud computing continues to develop, public PaaS platforms will be accepted by more and more users thanks to their flexibility, low cost, and pay-per-use model. If the security problems of existing public PaaS platforms can be solved, public PaaS platforms are expected to become the preferred choice for deploying Internet applications.
Therefore, how to solve the security problems of public PaaS platforms that users are currently most worried about, and how to let users conveniently use an independent third-party public PaaS platform without worrying about application and data security, is the problem the present invention seeks to address.
Summary of the Invention
The present invention aims to provide a blockchain-based container cloud management system that decouples resources, applications, and data from the platform, reducing users' cost of use while solving the security problems of resources, applications, and data. The security problems here involve: 1. how, in a container cloud management platform, to give users management authority over their own resources so that the platform operator cannot operate those resources; 2. how to give users control over operations on their applications and data, so that the platform operator cannot operate a user's applications and can read the user's operation records and data only with minimal privilege; 3. how to ensure that the crash or compromise of some of the platform's management nodes neither affects normal use of the platform nor the security of users' application services and data; 4. how to let users audit the records of operations performed on their own applications on the platform.
The object of the present invention is achieved through the following technical solutions:
The invention discloses a blockchain-based container cloud management system comprising multiple management nodes Master. Each Master can communicate with the other Masters over the network, and each can receive and process an access request from a worker node Node, the access request being the worker node's request to join the container cloud management system. The system is characterized in that, before the worker node Node sends the access request to a Master, the user generates, based on the worker node Node, a public/private key account for that access request.
Further, the access request includes the worker node Node registering its own information with the Master; this information includes the worker node's host name, kernel version, operating-system information, and docker version. The Master's processing of the worker node's access request includes bringing the worker node into cluster scheduling, and the Master monitors in real time the state of worker nodes brought into cluster scheduling.
Further, the real-time monitoring is specifically: the worker node Node periodically sends its state information to the Master, which writes the received state information into its clustered distributed storage database etcd while analyzing and processing it.
Further, each of the multiple Masters provides a clustered distributed storage database etcd, and the Master's processing of the worker node's access request includes writing the worker node's registration information into its etcd.
Further, after the access request, the Master synchronizes the accessed worker node's registration and state information to all other Masters in the container cloud management system.
Preferably, after the access request, the accessed worker node Node can communicate over the network with each of the multiple Masters; this communication includes the worker node watching for application-related operations on the Masters.
Preferably, the application-related operations include creating, modifying, and deleting applications. When the operation is creation, the worker node creates the application as required; when it is modification, the worker node modifies the application as required; when it is deletion, the worker node deletes the application as required. The applications are Pod-based, and the application-related operations include creating, modifying, and deleting Pods.
In the present invention, a user can connect his or her own device to the container cloud management system as a worker node Node, and can send an application-creation request through that accessed worker node to any of the multiple Masters; the request carries the user's public key, the user's signature made with the private key, and the original template data used to create the application.
Further, the Master can receive and respond to the application-creation request; its response includes verifying the user signature on the request with the user's public key. If verification fails, creation is refused and the user is notified that application creation failed.
Further, if signature verification succeeds, the Master performs a write operation that writes the application-creation request into its clustered distributed storage database etcd; the write operation includes distributing the request to all other Masters.
Preferably, the write operation further includes all etcd databases in the container cloud management system using a blockchain consensus algorithm to verify data consistency and signature data based on the application-creation request. Specifically: if more than half of the etcd databases in the system pass the verification, the request is written into every etcd in the system; otherwise, every etcd refuses the write and the user is notified that application creation failed.
Further, the Master selects a qualified worker node Node on which to create the application, specifically by filtering qualified worker nodes according to scheduling rules.
Further, the Master writes the selection result into its clustered distributed storage database etcd.
Further, the worker node Node obtains and processes the creation task through the Watch mechanism; obtaining the task includes the worker node obtaining the user public key, the user's private-key signature, and the original template data carried by the application-creation request.
Further, processing the creation task includes the worker node verifying the user signature on the request with the user public key; if verification succeeds, the application is created and the user is notified of success; if it fails, creation is refused and the user is notified of failure.
Further, the worker node periodically sends the creation result and subsequent running state to the Master, which monitors the state of system resources in real time, acts on the current state, and restores resource state to the desired state.
In addition, the user can send an application-deletion request through an accessed worker node Node to any of the multiple Masters; the request carries the user public key and the user's private-key signature.
Further, the Master can receive and respond to the deletion request; its response includes verifying the user signature on the request with the public key; if verification fails, deletion is refused and the user is notified that application deletion failed.
Further, if signature verification succeeds, the Master queries all etcd databases in the container cloud management system for the target application resource object.
Preferably, if signature verification succeeds, the Master distributes the deletion request to all other Masters.
Further, all etcd databases in the system use a blockchain consensus algorithm to verify data consistency and signature data based on the deletion request, specifically: if more than half of the etcd databases pass the verification, the Master sends the deletion request to the corresponding worker node Node; otherwise the Master rejects the deletion request and notifies the user that application deletion failed.
Further, the corresponding worker node Node receives and processes the deletion request, specifically: it verifies the user signature on the request with the public key; if verification fails, deletion is refused and the user is notified of failure; if it succeeds, the application is deleted.
Further, after deletion, the corresponding worker node sends the deletion result to the Master, which performs a write operation writing the deletion result into etcd.
Further, the write operation includes all etcd databases in the system using a blockchain consensus algorithm to verify the deletion result's data consistency and signature data, specifically: if more than half of the etcd databases pass the verification, the result is written into every etcd in the system and the user is notified of successful deletion; otherwise every etcd refuses the write and the user is notified that application deletion failed.
Preferably, the applications involved in the present invention are Pod-based.
Compared with the prior art, the blockchain-based container cloud management system provided by the invention enables users to hold their own management authority over their resources: neither the platform operator nor any third party can operate the user's applications or read the user's operation records and data. Users can conveniently use an independent third-party container cloud management platform without worrying about application and data security, and their cost of use is effectively reduced.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the cluster architecture of a prior-art container cloud management system;
Fig. 2 is the business flow chart of a prior-art container cloud management system;
Fig. 3 is the business flow chart of a blockchain-based container cloud management system provided by an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not to be construed as limiting it.
As is well known, container technology is used more and more widely in cloud computing. A container is essentially a virtualization technology; it differs from a virtual machine in that a virtual machine virtualizes hardware while a container virtualizes the operating system. A container typically packages an application together with its execution environment, so deploying the application means deploying the entire container. Because the container carries its own execution environment, deployment anomalies caused by environment changes do not arise, achieving "build once, run anywhere".
Generally, existing container cloud management systems are built on a container management platform; existing platforms include the Kubernetes, Mesos, and Swarm container management platforms.
The Kubernetes container management platform is currently the most popular leading scheme for container-based distributed architecture. It adopts a distributed architecture that divides the machines in a cluster into a management node Master and a group of worker nodes Node. Its main functions include packaging, instantiating, and running applications with Docker, running and managing cross-host containers as a cluster, and solving communication between containers running on different hosts.
For ease of description, the improved blockchain-based container cloud management system provided by the embodiments of the present invention is described below on the basis of the Kubernetes container management platform; those skilled in the art will understand, however, that the blockchain-based container cloud management system of the invention can equally be based on other types of container management platforms.
Fig. 1 shows the cluster architecture of a prior-art container cloud management system. Taking Kubernetes as an example, in Fig. 1 the cluster architecture consists mainly of a management node Master, worker nodes Node, and a storage node Storage.
Typically, the management node Master provides: (1) an API Server served by the kube-apiserver process, which is the cluster's API interface and the sole entry point for adding, deleting, modifying, and querying all resources; (2) a Scheduler served by the kube-scheduler process, responsible for scheduling cluster resources, e.g. binding Pods (the smallest management unit in Kubernetes) to worker nodes; (3) a Controller Manager served by the kube-controller-manager process, the cluster's automated management and control center, responsible for managing and automatically repairing worker nodes, Pod replicas, service endpoints, namespaces, service accounts, resource quotas, and so on, ensuring the cluster stays in its expected working state; (4) etcd, acting as the storage node Storage: a clustered distributed storage database responsible for persisting data.
These Master processes implement management functions for the entire cluster, including resource management, Pod scheduling, elastic scaling, security control, and system monitoring and error correction, all fully automatically.
Further, the worker node Node provides: (1) the kubelet process, which manages Pods as well as containers, images, volumes, and so on, managing the node itself; (2) the kube-proxy process, which provides network proxying and load balancing and communicates with the kube-apiserver process in the Master; (3) the docker engine process, responsible for the node's container management.
These worker-node processes are responsible for creating, starting, monitoring, restarting, and destroying Pods, and implement a software-mode load balancer.
Fig. 2 shows the business flow of a prior-art container cloud management system. Taking Kubernetes as an example, as shown in Fig. 2, the main flows of the existing Kubernetes platform are:
(1) Worker-node access flow:
In the access flow, the worker node Node to be connected first starts a kubelet process service, which uses its automatic registration mechanism to register the worker node with a management node Master in the container cloud management system, carrying data such as the node's host name, kernel version, operating-system information, and docker version. After receiving the registration information, the Master writes it into the Master's etcd and brings the successfully registered worker node into cluster scheduling.
Further, the Master's kube-controller-manager process monitors the state of registered worker nodes in real time.
Further, after completing registration, the kubelet process periodically reports its worker node's state information to the Master, which writes the received state information into its etcd and analyzes and processes it accordingly.
Further, via the Watch mechanism (observe-and-notify), the worker node's kubelet process simultaneously watches, through the Master's API Server, the /registry/pods/$NodeName and /registry/pods directories in the Master's etcd, so that all Pod operations are observed by the kubelet process.
Typically, the worker node responds to this watch as follows: (1) if a Pod newly bound to the worker node is found, it creates the Pod according to the Pod manifest; (2) if a modification of a Pod already created on the worker node is found, it modifies the Pod accordingly; (3) if it finds that a Pod on the worker node should be deleted, it performs the corresponding Pod delete operation.
(2) Application deployment flow:
Step S1: The user, using the client on a worker node Node, submits a Pod creation request (JSON or YAML data supported) to the management node Master via kubectl (the client bundled with Kubernetes, which can operate Kubernetes directly) or the RESTful API;
Step S2: The Master's kube-apiserver process receives and processes the user's Pod creation request and stores the original template data into the Master's etcd;
Step S3: When the Master's kube-scheduler process discovers that a new unbound Pod has been generated, it tries to allocate a worker node Node for the Pod;
Step S4: The kube-scheduler process filters worker nodes that meet the conditions (e.g. the CPU and memory the Pod requires) according to the scheduling rules, ranks the filtered worker nodes by their current running state, selects the highest-scoring worker node, binds the Pod to it, and writes the binding result into the Master's etcd;
Step S5: The kubelet process on the worker node discovers and obtains the newly created Pod task via the Watch mechanism (observe-and-notify), then calls the Docker API (the external operation interface provided by the docker engine process) to create and start the Pod;
Step S6: The kubelet process periodically reports the creation result and subsequent running state to the Master;
Step S7: Meanwhile, the Master's kube-controller-manager process monitors the state of cluster resources in real time, acts on the current state, and attempts to restore resource state to the desired state.
Table 1 below shows the specific format of the data sent by kubectl in the prior art:
Table 1 (reproduced as an image in the original filing)
(3) Application deletion flow:
Step S1: The user, using the client, submits a Pod deletion request to the management node Master via kubectl or the RESTful API;
Step S2: The Master's kube-apiserver process receives and processes the user's Pod deletion request, queries the Master's etcd for the matching resource object, generates a deletion task, and dispatches it to the kubelet process on the worker node Node;
Step S3: The kubelet process calls the Docker API (the external operation interface provided by the docker engine process) to delete the Pod's containers, cleans up the Pod's data, and releases resources;
Step S4: The kubelet process reports the deletion result to the Master's kube-apiserver process;
Step S5: The kube-apiserver process updates the result into the Master's etcd and cleans up the resource object.
From the concrete business flow of existing container cloud management systems described above, it can be seen that the entire flow tightly couples resources, applications, data, and platform, and carries security risks: users have no control over the management of their own resources or over operations on their applications and data; the platform operator can operate users' resources and applications and read users' operation records and data; and if one or more management node Masters in the system crash or are compromised, the usability and security of the entire system are affected, further endangering the applications and services of all users.
Nowadays, blockchain technology is increasingly widely applied thanks to its high security. Blockchain technology is a new distributed infrastructure and computing paradigm that uses a chained-block data structure to verify and store data, distributed-node consensus algorithms to generate and update data, cryptography to secure data transmission and access, and smart contracts composed of automated script code to program and operate on data. It is notably decentralized, open, autonomous, and tamper-proof. Through consensus algorithms, blockchain technology establishes trust and allocates rights among different nodes, and guarantees that in a distributed system the data on all cluster nodes is identical and agreement can be reached on any proposal. Common blockchain consensus algorithms include the Raft consensus algorithm, the Paxos consensus algorithm, Proof of Work (PoW), Proof of Stake (PoS), and Delegated Proof of Stake (DPoS).
As the above shows, blockchain technology can effectively improve the security and consistency of a distributed system. On this understanding, and to further optimize the business flow of the container cloud management system and improve its security and reliability, the embodiments of the present invention provide an improved blockchain-based container cloud management system.
Specifically, the container cloud management system provided by the embodiments of the invention comprises: at least three management nodes Master, each of which provides the kube-apiserver, kube-scheduler, and kube-controller-manager processes and owns a clustered distributed storage database etcd; and at least one worker node Node, which provides the kubelet, kube-proxy, and docker engine processes.
Further, the at least three Masters in this blockchain-based container cloud management system can communicate with one another over the network, and each worker node Node can communicate over the network with any one of the Masters.
A detailed description follows with reference to Fig. 3, which shows the business flow of the container cloud management system of an embodiment of the invention. The flow specifically includes:
(1) Worker-node access flow:
Before a user connects a device to the container cloud management system as a worker node Node, each user must first generate a public/private key account. The public key may be disclosed to everyone; the private key is kept by the user alone and must not be leaked, since once it leaks security can no longer be guaranteed.
In the access flow, each worker node to be connected first starts its own kubelet process, which uses its automatic registration mechanism to register the worker node with one of the Masters in the container cloud management system, carrying data such as the node's host name, kernel version, operating-system information, and docker version. After receiving the registration information, that Master writes it into the Master's etcd and brings the successfully registered worker node into cluster scheduling.
Further, the Master's kube-controller-manager process monitors the state of registered worker nodes in real time.
Further, after completing registration, the kubelet process periodically reports the state information of its corresponding worker node to the Master, which likewise writes the received state information into etcd and analyzes and processes it accordingly.
Further, via the Watch mechanism (observe-and-notify), the worker node's kubelet process simultaneously watches, through the Master's API Server, the /registry/pods/$NodeName and /registry/pods directories in that Master's etcd, so that all Pod operations are observed by the kubelet process.
Typically, the worker node responds to this watch as follows: (1) if a Pod newly bound to the worker node is found, it creates the Pod according to the Pod manifest; (2) if a modification of a Pod already created on the worker node is found, it modifies that Pod accordingly; (3) if it finds that a Pod on the worker node should be deleted, it performs the corresponding Pod delete operation to delete the Pod.
In particular, after any worker node in the container cloud management system of the embodiment connects itself to one of the Masters through the above access flow, the registration data is synchronized via that Master's kube-apiserver process and etcd to all other Masters in the system. A worker node therefore need not correspond to any specific Master or Masters, but can flexibly attach to any Master in the network.
Further, although the worker node is connected through one particular Master, once connected to the container cloud management system it can communicate over the network with all other Masters in the system, and the user can, on the basis of that worker node, deploy applications in the system through the other Masters.
(2) Application deployment flow:
Step S1: When application deployment is required, the user, using a client on a worker node Node already connected to the container cloud management system, submits a Pod creation request (JSON or YAML data supported) to one of the Masters via kubectl (the client bundled with Kubernetes, which can operate Kubernetes directly) or the RESTful API; the request carries the user's public key, the user's signature made with the private key, and the original template data used to create the Pod;
Step S2: That Master's kube-apiserver process receives, processes, and parses the user's Pod creation request;
Step S3: The kube-apiserver process verifies the user signature on the request with the user's public key; if verification fails, it refuses to create the corresponding Pod and notifies the user that Pod creation failed;
Step S4: If signature verification succeeds, the kube-apiserver process attempts to write the Pod creation request into the Master's etcd; correspondingly, etcd distributes the request to the etcd of all other Masters according to its internal consensus algorithm;
Step S5: If the write succeeds, the Master's kube-scheduler process finds that a new unbound Pod has been generated and tries to allocate the required worker node Node for it;
Step S6: The kube-scheduler process filters qualified worker nodes (e.g. those satisfying the Pod's CPU and memory requirements) according to the scheduling rules, ranks the filtered worker nodes by their current running state, selects the highest-scoring worker node, binds the Pod to it, and writes the binding result into the Master's etcd;
Step S7: The kubelet process on the worker node discovers and obtains the new Pod creation task via the Watch mechanism (observe-and-notify);
Step S8: The kubelet process receives the task and verifies the user signature with the user's public key; if verification fails, it refuses to create the Pod and notifies the user that Pod creation failed;
Step S9: If signature verification succeeds, the kubelet process calls the Docker API (the external operation interface provided by the docker engine process) to create and start the Pod, and notifies the user that the Pod was created successfully;
Step S10: The kubelet process periodically reports the creation result and subsequent running state to the Master;
Step S11: The Master's kube-controller-manager process monitors the state of cluster resources in real time, acts on the current state, and attempts to restore resource state to the desired state.
In particular, in step S4: when the kube-apiserver process attempts to write the Pod creation request into the Master's etcd, etcd performs, via the Raft consensus algorithm, a secondary verification of data consistency and signature data across the etcd of all other Masters in the system.
Specifically, the etcd of all Masters in the container cloud management system of the embodiment is regarded as a single etcd cluster, and the secondary verification is as follows: (1) only when more than 1/2 of the etcd members in the cluster pass the secondary verification can the Pod creation request be written into every etcd in the cluster, after which the Master's etcd returns a write-success message to the kube-apiserver process and step S5 continues; (2) if 1/2 or more of the etcd members fail the secondary verification, the data-and-operation consensus has failed; the etcd cluster refuses to write the Pod creation request sent by the kube-apiserver process, the Master's etcd returns a write-failure message to the kube-apiserver process, and the user is notified that Pod creation failed.
On this basis, in step S3, if the kube-apiserver process fails to verify the user signature, it directly refuses to accept the Pod creation request; in step S4, if a write-failure message is returned to the kube-apiserver process, it means the process has no permission to write the request into the etcd cluster, and it likewise directly refuses to accept the request.
Preferably, although the Raft consensus algorithm is used in step S4, those skilled in the art will understand that other applicable blockchain consensus algorithms can equally be applied in step S4.
Table 2 below shows the specific format of the data sent by kubectl in the embodiment of the invention:
Table 2 (reproduced as an image in the original filing)
(3) Application deletion flow:
Step S1: The user, using the client, submits a Pod deletion request to one of the Masters in the container cloud management system via kubectl or the RESTful API; the request carries the user's public key and the user's signature made with the private key;
Step S2: That Master's kube-apiserver process receives, processes, and parses the user's Pod deletion request;
Step S3: The kube-apiserver process verifies the user signature on the deletion request with the user's public key; if verification fails, it refuses to delete the Pod and notifies the user that Pod deletion failed;
Step S4: If signature verification succeeds, the kube-apiserver process queries etcd in the container cloud management system for the matching resource object and generates a Pod deletion task; the etcd cluster likewise verifies data consistency and signature data for the task via the Raft consensus algorithm, specifically: (1) if more than 1/2 of the etcd members pass the verification, verification success is returned to the kube-apiserver process, which dispatches the deletion task to the kubelet process on the corresponding worker node Node; (2) if 1/2 or more of the etcd members fail, the deletion task is directly rejected and the user is notified that Pod deletion failed;
Step S5: The kubelet process on the corresponding worker node receives the task and verifies the user signature again with the user's public key; if verification fails, it refuses to delete the corresponding Pod and notifies the user that Pod deletion failed;
Step S6: If signature verification succeeds, the kubelet process calls the Docker API (the external operation interface provided by the docker engine process) to delete the corresponding Pod, cleans up the Pod's data, and releases resources;
Step S7: The kubelet process reports the deletion result to the kube-apiserver process;
Step S8: The kube-apiserver process writes the Pod deletion result into the etcd cluster, which again verifies data consistency and signature data via the Raft consensus algorithm, specifically: (1) if more than 1/2 of the etcd members pass, the cluster accepts the write, responds to the kube-apiserver process that the result was written successfully, and the user is notified that Pod deletion succeeded; (2) if 1/2 or more fail, the write is directly rejected and the user is notified that Pod deletion failed.
Preferably, although the Raft consensus algorithm is used in step S8, those skilled in the art will understand that other applicable blockchain consensus algorithms can equally be applied in step S8.
(4) User log audit function:
All operation logs in etcd in the container cloud management system are append-only; log deletion is not supported, so personal operation logs can be viewed and traced at any time to complete related log-audit operations.
The container cloud management system provided by the present invention is improved on the basis of blockchain technology and consensus algorithms. It solves the security problems of resources, applications, and data, giving users management authority over their own resources; the platform operator can neither operate users' applications nor read users' operation records and data, letting users conveniently use an independent third-party public cloud platform without worrying about application and data security.
Although embodiments of the invention have been shown and described above, it will be understood that these embodiments are exemplary and are not to be construed as limiting the invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to them within the scope of the invention without departing from its principles and spirit.

Claims (43)

  1. A blockchain-based container cloud management system, the container cloud management system comprising multiple management nodes Master, each of which can communicate with the other management nodes Master over a network, and each of which can receive and process an access request from a worker node Node, the access request being the worker node Node's request to join the container cloud management system; characterized in that, before the worker node Node sends the access request to the management node Master, a user generates, based on the worker node Node, a public/private key account for the access request.
  2. The blockchain-based container cloud management system according to claim 1, wherein the access request comprises the worker node Node registering its own information with the management node Master.
  3. The system according to claim 2, wherein said information comprises the worker node Node's host name, kernel version, operating-system information, and docker version.
  4. The system according to claim 2 or 3, wherein each of the multiple management nodes Master provides a clustered distributed storage database etcd, and the management node Master's processing of the worker node Node's access request comprises writing the worker node's said information into its etcd.
  5. The system according to any one of claims 1-4, wherein the management node Master's processing of the access request comprises bringing the worker node Node into cluster scheduling.
  6. The system according to claim 5, wherein the management node Master monitors in real time the state of worker nodes Node brought into cluster scheduling.
  7. The system according to claim 6, wherein the real-time monitoring is specifically: the worker node Node periodically sends its state information to the management node Master, which writes the received state information into its clustered distributed storage database etcd while analyzing and processing it.
  8. The system according to any one of claims 1-7, wherein after the access request the management node Master synchronizes the accessed worker node Node's registration and state information to all other management nodes Master in the container cloud management system.
  9. The system according to any one of claims 1-8, wherein after the access request the accessed worker node Node can communicate over the network with each of the multiple management nodes Master.
  10. The system according to claim 9, wherein this mutual communication comprises the worker node Node watching for application-related operations on the management nodes Master.
  11. The system according to claim 10, wherein the application-related operations comprise creating, modifying, and deleting applications.
  12. The system according to claim 10 or 11, wherein when the application-related operation is application creation the worker node Node creates the application as required; when it is modification the worker node Node modifies the application as required; and when it is deletion the worker node Node deletes the application as required.
  13. The system according to any one of claims 10-12, wherein the applications are Pod-based and the application-related operations comprise creating, modifying, and deleting Pods.
  14. The system according to any one of claims 1-13, wherein a user can send an application-creation request through an accessed worker node Node to any one of the multiple management nodes Master.
  15. The system according to claim 14, wherein the application-creation request carries the user's public key, the user's signature made with the private key, and the original template data used to create the application.
  16. The system according to claim 14 or 15, wherein the management node Master can receive and respond to the application-creation request.
  17. The system according to claim 16, wherein the management node Master's response comprises verifying the user signature on the request with the user's public key; if verification fails, creation is refused and the user is notified that application creation failed.
  18. The system according to claim 17, wherein if signature verification succeeds, the management node Master performs a write operation writing the application-creation request into its clustered distributed storage database etcd.
  19. The system according to claim 18, wherein the write operation comprises distributing the application-creation request to all other management nodes Master.
  20. The system according to claim 19, wherein the write operation further comprises all clustered distributed storage databases etcd in the container cloud management system using a blockchain consensus algorithm to verify data consistency and signature data based on the application-creation request, specifically: if more than half of the etcd databases in the system pass the verification, the request is written into every etcd in the system; otherwise, every etcd refuses the write and the user is notified that application creation failed.
  21. The system according to claim 20, wherein the blockchain consensus algorithm is the Raft consensus algorithm.
  22. The system according to claim 20 or 21, wherein the management node Master selects a qualified worker node Node for creating the application.
  23. The system according to claim 22, wherein the selection is specifically: the management node Master filters qualified worker nodes Node according to scheduling rules.
  24. The system according to claim 23, wherein the management node Master writes the selection result into its clustered distributed storage database etcd.
  25. The system according to any one of claims 22-24, wherein the worker node Node obtains and processes the application-creation task through the Watch mechanism.
  26. The system according to claim 25, wherein obtaining the task comprises the worker node Node obtaining the user public key, the user's private-key signature, and the original template data carried by the application-creation request.
  27. The system according to claim 25 or 26, wherein processing the task comprises the worker node Node verifying the user signature on the request with the user public key; on success the application is created and the user is notified of success; on failure creation is refused and the user is notified of failure.
  28. The system according to claim 27, wherein after successful creation the worker node Node periodically sends the creation result and subsequent running state to the management node Master.
  29. The system according to any one of claims 14-28, wherein the management node Master can monitor the state of system resources in real time, act on the current state, and restore resource state to the desired state.
  30. The system according to any one of claims 1-29, wherein a user can send an application-deletion request through an accessed worker node Node to any one of the multiple management nodes Master.
  31. The system according to claim 30, wherein the application-deletion request carries the user public key and the user's signature made with the private key.
  32. The system according to claim 31, wherein the management node Master can receive and respond to the application-deletion request.
  33. The system according to claim 32, wherein the response comprises verifying the user signature on the request with the user public key; on failure deletion is refused and the user is notified that application deletion failed.
  34. The system according to claim 33, wherein if signature verification succeeds, the management node Master queries all clustered distributed storage databases etcd in the container cloud management system for the target application resource object.
  35. The system according to claim 34, wherein if signature verification succeeds, the management node Master distributes the application-deletion request to all other management nodes Master.
  36. The system according to claim 35, wherein all clustered distributed storage databases etcd in the container cloud management system use a blockchain consensus algorithm to verify data consistency and signature data based on the application-deletion request, specifically: if more than half of the etcd databases in the system pass the verification, the management node Master sends the deletion request to the corresponding worker node Node; otherwise the management node Master rejects the deletion request and notifies the user that application deletion failed.
  37. The system according to claim 36, wherein the blockchain consensus algorithm is the Raft consensus algorithm.
  38. The system according to claim 36 or 37, wherein the corresponding worker node Node receives and processes the application-deletion request, specifically: the worker node Node verifies the user signature on the request with the user public key; on failure deletion is refused and the user is notified of failure; on success the application is deleted.
  39. The system according to claim 38, wherein after deleting the application, the corresponding worker node Node sends the deletion result to the management node Master.
  40. The system according to claim 39, wherein the management node Master performs a write operation writing the deletion result into the clustered distributed storage database etcd.
  41. The system according to claim 40, wherein the write operation further comprises all clustered distributed storage databases etcd in the system using a blockchain consensus algorithm to verify the deletion result's data consistency and signature data, specifically: if more than half of the etcd databases pass the verification, the result is written into every etcd in the system and the user is notified of successful deletion; otherwise every etcd refuses the write and the user is notified that application deletion failed.
  42. The system according to claim 41, wherein the blockchain consensus algorithm is the Raft consensus algorithm.
  43. The system according to any one of claims 14-42, wherein the applications are Pod-based.
PCT/CN2018/108575 2018-09-29 2018-09-29 一种基于区块链技术的容器云管理*** WO2020062131A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880097738.0A CN113169952B (zh) 2018-09-29 2018-09-29 一种基于区块链技术的容器云管理***
PCT/CN2018/108575 WO2020062131A1 (zh) 2018-09-29 2018-09-29 一种基于区块链技术的容器云管理***

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/108575 WO2020062131A1 (zh) 2018-09-29 2018-09-29 一种基于区块链技术的容器云管理***

Publications (1)

Publication Number Publication Date
WO2020062131A1 true WO2020062131A1 (zh) 2020-04-02

Family

ID=69952642

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/108575 WO2020062131A1 (zh) 2018-09-29 2018-09-29 一种基于区块链技术的容器云管理***

Country Status (2)

Country Link
CN (1) CN113169952B (zh)
WO (1) WO2020062131A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580930A (zh) * 2020-05-09 2020-08-25 山东汇贸电子口岸有限公司 一种面向国产平台的云原生应用架构支撑方法及***
CN112333004A (zh) * 2020-10-13 2021-02-05 北京京东尚科信息技术有限公司 基于容器集群基因的专有云流式重建及校验方法及装置
CN112634058A (zh) * 2020-12-22 2021-04-09 无锡井通网络科技有限公司 基于区块链的数据互信互享互通平台
CN112995335A (zh) * 2021-04-07 2021-06-18 上海道客网络科技有限公司 一种位置感知的容器调度优化***及方法
CN113296711A (zh) * 2021-06-11 2021-08-24 中国科学技术大学 一种数据库场景中优化分布式存储延迟的方法
CN113312429A (zh) * 2021-06-22 2021-08-27 工银科技有限公司 区块链中的智能合约管理***、方法、介质和产品
CN113672348A (zh) * 2021-08-10 2021-11-19 支付宝(杭州)信息技术有限公司 基于容器集群对联合计算多方进行服务校验的方法及***
CN113934707A (zh) * 2021-10-09 2022-01-14 京东科技信息技术有限公司 云原生数据库、数据库扩容方法、数据库缩容方法和装置
CN114625320A (zh) * 2022-03-15 2022-06-14 江苏太湖慧云数据***有限公司 一种基于特征的混合云平台数据管理***
CN114968092A (zh) * 2022-04-28 2022-08-30 江苏安超云软件有限公司 容器平台下基于qcow2技术的存储空间动态供应的方法及应用
CN115189995A (zh) * 2022-09-07 2022-10-14 江苏博云科技股份有限公司 Kubernetes环境下多集群网络联邦通信建立方法、设备及存储介质
US11575499B2 (en) 2020-12-02 2023-02-07 International Business Machines Corporation Self auditing blockchain
CN115834595A (zh) * 2022-11-17 2023-03-21 浪潮云信息技术股份公司 一种Kubernetes控制组件的管理方法及***
WO2024051577A1 (zh) * 2022-09-06 2024-03-14 中兴通讯股份有限公司 分布式***部署方法、配置方法、***、设备及介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656148B (zh) * 2021-08-20 2024-02-06 北京天融信网络安全技术有限公司 一种容器管理的方法、装置、电子设备及可读存储介质
CN115550375B (zh) * 2022-08-31 2024-03-15 云南电网有限责任公司信息中心 基于容器化技术实现区块链轻量化的***、方法及设备
CN115499442B (zh) * 2022-11-15 2023-01-31 四川华西集采电子商务有限公司 一种基于容器编排的快速部署型云计算架构
CN117118747A (zh) * 2023-10-20 2023-11-24 南京飓风引擎信息技术有限公司 基于智能合约和预言机的云资源数据控制***及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027643A (zh) * 2016-05-18 2016-10-12 无锡华云数据技术服务有限公司 一种基于Kubernetes容器集群管理***的资源调度方法
US20160359955A1 (en) * 2015-06-05 2016-12-08 Nutanix, Inc. Architecture for managing i/o and storage for a virtualization environment using executable containers and virtual machines
CN106850621A (zh) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 一种基于容器云技术快速搭建Hadoop集群的方法
US20180173562A1 (en) * 2016-12-16 2018-06-21 Red Hat, Inc. Low impact snapshot database protection in a micro-service environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160359955A1 (en) * 2015-06-05 2016-12-08 Nutanix, Inc. Architecture for managing i/o and storage for a virtualization environment using executable containers and virtual machines
CN106027643A (zh) * 2016-05-18 2016-10-12 无锡华云数据技术服务有限公司 一种基于Kubernetes容器集群管理***的资源调度方法
US20180173562A1 (en) * 2016-12-16 2018-06-21 Red Hat, Inc. Low impact snapshot database protection in a micro-service environment
CN106850621A (zh) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 一种基于容器云技术快速搭建Hadoop集群的方法

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580930A (zh) * 2020-05-09 2020-08-25 Shandong Huimao Electronic Port Co., Ltd. Cloud-native application architecture support method and system for domestic platforms
CN112333004A (zh) * 2020-10-13 2021-02-05 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for streaming reconstruction and verification of a dedicated cloud based on container cluster genes
US11575499B2 (en) 2020-12-02 2023-02-07 International Business Machines Corporation Self auditing blockchain
CN112634058A (zh) * 2020-12-22 2021-04-09 Wuxi Jingtong Network Technology Co., Ltd. Blockchain-based platform for mutual data trust, sharing, and interoperability
CN112995335A (zh) * 2021-04-07 2021-06-18 Shanghai DaoCloud Network Technology Co., Ltd. Location-aware container scheduling optimization system and method
CN113296711A (zh) * 2021-06-11 2021-08-24 University of Science and Technology of China Method for optimizing distributed storage latency in database scenarios
CN113312429A (zh) * 2021-06-22 2021-08-27 ICBC Technology Co., Ltd. Smart contract management system, method, medium, and product in a blockchain
CN113672348A (zh) * 2021-08-10 2021-11-19 Alipay (Hangzhou) Information Technology Co., Ltd. Method and system for service verification of multiple joint-computing parties based on container clusters
CN113934707A (zh) * 2021-10-09 2022-01-14 JD Technology Information Technology Co., Ltd. Cloud-native database, database capacity expansion method, database capacity reduction method, and apparatus
CN114625320A (zh) * 2022-03-15 2022-06-14 Jiangsu Taihu Huiyun Data System Co., Ltd. Feature-based hybrid cloud platform data management system
CN114625320B (zh) * 2022-03-15 2024-01-02 Jiangsu Taihu Huiyun Data System Co., Ltd. Feature-based hybrid cloud platform data management system
CN114968092A (zh) * 2022-04-28 2022-08-30 Jiangsu Anchao Cloud Software Co., Ltd. Method and application for dynamic provisioning of storage space based on qcow2 technology on a container platform
CN114968092B (zh) * 2022-04-28 2023-10-17 Anchao Cloud Software Co., Ltd. Method and application for dynamic provisioning of storage space based on qcow2 technology on a container platform
WO2024051577A1 (zh) * 2022-09-06 2024-03-14 ZTE Corporation Distributed system deployment method, configuration method, system, device, and medium
CN115189995A (zh) * 2022-09-07 2022-10-14 Jiangsu BoCloud Technology Co., Ltd. Method, device, and storage medium for establishing multi-cluster network federated communication in a Kubernetes environment
CN115189995B (zh) * 2022-09-07 2022-11-29 Jiangsu BoCloud Technology Co., Ltd. Method, device, and storage medium for establishing multi-cluster network federated communication in a Kubernetes environment
CN115834595A (zh) * 2022-11-17 2023-03-21 Inspur Cloud Information Technology Co., Ltd. Management method and system for Kubernetes control components

Also Published As

Publication number Publication date
CN113169952A (zh) 2021-07-23
CN113169952B (zh) 2022-12-02

Similar Documents

Publication Publication Date Title
WO2020062131A1 (zh) Container cloud management system based on blockchain technology
US12003571B2 (en) Client-directed placement of remotely-configured service instances
CN109542611B (zh) Database-as-a-service system, database scheduling method, device, and storage medium
JP6615796B2 (ja) System and method for partition migration in a multitenant application server environment
EP3271819B1 (en) Executing commands within virtual machine instances
EP3313023B1 (en) Life cycle management method and apparatus
US9432350B2 (en) System and method for intelligent workload management
TWI473029B (zh) 可延伸及可程式化之多租戶服務結構
CN112840321A (zh) 用于自动化操作管理的应用程序编程接口
US9521194B1 (en) Nondeterministic value source
WO2018133721A1 (zh) Authentication system, method, and server
US11953997B2 (en) Systems and methods for cross-regional back up of distributed databases on a cloud service
WO2017107827A1 (zh) Environment isolation method and device
US9535629B1 (en) Storage provisioning in a data storage environment
US10104163B1 (en) Secure transfer of virtualized resources between entities
WO2019001140A1 (zh) Method and device for managing VNF instantiation
US11093477B1 (en) Multiple source database system consolidation
US10516756B1 (en) Selection of a distributed network service
CN117131493A (zh) Method, apparatus, device, and storage medium for constructing a permission management system
US10587725B2 (en) Enabling a traditional language platform to participate in a Java enterprise computing environment
US20220255970A1 (en) Deploying And Maintaining A Trust Store To Dynamically Manage Web Browser Extensions On End User Computing Devices
US10200304B1 (en) Stateless witness for multiple sites
US20230168979A1 (en) Managing nodes of a dbms
US20240184914A1 (en) Multiple synonymous identifiers in data privacy integration protocols
EP3449601B1 (en) Configuration data as code

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18935350

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 02/07/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18935350

Country of ref document: EP

Kind code of ref document: A1