WO2022140945A1 - Method and apparatus for managing a container cluster - Google Patents

Method and apparatus for managing a container cluster (容器集群的管理方法和装置)

Info

Publication number
WO2022140945A1
WO2022140945A1 (PCT/CN2020/140276)
Authority
WO
WIPO (PCT)
Prior art keywords
container cluster
container
instance
ccm
management
Prior art date
Application number
PCT/CN2020/140276
Other languages
English (en)
French (fr)
Inventor
夏海涛 (Xia Haitao)
克莱伯乌尔里希 (Ulrich Kleber)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2020/140276 priority Critical patent/WO2022140945A1/zh
Priority to CN202080108230.3A priority patent/CN116724543A/zh
Priority to JP2023539232A priority patent/JP2024501005A/ja
Priority to EP20967297.1A priority patent/EP4258609A4/en
Publication of WO2022140945A1 publication Critical patent/WO2022140945A1/zh
Priority to US18/342,472 priority patent/US20230342183A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/084Configuration by using pre-existing information, e.g. using templates or copying from other elements
    • H04L41/0843Configuration by using pre-existing information, e.g. using templates or copying from other elements based on generic templates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0895Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/34Signalling channels for network management communication
    • H04L41/342Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Definitions

  • The present application relates to the field of communications, and in particular to a container cluster management method and apparatus.
  • Network Function Virtualization (NFV) means that telecom network operators draw on virtualization technology from the Information Technology (IT) field to decouple the software implementation of certain telecom network functions (for example, core network functions) from dedicated hardware, so as to deploy and operate network services (NS) rapidly and efficiently, while also reducing network capital expenditure (CAPEX) and operating expenditure (OPEX).
  • Telecommunications network functions are implemented in software and run on general-purpose server hardware; they can be migrated, instantiated, and deployed at different physical locations in the network as needed, without installing new equipment.
  • The standardization work on NFV, carried out under the European Telecommunications Standards Institute (ETSI), mainly focuses on network services, virtualised network functions (VNF), and the dynamic management and orchestration (MANO) of virtual resources.
  • the NFV orchestrator (NFVO) 102 is mainly responsible for the life cycle management of the NS and the allocation and scheduling of virtual resources in the network functions virtualisation infrastructure (NFVI, network functions virtualisation infrastructure) 104 .
  • The NFVO 102 can communicate with one or more virtualised network function managers (VNFM) 106 to perform operations related to instantiating an NS, such as sending corresponding configuration information to the VNFM 106 and requesting status information of one or more VNFs 108 from the VNFM 106.
  • the NFVO 102 can also communicate with a virtual infrastructure manager (VIM, virtualized infrastructure manager) 110 to perform allocation and/or reservation of various resources in the NFVI 104, exchange resource configuration and status information, and the like.
  • The VNFM 106 is mainly responsible for lifecycle management of one or more VNFs 108, such as instantiating, updating, querying, elastically scaling, and terminating a VNF 108.
  • the VNFM 106 can communicate with the VNF 108 to manage the life cycle of the VNF 108, exchange configuration information and status information with the VNF, and the like. It can be understood that the NFV system 100 may include one or more VNFMs 106 , and each VNFM 106 performs lifecycle management for different types of VNFs 108 .
  • NFVI 104 refers to the infrastructure of the NFV system 100, including hardware components, software components, and combinations thereof, in order to establish a virtualized environment, and to deploy, manage, and implement VNFs 108 in the virtualized environment.
  • NFVI 104 may include at least computing (computing) hardware 1041, storage hardware 1042, and network hardware 1043; the virtualization layer 1044 of NFVI 104 may abstract the aforementioned hardware, decouple each hardware and VNF 108, and obtain corresponding virtual computing (virtual computing) resources 1045 , virtual storage resources 1046 , and virtual network resources 1047 , thereby providing virtual machines and other forms of virtualized containers for the VNF 108 .
  • the VIM 110 is mainly used to control and manage the interaction between the VNF 108 and the computing hardware 1041 , storage hardware 1042 , network hardware 1043 , virtual computing resources 1045 , virtual storage resources 1046 , and virtual network resources 1047 .
  • the VIM 110 may perform resource management functions, such as adding corresponding virtual resources to a virtual machine or other forms of virtual containers, collecting fault information of the NFVI 104 during system operation, and the like.
  • the VIM 110 may communicate with the VNFM 106, such as receiving resource allocation requests from the VNFM 106, feeding back resource configuration and status information to the VNFM 106, and the like.
  • The VNF 108 comprises one or more VNFs (usually multiple VNFs), each of which can run in one or more virtual machines or other forms of virtual containers and corresponds to a set of network functions originally implemented by dedicated devices.
  • a network element management system (EMS, element management system) 112 can be used to configure and manage the VNF 108 , and initiate lifecycle management operations such as instantiation of a new VNF 108 to the VNFM 106 . It will be appreciated that one or more EMS 112 may be included in the NFV system 100 .
  • An operations support system (OSS, operations support system) or a business support system (BSS, business support system) 114 can support various end-to-end telecommunication services.
  • the management functions supported by OSS can include network configuration, service provision, fault management, etc.; BSS can be used to process orders, payment, income and other related services, and support functions such as product management, order management, revenue management, and customer management.
  • The OSS/BSS 114 can act as the service requester that requests the NFVO to instantiate an NS; the OSS/BSS 114, or the computing device on which it runs, is generally referred to as the service requester.
  • Cloud native is a system implementation paradigm for building, running, and managing software in a cloud environment: it makes full use of cloud infrastructure and platform services, adapts to the cloud environment, and embodies architectural practices with key features such as (micro)services, elastic scaling, distribution, high availability, multi-tenancy, and automation.
  • NFV Management and Orchestration is a key part of many practices for NFV to become cloud-native.
  • Container as a Service is a specific type of Platform as a Service (PaaS, Platform as a Service).
  • A container is an operating-system-level virtualization technology that isolates processes from one another through OS isolation mechanisms such as cgroups and namespaces under Linux.
  • Container technology differs from hardware virtualization (hypervisor) technology: there is no virtual hardware, and there is no operating system inside the container, only processes. It is precisely because of this feature that containers are lighter and easier to manage than virtual machines.
  • A set of common management operations is defined for containers, such as start, stop, pause, and delete, to carry out unified lifecycle management of the container.
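The common lifecycle operations listed above (start, stop, pause, delete) can be sketched as a small state machine. The state names and allowed transitions below are illustrative only, not taken from any specific container runtime specification:

```python
from enum import Enum

class ContainerState(Enum):
    CREATED = "created"
    RUNNING = "running"
    PAUSED = "paused"
    STOPPED = "stopped"
    DELETED = "deleted"

# Which source states each lifecycle operation is allowed from (illustrative).
TRANSITIONS = {
    "start":  {ContainerState.CREATED, ContainerState.STOPPED},
    "pause":  {ContainerState.RUNNING},
    "stop":   {ContainerState.RUNNING, ContainerState.PAUSED},
    "delete": {ContainerState.CREATED, ContainerState.STOPPED},
}

# The state each operation moves the container into.
TARGET = {
    "start": ContainerState.RUNNING,
    "pause": ContainerState.PAUSED,
    "stop": ContainerState.STOPPED,
    "delete": ContainerState.DELETED,
}

class Container:
    def __init__(self, name: str):
        self.name = name
        self.state = ContainerState.CREATED

    def apply(self, op: str) -> ContainerState:
        """Apply one unified lifecycle operation, rejecting invalid transitions."""
        if self.state not in TRANSITIONS[op]:
            raise ValueError(f"cannot {op} container in state {self.state.value}")
        self.state = TARGET[op]
        return self.state
```

The value of a unified operation set is exactly this: any orchestrator can drive any container through the same verbs without knowing the runtime's internals.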
  • the Kubernetes project of the Cloud Native Computing Foundation is currently the industry-recognized de facto standard for container management and orchestration.
  • the introduction of the container-as-a-service architecture in the cloud-native evolution of telecom networks has brought agile changes to the development operations (DevOps) of the telecom industry.
  • Correspondingly, traditional coarse-grained monolithic network functions are gradually decomposed into services, and even further into microservices.
  • Each service-based function is independently developed, delivered, and maintained, and version upgrades become more frequent; on the other hand, the surge in the number of containerized network functions will not bring exponential workload growth to interoperability testing.
  • the API interface definition ensures the consistency and reliability of interface function calls.
  • Kubernetes (K8s) is an open-source container cluster management platform. Its core idea is that "everything is service-centric and everything revolves around services". Following this idea, container application systems built on Kubernetes can run independently on physical machines, virtual machines, or enterprise private clouds, and can also be hosted on public clouds. Another feature of Kubernetes is automation: a service can self-scale, self-diagnose, and be easily upgraded.
  • Container cluster management includes management of the cluster itself (creating/deleting container clusters) and management of container cluster nodes (adding/removing nodes in the cluster and elastically updating the scale of the cluster).
  • Container clusters can be dynamically created on demand, that is, NFV MANO determines the number of container clusters created and the capacity of each cluster according to the scale and reliability policy of the managed containerized VNFs.
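As a rough illustration of this on-demand sizing, the following sketch splits a required node count into clusters under a hypothetical per-cluster capacity cap and a redundancy minimum. The policy is invented for illustration and is not the algorithm NFV MANO actually uses:

```python
import math

def plan_clusters(total_nodes_needed: int, max_nodes_per_cluster: int,
                  min_clusters_for_redundancy: int = 2) -> list[int]:
    """Hypothetical sizing policy: decide how many container clusters to
    create and the capacity of each, given total demand, a per-cluster cap
    (stand-in for "scale"), and a redundancy floor (stand-in for the
    reliability policy)."""
    n_clusters = max(min_clusters_for_redundancy,
                     math.ceil(total_nodes_needed / max_nodes_per_cluster))
    base, extra = divmod(total_nodes_needed, n_clusters)
    # Spread the remainder one extra node at a time over the first clusters.
    return [base + (1 if i < extra else 0) for i in range(n_clusters)]
```

For example, `plan_clusters(10, 4)` yields three clusters sized `[4, 3, 3]`: the cap forces at least three clusters, and demand is balanced across them.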
  • In view of this, the embodiments of the present application provide a method and apparatus for managing a container cluster node resource pool, as follows:
  • An embodiment of the present application provides a method for managing a container cluster node resource pool, the method comprising:
  • The container cluster management (CCM) receives an instantiation request message for a container cluster from a management entity, where the request message carries instantiation parameters of the container cluster; the CCM instantiates the container cluster according to those parameters. The instantiation parameters of the container cluster are determined by the management entity by accessing the container cluster descriptor (CCD).
  • the method further includes:
  • The CCM receives an instantiation request message for a container cluster node from the management entity, where the request message carries instantiation parameters of the container cluster node, and those parameters are determined by the management entity by accessing the container cluster node descriptor (CCND); alternatively, the CCM itself accesses the CCND to determine the instantiation parameters of the container cluster node.
  • The CCM instantiates the container cluster node according to the instantiation parameters of the container cluster node, and instantiates the CISM instance and the CIS instance on the container cluster node according to the instantiation parameters of the container cluster.
  • the method further includes: the CCM receives an update request message of the container cluster from the management entity, the request message carries the parameters of the updated container cluster instance, and the CCM updates the container cluster instance according to the updated parameters of the container cluster instance.
  • the method further includes: the CCM receives a container cluster deletion request message from the management entity, the deletion request message carries the identification information of the container cluster instance to be deleted, and/or the type of deletion operation; the CCM deletes the container cluster instance.
  • the embodiment of the present application provides a container cluster management system, and the system includes:
  • The management entity is used to determine the instantiation parameters of the container cluster from the container cluster descriptor (CCD) and to send them to the container cluster management (CCM); the CCM is used to instantiate the container cluster according to those instantiation parameters.
  • the management entity is further configured to access the container cluster node descriptor CCND to determine the instantiation parameters of the container cluster nodes; and send the instantiation parameters of the container cluster nodes to the CCM.
  • The CCM is further configured to instantiate the container cluster node according to the instantiation parameters of the container cluster node, and to instantiate the CISM instance and the CIS instance on the container cluster node according to the instantiation parameters of the container cluster.
  • the embodiment of the present application further provides a container cluster management apparatus, including a module for performing the above method steps.
  • An embodiment of the present application also provides a container cluster management device comprising a processor and a memory coupled to each other, the memory storing a computer program; the processor is configured to invoke the computer program in the memory, causing the management device to perform the above method.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed, the foregoing method is performed.
  • Embodiments of the present application also provide a computer program product, where the computer program product includes computer program code, and when the computer program code is run on a computing device, causes the computing device to execute the above method.
  • FIG. 1 is an architecture diagram of an NFV system in the prior art.
  • FIG. 2 is an architectural diagram of a Kubernetes (K8S) container management and orchestration system provided by an embodiment of the present application.
  • FIG. 3 is an architectural diagram of an NFV management and orchestration system for managing a container cluster according to an embodiment of the present application.
  • FIG. 4 is a logical relationship diagram of a container cluster, container cluster nodes, and namespaces according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of creating a container cluster according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of updating a container cluster according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of deleting a container cluster according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a management entity device module according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a CCM device module according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a hardware structure of a management entity device according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a hardware structure of a CCM device according to an embodiment of the present application.
  • Figure 2 is an architectural diagram of the Kubernetes (K8S) container management and orchestration system.
  • Kubernetes divides the infrastructure resources in a container cluster into a Kubernetes master node (master) and a group of worker nodes (Node).
  • The master node (also called the management node) runs a group of processes related to container cluster management, such as the Application Programming Interface Server (API Server) and the Replication Controller (RC). These processes implement management functions such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction for the entire container cluster.
  • Each worker node runs three components, Kubelet, Proxy, and Docker, which are responsible for managing the lifecycle of the Pods on that node and implementing the service proxy function.
  • A Pod includes at least one container and can be understood as a capsule composed of one or more containers.
  • The API Server provides the only operation entry for resource objects. All other components must operate on resource data through the API interface it provides, and complete their business functions through "full query" and "change monitoring" of the relevant resource data.
  • The Controller Manager is the management and control center of the container cluster. Its main purpose is to automate fault detection and recovery in the Kubernetes cluster: for example, replicating or removing Pods according to an RC definition so that the number of Pod instances conforms to that definition, creating and updating the endpoint objects of services, discovering, managing, and monitoring nodes, and cleaning up locally cached image files.
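The replica-reconciliation behaviour described for the Controller Manager can be pictured with a minimal single-pass sketch. This is a toy model, not Kubernetes code; the function and name-generator are invented for illustration:

```python
from typing import Callable

def reconcile(desired_replicas: int, running_pods: list[str],
              make_name: Callable[[int], str]) -> list[str]:
    """One reconciliation pass: copy or remove Pods until the actual count
    matches the RC's desired replica count."""
    pods = list(running_pods)       # never mutate the observed state in place
    while len(pods) < desired_replicas:
        pods.append(make_name(len(pods)))   # "copy" a Pod
    while len(pods) > desired_replicas:
        pods.pop()                          # "remove" a surplus Pod
    return pods
```

The real controller runs this comparison continuously against the cluster's observed state, which is what makes the fault recovery automatic: a crashed Pod simply disappears from `running_pods` and is re-created on the next pass.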
  • The Kubelet component is responsible for full lifecycle management (creation, modification, monitoring, and deletion) of the Pods on its node, and periodically reports the node's status information to the API Server.
  • The Proxy component is used to implement the service proxy and software-mode load balancing.
  • the Docker component is the runtime environment for the container.
  • The NFV industry standard group under the European Telecommunications Standards Institute defines standardized functions for managing containers in the NFV Management and Orchestration (MANO) system in its Release 4 feature work, as shown in Figure 3. In this reference architecture:
  • Container Infrastructure Service Management (CISM, also known as CaaS management; its open-source prototype is Kubernetes) is responsible for managing the container objects invoked by containerized VNFs, including creating, updating, and deleting container objects, and for scheduling those container objects onto the corresponding node resources (compute, storage, and network) in the container cluster node resource pool it manages.
  • the concept corresponding to the container object in the ETSI standard is the managed container infrastructure object (Managed Container Infrastructure Object, MCIO).
  • Container Cluster Management is responsible for managing the container cluster, including the creation of node resource pools used by the container cluster and the expansion and contraction of nodes.
  • a container cluster is a collection consisting of a monitoring and management system (eg, Kubernetes Master in Figure 2) and a series of computing nodes (eg, node in Figure 2, which can be physical servers, bare metal, or virtual machines).
  • a container cluster is a dynamic system in which multiple containers can be deployed, and the status of these containers and the communication between containers can be monitored by the system.
  • the corresponding concept of container cluster in ETSI standard is Container Infrastructure Service Cluster (CIS Cluster).
  • Containerized VNF can be understood as a containerized workload that encapsulates NFVI resources such as computing, storage, and network.
  • the container object MCIO called by the workload is scheduled to run on the node of the container cluster.
  • Each container cluster node loads the image of a CISM instance (CaaS management-plane functions, such as the Kubernetes master) or the image of a container infrastructure service (Container Infrastructure Service, CIS) instance (CaaS user-plane functions, such as kubelet, kube-proxy, and Docker on a Kubernetes worker node).
  • The CISM in each container cluster provides namespace management functions such as create, read, update, and delete (CRUD).
  • a namespace is a logical grouping of a specific set of identifiers, resources, policies, and authorizations, and acts like a folder in a server.
  • NFVO can create multiple namespaces in a container cluster, and implement resource and identity isolation of multi-tenant (ie: containerized VNF) container objects MCIO in the container cluster through namespaces.
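The namespace-based isolation described above can be illustrated with a toy model in which each tenant's MCIOs are only visible within its own namespace. The class and method names are illustrative, not from any standard API:

```python
class ContainerCluster:
    """Toy model of namespace-scoped MCIO isolation within one cluster."""

    def __init__(self):
        # namespace name -> {mcio_id -> spec}
        self._namespaces: dict[str, dict[str, dict]] = {}

    def create_namespace(self, name: str) -> None:
        if name in self._namespaces:
            raise ValueError(f"namespace {name} already exists")
        self._namespaces[name] = {}

    def create_mcio(self, namespace: str, mcio_id: str, spec: dict) -> None:
        # Identifiers only need to be unique *within* a namespace.
        self._namespaces[namespace][mcio_id] = spec

    def list_mcios(self, namespace: str) -> list[str]:
        # A tenant only ever sees the objects in its own namespace.
        return sorted(self._namespaces[namespace])

    def delete_namespace(self, name: str) -> None:
        # Deleting a tenant's namespace removes all of its MCIOs at once.
        del self._namespaces[name]
```

Note how two tenants can both own an MCIO named `db` without conflict, which is exactly the identifier isolation the namespace provides.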
  • The CISM and the CCM expose management services on their northbound interfaces for the NFVO or VNFM to invoke their functions.
  • the solution of the present invention proposes a container cluster management method based on NFV template.
  • By defining a container cluster descriptor template and a container cluster node descriptor template, and applying the newly defined descriptor templates in the container cluster management process, the solution supports dynamic management of container clusters, enabling consistent deployment and batch replication of large-scale container clusters.
  • a container cluster descriptor (CCD, CIS Cluster Descriptor) is a type of NFV template file defined in the embodiment of the present invention that describes the deployment and operation behavior requirements of a container cluster.
  • CCD can refer to or use a template similar to VNFD (Virtual Network Function Descriptor), which includes but is not limited to the following basic deployment information:
  • The name, ID, provider, and version information of the container cluster descriptor.
  • The size of the container cluster, that is, the maximum number of CISM instances and/or the maximum number of CIS instances included in the container cluster.
  • The basic characteristics of the container cluster's elastic scaling (scale) operation, including the minimum step, the maximum step, and/or the achievable scale levels of the elastic scaling operations the container cluster can perform.
  • The overall affinity/anti-affinity rule of the container cluster: the identification information of the affinity/anti-affinity group to which a container cluster instance created based on the CCD belongs, used to indicate the affinity/anti-affinity relationship between that container cluster instance and other container cluster instances created based on CCDs.
  • An affinity group is a logical relationship group formed around the similarity of resources: objects belonging to the same affinity group use similar resources during deployment (for example, all objects in the affinity group are deployed in the same data center). An anti-affinity group is a logical relationship group formed around the separation of resources: objects belonging to the same anti-affinity group use dissimilar resources during deployment (for example, each object in the anti-affinity group is deployed in a different data center).
  • the affinity/anti-affinity rule between CISM instances deployed in the container cluster refers to the identification information of the affinity/anti-affinity group where the CISM instance in the container cluster instance created based on the CCD is located.
  • The affinity/anti-affinity rule between the CIS instances deployed in the container cluster: the identification information of the affinity/anti-affinity group to which a CIS instance in a container cluster instance created based on the CCD belongs, used to indicate the affinity/anti-affinity relationship between that CIS instance and other CIS instances in the same container cluster instance created based on the CCD.
  • The affinity/anti-affinity rule between a CISM instance and a CIS instance deployed in the container cluster: the identification information of the affinity/anti-affinity group in which the CISM instance and the CIS instance of a container cluster instance created based on the CCD are located, used to indicate the affinity/anti-affinity relationship between the CISM instance and the CIS instance in the container cluster instance created based on the CCD.
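One simple way to picture these rules is a placement check: given the node on which each CISM/CIS instance landed, verify that no anti-affinity group has two members on the same node. The following sketch is illustrative only and does not model affinity (co-location) constraints:

```python
def check_placement(placements: dict[str, str],
                    anti_affinity_groups: dict[str, set[str]]) -> list[str]:
    """Return the names of anti-affinity groups that are violated, i.e.
    groups in which two members were placed on the same node.

    placements: instance id -> node id
    anti_affinity_groups: group id -> set of member instance ids
    """
    violations = []
    for group, members in anti_affinity_groups.items():
        nodes = [placements[m] for m in members if m in placements]
        # A duplicate node among the group's members breaks anti-affinity.
        if len(nodes) != len(set(nodes)):
            violations.append(group)
    return sorted(violations)
```

A scheduler honouring the CCD would run such a check (or its affinity counterpart) before committing each placement decision.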
  • The characteristics of the primary container cluster external network (primary CIS cluster external network): the basic configuration information of the primary external network of a container cluster instance created based on the CCD, for example, requirements on the IP addresses and ports through which containers in the cluster connect to the external network. The primary container cluster external network is the network exposed outside the container cluster; the containers (OS containers) in the cluster are indirectly connected to this externally exposed network through the native network capabilities of the underlying container infrastructure layer.
  • The characteristics of the secondary container cluster external network (secondary CIS cluster external network): the basic configuration information of the secondary external network of a container cluster instance created based on the CCD, for example, the Container Network Interface (CNI) used by the container cluster. The secondary container cluster external network is a network exposed outside the container cluster through which the containers (OS containers) in the cluster are directly interconnected via network interfaces other than the primary network interface.
  • Container Cluster Node Descriptor (CCND, CIS Cluster Node Descriptor) is a type of NFV template file that describes the deployment and operation behavior requirements of container cluster nodes.
  • CCND is analogous to the definition of virtual computing or storage resource descriptors, including but not limited to the following deployment information:
  • The node type of a container cluster node created based on the CCND, for example indicating whether the node is a physical machine (bare metal) or a virtual machine.
  • The affinity/anti-affinity rule between nodes in a container cluster created based on the CCND: the identification information of the affinity/anti-affinity group in which a container cluster node instance created based on the CCND is located, used to indicate the affinity/anti-affinity relationship between a container cluster node (or container cluster node instance) created based on the CCND and other container cluster node instances created based on the same CCND.
  • Embodiment 1 of the present invention provides a method for creating (or instantiating) a container cluster, as shown in FIG. 5 , which specifically includes the following steps:
  • Step 501 The NFV MANO management entity (or simply: management entity, the same below) accesses the container cluster descriptor CCD, and obtains the deployment information of the container cluster to be created (or referred to as a container cluster instance) from the CCD file.
  • the management entity may be NFVO or VNFM, and which one executes all the steps of the method in this embodiment depends on the system configuration, which is not specifically limited here.
  • Step 502 The management entity determines the instantiation parameters of the container cluster instance to be created according to the deployment information in the CCD, for example: the name or identification information of the container cluster descriptor CCD, the scale of the container cluster, the CISM instances and CIS instances created at container cluster initialization, and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
  • The management entity may use the deployment information of the container cluster in the container cluster descriptor CCD directly as the instantiation parameters of the container cluster instance, or, while satisfying that deployment information, may take the input of other network element systems (such as the OSS/BSS) into account to determine the instantiation parameters of the container cluster instance.
  • Step 503 The management entity sends a container cluster creation request to the container cluster management CCM.
  • The request message carries the size of the container cluster to be created, the number of CISM instances and CIS instances created at cluster initialization, and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
  • Step 504 The CCM returns a container cluster creation response to the management entity, indicating that the container cluster creation request message is successfully received.
  • Step 505: The CCM sends a change notification of the container cluster management process to the management entity, indicating to the management entity that the container cluster instantiation process has started.
  • Step 506: The management entity obtains, from the container cluster descriptor CCD, the identification information of the container cluster node descriptor CCND of the container cluster node instances to be created, and obtains the CCND file through that identification information; the management entity then accesses the CCND to obtain the deployment information of the container cluster node instances to be created.
  • Step 507 The management entity determines the instantiation parameters of the container cluster node instance to be created according to the deployment information of the container cluster node in CCND, such as the type of the container cluster node and the affinity/anti-affinity group to which the container cluster node belongs.
  • Step 508: The management entity sends a container cluster node creation request to the container cluster management CCM; the request message carries the name or identification information of the container cluster node descriptor, the type of the container cluster node, and the affinity/anti-affinity group to which the container cluster node belongs.
  • Step 509 The CCM returns a container cluster node creation response to the management entity, indicating that the container cluster node creation request message is successfully received.
  • Optionally, as an alternative to steps 506 to 509, the CCM obtains the identification information of the container cluster node descriptor CCND from the container cluster descriptor CCD, and determines the instantiation parameters of the container cluster nodes by accessing the CCND.
  • Likewise, the CCM may use the deployment information of the container cluster in the container cluster descriptor CCD directly as the instantiation parameters of the container cluster instance, or, while satisfying that deployment information, may take the input of other network element systems (such as the OSS/BSS) into account to determine the instantiation parameters of the container cluster instance.
  • Step 510: The CCM completes the creation process of the container cluster nodes initialized in the container cluster to be created, thereby completing the creation of the container cluster instance locally. Further, the CCM accesses the container cluster descriptor CCD to obtain the software image information of the CISM instances and/or CIS instances to be deployed, and deploys the CISM instances and CIS instances on the container cluster nodes (optionally, the CIS instances may also be created by the already-created CISM instance). At the same time, the CCM creates the container cluster instance information, for example: the CCD identification information and version used by the instantiated container cluster instance, the instantiation state, the scaling state, the maximum allowed scale level, external network information, node resource information, and so on.
  • The software images of the CISM instances and/or CIS instances may be stored in the package file of the container cluster within the NFV-MANO management domain, or in a software image registry outside the NFV-MANO management domain.
  • The container cluster descriptor CCD contains index information pointing to the container cluster package file that stores the software images of the CISM instances and/or CIS instances, or to the directory address of the external software image registry.
  • Step 511: The CCM sends a change notification of the container cluster management process to the management entity, notifying the management entity that the container cluster instantiation has completed.
  • Embodiment 2 of the present invention provides a method for updating a container cluster, as shown in FIG. 6 , which specifically includes the following steps:
  • Step 601 The management entity sends a container cluster update request to the container cluster management CCM.
  • The request message carries the identification information of the container cluster instance to be updated, the type of the update operation (here: scaling), the target container cluster size or scale level to be reached by the scaling, and the affinity/anti-affinity rules between the target container cluster nodes of the scaling.
  • Step 602 The CCM returns a container cluster update response to the management entity, indicating that the container cluster update request message is successfully received.
  • Step 603: The CCM sends a change notification of the container cluster management process to the management entity, indicating to the management entity that the container cluster update process has started.
  • Step 604: The management entity obtains, from the container cluster descriptor CCD, the identification information of the container cluster node descriptor CCND of the container cluster node instances to be updated, and obtains the CCND file through that identification information; the management entity then accesses the CCND to obtain the deployment information of the container cluster node instances to be created.
  • Step 605: The management entity determines the instantiation parameters of the container cluster node instances to be created according to the deployment information of the container cluster node instances in the CCND, for example: the name or identification information of the container cluster node descriptor, the type of the container cluster node, and the affinity/anti-affinity group to which the container cluster node belongs.
  • Step 606 The management entity sends a container cluster node creation request message to the container cluster management CCM, where the request message carries the type of the container cluster node instance to be created and the affinity/anti-affinity group to which the container cluster node instance belongs.
  • Step 607 The CCM returns a container cluster node creation response to the management entity, indicating that the container cluster node creation request message is successfully received.
  • Optionally, as an alternative to steps 604 to 607, the CCM obtains the identification information of the container cluster node descriptor CCND from the container cluster descriptor CCD, and determines the instantiation parameters of the container cluster nodes by accessing the CCND.
  • Step 608 The CCM completes the process of creating the container cluster node instance in the container cluster to be updated, and locally generates information about the newly created container cluster node instance.
  • Step 609: The CCM returns a container cluster update completion notification message to the management entity, indicating to the management entity that the container cluster update process has ended.
  • Embodiment 3 of the present invention provides a method for deleting a container cluster, as shown in FIG. 7 , which specifically includes the following steps:
  • Step 701: The management entity sends a container cluster deletion request message to the container cluster management CCM; the request message carries the identification information of the container cluster instance to be deleted, and/or the type of the deletion operation, for example: forceful deletion or graceful deletion.
  • Step 702: According to the type of deletion operation in the request message, the CCM locally uninstalls the CISM instances and/or CIS instances in the container cluster to be deleted, releases the infrastructure-layer resources occupied by the container cluster nodes, deletes the container cluster node instances, and deletes the container cluster instance. At the same time, the CCM deletes the information of the container cluster instance.
  • Step 703 The CCM returns a container cluster deletion response to the management entity, indicating that the container cluster instance is successfully deleted.
  • In the embodiments of the present invention, information models defining the container cluster descriptor CCD and the container cluster node descriptor CCND are added to the NFV templates.
  • The CCD mainly includes the size of the cluster, the scaling properties, and the affinity/anti-affinity rules for object instances within the cluster.
  • The CCND includes the type of node; the node's requirements for hardware acceleration, network interfaces, and local storage; and the affinity/anti-affinity rules between nodes within the container cluster.
  • During creation/update/deletion of a container cluster, the management entity obtains the information of the container cluster to be created/updated/deleted by accessing the CCD, obtains the information of the nodes within the container cluster by accessing the CCND, and sends a create/update/delete container cluster request to the CCM according to this information; the CCM returns a response to the management entity after completing the creation/update/deletion of the container cluster.
  • the solutions of the embodiments of the present invention can support dynamic management of container clusters, and realize consistent deployment and batch replication of large-scale container clusters.
  • the foregoing mainly introduces the solutions provided by the embodiments of the present application from the perspective of interaction between various network elements.
  • In order to implement the above functions, the above-mentioned NFVO, VNFM, or CCM includes corresponding hardware structures and/or software modules for executing each function.
  • Those skilled in the art should readily appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • Each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • The above-mentioned integrated modules may be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical function division; there may be other division manners in actual implementation.
  • FIG. 8 shows a schematic structural diagram of a communication device 80 .
  • the communication device 80 includes a transceiver module 801 and a processing module 802 .
  • the communication device 80 is used to implement the functions of NFVO or VNFM.
  • the communication device 80 is, for example, the NFVO or VNFM described in the embodiment shown in FIG. 5 , the embodiment shown in FIG. 6 , or the embodiment shown in FIG. 7 .
  • the communication device 80 may be NFVO or VNFM, or may be a chip applied in NFVO or VNFM, or other combined devices or components having the above-mentioned NFVO or VNFM functions.
  • The transceiver module 801 may be a transceiver, which may include an antenna, a radio frequency circuit, and the like.
  • The processing module 802 may be a processor (or a processing circuit), such as a baseband processor, which may include one or more CPUs.
  • Alternatively, the transceiver module 801 may be a radio frequency unit, and the processing module 802 may be a processor (or a processing circuit), such as a baseband processor.
  • Alternatively, the transceiver module 801 may be an input/output interface of a chip (e.g., a baseband chip), and the processing module 802 may be a processor (or a processing circuit) of the chip system, which may include one or more central processing units (CPUs).
  • transceiver module 801 in this embodiment of the present application may be implemented by a transceiver or a transceiver-related circuit component
  • processing module 802 may be implemented by a processor or a processor-related circuit component (or referred to as a processing circuit).
  • the transceiving module 801 may be used to perform all transceiving operations performed by NFVO or VNFM in the embodiment shown in FIG. 5, eg, S503, and/or other processes for supporting the techniques described herein.
  • the processing module 802 may be configured to perform all operations performed by the NFVO or VNFM in the embodiment shown in FIG. 5 except for the transceiving operations, such as S501, S502, S505 and/or other operations used to support the techniques described herein process.
  • the transceiving module 801 may be configured to perform all transceiving operations performed by NFVO or VNFM in the embodiment shown in FIG. 6 , such as S603 , and/or other processes used to support the techniques described herein.
  • The processing module 802 may be used to perform all operations other than the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 6, such as S604 and S605, and/or other processes for supporting the techniques described herein.
  • the transceiving module 801 may be used to perform all transceiving operations performed by NFVO or VNFM in the embodiment shown in FIG. 7 , such as S701 , and/or other processes used to support the techniques described herein.
  • The processing module 802 may be configured to perform all operations other than the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 7, and/or other processes for supporting the techniques described herein.
  • The communication device 80 can also be used to implement the functions of the CCM described in the embodiments shown in FIG. 5, FIG. 6, and FIG. 7, and to perform all operations performed by the CCM in the embodiments shown in FIG. 5 to FIG. 7; details are not repeated here.
  • FIG. 9 shows a schematic diagram of the composition of a communication system.
  • The communication system 90 may include a management entity 901 and a CCM 902. It should be noted that FIG. 9 is only an exemplary drawing, and the embodiments of this application do not limit the network elements, or the number of network elements, included in the communication system 90 shown in FIG. 9.
  • The management entity 901 is used to implement the functions of the management entity in the method embodiments shown in FIG. 5 to FIG. 7 above.
  • For example, the management entity 901 can be used to access the container cluster descriptor file CCD, obtain from the file the deployment information of the container cluster to be created, determine the instantiation parameters of the container cluster according to that deployment information, and send a container cluster creation request to the container cluster management CCM, the request message carrying the instantiation parameters of the container cluster to be created.
  • the CCM 902 is used to implement the functions of the CCM in the method embodiments shown in FIG. 5 to FIG. 7 above. For example, the CCM 902 returns a container cluster creation response to the management entity 901, indicating the success or failure of the container cluster creation, and the reason for the creation failure, and locally creates a container cluster instance, and completes the initial creation of a specified number of container cluster nodes, etc.
  • An embodiment of this application provides a computing device 1000, as shown in FIG. 10, including at least one memory 1030 for storing program instructions and/or data; the memory 1030 is coupled with the processor 1020, and the processor 1020 implements the corresponding functions by running the stored program instructions and/or processing the stored data.
  • the computing device 1000 may be the NFVO or VNFM in the embodiments shown in FIG. 5 to FIG. 7 , and can implement the functions of the NFVO or VNFM in the methods provided in the embodiments.
  • the computing device 1000 may be a chip system. In this embodiment of the present application, the chip system may be composed of chips, or may include chips and other discrete devices.
  • the computing device 1000 may also include a communication interface 1010 for communicating with other devices over a transmission medium.
  • the other device may be a control device.
  • the processor 1020 may utilize the communication interface 1010 to send and receive data.
  • the specific connection medium between the communication interface 1010 , the processor 1020 , and the memory 1030 is not limited in this embodiment of the present application.
  • In FIG. 10, the memory 1030, the processor 1020, and the communication interface 1010 are connected through a bus 1040.
  • The bus is represented by a thick line in FIG. 10; the manner of connection between the other components is only schematically illustrated and is not limited thereto.
  • The bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 10, but this does not mean that there is only one bus or one type of bus.
  • The processor 1020 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of this application.
  • A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in conjunction with the embodiments of this application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • The memory 1030 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as random-access memory (RAM).
  • The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
  • An embodiment of the present application also provides a computing device 1100, as shown in FIG. 11, including at least one memory 1130 for storing program instructions and/or data, and the memory 1130 is coupled with the processor 1120.
  • the processor 1120 implements corresponding functions by executing the stored program instructions and/or processing the stored data.
  • The computing device 1100 may be the CCM in the embodiments shown in FIG. 5 to FIG. 7, and can implement the functions of the CCM in the methods provided in those embodiments.
  • the computing device 1100 also includes a communication interface 1110 for communicating with other devices over a transmission medium.
  • the processor 1120 may use the communication interface 1110 to send and receive data.
  • Embodiments of this application further provide a computer-readable storage medium for storing instructions which, when executed by a processor of a computing device, enable the computing device to implement the method provided in any one of the embodiments of this application.
  • An embodiment of this application provides a computer program product, the computer program product including computer program code which, when run on a computing device, causes the computing device to execute the method of any one of the embodiments of this application.


Abstract

Embodiments of this application provide a container cluster management method, the method comprising: a container cluster management CCM receives a container cluster instantiation request message from a management entity, the request message carrying instantiation parameters of the container cluster; the CCM instantiates the container cluster according to the instantiation parameters of the container cluster; the instantiation parameters of the container cluster are determined by the management entity by accessing a container cluster descriptor CCD. Through the solution of the embodiments of the present invention, by defining a container cluster descriptor template and a container cluster node descriptor template, dynamic management of container clusters is supported, realizing consistent deployment and batch replication of large-scale container clusters.

Description

Container cluster management method and apparatus

Technical Field
This application relates to the communications field, and in particular to a container cluster management method and apparatus.
Background
Network Function Virtualization (NFV) refers to telecom network operators borrowing virtualization technology from the Information Technology (IT) field to decouple the software and hardware implementation of part of the telecom network functions (for example, core network functions) on general-purpose servers, switches, and storage, thereby achieving fast and efficient deployment and operation of Network Services (NS) while reducing network capital expenditure (CAPEX) and operating expenditure (OPEX). By applying NFV technology, telecom network functions are implemented in software, can run on general-purpose server hardware, and can be migrated, instantiated, and deployed at different physical locations of the network as needed, without installing new equipment.
The standardization work of NFV mainly focuses on network services, Virtualised Network Functions (VNFs), and the dynamic Management and Orchestration (MANO) of virtual resources. The InterFace and Architecture (IFA) working group of the NFV Industry Specification Group under the European Telecommunications Standards Institute (ETSI) formulates the functions within the MANO framework. The functional architecture is shown in FIG. 1; the NFV system 100 mainly includes the following functional entities:
The NFV orchestrator (NFVO) 102 is mainly responsible for lifecycle management of NSs, and for allocating and scheduling virtual resources in the network functions virtualisation infrastructure (NFVI) 104. The NFVO 102 may communicate with one or more Virtualised Network Function Managers (VNFMs) 106 to perform operations related to instantiating an NS, such as sending corresponding configuration information to the VNFM 106 or requesting state information of one or more VNFs 108 from the VNFM 106. In addition, the NFVO 102 may communicate with a virtualized infrastructure manager (VIM) 110 to allocate and/or reserve resources in the NFVI 104 and to exchange resource configuration and state information.
The VNFM 106 is mainly responsible for lifecycle management of one or more VNFs 108, such as instantiating a VNF 108, updating a VNF 108, querying a VNF 108, scaling a VNF 108, and terminating a VNF 108. The VNFM 106 may communicate with a VNF 108 to manage its lifecycle and to exchange configuration and state information with the VNF. It can be understood that the NFV system 100 may contain one or more VNFMs 106, each performing lifecycle management for a different type of VNF 108.
The NFVI 104 refers to the infrastructure of the NFV system 100, including hardware components, software components, and combinations thereof, so as to establish a virtualization environment in which VNFs 108 are deployed, managed, and executed. The NFVI 104 may include at least computing hardware 1041, storage hardware 1042, and network hardware 1043; the virtualization layer 1044 of the NFVI 104 can abstract these hardware resources and decouple them from the VNFs 108, yielding virtual computing resources 1045, virtual storage resources 1046, and virtual network resources 1047, thereby providing virtual machines and other forms of virtualization containers for the VNFs 108.
The VIM 110 is mainly used to control and manage the interaction of the VNFs 108 with the computing hardware 1041, storage hardware 1042, network hardware 1043, virtual computing resources 1045, virtual storage resources 1046, and virtual network resources 1047. For example, the VIM 110 may perform resource management functions, such as adding virtual resources to virtual machines or other forms of virtual containers, and collecting fault information of the NFVI 104 during system operation. In addition, the VIM 110 may communicate with the VNFM 106, for example receiving resource allocation requests from the VNFM 106 and feeding back resource configuration and state information to the VNFM 106.
The VNFs 108 include one or more VNFs (usually several), each of which may run on one or more virtual machines or other forms of virtual containers and corresponds to a set of network functions originally implemented by dedicated equipment.
The element management system (EMS) 112 may be used to configure and manage the VNFs 108, and to initiate lifecycle management operations, such as instantiation of a new VNF 108, toward the VNFM 106. It can be understood that the NFV system 100 may include one or more EMSs 112.
The operations support system (OSS) or business support system (BSS) 114 can support various end-to-end telecom services. The management functions supported by the OSS may include network configuration, service provisioning, and fault management; the BSS can be used to handle order, billing, and revenue related services, supporting functions such as product management, order management, revenue management, and customer management. It should be noted that the OSS/BSS 114 may, as a service requester, request the NFVO to instantiate an NS; the OSS/BSS 114, or the computing device on which the OSS/BSS 114 depends, may accordingly be called the service requester.
It can be understood that, in the NFV system 100 shown in FIG. 1, the aforementioned functional entities may each be deployed on different computing devices, or some of the functional entities may be integrated into the same computing device.
Currently, network transformation in the telecom field is evolving from Network Function Virtualisation (NFV) toward Cloud-Native. Cloud-native is a new system implementation paradigm for building, running, and managing software in cloud environments; it makes full use of cloud infrastructure and platform services, adapts to the cloud environment, and is an architectural practice with key characteristics such as (micro)servitization, elastic scaling, distribution, high availability, multi-tenancy, and automation. In this transformation, introducing container management within the NFV Management and Orchestration (MANO) reference architecture is a key element among the many practices taking NFV toward cloud-native.
Container as a Service (CaaS) is a specific type of Platform as a Service (PaaS). Generally speaking, containers are an operating-system-level virtualization technology that isolates different processes through OS isolation techniques such as CGroups and namespaces under Linux. Container technology differs from hardware virtualization (hypervisor) technology: there is no virtual hardware, and inside a container there is no operating system, only processes. Because of this important characteristic, containers are lighter than virtual machines and easier to manage. For the runtime state of containers, a set of common management operations is defined, for example start, stop, pause, and delete, enabling unified lifecycle management of containers. The Kubernetes project of the Cloud Native Computing Foundation is currently the industry-recognized de facto standard for container management and orchestration.
The introduction of the container-as-a-service architecture into the cloud-native evolution of telecom networks has brought agility to development and operations (DevOps) in the telecom industry. A corresponding change is that traditional large-grained monolithic network functions are gradually being deconstructed into services, and even further into microservices. Each servitized function is developed, delivered, and maintained independently, and version upgrades become more frequent; on the other hand, the surge in the number of containerized network functions does not bring an exponential growth in interoperability-testing workload, because stable API interface definitions guarantee the consistency and reliability of interface function calls.
Currently, the most popular application in the container management and orchestration field is Google's Kubernetes (K8S) container cluster management technology, based on an open-source platform. Its core idea is that "everything is centered on services and everything revolves around services"; following this idea, container application systems built on Kubernetes can run independently on physical machines, virtual machines, or enterprise private clouds, and can also be hosted on public clouds. Another feature of Kubernetes is automation: a service can scale itself, diagnose itself, and is easy to upgrade.
The functional scope of container cluster management includes management of container clusters (creating/deleting a container cluster) and management of container cluster nodes (adding/removing nodes in a cluster, elastically updating the cluster scale). Container clusters can be created dynamically on demand, that is, NFV MANO determines the number of container clusters to create and the capacity of each cluster according to the scale and reliability policy of the containerized VNFs it manages.
Under the dynamic management mode of container clusters, how to manage container clusters so that creating or updating a cluster is simple and fast and batch operations are efficient is particularly important in large-scale containerized VNF management and orchestration in the telecom cloud. At present, there are some basic container cluster management prototype tools in the open-source community, such as Google Kubeadm, but these prototypes are insufficient to support the telecom cloud's needs for deploying and managing container clusters at large scale.
Summary
To solve the technical problems in the prior art described above, embodiments of this application provide a method and apparatus for managing a container cluster node resource pool, specifically as follows:
An embodiment of this application provides a method for managing a container cluster node resource pool, the method comprising:
A container cluster management CCM receives a container cluster instantiation request message from a management entity, the request message carrying instantiation parameters of the container cluster; the CCM instantiates the container cluster according to the instantiation parameters of the container cluster; the instantiation parameters of the container cluster are determined by the management entity by accessing a container cluster descriptor CCD.
The method further comprises:
The CCM receives a container cluster node instantiation request message from the management entity, the request message carrying instantiation parameters of the container cluster node, the instantiation parameters of the container cluster node being determined by the management entity by accessing a container cluster node descriptor CCND; or, the CCM accesses the CCND to determine the instantiation parameters of the container cluster node.
The CCM instantiates the container cluster node according to the instantiation parameters of the container cluster node, and instantiates the CISM instances and CIS instances on the container cluster node according to the instantiation parameters of the container cluster.
The method further comprises: the CCM receives a container cluster update request message from the management entity, the request message carrying parameters of the updated container cluster instance, and the CCM updates the container cluster instance according to the parameters of the updated container cluster instance.
The method further comprises: the CCM receives a container cluster deletion request message from the management entity, the deletion request message carrying identification information of the container cluster instance to be deleted and/or the type of the deletion operation; the CCM deletes the container cluster instance.
An embodiment of this application provides a container cluster management system, the system comprising:
a management entity, configured to determine instantiation parameters of a container cluster from a container cluster descriptor CCD and to send the instantiation parameters of the container cluster to a container cluster management CCM; and the CCM, configured to instantiate the container cluster according to the instantiation parameters of the container cluster.
The management entity is further configured to access a container cluster node descriptor CCND to determine instantiation parameters of a container cluster node, and to send the instantiation parameters of the container cluster node to the CCM.
The CCM instantiates the container cluster node according to the instantiation parameters of the container cluster node, and instantiates the CISM instances and CIS instances on the container cluster node according to the instantiation parameters of the container cluster.
An embodiment of this application further provides a container cluster management apparatus, comprising modules for executing the steps of the above method.
An embodiment of this application further provides a container cluster management apparatus, comprising a processor and a memory coupled to the processor, the memory storing a computer program; the processor is configured to invoke the computer program in the memory so that the management apparatus executes the above method.
An embodiment of this application further provides a computer-readable storage medium, the storage medium storing a computer program which, when executed, performs the above method.
An embodiment of this application further provides a computer program product, the computer program product comprising computer program code which, when run on a computing device, causes the computing device to execute the above method.
Through the solution of the embodiments of the present invention, by defining a container cluster descriptor template and a container cluster node descriptor template, dynamic management of container clusters is supported, and consistent deployment and batch replication of large-scale container clusters are realized.
Brief Description of the Drawings
The following briefly introduces the drawings required for describing the embodiments or the prior art.
FIG. 1 is a framework diagram of an NFV system in the prior art.
FIG. 2 is an architecture diagram of a Kubernetes (K8S) container management and orchestration system according to an embodiment of this application.
FIG. 3 is an architecture diagram of an NFV management and orchestration system that manages container clusters according to an embodiment of this application.
FIG. 4 is a diagram of the logical relationship between container clusters, container cluster nodes, and namespaces according to an embodiment of this application.
FIG. 5 is a schematic flowchart of creating a container cluster according to an embodiment of this application.
FIG. 6 is a schematic flowchart of updating a container cluster according to an embodiment of this application.
FIG. 7 is a schematic flowchart of deleting a container cluster according to an embodiment of this application.
FIG. 8 is a schematic module diagram of a management entity apparatus according to an embodiment of this application.
FIG. 9 is a schematic module diagram of a CCM apparatus according to an embodiment of this application.
FIG. 10 is a schematic diagram of the hardware structure of a management entity apparatus according to an embodiment of this application.
FIG. 11 is a schematic diagram of the hardware structure of a CCM apparatus according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings.
Please refer to FIG. 2, which is an architecture diagram of the Kubernetes (K8S) container management and orchestration system.
Kubernetes divides the infrastructure resources in a container cluster into a Kubernetes master node and a group of worker nodes (Nodes). A set of processes related to container cluster management runs on the master node (also called the management node), for example the Application Programming Interface Server (API Server) and the Replication Controller (RC). These processes implement management functions for the whole container cluster such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction. On each worker node, three components — Kubelet, Proxy, and Docker — run, responsible for managing the lifecycle of the Pods on that node and implementing the service proxy function. As shown in FIG. 2, a Pod may include at least one container, so a Pod can be understood as a container "pod" composed of one or more containers.
The API Server provides the sole entry point for operating on resource objects; all other components must operate on resource data through the API interfaces it provides, completing the relevant business functions through "full queries" of and "change monitoring" on the relevant resource data.
The Controller Manager is the management control center of the container cluster; its main purpose is to automate fault detection and recovery within the Kubernetes cluster. For example, it can replicate or remove Pods according to the RC definition to ensure that the number of Pod instances conforms to the RC definition, create and update the Endpoints objects of a Service according to the management relationship between Services and Pods, and handle node discovery, management, and status monitoring, as well as cleanup of locally cached image files.
The Kubelet component is responsible for the full lifecycle management of the Pods on its node, including creation, modification, monitoring, and deletion; the Kubelet also periodically reports the status information of its node to the API Server.
The Proxy component implements service proxying and software-mode load balancing.
The Docker component is the runtime environment of the containers.
The NFV Industry Specification Group under the European Telecommunications Standards Institute (ETSI) has defined, in its Release 4 feature work, standardized functions for managing containers in the NFV Management and Orchestration system, as shown in FIG. 3. In this reference functional framework, the two newly introduced logical functions on the management plane on the right-hand side are as follows:
Container Infrastructure Service Management (CISM) (also called CaaS management; its open-source prototype is Kubernetes) is responsible for managing the container objects invoked by containerized VNFs, including the creation, updating, and deletion of container objects, and for scheduling container objects onto the corresponding node resources (compute, storage, and network) in the container cluster node resource pools that it (the CISM) manages. The concept corresponding to a container object in the ETSI standard is the Managed Container Infrastructure Object (MCIO).
Container Cluster Management (CCM) is responsible for managing container clusters, including creating the node resource pools used by container clusters and scaling nodes in and out. A container cluster is a collection consisting of a monitoring and management system (for example, the Kubernetes Master in FIG. 2) and a series of compute nodes (for example, the nodes in FIG. 2, which may be physical servers, bare metal, or virtual machines). A container cluster is a dynamic system: multiple containers can be deployed in the system, and the states of these containers and the communication between them can be monitored by the system. The concept corresponding to a container cluster in the ETSI standard is the Container Infrastructure Service Cluster (CIS Cluster).
A containerized VNF can be understood as a containerized workload that encapsulates NFVI resources such as compute, storage, and network. The container objects (MCIOs) invoked by the workload are scheduled to run on the nodes of the container cluster; the container cluster nodes load the image of a CISM instance (the CaaS management-plane functions, such as the Kubernetes Master) or the image of a Container Infrastructure Service (CIS) instance (the CaaS user-plane functions, such as kubelet, kube-proxy, and docker on a Kubernetes worker node). In the ETSI NFV standard, the CISM within each container cluster provides management functions such as creation, reading, updating, and deletion (Create/Read/Update/Delete, CRUD) of namespaces. A namespace is a logical grouping composed of a specific set of identifiers, resources, policies, and authorizations; its role is similar to that of a folder on a server. The NFVO can create multiple namespaces within a container cluster, and use the namespaces to isolate the resources and identifiers of the container objects (MCIOs) of multiple tenants (i.e., containerized VNFs) within the cluster. The relationship between a container cluster (CIS cluster), container cluster nodes (CIS cluster nodes), and namespaces is shown in FIG. 4. On the northbound interface, the CISM and CCM provide the NFVO or VNFM with management services for invoking their functions.
The solution of the present invention proposes an NFV-template-based container cluster management method: by defining a container cluster descriptor template and a container cluster node descriptor template, and applying the newly defined descriptor templates in the container cluster management process, it supports dynamic management of container clusters and realizes consistent deployment and batch replication of large-scale container clusters.
The container cluster descriptor (CCD, CIS Cluster Descriptor) is a class of NFV template file, defined in the embodiments of the present invention, that describes the deployment and operational behavior requirements of a container cluster. The CCD may reference or use a template similar to the VNFD (Virtualised Network Function Descriptor), and includes but is not limited to the following basic deployment information:
the name, identifier, provider, and version information of the container cluster descriptor;
the size of the container cluster, i.e. the maximum number of CISM instances and/or the maximum number of CIS instances contained in the container cluster;
the basic characteristics of container cluster scaling operations, including the minimum step, maximum step, and/or the reachable scale level that the container cluster may apply in a scaling operation;
the affinity/anti-affinity rule of the container cluster as a whole, which refers to the identification information of the affinity/anti-affinity group in which a container cluster instance created from this CCD is located, used to indicate the affinity/anti-affinity relationship between container cluster instances created from this CCD and container cluster instances created from other CCDs. An affinity group is a logical relationship group formed around resource proximity: objects belonging to the same affinity group use resources that are near each other when deployed, for example all objects in the affinity group are deployed in the same data center. An anti-affinity group is a logical relationship group formed around resource remoteness: objects belonging to the same anti-affinity group use resources that are not near each other when deployed, for example each object in the anti-affinity group is deployed in a different data center;
the affinity/anti-affinity rules between the CISM instances deployed within the container cluster, which refer to the identification information of the affinity/anti-affinity group in which the CISM instances of a container cluster instance created from this CCD are located, used to indicate the affinity/anti-affinity relationship between a CISM instance in a container cluster instance created from this CCD and the other CISM instances in the same container cluster instance created from this CCD;
the affinity/anti-affinity rules between the CIS instances deployed within the container cluster, which refer to the identification information of the affinity/anti-affinity group in which the CIS instances of a container cluster instance created from this CCD are located, used to indicate the affinity/anti-affinity relationship between a CIS instance in a container cluster instance created from this CCD and the other CIS instances in the same container cluster instance created from this CCD;
the affinity/anti-affinity rules between the CISM instances and CIS instances deployed within the container cluster, which refer to the identification information of the affinity/anti-affinity group in which the CISM instances and CIS instances of a container cluster instance created from this CCD are located, used to indicate the affinity/anti-affinity relationship between the CISM instances and the CIS instances in a container cluster instance created from this CCD;
the characteristics of the primary CIS cluster external network, i.e. the basic configuration information of the primary external network of a container cluster instance created from this CCD, for example the required characteristics of the IP addresses and ports through which the containers in the cluster connect to the external network. The primary container cluster external network is the network exposed outside the container cluster; the containers (OS containers) in the cluster are connected indirectly to this externally exposed network through the native network capabilities of the underlying container infrastructure layer;
the characteristics of the secondary CIS cluster external network, i.e. the basic configuration information of the secondary external network of a container cluster instance created from this CCD, for example the required characteristics of the Container Network Interface (CNI) used by the container cluster. The secondary container cluster external network is a network exposed outside the container cluster, through which the containers (OS containers) in the cluster interconnect directly via network interfaces other than the primary network interface.
The container cluster node descriptor (CCND, CIS Cluster Node Descriptor) is a class of NFV template file that describes the deployment and operational behavior requirements of container cluster nodes. By analogy with the definition of virtual compute or storage resource descriptors, the CCND includes but is not limited to the following deployment information:
the type of container cluster node created from this CCND, for example indicating whether the node is a physical machine (bare metal) or a virtual machine;
the requirements of a container cluster node created from this CCND for hardware acceleration, network interfaces, and local storage;
the affinity/anti-affinity rules between nodes in a container cluster created from this CCND, which refer to the identification information of the affinity/anti-affinity group in which a container cluster node instance created from this CCND is located, used to indicate the affinity/anti-affinity relationship between a container cluster node (also called a container cluster node instance) created from this CCND and other container cluster node instances created from the same CCND.
Based on the above template files, Embodiment 1 of the present invention provides a method for creating (or instantiating) a container cluster, as shown in FIG. 5, which specifically includes the following steps:
Step 501: The NFV MANO management entity (or simply: management entity, hereinafter) accesses the container cluster descriptor CCD and obtains from the CCD file the deployment information of the container cluster (or container cluster instance) to be created.
The management entity may be the NFVO or the VNFM; which of the two performs all the steps of this embodiment depends on the system configuration and is not specifically limited here.
Step 502: The management entity determines the instantiation parameters of the container cluster instance to be created according to the deployment information of the container cluster in the CCD, for example: the name or identification information of the container cluster descriptor CCD, the scale of the container cluster, the number of CISM instances and CIS instances created at container cluster initialization, and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
The management entity may use the deployment information of the container cluster in the container cluster descriptor CCD directly as the instantiation parameters of the container cluster instance, or it may determine the instantiation parameters of the container cluster instance with reference to input from other network element systems (such as the OSS/BSS), provided that the deployment information is satisfied.
Step 503: The management entity sends a container cluster creation request to the container cluster management CCM. The request message carries the scale of the container cluster to be created, the number of CISM instances and CIS instances created at container cluster initialization, and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
Step 504: The CCM returns a container cluster creation response to the management entity, indicating that the container cluster creation request message has been received successfully.
Step 505: The CCM sends a container cluster management process change notification to the management entity, indicating to the management entity that the container cluster instantiation process has started.
Step 506: The management entity obtains, from the container cluster descriptor CCD, the identification information of the container cluster node descriptor CCND of the container cluster node instance to be created, and obtains the CCND file through the identification information of the CCND; the management entity then accesses the CCND to obtain the deployment information of the container cluster node instance to be created.
Step 507: The management entity determines the instantiation parameters of the container cluster node instance to be created according to the deployment information of the container cluster node in the CCND, for example: the type of the container cluster node and the affinity/anti-affinity group to which the container cluster node belongs.
Step 508: The management entity sends a container cluster node creation request to the container cluster management CCM. The request message carries the name or identification information of the container cluster node descriptor to be created, the type of the container cluster node, and the affinity/anti-affinity group to which the container cluster node belongs.
Step 509: The CCM returns a container cluster node creation response to the management entity, indicating that the container cluster node creation request message has been received successfully.
Optionally, as an alternative to steps 506 to 509, the CCM obtains the identification information of the container cluster node descriptor CCND from the container cluster descriptor CCD and determines the instantiation parameters of the container cluster nodes by accessing the container cluster node descriptor CCND.
Likewise, the CCM may use the deployment information of the container cluster in the container cluster descriptor CCD directly as the instantiation parameters of the container cluster instance, or it may determine the instantiation parameters of the container cluster instance with reference to input from other network element systems (such as the OSS/BSS), provided that the deployment information is satisfied.
Step 510: The CCM completes the creation process of the initial container cluster nodes of the container cluster to be created, thereby completing the creation of the container cluster instance locally. Further, the CCM accesses the container cluster descriptor CCD to obtain the software image information of the CISM instances and/or CIS instances to be deployed, and deploys the CISM instances and CIS instances on the container cluster nodes (optionally, the CIS instances may also be created by the created CISM instances). At the same time, the CCM creates the information of the container cluster instance, for example: the identification information and version of the CCD used by the instantiated container cluster instance, the instantiation state, the scaling state, the maximum permitted scaling level, external network information, node resource information, and so on.
It should be noted that the software images of the CISM instances and/or CIS instances may be stored in a container cluster package file within the NFV-MANO management domain, or in a software image registry outside the NFV-MANO management domain; the container cluster descriptor CCD contains index information pointing to the container cluster package file, or to the external image registry directory address, where the software images of the CISM instances and/or CIS instances are stored.
Step 511: The CCM sends a container cluster management process change notification to the management entity, sending the management entity a notification message that container cluster instantiation has finished.
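The instantiation flow of steps 501 through 511 can be condensed into a minimal sketch of the exchange between the management entity and the CCM. All names here (`determine_instantiation_params`, `CCM.create_cluster`) are hypothetical; a real CCM would issue asynchronous notifications and infrastructure-layer calls that are elided in this sketch.

```python
def determine_instantiation_params(ccd: dict) -> dict:
    # Step 502: the management entity derives instantiation parameters
    # from the deployment information read out of the CCD.
    return {
        "ccd_id": ccd["ccd_id"],
        "scale": ccd["cluster_scale"],
        "num_cism": ccd["initial_cism_instances"],
        "num_cis": ccd["initial_cis_instances"],
        "affinity_rules": ccd.get("affinity_rules", {}),
    }

class CCM:
    """Toy container cluster management function."""
    def __init__(self):
        self.clusters = {}
        self.notifications = []

    def create_cluster(self, params: dict) -> str:
        # Steps 504-505: acknowledge the request, then signal start.
        self.notifications.append(("cluster_mgmt_change", "instantiation_started"))
        # Step 510: create the initial nodes, then deploy the CISM/CIS
        # instances on them and record the cluster instance information.
        cluster = {
            "ccd_id": params["ccd_id"],
            "nodes": [f"node-{i}" for i in range(params["scale"])],
            "cism": [f"cism-{i}" for i in range(params["num_cism"])],
            "cis": [f"cis-{i}" for i in range(params["num_cis"])],
            "state": "INSTANTIATED",
        }
        self.clusters[params["ccd_id"]] = cluster
        # Step 511: notify the management entity that instantiation finished.
        self.notifications.append(("cluster_mgmt_change", "instantiation_finished"))
        return "SUCCESS"

ccd = {"ccd_id": "ccd-1", "cluster_scale": 3,
       "initial_cism_instances": 1, "initial_cis_instances": 2}
ccm = CCM()
result = ccm.create_cluster(determine_instantiation_params(ccd))
```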
Embodiment 2 of the present invention provides a method for updating a container cluster, as shown in FIG. 6, which specifically includes the following steps:
Step 601: The management entity sends a container cluster update request to the container cluster management CCM. The request message carries the identification information of the container cluster instance to be updated, the type of update operation (scaling), the target container cluster scale or scaling level to be reached by the scaling operation, and the affinity/anti-affinity rules between the target container cluster nodes of the scaling operation.
Step 602: The CCM returns a container cluster update response to the management entity, indicating that the container cluster update request message has been received successfully.
Step 603: The CCM sends a container cluster management process change notification to the management entity, indicating to the management entity that the container cluster update process has started.
Step 604: The management entity obtains, from the container cluster descriptor CCD, the identification information of the container cluster node descriptor CCND of the container cluster node instance to be updated, and obtains the CCND file through the identification information of the CCND; the management entity accesses the CCND to obtain the deployment information of the container cluster node instance to be created.
Step 605: The management entity determines the instantiation parameters of the container cluster node instance to be created according to the deployment information of the container cluster node instance in the CCND, for example: the name or identification information of the container cluster node descriptor, the type of the container cluster node, and the affinity/anti-affinity group to which the container cluster node belongs.
Step 606: The management entity sends a container cluster node creation request message to the container cluster management CCM. The request message carries the type of the container cluster node instance to be created and the affinity/anti-affinity group to which the container cluster node instance belongs.
Step 607: The CCM returns a container cluster node creation response to the management entity, indicating that the container cluster node creation request message has been received successfully.
Optionally, as an alternative to steps 604 to 607, the CCM obtains the identification information of the container cluster node descriptor CCND from the container cluster descriptor CCD and determines the instantiation parameters of the container cluster nodes by accessing the container cluster node descriptor CCND.
Step 608: The CCM completes the creation process of the container cluster node instances in the container cluster to be updated, and locally generates the information of the newly created container cluster node instances.
Step 609: The CCM returns a container cluster update completion notification message to the management entity, indicating to the management entity that the container cluster update process has ended.
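In essence, the scaling update of steps 601 through 609 grows or shrinks the node set of an existing cluster until it reaches the requested target scale. A minimal sketch follows, with hypothetical names; the descriptor lookups of steps 604 to 607 and the affinity-rule checks are elided.

```python
def scale_cluster(cluster: dict, target_scale: int) -> dict:
    # Steps 601-609 condensed: create (scale out) or remove (scale in)
    # node instances until the cluster reaches the target scale.
    current = len(cluster["nodes"])
    if target_scale > current:
        # Scale out: create additional container cluster node instances.
        cluster["nodes"] += [f"node-{i}" for i in range(current, target_scale)]
    else:
        # Scale in: remove surplus node instances.
        cluster["nodes"] = cluster["nodes"][:target_scale]
    # Step 609: record that the update process has completed.
    cluster["last_update"] = "completed"
    return cluster

cluster = {"id": "cluster-1", "nodes": ["node-0", "node-1"]}
scale_cluster(cluster, 4)
```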
Embodiment 3 of the present invention provides a method for deleting a container cluster, as shown in FIG. 7, which specifically includes the following steps:
Step 701: The management entity sends a container cluster deletion request message to the container cluster management CCM. The request message carries the identification information of the container cluster instance to be deleted, and/or the type of deletion operation, for example: forceful deletion or graceful deletion.
Step 702: According to the type of deletion operation in the request message, the CCM locally uninstalls the CISM instances and/or CIS instances in the container cluster to be deleted, releases the infrastructure-layer resources occupied by the container cluster nodes, deletes the container cluster node instances, and deletes the container cluster instance. At the same time, the CCM deletes the information of the container cluster instance.
Step 703: The CCM returns a container cluster deletion response to the management entity, indicating that the container cluster instance has been deleted successfully.
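The deletion flow of steps 701 through 703 can likewise be sketched. The distinction between graceful and forceful deletion is modeled here only as whether running CIS/CISM instances are drained before the resources are released; that reading of the two modes is an assumption for illustration, not a normative definition.

```python
def delete_cluster(clusters: dict, cluster_id: str, mode: str = "graceful") -> bool:
    # Step 702: uninstall CISM/CIS instances, release the infrastructure-layer
    # resources of the nodes, and delete the cluster instance record.
    cluster = clusters.get(cluster_id)
    if cluster is None:
        return False
    if mode == "graceful":
        cluster["cis"] = []    # drain and terminate CIS instances first
        cluster["cism"] = []   # then terminate the CISM instances
    # A forceful deletion skips the drain and tears everything down directly.
    cluster["nodes"] = []      # release infrastructure-layer resources
    del clusters[cluster_id]   # delete the container cluster instance info
    return True                # step 703: deletion response (success)

clusters = {"c1": {"nodes": ["n0"], "cism": ["m0"], "cis": ["s0"]}}
ok = delete_cluster(clusters, "c1", mode="forceful")
```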
The embodiments of the present invention add, to the NFV templates, information models defining the container cluster descriptor CCD and the container cluster node descriptor CCND. The CCD mainly includes the cluster scale, the scaling attributes, and the affinity/anti-affinity rules of object instances within the cluster. The CCND includes the node type; the node's requirements for hardware acceleration, network interfaces, and local storage; and the affinity/anti-affinity rules between nodes within the container cluster. During the creation/update/deletion of a container cluster, the management entity obtains the information of the container cluster to be created/updated/deleted by accessing the CCD, obtains the information of the nodes within the container cluster by accessing the CCND, and sends a container cluster creation/update/deletion request to the CCM based on this information; the CCM returns a response to the management entity after completing the creation/update/deletion of the container cluster.
By implementing the methods of the above embodiments, the solutions of the embodiments of the present invention can support dynamic management of container clusters, achieving consistent deployment and batch replication of large-scale container clusters.
The foregoing mainly describes the solutions provided in the embodiments of this application from the perspective of the interaction between the network elements. It can be understood that, to implement the above functions, the NFVO, VNFM, CCM, and the like include corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should easily be aware that, in combination with the units and algorithm operations of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of this application.
In the embodiments of this application, the NFVO, VNFM, CCM, and the like may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical function division; there may be other division methods in actual implementation.
For example, in the case where the functional modules are divided in an integrated manner, FIG. 8 shows a schematic structural diagram of a communication apparatus 80. The communication apparatus 80 includes a transceiver module 801 and a processing module 802.
For example, the communication apparatus 80 is configured to implement the functions of the NFVO or VNFM. The communication apparatus 80 is, for example, the NFVO or VNFM described in the embodiment shown in FIG. 5, FIG. 6, or FIG. 7.
In the embodiments of this application, the communication apparatus 80 may be the NFVO or VNFM, or may be a chip applied in the NFVO or VNFM, or another combined device, component, or the like having the functions of the NFVO or VNFM. When the communication apparatus 80 is the NFVO or VNFM, the transceiver module 801 may be a transceiver, which may include an antenna, radio frequency circuits, and the like, and the processing module 802 may be a processor (or processing circuit), for example a baseband processor, which may include one or more CPUs. When the communication apparatus 80 is a component having the functions of the NFVO or VNFM, the transceiver module 801 may be a radio frequency unit, and the processing module 802 may be a processor (or processing circuit), for example a baseband processor. When the communication apparatus 80 is a chip system, the transceiver module 801 may be the input/output interface of a chip (for example, a baseband chip), and the processing module 802 may be the processor (or processing circuit) of the chip system, which may include one or more central processing units. It should be understood that the transceiver module 801 in the embodiments of this application may be implemented by a transceiver or transceiver-related circuit components, and the processing module 802 may be implemented by a processor or processor-related circuit components (or processing circuits).
For example, the transceiver module 801 may be configured to perform all the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 5, for example S503, and/or to support other processes of the technologies described herein. The processing module 802 may be configured to perform all the operations other than the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 5, for example S501, S502, and S505, and/or to support other processes of the technologies described herein.
As another example, the transceiver module 801 may be configured to perform all the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 6, for example S603, and/or to support other processes of the technologies described herein. The processing module 802 may be configured to perform all the operations other than the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 6, for example S601, S602, and S605, and/or to support other processes of the technologies described herein.
As another example, the transceiver module 801 may be configured to perform all the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 7, for example S701, and/or to support other processes of the technologies described herein. The processing module 802 may be configured to perform all the operations other than the transceiving operations performed by the NFVO or VNFM in the embodiment shown in FIG. 7, for example S703, and/or to support other processes of the technologies described herein.
Likewise, the communication apparatus 80 may also be configured to implement the functions of the CCM, that is, the CCM described in the embodiment shown in FIG. 5, FIG. 6, or FIG. 7, and to perform all the operations performed by the CCM in the embodiments shown in FIGs. 5-7; details are not repeated here.
FIG. 9 shows a schematic composition diagram of a communication system. As shown in FIG. 9, the communication system 90 may include a management entity 901 and a CCM 902. It should be noted that FIG. 9 is merely an illustrative drawing, and the embodiments of this application do not limit the network elements included in the communication system 90 shown in FIG. 9 or the number of network elements.
The management entity 901 is configured to implement the functions of the management entity 901 in the method embodiments shown in FIGs. 5-7. For example, the management entity 901 may be configured to access the container cluster descriptor file CCD, obtain from the file the deployment information of the container cluster to be created, determine the instantiation parameters of the container cluster according to the deployment information of the container cluster in the CCD, and send a container cluster creation request to the container cluster management CCM, the request message carrying the instantiation parameters of the container cluster to be created.
The CCM 902 is configured to implement the functions of the CCM in the method embodiments shown in FIGs. 5-7. For example, the CCM 902 returns a container cluster creation response to the management entity 901, indicating that the container cluster was created successfully or failed (and, on failure, the reason); creates the container cluster instance locally; and completes the initial creation of the specified number of container cluster nodes, among other operations.
It should be noted that all relevant content of the steps in the foregoing method embodiments can be cited in the function descriptions of the corresponding network elements of the communication system 90; details are not repeated here.
Through the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, the division into the foregoing functional modules is used only as an example for illustration. In actual applications, the foregoing functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above.
An embodiment of this application provides a computing device 1000. As shown in FIG. 10, the device includes at least one memory 1030 configured to store program instructions and/or data. The memory 1030 is coupled to a processor 1020, and the processor 1020 implements the corresponding functions by running the stored program instructions and/or processing the stored data. The computing device 1000 may be the NFVO or VNFM in the embodiments shown in FIGs. 5-7 and can implement the functions of the NFVO or VNFM in the methods provided in those embodiments. The computing device 1000 may be a chip system; in the embodiments of this application, a chip system may consist of chips, or may include chips and other discrete components.
The computing device 1000 may further include a communication interface 1010 for communicating with other devices through a transmission medium. For example, the other device may be a control device. The processor 1020 may send and receive data through the communication interface 1010.
The embodiments of this application do not limit the specific connection medium between the communication interface 1010, the processor 1020, and the memory 1030. In FIG. 10, the memory 1030, the processor 1020, and the communication interface 1010 are connected by a bus 1040, which is represented by a thick line in FIG. 10; the connection manner between other components is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 10, but this does not mean that there is only one bus or one type of bus.
In the embodiments of this application, the processor 1020 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in combination with the embodiments of this application may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in a processor.
In the embodiments of this application, the memory 1030 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of this application may also be a circuit or any other apparatus capable of implementing a storage function, configured to store program instructions and/or data.
An embodiment of this application further provides a computing device 1100. As shown in FIG. 11, the device includes at least one memory 1130 configured to store program instructions and/or data, and the memory 1130 is coupled to a processor 1120. The processor 1120 implements the corresponding functions by running the stored program instructions and/or processing the stored data. The computing device 1100 may be the CCM in the embodiments shown in FIGs. 5-7 and can implement the functions of the CCM in the methods provided in those embodiments.
The computing device 1100 likewise includes a communication interface 1110 for communicating with other devices through a transmission medium. The processor 1120 may send and receive data through the communication interface 1110.
Other functions and structures are similar to those of the computing device 1000 described above and are not repeated here.
An embodiment of this application further provides a computer-readable storage medium for storing instructions which, when executed by a processor of a computing device, cause the computing device to implement the method provided in any one of the embodiments of this application.
An embodiment of this application provides a computer program product. The computer program product includes computer program code which, when run on a computing device, causes the computing device to perform the method provided in any one of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the embodiments of this application.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions provided in the foregoing embodiments may still be modified, or some of the technical features thereof may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions provided in the embodiments of this application.

Claims (23)

  1. A management method for a container cluster, wherein the method comprises:
    container cluster management (CCM) receiving a container cluster instantiation request message from a management entity, the request message carrying instantiation parameters of the container cluster;
    the CCM instantiating the container cluster according to the instantiation parameters of the container cluster;
    wherein the instantiation parameters of the container cluster are determined by the management entity by accessing a container cluster descriptor CCD.
  2. The management method according to claim 1, wherein the CCM instantiating the container cluster according to the instantiation parameters of the container cluster comprises:
    the CCM receiving a container cluster node instantiation request message from the management entity, the request message carrying instantiation parameters of the container cluster nodes, the instantiation parameters of the container cluster nodes being determined by the management entity by accessing a container cluster node descriptor CCND; or,
    the CCM accessing the CCND to determine the instantiation parameters of the container cluster nodes;
    the CCM instantiating the container cluster nodes according to the instantiation parameters of the container cluster nodes, and instantiating container infrastructure service management (CISM) instances and/or container infrastructure service (CIS) instances on the container cluster nodes according to the instantiation parameters of the container cluster.
  3. The management method according to claim 2, wherein the CCM instantiating the CISM instances and CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster comprises:
    the CCM creating container infrastructure service management CISM instances and/or container infrastructure service CIS instances on the container cluster nodes; or
    the CCM creating container infrastructure service management CISM instances on the container cluster nodes, the CISM instances further creating CIS instances on the container cluster nodes.
  4. The management method according to claim 1, wherein the instantiation parameters of the container cluster comprise one or more of the following: the name or identification information of the container cluster descriptor CCD; the scale of the container cluster; the number of CISM instances and CIS instances created at container cluster initialization; and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
  5. The management method according to claim 2, wherein the instantiation parameters of the container cluster nodes comprise one or more of the following: the name or identification information of the container cluster node descriptor; the type of the container cluster nodes; and the affinity/anti-affinity group to which the container cluster nodes belong.
  6. The management method according to claim 1, wherein:
    the CCM receives a container cluster update request message from the management entity, the request message carrying updated parameters of the container cluster instance;
    the CCM updates the container cluster according to the updated parameters of the container cluster instance.
  7. The management method according to claim 6, wherein the CCM updating the container cluster according to the updated parameters of the container cluster instance comprises:
    the CCM receiving a container cluster node creation request message from the management entity, the creation request message carrying instantiation parameters of the container cluster nodes; or,
    the CCM accessing the CCND to determine the instantiation parameters of the container cluster nodes;
    the CCM creating the container cluster nodes according to the instantiation parameters of the container cluster nodes.
  8. The management method according to claim 6, wherein the updated parameters of the container cluster instance comprise one or more of the following:
    the identification information of the container cluster instance; the type of update operation being scaling, together with the target container cluster scale or scaling level to be reached by the scaling operation; and the affinity/anti-affinity rules between the target container cluster nodes of the scaling operation.
  9. The management method according to claim 1, wherein:
    the CCM receives a container cluster deletion request message from the management entity, the deletion request message carrying the identification information of the container cluster instance to be deleted and/or the type of deletion operation;
    the CCM deletes the container cluster instance.
  10. The management method according to claim 9, wherein:
    the CCM deleting the container cluster instance specifically comprises deleting each container cluster node included in the container cluster and the CISM instances and/or CIS instances on the nodes, releasing the infrastructure-layer resources occupied by the container cluster nodes, and deleting the container cluster instance.
  11. The management method according to any one of claims 1 to 10, wherein the management entity is a network functions virtualization orchestrator NFVO or a virtualized network function manager VNFM.
  12. A management method for a container cluster, wherein the method comprises:
    a management entity accessing a container cluster descriptor CCD and determining instantiation parameters of the container cluster;
    the management entity sending the instantiation parameters of the container cluster to container cluster management CCM;
    the CCM instantiating the container cluster according to the instantiation parameters of the container cluster.
  13. The management method according to claim 12, wherein the CCM instantiating the container cluster according to the instantiation parameters of the container cluster comprises:
    the management entity accessing a container cluster node descriptor CCND and determining instantiation parameters of the container cluster nodes;
    the management entity sending the instantiation parameters of the container cluster nodes to the CCM; or,
    the CCM accessing the CCND to determine the instantiation parameters of the container cluster nodes;
    the CCM instantiating the container cluster nodes according to the instantiation parameters of the container cluster nodes, and instantiating container infrastructure service management CISM instances and/or container infrastructure service CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster.
  14. The management method according to claim 13, wherein the CCM instantiating the CISM instances and CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster comprises:
    the CCM creating CISM instances and/or CIS instances on the container cluster nodes; or
    the CCM creating CISM instances on the container cluster nodes, the CISM instances further creating CIS instances on the container cluster nodes.
  15. The management method according to claim 12, wherein the instantiation parameters of the container cluster comprise one or more of the following: the name or identification information of the container cluster descriptor CCD; the scale of the container cluster; the number of CISM instances and CIS instances created at container cluster initialization; and the affinity/anti-affinity rules between CISM instances, between CIS instances, and between CISM instances and CIS instances within the container cluster.
  16. The management method according to claim 13, wherein the instantiation parameters of the container cluster nodes comprise one or more of the following: the name or identification information of the container cluster node descriptor CCND; the type of the container cluster nodes; and the affinity/anti-affinity group to which the container cluster nodes belong.
  17. A management system for a container cluster, wherein the system comprises:
    a management entity configured to access a container cluster descriptor CCD, determine instantiation parameters of the container cluster, and send the instantiation parameters of the container cluster to container cluster management CCM;
    the CCM, configured to instantiate the container cluster according to the instantiation parameters of the container cluster.
  18. The management system according to claim 17, wherein:
    the management entity is further configured to access a container cluster node descriptor CCND and determine instantiation parameters of the container cluster nodes;
    the management entity sends the instantiation parameters of the container cluster nodes to the CCM;
    the CCM instantiates the container cluster nodes according to the instantiation parameters of the container cluster nodes, and instantiates container infrastructure service management CISM instances and/or container infrastructure service CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster.
  19. The management system according to claim 17, wherein:
    the CCM is further configured to access the CCND to determine instantiation parameters of the container cluster nodes;
    the CCM instantiates the container cluster nodes according to the instantiation parameters of the container cluster nodes, and instantiates container infrastructure service management CISM instances and/or container infrastructure service CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster.
  20. The management system according to claim 18, wherein the CCM instantiating the CISM instances and CIS instances on the container cluster nodes according to the instantiation parameters of the container cluster comprises:
    the CCM creating CISM instances and/or CIS instances on the container cluster nodes; or
    the CCM creating CISM instances on the container cluster nodes, the CISM instances further creating CIS instances on the container cluster nodes.
  21. A container cluster management apparatus, comprising modules configured to perform the steps of the method according to any one of claims 1 to 11.
  22. A container cluster management apparatus, comprising a processor and a memory, the processor being coupled to the memory and the memory storing a computer program; the processor being configured to invoke the computer program in the memory, so that the management apparatus performs the method according to any one of claims 1 to 11.
  23. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed, implements the method according to any one of claims 1 to 11.
PCT/CN2020/140276 2020-12-28 2020-12-28 Management method and apparatus for container cluster WO2022140945A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2020/140276 WO2022140945A1 (zh) 2020-12-28 2020-12-28 容器集群的管理方法和装置
CN202080108230.3A CN116724543A (zh) 2020-12-28 2020-12-28 容器集群的管理方法和装置
JP2023539232A JP2024501005A (ja) 2020-12-28 2020-12-28 コンテナクラスタのための管理方法および装置
EP20967297.1A EP4258609A4 (en) 2020-12-28 2020-12-28 METHOD AND APPARATUS FOR MANAGING CONTAINER CLUSTERS
US18/342,472 US20230342183A1 (en) 2020-12-28 2023-06-27 Management method and apparatus for container cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/140276 WO2022140945A1 (zh) 2020-12-28 2020-12-28 容器集群的管理方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/342,472 Continuation US20230342183A1 (en) 2020-12-28 2023-06-27 Management method and apparatus for container cluster

Publications (1)

Publication Number Publication Date
WO2022140945A1 true WO2022140945A1 (zh) 2022-07-07

Family

ID=82258970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140276 WO2022140945A1 (zh) 2020-12-28 2020-12-28 容器集群的管理方法和装置

Country Status (5)

Country Link
US (1) US20230342183A1 (zh)
EP (1) EP4258609A4 (zh)
JP (1) JP2024501005A (zh)
CN (1) CN116724543A (zh)
WO (1) WO2022140945A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541133A (zh) * 2023-07-05 2023-08-04 苏州浪潮智能科技有限公司 Method for bringing container applications under management, apparatus therefor, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019233273A1 (zh) * 2018-06-05 2019-12-12 华为技术有限公司 Method and apparatus for managing container services
CN111447076A (zh) * 2019-01-17 2020-07-24 ***通信有限公司研究院 Container deployment method and network element for a network functions virtualization (NFV) system
CN111641515A (zh) * 2019-03-01 2020-09-08 华为技术有限公司 VNF lifecycle management method and apparatus
CN111949364A (zh) * 2019-05-16 2020-11-17 华为技术有限公司 Deployment method for containerized VNF and related device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814881A (zh) * 2017-11-21 2019-05-28 北京京东尚科信息技术有限公司 Method and apparatus for deploying a database cluster

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019233273A1 (zh) * 2018-06-05 2019-12-12 华为技术有限公司 Method and apparatus for managing container services
CN111447076A (zh) * 2019-01-17 2020-07-24 ***通信有限公司研究院 Container deployment method and network element for a network functions virtualization (NFV) system
CN111641515A (zh) * 2019-03-01 2020-09-08 华为技术有限公司 VNF lifecycle management method and apparatus
CN111949364A (zh) * 2019-05-16 2020-11-17 华为技术有限公司 Deployment method for containerized VNF and related device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Network Functions Virtualisation (NFV) Release 4; Management and Orchestration; Functional requirements specification", ETSI DRAFT; ETSI GS NFV-IFA 010, vol. ISG - NFV, no. V4.1.1, 9 November 2020 (2020-11-09), pages 1 - 118, XP014385204 *
ANONYMOUS: "Network Functions Virtualisation (NFV) Release 4; Management and Orchestration; Specification of requirements for the management and orchestration of container cluster nodes", ETSI DRAFT SPECIFICATION; NFV-IFA 036, no. V0.0.4, 30 November 2020 (2020-11-30), pages 1 - 12, XP014402568 *
HE ZHENWEI;HUANG DANCHI;YAN LIYUN;LIN YUANZHI;YANG XINZHANG: "Kubernetes based converged cloud Native Infrastructure Ssolution and Key Technologies", TELECOMMUNICATIONS SCIENCE, 20 December 2020 (2020-12-20), pages 77 - 88, XP055948450 *
See also references of EP4258609A4 *
TSC CHAIR,TSC: "NFV Release 4 Definition v0.2.0", ETSI DRAFT; NFV(20)000160, EUROPEAN TELECOMMUNICATIONS STANDARDS INSTITUTE (ETSI), 21 August 2020 (2020-08-21), pages 1 - 1, XP014377576 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541133A (zh) * 2023-07-05 2023-08-04 苏州浪潮智能科技有限公司 Method for bringing container applications under management, apparatus therefor, and electronic device
CN116541133B (zh) * 2023-07-05 2023-09-15 苏州浪潮智能科技有限公司 Method for bringing container applications under management, apparatus therefor, and electronic device

Also Published As

Publication number Publication date
EP4258609A4 (en) 2024-01-17
JP2024501005A (ja) 2024-01-10
EP4258609A1 (en) 2023-10-11
CN116724543A (zh) 2023-09-08
US20230342183A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
US11354167B2 (en) Container service management method and apparatus
US11928522B2 (en) Containerized VNF deployment method and related device
CN111385114B (zh) VNF service instantiation method and apparatus
US8141090B1 (en) Automated model-based provisioning of resources
WO2017012381A1 (zh) Lifecycle management method and apparatus
US20220004410A1 (en) Method For Deploying Virtual Machine And Container, And Related Apparatus
WO2020186911A1 (zh) Resource management method and apparatus for containerized virtualized network function (VNF)
WO2016165292A1 (zh) Method and apparatus for configuring virtualized network function deployment specifications
US20220078230A1 (en) System and method for dynamic auto-scaling based on roles
CN109428764B (zh) Virtualized network function instantiation method
WO2020011214A1 (zh) Method and apparatus for managing virtualized resources
WO2020103925A1 (zh) Deployment method and apparatus for containerized virtualized network function
CA2882751A1 (en) Integrated computing platform deployed in an existing computing environment
EP3883183A1 (en) Virtualization management method and device
US20230342183A1 (en) Management method and apparatus for container cluster
EP4177742A1 (en) Multitenancy management method and apparatus
CN112015515B (zh) Virtualized network function instantiation method and apparatus
WO2020077585A1 (zh) VNF service instantiation method and apparatus
WO2023274014A1 (zh) Method, apparatus, and system for managing storage resources of a container cluster
WO2024104311A1 (zh) Method for deploying a virtualized network function, and communication apparatus
WO2018120222A1 (zh) Method, apparatus, and system for managing a VNFFG
WO2022141293A1 (zh) Elastic scaling method and apparatus
US12020055B2 (en) VNF service instantiation method and apparatus
CN113098705B (zh) Authorization method and apparatus for lifecycle management of network services
KR102674017B1 (ko) Network resource management method, system, network device, and readable storage medium

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2023539232

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 202080108230.3

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2020967297

Country of ref document: EP

Effective date: 20230706

NENP Non-entry into the national phase

Ref country code: DE