CN115086166B - Computing system, container network configuration method, and storage medium

Info

Publication number: CN115086166B
Authority: CN (China)
Prior art keywords: network service, network, container, component, container instance
Legal status: Active (granted)
Application number: CN202210557898.7A
Other languages: Chinese (zh)
Other versions: CN115086166A
Inventors: 鲁金达 (Lu Jinda), 侯志远 (Hou Zhiyuan), 邬宗勇 (Wu Zongyong)
Current Assignee: Alibaba China Co Ltd
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd; priority to CN202210557898.7A
Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0803: Configuration setting

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application provide a computing system, a container network configuration method, and a storage medium. The embodiments exploit the ability of the network services in a network service cluster to open up different networks for containers: a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of the computing cluster, and this CNI component connects the container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to the network service can take place at any stage of the container instance's lifecycle, which decouples container network configuration from the container lifecycle and improves the flexibility of container network configuration.

Description

Computing system, container network configuration method, and storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a computing system, a container network configuration method, and a storage medium.
Background
Server virtualization is a key infrastructure-layer technology in cloud computing. It virtualizes a physical server so that multiple virtual machines (VMs) can be deployed on a single physical machine. To improve server resource utilization and reduce cost, a computing cluster manages many virtual machines or physical machines as a whole, abstracts the physical resources into a resource pool of storage, computing, network, and other resources through virtualization, and provides these resources to users on demand.
In practice, to manage the resources of a computing cluster, a Kubernetes (K8s) control plane program is deployed in the central cloud; the computing nodes are first uniformly taken over by the K8s control plane, and application containers are then deployed on the resources accessed to K8s to provide cloud computing services to users.
Different application containers place different demands on the network. Currently, K8s uses the Container Network Interface (CNI) as the interface for container network configuration. When the Kubelet component on a working node starts a container, it calls the CNI ADD interface to add the container to a network; before the Kubelet component destroys a Pod, it calls the CNI DEL interface to remove the container from the network. Because the ADD and DEL interfaces are invoked only when a Pod is started or destroyed, the current container network configuration is tightly coupled to the container lifecycle, and its flexibility is poor. For example, the network properties of a container cannot be changed dynamically while the container is running.
Disclosure of Invention
Aspects of the present application provide a computing system, a container network configuration method, and a storage medium to implement decoupling of a container network configuration process from a container lifecycle, which helps to improve flexibility of container network configuration.
Embodiments of the present application provide a computing system comprising a control node, a computing cluster, and a network service cluster; the computing cluster includes a plurality of working nodes, and the network service cluster is used to deploy network services.
The working node includes a container network interface (CNI) component corresponding to the network service.
The CNI component is configured to connect the container instances deployed in the working node to the network service.
The container instances access other networks through the network service.
The embodiment of the application also provides a container network configuration method, which comprises the following steps:
determining a container instance deployed on a target working node;
and connecting the container instance to a network service of the network service cluster by using the CNI component in the target working node that corresponds to the network service cluster, so that the container instance can access other networks through the network service.
Embodiments also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the container network configuration method described above.
The embodiments of the present application exploit the ability of the network services in a network service cluster to open up different networks for containers: a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of the computing cluster, and this CNI component connects the container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to the network service can take place at any stage of the container instance's lifecycle, which decouples container network configuration from the container lifecycle and improves the flexibility of container network configuration.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic diagram of the container network configuration process of a container network configuration method provided by some open-source schemes;
FIG. 2 is a schematic diagram of a computing system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a container network configuration flow provided in an embodiment of the present application;
fig. 4 is a schematic flow chart of allocating a virtual network card to a container according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a computing system according to an embodiment of the present application;
fig. 6 is a schematic diagram of a network service creation process provided in an embodiment of the present application;
fig. 7 is a flowchart of a container network configuration method according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the protection scope of the present disclosure.
Some open source CNI plugins (e.g., the Multus-CNI plugin) can invoke multiple other CNI plugins so that a container instance (e.g., a Pod) belongs to multiple networks at the same time. Fig. 1 illustrates the container network configuration provided by some open-source schemes. As shown in fig. 1, the container network configuration method mainly includes the following steps:
Step 1: a node proxy component (e.g., Kubelet in a K8s system) runs the container.
Step 2: the container runtime (Container Runtime) component invokes a network plugin (NetworkPlugin) to configure the container network (Setup Pod).
Step 3: the Multus CNI plugin delegates the CNI ADD operation (delegate ADD) to the master plugin.
Step 4: the master plugin performs the CNI ADD operation to configure network 1 for the container.
Step 5: the Multus plugin delegates the CNI ADD operation (delegate ADD) to a minion plugin.
Step 6: the minion plugin performs the CNI ADD operation to configure network 2 for the container.
The container network configuration shown in fig. 1 allows a container to have the properties of multiple networks at the same time. However, the configuration process is still strongly coupled to container startup: the network properties of the container are determined in the container configuration (Setup) stage of the startup process and cannot be modified while the container is running. An application in this mode that needs fine-grained routing control across multiple networks requires the application developer to implement the routing control inside the container (e.g., the Pod) in code or scripts, which increases development complexity. Furthermore, a CNI implementation must take the idempotency of the CNI ADD operation into account, and if a Pod is to possess multiple networks, different CNI implementations must be invoked. Different CNIs may be backed by completely different networks, so if a user needs to access different networks within the same network implementation, this container network configuration mode cannot achieve it; the specific CNI implementation would have to be modified to provide different networks under the same network implementation, which clearly increases development complexity.
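For reference, the delegation shown in fig. 1 is typically driven by a Multus configuration that lists its delegate plugins. The following is only a minimal sketch of that open-source mechanism; the plugin names, interface, and subnet are illustrative assumptions and are not taken from this application:

{
  "cniVersion": "0.3.1",
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    { "type": "flannel", "masterplugin": true },
    {
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "10.10.0.0/16" }
    }
  ]
}

The first delegate acts as the master plugin that configures network 1 in step 4, and each further delegate acts as a minion plugin that configures an additional network in step 6.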
In view of the technical problems in the conventional scheme, in particular the strong coupling between container network configuration and container startup and the resulting inflexibility of the container network configuration mode, some embodiments of the present application exploit the ability of the network services in a network service cluster to open up different networks for containers: a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of the computing cluster, and this CNI component connects the container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to the network service can take place at any stage of the container instance's lifecycle, which decouples container network configuration from the container lifecycle and improves the flexibility of container network configuration.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
It should be noted that: like reference numerals denote like objects in the following figures and embodiments, and thus once an object is defined in one figure or embodiment, further discussion thereof is not necessary in the subsequent figures and embodiments.
Fig. 2 is a schematic structural diagram of a computing system according to an embodiment of the present application. As shown in fig. 2, the computing system provided in the embodiment of the present application mainly includes: a management (Master) node 10, a computing cluster 20, and a network service cluster 30. As shown in fig. 2, the computing cluster 20 is a cluster composed of a plurality of working nodes (Workers) 201. The network service cluster 30 may likewise be composed of a plurality of working nodes (Workers) 301.
In this embodiment, the management node 10 is a computer device that manages the working nodes, responds to service requests from user terminals, and provides computing services to users by scheduling the working nodes 201; it generally has the capability to carry services and guarantee service quality. The management node 10 may be a single server device, a cloud server array, or a virtual machine (VM), a container, or a group of containers running in a cloud server array. The server device may also be another computing device with the corresponding service capability, for example, a terminal device (running a service program) such as a computer. In this embodiment, the management node 10 may be deployed in a cloud, such as the central cloud of an edge cloud system.
A working node is a computer device that provides computing resources. A working node can be a physical machine or a virtual machine virtualized on a physical machine. In this embodiment, a working node may provide other hardware resources and software resources in addition to computing resources. The hardware resources may include computing resources such as processors, memory, and disks, where a processor may be a CPU, GPU, FPGA, and so on. The software resources may include bandwidth, network segments, network card configuration, an operating system, and so on.
In this embodiment, the working nodes may be deployed in a central cloud, and may also be implemented as edge cloud nodes in an edge cloud network. An edge node may be a machine room, a data center (DC), an internet data center (IDC), or the like. For an edge cloud network, one working node may include one or more edge nodes, where "plural" means two or more. Each edge node may include a series of edge infrastructure, including but not limited to: a distributed data center (DC), a wireless machine room or cluster, an operator's communication network, core network devices, base stations, edge gateways, home gateways, computing or storage devices, and the corresponding network environment. The location, capabilities, and included infrastructure of different edge nodes may be the same or different.
In the present embodiment, the working nodes 301 of the network service cluster 30 are mainly used for deploying network services (Network Service), such as network services 1-3 in fig. 2. In the embodiments of the present application, the network service cluster 30 is used to implement a gateway orchestration service, which provides an infrastructure layer that handles communication between different networks. The network service cluster 30 can provide containers that access it with the ability to access other networks. A network service is a gateway abstraction for an accessed network. A network service may have a variety of gateway implementations, such as a VPC gateway implementation that opens a path into another VPC. In some embodiments, the network service cluster 30 may be implemented as a Network Service Mesh (NSM) cluster, where a network service mesh is a service mesh that provides network services and may serve as an infrastructure layer handling communication between different networks.
In the embodiments of the present application, the network service cluster 30 and the computing cluster 20 may have the same master node as their management node 10, or may have different master nodes. Fig. 2 illustrates only the case where the network service cluster 30 and the computing cluster 20 share the same master node as the management node 10, but the embodiments are not limited thereto.
In the present embodiment, the management node 10 and the working nodes 201, 301 may be connected wirelessly or by wire. Optionally, the management node 10 and the working nodes 201, 301 may be communicatively connected through a mobile network; accordingly, the network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Optionally, the management node 10 and the working nodes 201, 301 may also be communicatively connected by Bluetooth, WiFi, infrared, and so on. The different working nodes 201 in the computing cluster 20 may also be connected through intranet communication, and likewise the different working nodes 301 in the network service cluster 30 may be connected through intranet communication.
In the embodiments of the present application, the management node 10 may respond to a container creation request, schedule among the working nodes 201 a target working node adapted to the container creation request, and bind the container to be created to that working node. Upon observing that the container has been bound, the node proxy component 20a in the target working node (e.g., the Kubelet component in K8s) can create and start a container (e.g., a Pod) through the container runtime (e.g., Docker), thereby deploying the container in the target working node.
To decouple container network configuration from the container lifecycle, the embodiments of the present application exploit the ability of network services to open up different networks for containers and set up, in the working nodes 201 of the computing cluster 20, a CNI component 20b corresponding to the network service cluster (NSM-CNI for short). The NSM-CNI component 20b may be implemented as a CNI plug-in, namely the CNI plug-in corresponding to the network service cluster, and is an executable program. In the embodiments of the present application, a CNI interface is a call to an executable program, and the executable program is called a CNI plug-in. The NSM-CNI component 20b may be deployed to the working nodes 201 in the form of a Deployment. As shown in fig. 4, the NSM-CNI component 20b may be deployed to the working node 201 in a CNI chain manner, which allows the NSM-CNI component 20b to be used in combination with other CNI plug-ins. An example CNI configuration of the NSM-CNI component 20b is as follows:
data structure 1: CNI configuration of NSM-CNI component 20b
In the embodiments of the present application, when a container network needs to be set up, the NSM-CNI component 20b may connect a container instance (such as a Pod) deployed in the working node 201 to a network service in the network service cluster 30. A container instance may be implemented as a container group such as a Pod, where a container group may comprise one or more containers. Because the network service is an abstraction of the gateway to an accessed network, the container instance can access other networks through the network service. Here, "other networks" are networks outside the internal network of the computing cluster 20, and the purpose is mainly to let container instances in the computing cluster 20 access nodes in those other networks.
Specifically, in connection with fig. 3, 4, and 5, the NSM-CNI component 20b may allocate a virtual network card (e.g., nsm0 in fig. 3-5) to a container instance while the working node 201 deploys that container instance. This virtual network card is the network card through which the container instance communicates with the network service cluster 30. For embodiments in which the network service cluster is an NSM cluster, the virtual network card may be referred to as an NSM virtual network card. nsm0 in fig. 3 to 5 is the name of the virtual network card.
Optionally, in conjunction with fig. 3 and 4, during deployment of a container instance by the working node 201, the node proxy component (e.g., Kubelet) 20a may invoke the container runtime component (Container Runtime) 20d, and the container runtime component 20d may invoke the NSM-CNI component 20b to assign the NSM virtual network card nsm0 to the container instance (e.g., Pod). Specifically, during the container configuration (Setup Pod) phase, the container runtime component 20d may invoke the NSM-CNI component 20b to execute the CNI ADD interface, assigning the NSM virtual network card nsm0 to the container instance.
Of course, in some embodiments, to enable communication of the container instance within the computing cluster, the container runtime component 20d may also invoke other CNI plug-ins (e.g., the CNI plug-in 20f shown in fig. 3 and 4) to allocate to the container instance a network card for communication within the computing cluster. Specifically, during the container configuration (Setup Pod) phase, the container runtime component 20d may invoke the other CNI plug-ins to execute the CNI ADD interface, allocating to the container instance a network card (e.g., a cluster network card) for communication within the computing cluster.
In some embodiments, to avoid assigning NSM virtual network cards to container instances that do not need NSM, the NSM-CNI component 20b assigns NSM virtual network cards only to container instances whose container resources (e.g., Pod resources) are marked in their annotations as using NSM. For example, the nsm.closed.com/ansm-cni field in the annotations of the container resource indicates whether the container uses NSM: if the value of the nsm.closed.com/ansm-cni field is set to true, the container instance uses NSM; accordingly, if it is set to false, the container instance does not use NSM.
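In manifest form, a Pod that opts in to NSM would carry this annotation in its metadata. A minimal sketch follows; apart from the annotation key described above, all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  annotations:
    nsm.closed.com/ansm-cni: "true"
spec:
  containers:
  - name: app
    image: example/app:latest

Only Pods annotated in this way are assigned an NSM virtual network card by the NSM-CNI component 20b.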
In the embodiments of the present application, as shown in fig. 4, the container network is opened up through tap interfaces in network namespaces (netns) and Traffic Control (TC) policies. Before the sandbox environment is created, a network namespace is created first; this network namespace has two kinds of network interfaces, veth-pair and tap. eth0 and nsm0 are veth-pair interfaces: one end is attached to the network namespace created by the CNI, and the other end is attached to the host. tap0_kata and tap1_kata are tap interfaces: one end is attached to the network namespace created by the CNI, and the other end is attached to the hypervisor created by qemu. The network namespace created by the CNI plug-in 20f uses a TC policy to open up the eth0 and tap0_kata network interfaces, which is equivalent to connecting eth0 to tap0_kata. Likewise, the network namespace created by the NSM-CNI component 20b uses a TC policy to open up the nsm0 and tap1_kata network interfaces, which is equivalent to connecting nsm0 to tap1_kata.
Only the eth0 and nsm0 network interfaces exist in the sandbox environment; these two interfaces are emulated by qemu as tap devices, and their MAC address, IP address, and mask are configured identically to those of eth0 and nsm0, respectively, in the network namespaces created on the host by the CNI plug-ins (the CNI plug-in 20f and the NSM-CNI component 20b described above).
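The TC policy that opens up a veth interface and a tap interface is, in essence, a pair of mirred redirect filters. The text does not give the exact commands, so the following is only a sketch of the mechanism under the interface names above:

tc qdisc add dev nsm0 handle ffff: ingress
tc filter add dev nsm0 parent ffff: protocol all u32 match u8 0 0 \
    action mirred egress redirect dev tap1_kata
tc qdisc add dev tap1_kata handle ffff: ingress
tc filter add dev tap1_kata parent ffff: protocol all u32 match u8 0 0 \
    action mirred egress redirect dev nsm0

Traffic arriving on nsm0 is redirected to tap1_kata and vice versa, which is what "communicating" the two network interfaces means here; the same pattern applies to eth0 and tap0_kata.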
After allocating the NSM virtual network card (i.e., the nsm network interface) to the container instance, the NSM-CNI component 20b may also persist the name of the container instance, the namespace of the container instance, the network namespace of the container instance, and the sandbox environment of the container instance to the storage medium corresponding to the working node 201, so that the container instance's network flow table can be updated later.
In the embodiments of the present application, a data plane component 20c corresponding to the network service cluster is deployed in the working node 201. For NSM, this data plane component 20c may also be referred to as the NSM data plane component. The data plane component 20c is a logical functional component that can implement the full capability of the gateway in a programming language (e.g., the Go language). The data plane component is responsible for opening up the Overlay link between the network service cluster 30 and the computing cluster 20; it can load-balance the Overlay link, guarantee high availability of the link, and has end-to-end detection capability.
Based on the data plane component 20c described above, the NSM-CNI component 20b may hand the NSM virtual network card (e.g., nsm0 in fig. 3) over to the data plane component 20c. The data plane component 20c may then establish a connection between the NSM virtual network card (nsm0) and the network service. Specifically, network communication may be established between the data plane component 20c and the data plane component 30a on the working node 301 of the network service cluster 30; the data plane component 30a functions the same as the data plane component 20c. For the data plane component 30a, the backend service is the network service, to which access requests from container instances in the working node 201 can be forwarded so that other networks are accessed via the network service. This network configuration process moves the network configuration capability of the container out of the container's lifecycle, so that the network namespace (net namespace) of the container is not aware of container network changes.
In the computing system provided by the embodiments of the present application, the ability of the network services in the network service cluster to open up different networks for containers is exploited: a container network interface component (CNI component) corresponding to the network service cluster is added to the working nodes of the computing cluster, and this CNI component connects the container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to the network service can take place at any stage of the container instance's lifecycle, which decouples container network configuration from the container lifecycle and improves the flexibility of container network configuration. For example, the network properties of a container can be changed dynamically even while the container instance is in the running state.
In this embodiment, if a running container instance is to access some network, only the binding between the container and the network rule needs to be updated in a Custom Resource (CRD) of the computing system. In the K8s system, the CRD is the mechanism K8s provides for developers to define custom resources, in order to improve extensibility. A CRD resource can be dynamically registered in a K8s cluster, and once registration is complete, the custom resource object can be accessed through the kube-apiserver client API. The CRD is only a definition of a resource; a controller is required to monitor the CRD's events and attach custom processing logic. In the embodiments of the present application, in order to monitor the network demand resources of container instances, a management and control component 10a corresponding to the network service cluster is added to the management node 10. The management and control component 10a is a logical functional component that mainly monitors the network demand resources of container instances and applies custom network demand processing logic when it observes an update or creation event of the CRD for a network demand resource. The working logic of the management and control component 10a is illustrated below.
With reference to fig. 2 and 4, when the network of a container instance needs to be configured, an administrator of the computing system may, from the management end 40, register network service rule (NetworkServiceRole) resources in the form of a CRD with the API service (API Server) component 10b. The API service component 10b is the server in the K8s system for adding, deleting, querying, modifying, and watching (listening to) resource objects. The data is stored in the etcd database, and the API service component 10b provides a series of functions such as authentication, caching, and API version adaptation and conversion over the data stored in etcd. Other modules in the management node 10 may query or modify the data in etcd through the API service component 10b. The etcd database is a distributed, highly available, consistent key-value store, mainly used for shared configuration and service discovery.
A network service rule resource may include a container selection (podSelector) field, a routing (routes) field, and so on. The container selection field determines the containers to which the network service rule resource applies; the routing field determines the network information for those containers. In connection with fig. 5, a CRD example of a network service rule resource is as follows:
data structure 2: CRD example of network service rule resources:
In the CRD example of the network service rule resource, apiVersion is the version of the resource object defined by the CRD, and kind indicates that the resource object type defined by the CRD is a network service rule resource. generation is the current version of the CRD of the network service rule resource; this field is updated each time the CRD of the network service rule resource is updated. The spec field is the resource manifest of the resource object defined by the CRD. The container selection field (podSelector) in spec determines the range of Pods to which the NetworkServiceRole applies. In the routing field (routes), each route determines a network direction inside the Pod. The target field is the access destination decided by the network service rule resource; it may be expressed as IP/MASK, e.g., a host route (MASK = 32) or a network address (MASK < 32), or it may be expressed as a built-in network address, which can be configured by the K8s administrator to fit the application scenario. The via field determines how the destination is reached. The value of the type field in via is one of: NetworkService, meaning the destination is accessed through a network service; or Host, meaning the destination is accessed through the host network. As for the value field in via: when type is NetworkService, value is the name of the network service; for other types, the value field may be left empty.
In the CRD example of the network service rule resource, the status field records the application state of the routing rules in the Pods: observedGeneration is the resource version number described by the current status, totalCount is the total number of Pods matched by the podSelector, and readyCount is the number of Pods into which the network service rules have been successfully flushed. The nsm.isolated.com/ready entry in the annotations field is the interaction protocol between the Kubernetes administrator and the NSM management and control component 10a. Each time the routes field in the network service rule resource is modified, the nsm.isolated.com/ready value is changed to false; after the NSM management and control component 10a has finished processing the network service rule resource, it sets nsm.isolated.com/ready back to true. A value of false means that flushing the network service rule resource into the Pods selected by the podSelector has not been completed; true means that the flushing into the Pods selected by the podSelector has been completed. As shown in fig. 5, the values of the fields in the resource manifest (spec) of the network service rule resource may be specified by the user, specifically at the management end 40.
Based on the network service rule resources described above, the management and control component 10a may monitor the network service rule resources. Specifically, the management and control component 10a may call the API service (API Server) component 10b to monitor network service rule resources, and when a new network service rule resource is detected, generate from it a flow table reflecting the network requirements of the container instance, i.e., a network service flow table (Network Service Flows). In the embodiments of the present application, a new network service rule resource covers both the case where a network service rule resource that did not previously exist is added to the API service component 10b and the case where a network service rule resource in the API service component 10b is updated.
Specifically, the management and control component 10a may determine the target container instances to which the network service rule resource applies based on the container selection rule in the resource, that is, based on the value of the podSelector field. The number of target container instances may be one or more, and multiple container instances may be deployed on the same working node or on different working nodes. The container selection rule in the network service rule resource may be expressed through labels (Label) of container groups (e.g., Pods); the labels are used to select the container groups that carry them.
Further, the management and control component 10a may obtain the network resources of the target container instance, i.e., the value of the routes field, from the resource request of the network service rule resource. The NSM management and control component 10a may take the network resources of the target container instance as the network resources in the target container instance's flow table, thereby obtaining the flow table (Flow) of the target container instance.
In the embodiments of the present application, the flow table of a container instance may be registered with the API service component 10b in the form of a CRD. The network resources in the flow table of the target container instance reflect the network requirements of the target container instance. With reference to fig. 5, the CRD implementation of the flow table is described by example:
Data structure 3: CRD example of flow table:
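The listing is again omitted in this text; the following is a reconstruction from the descriptions in this and the following paragraphs, with the API group and exact field spellings as assumptions:

apiVersion: nsm.example.com/v1
kind: NetworkServiceFlow
metadata:
  name: flow-for-xxxxyyyyzzzz
  labels:
    closed.com/nodename: xxxxyyyyzzzz
  annotations:
    nsm.closed.com/role: role-for-user1
spec:
  routes:
  - target: 192.168.1.1/32
    via:
      type: NetworkService
      value: ansm-vpc-xxxxxxxx
  - target: ANYTUNEL
    via:
      type: Host
  - target: 0.0.0.0/0
    via:
      type: Host
status:
  phase: Bound
  message: flow table flushed to Pod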
In the CRD of the flow table above, the status field describes the application state of the routes in the flow table within the Pods. The phase field indicates the application phase of the network service rule resource in the Pod: Bound means the flow table has been flushed into the Pod; Unbound means the flow table has not yet been flushed into the Pod; Error means flushing failed, i.e., the flow table could not enter the Pod. The message field carries a description consistent with the state and phase.
Based on the CRD examples of the network service rule resource and the flow table, when the management and control component 10a generates the flow table of a target container instance, it can determine the target container instances (e.g., target Pods) to which the network service rule resource applies according to the container selection rule described in the podSelector field of the resource, and can take the network resources described by the routes field of the network service rule resource as the value of the routes field in the flow table of the target container instance, i.e., the network resources of the flow table of the target container. Further, the management and control component 10a may also determine the working node where the target container group is located and write the identification of that working node into the label field of the target container group's flow table, as in the flow table CRD example above, "closed.com/nodename: xxxxyyyyzzzz", where nodename is the node name.
Because the management and control node 10 schedules the working nodes on which container instances are deployed, it can determine the correspondence between container instances and working nodes and persist this correspondence to the etcd database. Based on the etcd database, the management and control component 10a can query the stored correspondence with the identification of the target container instance to obtain the working node where the target container instance is located, and can then write the identification of that working node into the label field of the flow table of the target container instance.
Of course, the management and control component 10a may also determine the values of other fields in the flow table. For example, it may record, by name, the network service rule resource on which the generated flow table depends and write it into the corresponding field; in the flow table example above, "nsm.closed.com/role: role-for-user1" indicates that the flow table depends on the network service rule resource "role-for-user1".
After obtaining the flow table of the target container instance, the management and control component 10a may register it with the API service component 10b in the form of a CRD. The NSM-CNI component 20b can monitor the flow tables registered with the API service component 10b (i.e., "monitor the CRDs" of the API service component 10b in fig. 2). Specifically, the NSM-CNI component 20b may obtain the working node identifications contained in the registered flow tables and, based on them, identify the flow tables of the container instances deployed in the target working node where the NSM-CNI component 20b itself is located. The NSM-CNI component 20b may then flush the flow tables of the container instances deployed in the target working node into those container instances, optionally by way of a Remote Procedure Call (RPC).
In some embodiments, the NSM-CNI component 20b monitors the flow tables belonging to the target working node where it is located and aggregates them at container-instance granularity to obtain one flow table per container instance. Aggregating the flow tables of the same container instance prevents a flow table generated later for that container instance from overwriting a previously generated one. After the flow tables of the same container instance have been aggregated, the aggregated flow table may be flushed to the NSM node plugin 20e and from there to the data plane component 20c, thereby flushing the flow table of the container instance into the container instance.
After flushing the flow table of a container instance deployed in the working node where the NSM-CNI component 20b is located into that container instance, the NSM-CNI component 20b may also set the flush status field (e.g., the phase field described above) of the corresponding container instance's flow table to the flushed state (Bound).
In the embodiments of the present application, the management and control component 10a may further obtain the state value of the flush status field contained in the flow table of the target container instance and judge from it whether the flow table has been flushed into the target container instance. Optionally, the management and control component 10a may count, based on the state values of the flush status fields, the number of flow tables in the flushed state, i.e., the number of target container instances into which the flow table has been successfully flushed, and write this number into the readyCount field of the network service rule resource described above. Further, if the number of target container instances into which the flow table has been successfully flushed equals the total number of Pods matched by the podSelector of the network service rule resource, that is, the value of totalCount in the network service rule resource equals the value of readyCount, the flow tables of the target container instances are determined to have been flushed. In that case, the management and control component 10a may set the field in the network service rule resource that characterizes the completion status of the routing rules (e.g., annotations: nsm.isolated.com/ready) to the completed identification, for example setting nsm.isolated.com/ready to true. The administrator of the K8s system can thereby obtain the completion status of the routing rules.
In the embodiments of the present application, after the flow table has been flushed into the container instance, the container instance can determine the routing information of its access destination based on the network resources described by the flow table, and, when the routing information is a network service, access the destination through the target network service described in the flow table. For example, for the flow table CRD example above, the access destination of the container instance can be determined to be 192.168.1.1/32; the routing information is to access the destination through a network service; and the name of the target network service is ansm-vpc-xxxxxxxx. That is, the container instance accesses the destination corresponding to 192.168.1.1/32 through the target network service ansm-vpc-xxxxxxxx.
In the embodiments of the present application, the flow table may describe one or more kinds of network resources, and the destinations of the various network resources may be the same or different. In this embodiment, when determining the destination to access, the longest-match rule for destination IPs may be followed, i.e., the destination IP with the finest granularity is selected as the destination to access. Accordingly, when multiple kinds of network resources exist in the flow table, the NSM-CNI component 20b may obtain the destination IPs of the various network resources from the flow table, determine the destination IP with the longest route prefix as the destination the container instance is to access, and take the target network resource that reaches this destination as the routing information of the container instance. For example, in the flow table CRD example above, the destinations are 192.168.1.1/32, ANYTUNEL, and 0.0.0.0/0, where ANYTUNEL corresponds to a certain fixed network segment and 0.0.0.0/0 matches any IP address. The route lengths of the destination IPs therefore order as 192.168.1.1/32 > ANYTUNEL > 0.0.0.0/0, so 192.168.1.1/32 is determined to be the destination the container instance is to access, and the target network resource corresponding to 192.168.1.1/32, i.e., the network service named ansm-vpc-xxxxxxxx, is determined to be the routing information of the container instance. Further, an access request of the container instance can be sent to the corresponding destination 192.168.1.1/32 via the network service named ansm-vpc-xxxxxxxx.
In the embodiments of the present application, as shown in fig. 6, an admission controller (nsm-webhook) 10c corresponding to the network service cluster 30 may also be provided in the management node 10. The admission controller 10c is a piece of code that intercepts requests arriving at the API service component after the requests have been authenticated and authorized but before the objects are persisted. In the embodiments of the present application, for a network service rule resource, the admission controller may detect whether the destinations of multiple network resources in the resource are identical. Specifically, the admission controller 10c may detect whether the Classless Inter-Domain Routing (CIDR) ranges of multiple network resources in the network service rule resource completely overlap; if they do, the destinations of those network resources in the network service rule resource are determined to be identical. When the destinations of multiple network resources are identical, the destinations in the flow table generated later would also be identical, and the NSM-CNI component 20b could not determine through which network resource to reach the destination. Therefore, when the destinations of multiple network resources are identical, the admission controller 10c may prevent the network service rule resource from being registered in the API service component 10b, which prevents subsequent access errors by container instances.
In the embodiments of the present application, to prevent a container instance from receiving access traffic before its network has finished initializing, the admission controller 10c may, for a container instance that accesses its destination through a network service, configure in the corresponding container resource a ready-state condition requiring that the flow table flush be complete. For example, the admission controller 10c may set readinessGates in the resource manifest (spec) of the container resource (e.g., the Pod resource) of a container instance that uses NSM, to specify an additional list of conditions that kubelet evaluates when determining the ready state of the container instance. The readiness gates depend on the current state of the Pod's status.conditions field; if such a condition is not found in the Pod's status.conditions field, the state of that condition defaults to "False". A Pod's state is Ready only if all containers in the Pod are Ready and the additional readinessGates conditions of the Pod are also Ready. In other words, kubelet requires two preconditions for a Pod to be Ready: (1) all containers in the Pod are Ready (the ContainersReady condition is True); and (2) every conditionType defined in pod.spec.readinessGates has a corresponding status of "True" in pod.status. On this basis, the admission controller 10c may set the conditionType in readinessGates to flow-table flush complete (NetworkServiceFlowExpected), i.e., set the additional ready-state condition of the container to the completion of the flow table flush.
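In manifest form, the additional condition amounts to the following fragment of the Pod resource (a sketch; the conditionType spelling follows the description above and may differ in the actual implementation):

spec:
  readinessGates:
  - conditionType: NetworkServiceFlowExpected

The Pod then only becomes Ready once the corresponding entry in its status is set:

status:
  conditions:
  - type: NetworkServiceFlowExpected
    status: "True"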
Based on the additional ready-state condition above, the management and control component 10a may, once the flow table has been flushed into the container instance, set the flow table flush state corresponding to the additional ready-state condition to flush complete, so that the container instance can receive access traffic. For example, the NetworkServiceFlowExpected condition corresponding to the additional ready-state condition may be set to True, so that once all containers in the Pod reach the ready state, the Pod reaches the ready state (Ready) and can receive access traffic.
In the embodiments of the present application, for a newly scaled-out Pod that does not yet match any network service rule resource (NetworkServiceRole), the NSM management and control component 10a may set the flow table flush state corresponding to the additional ready-state condition to True, i.e., flush complete. If the newly scaled-out Pod does have a matching network service rule resource, the flow table corresponding to the Pod is generated based on that resource, and when the status.phase of the flow table becomes Bound (flushed), the flow table flush state corresponding to the additional ready-state condition is set to True.
In other embodiments, when the Kubernetes administrator updates a network service rule resource (NetworkServiceRole), the management and control component 10a may update the flow tables of the Pods to which the resource applies according to the updated resource. Before the flow table is flushed into a Pod, the management and control component 10a may set the flow table flush state corresponding to the additional ready-state condition to False, so that the Pod will not accept new requests from the Service. When the flow table has been flushed into the Pod and the status.phase of the flow table becomes Bound (flushed), the flush state corresponding to the additional ready-state condition is set back to True.
The above embodiments mainly illustrate how the management and control component 10a processes the routing rules (i.e., network service rule resources) provided by the K8s administrator and binds them to Pods. The management and control component 10a may include two controllers: (1) a network service resource rule controller and (2) a network service controller. The network service resource rule controller mainly processes the routing rules provided by the K8s administrator and binds them to Pods, as shown in the embodiments above. The network service controller mainly monitors network service resources, interacts with the network service cluster, and creates network services in the network service cluster. The process by which the management and control component 10a creates a network service is illustrated below.
As shown in fig. 6, the management and control component 10a may call the API service component 10b to monitor network service resources; when an update to a network service resource is observed, it calls the API service component 10b to obtain the updated network service resource and creates a network service in the network service cluster based on it. Specifically, the management and control component 10a may interact with the coordination component 30a in the management and control node corresponding to the network service cluster 30, creating the network service in the network service cluster by invoking the coordination component 30a, optionally in RPC fashion. A network service resource may be a CRD resource of the K8s cluster and may be registered with the API service component 10b in the form of a CRD. A CRD example of a network service resource is described below in connection with fig. 5.
Data structure 4: CRD of network service resources
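The listing is not reproduced in this text; the following reconstruction follows the field descriptions in the next paragraph, with the API group and identifier values as assumptions:

apiVersion: nsm.example.com/v1
kind: NetworkService
metadata:
  name: ansm-vpc-xxxxxxxx
spec:
  replicas: 2
  userId: "123456"
  userRoleName: role-for-user1
  userSecurityGroupId: sg-xxxxxxxx
  userVpcId: vpc-xxxxxxxx
  userVSwitches:
  - vsw-xxxxxxx1
  - vsw-xxxxxxx2
status:
  networkServiceId: ns-xxxxxxxx
  phase: Available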
In the CRD of the network service resource, spec is the resource manifest of the network service; the spec manifest may be determined by the user at the management end 40. The replicas field is the number of copies of the network service; for high availability, the number of copies is two or more when the CRD of the network service resource is created. userId is the identity of the user using the network service. userRoleName is the name of the network service rule resource (role) of the user of the network service. userSecurityGroupId is the security group ID of the network service user. userVpcId is the ID of the VPC of the network service; in the example above, the network service takes the form of a VPC network, and the ENI created by the network service cluster is under this VPC. userVSwitches is a list of virtual switch identifications (vSwitch IDs) under the network service; when NSM creates an ENI, it chooses from this list a vSwitch with spare IP capacity, and filling in multiple vSwitches improves the creation success rate. The status field carries the state information of the network service: networkServiceId is a globally unique ID identifying the network service, which differs each time the service is re-created, and phase is the state of the network service. When phase is Available, the network service is available.
In addition to the fields shown in the network service resource example above, in some embodiments the CRD of the network service resource may further include a specHash field, which is the hash over all fields of the resource manifest (spec) in the network service resource CRD at the time the management and control component 10a reconciles. Upon receiving a network service ADD or UPDATE event, the management and control component 10a may compare the value of the specHash field with the hash of the resource manifest of the network service in the network service cluster 30; if the two are the same, the management and control component 10a does not need to reconcile again, which reduces resource consumption.
Based on the CRD of the network service, the management and control component 10a may invoke the coordination component 30a by RPC to create the network service in the network service cluster according to the updated network service resource. In particular, the management and control component 10a may schedule a target working node, among the working nodes 201, that is adapted to the updated network service resource, and bind the updated network service to that working node. In response to monitoring the network service binding, the node proxy component in the target working node (e.g., the kubelet component in K8s) can create and launch a container (e.g., a Pod) through the container runtime component, thereby deploying the network service on the target working node.
In addition to the computing system provided by the above embodiments, the embodiments of the present application further provide a container network configuration method. The container network configuration method is described below by way of example from the perspective of the computing system.
Fig. 7 is a flow chart of a container network configuration method according to an embodiment of the present application. As shown in fig. 7, the container network configuration method mainly includes:
701. A container instance deployed on the target working node is determined.
702. The NSM-CNI component in the target working node is utilized to connect the container instance to a network service of the network service cluster, so that the container instance can access other networks through the network service.
In order to realize decoupling of container network configuration and container lifecycle, in the embodiment of the present application, the capability of the network service cluster to open different networks for the container is utilized, and a CNI (abbreviated as NSM-CNI) component corresponding to the network service cluster is set in a working node in the computing cluster. For a description of the NSM-CNI component, reference may be made to the relevant content of the system embodiment described above, and further description is omitted here.
In the embodiment of the present application, any working node in the computing cluster is taken as a target working node. In step 701, a container instance deployed on the target working node may be determined; and when a container network needs to be set, in step 702, the NSM-CNI component in the target working node may be used to connect the container instance (e.g., a Pod) deployed on the target working node to a network service in the network service cluster. Because the network service is an abstraction of the gateway that accesses a network, the container instance may access other networks through the network service. Here, other networks refer to networks outside the internal network of the computing cluster, and the purpose is mainly to enable container instances in the computing cluster to access nodes in those other networks.
Specifically, in conjunction with figs. 3, 4, and 5, during deployment of a container instance by the target working node, the NSM-CNI component may be utilized to allocate a virtual network card for the container instance. This virtual network card is the one through which the container instance communicates with the network service, and may be referred to as the NSM virtual network card. For the specific implementation of allocating the virtual network card to the container instance in the target working node, reference may be made to the relevant content of the system embodiment, which is not repeated here.
In some embodiments, to avoid allocating virtual network cards that communicate with the network service cluster to container instances that do not need the network services it provides, the container instances that do need those network services, i.e., that need an NSM virtual network card, may be marked in the annotations of their container resources (e.g., Pod resources).
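A minimal Go sketch of this opt-in check follows; the annotation key is a hypothetical placeholder, since the description does not name one.

package main

import "fmt"

// nsmAnnotation is a hypothetical annotation key marking Pods that need
// an NSM virtual network card.
const nsmAnnotation = "nsm.example.com/enabled"

// wantsNSMNic reports whether a Pod opted in to the network services
// provided by the network service cluster.
func wantsNSMNic(podAnnotations map[string]string) bool {
    return podAnnotations[nsmAnnotation] == "true"
}

func main() {
    pod := map[string]string{nsmAnnotation: "true"}
    if wantsNSMNic(pod) {
        fmt.Println("allocate an NSM virtual network card for this pod")
    }
}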
In this embodiment of the present application, a data plane component corresponding to the network service cluster is also set in the working node. The NSM virtual network card may be taken over by the NSM-CNI component to the data plane component, and a connection between the NSM virtual network card (nsm0) and the network service may be established by the data plane component. Specifically, network communication may be established between this data plane component and the data plane components on the working nodes of the network service cluster. For the data plane components in the network service cluster, the backend service is the network service, so access requests from container instances in the working nodes of the computing cluster can be forwarded to the network service, through which other networks can be accessed. This network configuration process moves the network configuration capability of the container out of the container's lifecycle, so that the network namespace (net namespace) of the container is not aware of container network changes.
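One plausible piece of this wiring, sketched in Go with the github.com/vishvananda/netlink and github.com/containernetworking/plugins libraries, is moving the nsm0 interface into the Pod's network namespace and bringing it up there. This is a sketch under assumptions (interface name, netns path, and the division of labor with the data plane component), not the patent's implementation.

package main

import (
    "log"

    "github.com/containernetworking/plugins/pkg/ns"
    "github.com/vishvananda/netlink"
)

// moveNicIntoPod moves an already-created NSM virtual network card into the
// Pod's network namespace and brings it up from inside that namespace.
func moveNicIntoPod(nicName, netnsPath string) error {
    link, err := netlink.LinkByName(nicName)
    if err != nil {
        return err
    }
    podNS, err := ns.GetNS(netnsPath)
    if err != nil {
        return err
    }
    defer podNS.Close()
    if err := netlink.LinkSetNsFd(link, int(podNS.Fd())); err != nil {
        return err
    }
    return podNS.Do(func(ns.NetNS) error {
        l, err := netlink.LinkByName(nicName) // re-resolve inside the pod netns
        if err != nil {
            return err
        }
        return netlink.LinkSetUp(l)
    })
}

func main() {
    // Illustrative values; a real CNI plugin receives the netns path in its
    // CNI_NETNS argument.
    if err := moveNicIntoPod("nsm0", "/var/run/netns/pod-sandbox"); err != nil {
        log.Fatal(err)
    }
}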
According to the computing system provided by the embodiments of the application, the capability of the network services in the network service cluster to open up different networks for containers is utilized: a container network interface component (the NSM-CNI component) corresponding to the network service cluster is additionally arranged in the working nodes of the computing cluster, and the NSM-CNI component can connect the container instances deployed in a working node to the network service, so that the container instances can access other networks through the network service. Connecting a container instance to the network service can occur at any stage of the container instance's lifecycle, which decouples container network configuration from the container lifecycle and improves the flexibility of container network configuration. For example, even while a container instance is at run time, the network properties of the container may be changed dynamically.
In this embodiment, when a running container instance needs to access some network, only the binding between the container and the network rule needs to be updated in a custom resource (CRD) of the computing system. In the embodiment of the present application, in order to monitor the network demand resources of container instances, a management and control component corresponding to the network service cluster may be added in the management and control node. The management and control component is a logical functional component, mainly implemented to monitor the network demand resources of container instances and to add customized network demand processing logic when an update or creation event of the CRD for the network demand resources is monitored. On the management side of the computing system, when the network of a container instance needs to be configured, an administrator may register network service rule (NetworkServiceRole) resources in the form of CRDs with the API service (API server) component. A network service rule resource may include a container selection (podSelector) rule field, a routing field, and so on. The container selection rule field is used to determine the containers to which the network service rule resource applies; the routing field is used to determine the network information for those containers. For a CRD example of network service rule resources, reference may be made to the relevant content of the system embodiments described above.
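The shape of such a rule resource can be sketched as Go types. Field names beyond podSelector and route are assumptions (e.g., the totalCount/readyCount status fields mentioned later in this description).

package v1

// Route describes one network reachable through a network service.
type Route struct {
    DestinationCIDR string `json:"destinationCIDR"` // e.g. "10.0.0.0/16"
    NetworkService  string `json:"networkService"`  // the network service used to reach it
}

// NetworkServiceRoleSpec binds containers to network requirements.
type NetworkServiceRoleSpec struct {
    PodSelector map[string]string `json:"podSelector"` // selects the applicable containers
    Routes      []Route           `json:"route"`       // network information for those containers
}

// NetworkServiceRoleStatus tracks rollout of the generated flow tables.
type NetworkServiceRoleStatus struct {
    TotalCount int `json:"totalCount"` // Pods matched by podSelector
    ReadyCount int `json:"readyCount"` // Pods whose flow table reached Bound
}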
Based on the network service rule resources, the management and control component can be utilized to monitor the network service rule resources; and, when a new network service rule resource is detected, generate a flow table reflecting the network requirements of the container instance, i.e., a network service flow table (Network Service Flows), according to the new network service rule resource. In the embodiment of the present application, a new network service rule resource includes a network service rule resource newly added to the API service component that did not exist before, and may also include an existing network service rule resource in the API service component that has been updated.
Specifically, the target container instances to which the network service rule resource is adapted may be determined based on the container selection rule in the network service rule resource, that is, based on the value of the podSelector field. The number of target container instances may be one or more, where plural means two or more. Multiple container instances may be deployed on the same working node or on different working nodes.
Further, the management and control component may be utilized to obtain the value of the network resource, i.e., the route field, of the target container instance from the resource list of the network service rule resource. The management and control component may set this network resource as the network resource in the flow table of the target container instance, thereby obtaining the flow table (Flow) of the target container instance.
In embodiments of the present application, the flow table of the container instance may be registered with the API service component as a CRD. The network resources in the flow table of the target container instance may reflect the network requirements of the target container instance. For the CRD implementation of the flow table, reference may be made to the above-mentioned data structure 3, which is not repeated here.
Based on the CRD examples of the network service rule resource and the flow table: when generating the flow table of a target container instance, the target container instance adapted to the network service rule resource can be determined according to the container selection rule described by the podSelector field of the network service rule resource; and the network resource described by the route field of the network service rule resource can be set as the value of the route field in the flow table of the target container instance, that is, as the network resource of that flow table. Further, the working node where the target container instance is located can also be determined, and the identification of that working node written into the label field of the flow table of the target container instance.
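Putting the last few paragraphs together, flow-table generation reduces to: match Pods by podSelector, copy the rule's routes into one flow table per matched Pod, and label each flow table with the Pod's node. The Go sketch below is illustrative; the matching logic and data layout are assumptions.

package main

import "fmt"

type Pod struct {
    Name, Node string
    Labels     map[string]string
}

// Flow is a per-Pod flow table: the rule's routes plus the node label that
// lets the NSM-CNI component on that node find it.
type Flow struct {
    PodName  string
    NodeName string   // written into the flow table's label field
    Routes   []string // route entries copied from the rule resource
}

func matches(selector, labels map[string]string) bool {
    for k, v := range selector {
        if labels[k] != v {
            return false
        }
    }
    return true
}

func generateFlows(selector map[string]string, routes []string, pods []Pod) []Flow {
    var flows []Flow
    for _, p := range pods {
        if matches(selector, p.Labels) {
            flows = append(flows, Flow{PodName: p.Name, NodeName: p.Node, Routes: routes})
        }
    }
    return flows
}

func main() {
    pods := []Pod{
        {Name: "web-0", Node: "node-a", Labels: map[string]string{"app": "web"}},
        {Name: "db-0", Node: "node-b", Labels: map[string]string{"app": "db"}},
    }
    for _, f := range generateFlows(map[string]string{"app": "web"},
        []string{"10.0.0.0/16 via ns-1"}, pods) {
        fmt.Printf("flow for %s on %s: %v\n", f.PodName, f.NodeName, f.Routes)
    }
}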
Of course, the management and control component may also be utilized to determine the values of other fields in the flow table. For example, the network service rule resource from which the flow table was generated may be recorded by writing the name of the network service rule resource into the corresponding field. After the flow table of the target container instance is obtained, it may be registered with the API service component in CRD form using the management and control component. Accordingly, the NSM-CNI component can monitor the flow tables registered with the API service component, and refresh an updated flow table into the container instance of the target working node when the flow table corresponding to that container instance is updated.
Specifically, the NSM-CNI component can be utilized to obtain the working node identifications contained in the flow tables registered with the API service component, and, according to those identifications, identify the flow tables of the container instances deployed on the target working node. Further, when the flow table of a container instance deployed on the target working node is updated, the NSM-CNI component may be utilized to refresh the updated flow table into that container instance. Optionally, the updated flow table may be refreshed into the container instance by remote procedure call (RPC).
In some embodiments, the NSM-CNI component monitors the flow tables belonging to the target working node where it is located, and aggregates them at the granularity of container instances to obtain the flow table of each container instance. Aggregating the flow tables of the same container instance prevents a later-generated flow table from overwriting a previously generated one. After the flow tables of the same container instance are aggregated, the aggregated flow table may be refreshed into the container instance.
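A Go sketch of this per-instance aggregation follows; the merge policy (appending routes) is an illustrative assumption.

package main

import "fmt"

type Flow struct {
    PodName string
    Routes  []string
}

// aggregateByPod merges all flow tables targeting the same container
// instance, so a later flow table cannot overwrite an earlier one.
func aggregateByPod(flows []Flow) map[string]Flow {
    merged := make(map[string]Flow)
    for _, f := range flows {
        agg := merged[f.PodName]
        agg.PodName = f.PodName
        agg.Routes = append(agg.Routes, f.Routes...)
        merged[f.PodName] = agg
    }
    return merged
}

func main() {
    flows := []Flow{
        {PodName: "web-0", Routes: []string{"10.0.0.0/16 via ns-1"}},
        {PodName: "web-0", Routes: []string{"192.168.0.0/24 via ns-2"}},
    }
    fmt.Println(aggregateByPod(flows)["web-0"].Routes) // both routes survive
}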
After flushing the flow table of a container instance deployed on the working node where the NSM-CNI component is located into that container instance, the NSM-CNI component may also be utilized to set the refresh status field (e.g., the phase field described above) of the corresponding flow table to the refreshed state (Bound).
In the embodiment of the application, the NSM management and control component may also be used to determine, based on the state value of the refresh status field of the updated flow table, whether the updated flow table has finished being refreshed into the target container instances. The target container instances are the container instances determined by the container selection rule of the network service rule resource that generated the updated flow table. Optionally, the NSM management and control component may be utilized to obtain the state values of the refresh status fields contained in the flow tables of the target container instances corresponding to the updated flow table, and determine from them whether those flow tables have been refreshed into the target container instances. Specifically, the number of flow tables whose state value is the refreshed state, i.e., the number of target container instances that have successfully refreshed their flow table, may be counted. If that number equals the total number of Pods matched by the podSelector of the network service rule resource, that is, if the value of totalCount in the network service rule resource equals the value of readyCount, it is determined that the flow tables have been refreshed into the target container instances. Further, in that case, a field in the network service rule resource that characterizes the completion state of the routing rule may be set to an identification characterizing completion, so that an administrator of the K8s system can obtain the routing rule completion status.
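The completion check reduces to comparing counts, as in this minimal Go sketch (the types are illustrative):

package main

import "fmt"

// FlowStatus mirrors the refresh status field of a flow table;
// Phase becomes "Bound" once the flow table is flushed into its Pod.
type FlowStatus struct{ Phase string }

// routingRuleDone reports readyCount == totalCount: every Pod matched by
// the rule's podSelector has successfully refreshed its flow table.
func routingRuleDone(flows []FlowStatus, totalCount int) bool {
    ready := 0
    for _, f := range flows {
        if f.Phase == "Bound" {
            ready++
        }
    }
    return ready == totalCount
}

func main() {
    flows := []FlowStatus{{Phase: "Bound"}, {Phase: "Bound"}}
    fmt.Println(routingRuleDone(flows, 2)) // true: mark the routing rule complete
}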
In the embodiment of the application, after a flow table is refreshed into a container instance, the container instance can determine the routing information for an access destination based on the network resources described by the flow table, and, when the routing information indicates access through a network service, access the destination through the target network service described in the flow table. The flow table may describe one or more kinds of network resources, where plural means two or more kinds; the destinations of the various network resources may be the same or different. When determining the destination to be accessed, the longest-match rule for the destination IP may be followed, i.e., the destination IP with the finest granularity is selected as the destination to be accessed. Correspondingly, when the flow table contains multiple network resources, the NSM-CNI component can be utilized to obtain the destination IPs of the multiple network resources from the flow table; determine, according to the route lengths of those destination IPs, the destination IP with the longest route length as the destination to be accessed by the container instance; and determine the routing information for the container instance to access that destination from the target network resource that accesses the destination. In the embodiment of the application, an admission controller (NSM-webhook) corresponding to the network service cluster, i.e., an NSM admission controller, can also be arranged in the management and control node. For a network service rule resource, the NSM admission controller may be utilized to detect whether the destinations of multiple network resources in the resource are identical; if they are, the NSM admission controller may be utilized to prevent the network service rule resource from being registered in the API service component, thereby preventing subsequent access errors by container instances.
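The longest-match selection can be illustrated with Go's net/netip package; the route representation is an assumption.

package main

import (
    "fmt"
    "net/netip"
)

type Route struct {
    Prefix         netip.Prefix
    NetworkService string
}

// pickRoute returns the route whose prefix contains dst with the longest
// (finest-granularity) prefix length, per the longest-match rule.
func pickRoute(routes []Route, dst netip.Addr) (Route, bool) {
    var best Route
    found := false
    for _, r := range routes {
        if r.Prefix.Contains(dst) && (!found || r.Prefix.Bits() > best.Prefix.Bits()) {
            best, found = r, true
        }
    }
    return best, found
}

func main() {
    routes := []Route{
        {netip.MustParsePrefix("10.0.0.0/8"), "ns-coarse"},
        {netip.MustParsePrefix("10.1.0.0/16"), "ns-fine"},
    }
    r, _ := pickRoute(routes, netip.MustParseAddr("10.1.2.3"))
    fmt.Println(r.NetworkService) // ns-fine: the longest prefix wins
}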
In an embodiment of the present application, in order to prevent a container instance from receiving access traffic before its network completes initialization, the NSM admission controller may be utilized to configure, for a container instance using NSM, an additional ready-state condition in the container resource corresponding to that container instance, the condition being that flow table refreshing is complete. Based on this additional ready-state condition, the NSM management and control component may set the flow table refresh state corresponding to the condition to flow-table-refresh-complete once the flow table has been refreshed into the container instance, so that the container instance can receive access traffic. In other embodiments, the Kubernetes administrator updates the network service rule resource (NetworkServiceRole), and the management and control component updates the flow tables of the Pods adapted to that rule resource accordingly. Before the flow table is refreshed into a Pod, the management and control component may set the flow table refresh state corresponding to the additional ready-state condition to False, so that the Pod will not accept new requests from a Service. When the flow table has been refreshed into the Pod and the state of the flow table becomes Bound, the management and control component may set the flow table refresh state corresponding to the additional ready-state condition to True.
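This mechanism parallels Kubernetes Pod readiness gates. The Go sketch below models the gating logic only; the condition name is hypothetical, and real code would set the Pod condition through the API server.

package main

import "fmt"

// condFlowTableReady is a hypothetical condition type injected by the NSM
// admission controller as an additional ready-state condition.
const condFlowTableReady = "nsm.example.com/flow-table-refreshed"

type Pod struct {
    ReadinessGates []string        // injected by the admission controller
    Conditions     map[string]bool // flipped by the management and control component
}

// podReady is true only when every additional ready-state condition holds,
// so the Pod receives access traffic only after its flow table is refreshed.
func podReady(p Pod) bool {
    for _, gate := range p.ReadinessGates {
        if !p.Conditions[gate] {
            return false
        }
    }
    return true
}

func main() {
    p := Pod{ReadinessGates: []string{condFlowTableReady}, Conditions: map[string]bool{}}
    fmt.Println(podReady(p)) // false: flow table not yet refreshed (state False)
    p.Conditions[condFlowTableReady] = true // flow table phase became Bound
    fmt.Println(podReady(p)) // true: the Pod may receive access traffic
}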
In embodiments of the present application, the management and control component may also be utilized to create network services in the network service cluster. Specifically, network service resources may be monitored using the management and control component; when a network service resource update is monitored, the management and control component calls the API service component to acquire the updated network service resource; and a network service is created in the network service cluster according to the updated network service resource.
It should be noted that the execution subjects of the steps of the method provided in the above embodiments may be the same device, or the method may be executed by different devices. For example, the execution subject of steps 701 and 702 may be device A; alternatively, the execution subject of step 701 may be device A and the execution subject of step 702 may be device B; and so on.
In addition, some of the above embodiments and the flows described in the drawings include a plurality of operations appearing in a specific order. It should be clearly understood that these operations may be performed out of the order in which they appear herein, or performed in parallel; the sequence numbers of the operations, such as 701 and 702, are merely used to distinguish the various operations and do not themselves represent any order of execution. Moreover, the flows may include more or fewer operations, and those operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the container network configuration method described above.
It should be noted that the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent a sequence, nor do they require that the "first" and the "second" be of different types.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, including both permanent and non-permanent, removable and non-removable media, may be implemented by any method or technology for information storage. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (14)

1. A computing system, comprising: a management and control node, a computing cluster, and a network service cluster; the computing cluster includes a plurality of working nodes; the network service cluster is used for deploying network services; the network service is a gateway abstraction for accessing a network; the network service cluster provides an infrastructure layer that handles communications between different networks and provides, for container instances connected to the network service cluster, the ability to access other networks;
the working node includes: a container network interface CNI component corresponding to the network service cluster;
the CNI component is configured to access the container instance deployed in the working node to the network service;
the container instance accesses other networks through the network service.
2. The system of claim 1, wherein the working node further comprises: a data plane component corresponding to the network service cluster;
the CNI component is further configured to, in the process of deploying the container instance by the working node, allocate a virtual network card to the container instance and take over the virtual network card to the data plane component;
the data plane component is configured to establish a connection between the virtual network card and the network service, so as to connect the container instance to the network service.
3. The system of claim 1, wherein the management node comprises: the management and control component corresponds to the network service cluster;
the management and control component is used for monitoring network service rule resources; the network service rule resource is a user-defined resource registered in the management and control node;
generating a flow table reflecting the network requirements of the container instance according to the new network service rule resources under the condition that the new network service rule resources are monitored;
the CNI component is configured to flush the flow table to the container instance;
the container instance determines, based on the flow table, routing information for accessing a destination; when the routing information indicates accessing the destination through a network service, the destination is accessed through the target network service in the flow table.
4. The system of claim 3, wherein the management node further comprises: an admission controller for the network service cluster; the admission controller is further configured to:
for a second container instance using the network service mesh, configure an additional ready-state condition of the second container instance in the container resource corresponding to the second container instance, the condition being flow table refresh completion;
the management and control component is further configured to: set the flow table refresh state corresponding to the additional ready-state condition to flow table refresh completion when the flow table has been refreshed into the second container instance, so that the second container instance receives access traffic.
5. The system of claim 3 or 4, wherein the management and control component is further configured to:
when it is monitored that a network service resource in the API service component is updated, call the API service component to acquire the updated network service resource;
and create a network service in the network service cluster according to the updated network service resource.
6. A method of configuring a container network, comprising:
determining a container instance deployed on a target working node;
connecting the container instance to a network service of a network service cluster by utilizing a CNI component corresponding to the network service cluster in the target working node, so that the container instance accesses other networks through the network service;
wherein the network service is a gateway abstraction for accessing a network; the network service cluster provides an infrastructure layer that handles communications between different networks and provides, for container instances connected to the network service cluster, the ability to access other networks.
7. The method of claim 6, wherein connecting the container instance to a network service of the network service cluster with the CNI component in the target working node corresponding to the network service cluster comprises:
in the deployment process of the container instance, allocating a virtual network card for the container instance by utilizing the CNI component;
taking over the virtual network card to a data plane component corresponding to the network service cluster by utilizing the CNI component;
establishing a connection between the virtual network card and the network service by utilizing the data plane component, so as to connect the container instance to the network service.
8. The method as recited in claim 6, further comprising:
monitoring a flow table registered in an API service component by utilizing the CNI component, wherein the flow table reflects the network requirements of the container instance;
and under the condition that the flow table corresponding to the container instance is updated, refreshing the updated flow table to the container instance.
9. The method as recited in claim 8, further comprising:
monitoring network service rule resources registered in an API service component by using a management and control component corresponding to the network service cluster in the management and control node; the network service rule resource is a user-defined resource;
and under the condition that the existence of the new network service rule resource is detected, generating the updated flow table by utilizing the management and control component according to the new network service rule resource.
10. The method as recited in claim 9, further comprising:
after refreshing the updated flow table to the container instance, setting a refresh status field of the updated flow table to a refreshed status with the CNI component;
determining whether the updated flow table has finished being refreshed into the target container instances based on the state value of the refresh status field of the updated flow table; the target container instances are the container instances determined by the container selection rule of the network service rule resource that generated the updated flow table;
and, when the updated flow table has been refreshed into the target container instances, setting, by utilizing the management and control component, the routing rule completion state field of the new network service rule resource to an identification characterizing completion, so that the management end of the new network service rule resource can acquire the routing rule completion state.
11. The method as recited in claim 9, further comprising:
detecting whether the destinations of network resources in the network service rule resources are identical or not by using an admission controller corresponding to the network service cluster in the management and control node;
and if the destinations of the network resources in the network service rule resources are identical, stopping registering the network service rule resources in the API service component by utilizing the admission controller.
12. The method of claim 11, wherein the container instance uses the network service mesh, the method further comprising:
configuring, by utilizing the admission controller, an additional ready-state condition of the container instance in the container resource corresponding to the container instance, the condition being that flow table refreshing is completed;
and, when the updated flow table has been refreshed into the container instance, setting, by utilizing the management and control component, the flow table refresh state corresponding to the additional ready-state condition to flow table refresh completion, so that the container instance receives access traffic.
13. The method according to any one of claims 6-12, further comprising:
when it is monitored that a network service resource in the API service component is updated, calling the API service component by utilizing a management and control component in a management and control node to acquire the updated network service resource;
And creating network services in the network service cluster according to the updated network service resources.
14. A computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the method of any of claims 6-13.