
Method, device and storage medium for scheduling dynamic link library of container group

Info

Publication number
CN116028163A
Authority
CN
China
Prior art keywords
container group
dynamic link library
container
created
Prior art date
Legal status
Pending
Application number
CN202310095946.XA
Other languages
Chinese (zh)
Inventor
黄大成
刘冰
蔡安宁
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202310095946.XA
Publication of CN116028163A

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a method, a device and a storage medium for scheduling a dynamic link library of a container group. The method comprises the following steps: receiving a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and a library file address field of a dynamic link library is filled in the annotation information; modifying the configuration file of the container group to be created according to the annotation information; and scheduling the container group to be created to a corresponding target node, wherein the container group to be created can call the dynamic link library based on the configuration information. According to the embodiments of the application, on the one hand, the dynamic link library can be updated according to the needs of a user when the container group Pod is deployed, which avoids the problem that the container group Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.

Description

Method, device and storage medium for scheduling dynamic link library of container group
Technical Field
The present invention relates to the field of dynamic link library scheduling technologies, and in particular, to a method, an apparatus, and a storage medium for scheduling a dynamic link library of a container group.
Background
As the concept of cloud computing has matured, cloud-based IT architecture has become a de facto industry standard. The core of a cloud computing architecture for the broader IT infrastructure is to sink infrastructure capabilities into the cloud computing platform, while application software architectures evolve toward focusing on the applications themselves. In the current containerization context, K8S (Kubernetes) is a common cloud computing platform. By sinking the IT infrastructure into K8S, efficient IT infrastructure services are uniformly provided for the applications on the platform, which improves the utilization efficiency of basic resources and reduces management cost. A K8S cluster can well meet the requirements for the general abstract resources needed to run an application program. However, during deployment of an application program container group (Pod), particularly in practical edge computing scenarios, the Pod needs to generate multiple image packages for different running environments, so resource matching during cluster resource scheduling is poor. Therefore, there is a need for a method for scheduling a dynamic link library of a container group to solve the above-mentioned problems.
Disclosure of Invention
The present application has been made in view of at least one of the above-mentioned problems occurring in the prior art. According to an aspect of the present application, there is provided a method for scheduling a dynamic link library of a container group, the method including:
receiving a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and a library file address field of a dynamic link library is filled in the annotation information;
modifying the configuration file of the container group to be created according to the annotation information;
and scheduling the container group to be created to a corresponding target node, wherein the container group to be created can call the dynamic link library based on the configuration information.
In some embodiments, the method further comprises:
detecting the target node which can meet the resource requirement in the cluster nodes;
and writing the address field of the target node into the configuration file.
In some embodiments, modifying the configuration file of the set of containers to be created according to the annotation information comprises:
and injecting environment variables into the configuration file, wherein the environment variables are used for loading the dynamic link library.
In some embodiments, the method further comprises:
detecting storage space in the target node for storing the dynamic link library;
and mounting a data storage volume on the target node in the case that the target node does not have enough storage space.
In some embodiments, the method further comprises:
and adopting environment variables to inject the library file address of the dynamic link library into the target node.
In some embodiments, scheduling the set of containers to be created to the respective target node includes:
receiving the to-be-created container group under the condition that the to-be-created container group is monitored to be created;
determining the target node according to the configuration file of the container group to be created;
binding the to-be-created container group with the target node.
In some embodiments, the container group to be created updates an original dynamic link library in the image to the dynamic link library at runtime.
In some embodiments, the state information of the group of containers to be created is updated after the group of containers to be created is run.
Another aspect of the embodiments of the present application provides a device for scheduling a dynamic link library of a container group, where the device includes:
the receiving module is used for receiving a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and a library file address field of a dynamic link library is filled in the annotation information;
the modification module is used for modifying the configuration file of the container group to be created according to the annotation information;
and the scheduling module is used for scheduling the container group to be created to a corresponding target node, and the container group to be created can call the dynamic link library based on the configuration information.
In yet another aspect, a storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, causes the processor to perform the method for scheduling a dynamic link library of a container group as described above.
As can be seen from the above technical solutions, the embodiments of the present application provide a method for scheduling a dynamic link library of a container group. The method modifies the configuration file of a container group to be created based on annotation information filled with a library file address field of the dynamic link library, and then schedules the container group to be created to a corresponding target node, so that the container group to be created invokes the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the container group Pod is deployed, which avoids the problem that the container group Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 shows a schematic flow chart of a method of dynamic linked library scheduling of a container group according to an embodiment of the present application;
FIG. 2 illustrates a timing diagram of a method of dynamically linked library scheduling of container groups according to an embodiment of the present application;
FIG. 3 shows a schematic flow chart of step S102 of the method for dynamic link library scheduling of a container group according to an embodiment of the present application;
FIG. 4 shows a schematic flow chart of detecting a target node in a cluster node that is capable of meeting a resource demand in accordance with an embodiment of the present application;
Fig. 5 shows a schematic flow chart of step S103 of the method for dynamic link library scheduling of a container group according to an embodiment of the present application;
FIG. 6 shows a schematic flow chart of detecting storage space in a target node for storing a dynamic link library according to an embodiment of the present application;
FIG. 7 shows a schematic flow chart of injecting library file addresses of a dynamically linked library into a target node using environment variables in accordance with an embodiment of the present application;
Fig. 8 shows a schematic block diagram of a dynamically linked library scheduling apparatus for a container group according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions of the embodiments of the present application, the following descriptions will clearly and completely describe the technical solutions of the embodiments of the present application with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the edge computing scenario, containers and k8s (Kubernetes) clusters are one of the mainstream schemes in the industry. A container contains a complete runtime environment: apart from the application itself, all dependencies, class libraries, other binaries, configuration files and the like required by the application are packaged together into a single package called a container image. By containerizing the application itself and its dependencies, differences between operating system releases and other underlying environments are abstracted away. This allows an image to be migrated more flexibly from one environment to another, since the container encapsulates all relevant details necessary to run the application, such as application dependencies and the operating system. For example, the same image may run in Windows or Linux, and in development, testing or production environments. Most container implementations are based on open standards and can run on all mainstream operating systems such as Linux distributions and Microsoft Windows. The container image provides version control, so that different versions of a container can be tracked and differences between versions monitored. Container isolation also brings security: multiple containers can run on one host machine, while the processes in different containers are isolated from one another and cannot perceive each other, so that the upgrade or failure of one container does not affect the other containers. A container is lighter weight than a virtual machine, which serves a similar purpose: isolating an application program and its dependencies to build a set of application units that can run independently of a specific environment. Containers provide agility in application creation and deployment and ensure consistency of the application running environment; since the complete running environment of the application is packaged inside the container, it does not depend on the running environment of the host.
Although the k8s cluster can meet the requirement for the general abstract resources needed to run an application (an application container group (Pod) can be scheduled to a suitable node by setting the configuration file with which the Pod is deployed), during deployment of the application container group Pod the shared library inside the container image cannot be updated for the actual scenario at the edge, and there is no capability to schedule the Pod to the best-matching host node according to the shared lib library. Taking the openssl library as an example: when the TLS version of the application program in the container does not match the requirement of the server program running at the edge, negotiation and connection establishment cannot be completed; in some practical operation scenarios, TLS connections are required to be encrypted with the national cryptographic (SM) standard, but the current mainstream openssl library does not support it; the central processing units (Central Processing Unit, CPU) of some Node nodes support the AVX-512 VAES instructions, which can greatly improve encryption and decryption performance, and this CPU potential can only be exploited if the openssl library in the container is optimized for the AVX-512 VAES instructions; some Node nodes have hardware acceleration cards, and the acceleration performance can only be exploited if the container contains a shared library with the special operation instructions of the acceleration card.
At present, a k8s cluster can neither update the dynamic link library of a container on demand, nor schedule Node nodes reasonably according to the configuration requirements so that the application program container runs on the most suitable Node.
Based on at least one of the technical problems described above, the present application provides a method for scheduling a dynamic link library of a container group, where the method includes: receiving a request for creating a container group to be created, wherein the request comprises annotation information (annotation) of the container group to be created, and a library file address field of a dynamic link library is filled in the annotation information or tag information; modifying the configuration file of the container group to be created according to the annotation information; and scheduling the container group to be created to a corresponding target node, wherein the container group to be created can call the dynamic link library based on the configuration information. According to the embodiments of the application, the configuration file of the container group to be created is modified based on the annotation information filled with the library file address field of the dynamic link library, and the container group to be created is then scheduled to the corresponding target node, so that the container group to be created calls the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the container group Pod is deployed, which avoids the problem that the container group Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
FIG. 1 shows a schematic flow chart of a method for scheduling a dynamic link library of a container group according to an embodiment of the present application; as shown in fig. 1, a method 100 for scheduling a dynamic link library of a container group according to an embodiment of the present application may include the following steps S101, S102, and S103:
in step S101, a request to create a group of containers to be created is received.
The request comprises annotation information of the container group to be created, and a library file address field of a dynamic link library is filled in the annotation information. Annotation information is analogous to annotations in Java, where classes, constructors, methods, member variables, parameters and the like can be annotated and labeled; for this reason, annotation information is sometimes also referred to as tag information. It is therefore within the scope of the present application whether the library file address field of the dynamic link library is filled in the annotation information or in the tag information.
Annotation information is used in embodiments of the present application to modify the configuration file. The annotation information has the function of tracking code dependency and realizing the replacement of the configuration file. Thus, configuration based on annotations has the effect of reducing configuration files. Annotation information is often used under the K8s framework to reduce the number of configuration files.
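By way of illustration only, a minimal sketch of a Pod creation request carrying the library file address field in its annotations is shown below. The annotation key and value follow the example given later in this description; the Pod name and image are hypothetical and are not part of the original disclosure.
apiVersion: v1
kind: Pod
metadata:
  name: edge-app                  # hypothetical Pod name
  annotations:
    # library file address field of the dynamic link library (example key/value from this description)
    io.name.transmission-region/subsystem: "lib=libssl.so:libcrypto.so,version=optimal,node=optimal"
spec:
  containers:
  - name: edge-app
    image: registry.example.com/edge-app:1.0   # hypothetical image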
A K8s cluster is an open-source container orchestrator technology originally developed by Google for the automated deployment, scaling, and management of containerized applications. K8s makes it simple to deploy and manage micro-service architecture applications. It achieves this by forming an abstraction layer over the cluster, allowing the development team to deploy applications smoothly, while K8s mainly handles the following tasks: controlling and managing the use of resources by the application; automatically load-balancing requests between multiple instances of an application; monitoring resource usage and resource limits so that the application can automatically be prevented from consuming too many resources and can be restored again; migrating an application instance from one host to another if the host's resources are exhausted or the host crashes; and automatically using the newly added resources when a new host joins the cluster. The K8s cluster has the following advantages:
(1) Portability and flexibility: the K8s cluster has strong compatibility because it can operate under a variety of infrastructure and environment settings. Most other orchestrators do not have this flexibility; they are locked into a particular runtime or infrastructure.
(2) Open source: the CNCF is responsible for managing K8s, a completely open-source, community-driven project. It has many important enterprise sponsors, but no single company "controls" the platform or its direction of development.
(3) Multi-cloud compatibility: the K8s cluster may not only host a workload on a single cloud, but may also distribute workloads across multiple clouds, and can easily extend its environment from one cloud to another. While other orchestrators can also support multi-cloud architectures, K8s may surpass them entirely in terms of multi-cloud compatibility.
(4) Market leader: most companies are using K8s clusters. According to a survey by Red Hat, K8s is widely used by customers (88%), especially in production environments (74%).
In the embodiment of the present application, when a user needs to send a request for creating a container group to be created, this can be done through the component Kubectl in the cluster. Kubectl is the command line tool for the K8s cluster. It is used to deploy applications, monitor and control cluster resources, and view logs. From the user's perspective, Kubectl corresponds to the control panel of the K8s cluster: it enables the user to perform all K8s cluster operations. From a technical point of view, Kubectl is a client of the K8s API.
In addition, the container group Pod is the smallest scheduling unit in the K8s cluster, one container group Pod encapsulates one container, or may encapsulate multiple containers, where the containers in the container group Pod share storage, network, and the like. That is, the entire container group Pod can be regarded as a virtual machine, and then each container corresponds to a process running on the virtual machine. All containers in the same group Pod are uniformly arranged and scheduled.
Each container group Pod in the cluster is assigned a unique IP address, and each container in the Pod shares a network namespace, including the IP address and network ports. The containers in the same container group Pod may communicate with each other through localhost. When a container in the container group Pod needs to communicate with an entity outside the container group Pod, communication must take place through shared network resources such as ports or the API server. All containers in the container group Pod are able to access the shared storage volumes, allowing them to share data.
Notably, the dynamic link library file formats of different operating systems differ slightly: in a Linux system, the file is called a shared object and has the suffix .so, while in a Windows system the suffix is .dll. For a dynamic link library, the libraries that an executable file depends on must be specified when the executable file is created; when the executable file runs, if the operating system has not already loaded those libraries, they are loaded into memory along with the executable file so that the program can run. If multiple executable files depend on the same dynamic link library, only one copy of the dynamic link library's code exists in memory and is shared by the processes of all the relevant executable files, which is why it is also called a shared library.
In step S102, the configuration file of the container group to be created is modified according to the annotation information.
In the K8S cluster, after the authentication of a request is finished, if the request is a write operation, an admission control processing step is also performed. The K8S cluster supports a dynamic admission control mechanism. For example, the ImagePolicyWebhook admission controller can restrict which images can be run in containers. As another example, a generic admission webhook mechanism may be used to make arbitrary admission control decisions: it may reject creation or update requests, and some mutating admission webhooks may modify the incoming request data before it is further processed by Kubernetes.
In the embodiment of the present application, when an extended custom admission controller (Mutating Admission Webhook) receives a creation request for creating a container group Pod for an application container (the admission controllers described in the embodiments herein are all extended custom admission controllers, not the admission controllers built into k8s), the admission controller checks the annotation information of the container group to be created in the resource object request of the container group Pod to be created, to determine whether it matches the requirements of the dynamic link library described in the annotation information. When the requirements of the dynamic link library recorded in the annotation information are not matched, an environment variable is injected into the configuration file, and the environment variable causes the dynamic link library file on the shared storage volume to replace the library file in the image package (the library file in the image package is then not loaded into memory when the process starts), so the final effect is that one more environment variable is added when the process runs. The role of the environment variable here is to have the process load the dynamic link library at runtime.
In step S103, the set of containers to be created is scheduled to the corresponding target node, and the set of containers to be created can call the dynamic link library based on the configuration information.
In the K8S cluster, nodes are the physical or virtual machines that run the container group Pod. The control plane manages each node in the cluster, and each node contains the services needed to run the container group Pod.
Accordingly, in this embodiment, scheduling the container group to be created to the corresponding target node means that the scheduler is responsible for assigning tasks to the respective nodes. It monitors the resource capacity and ensures that the performance of the working node remains within acceptable limits.
In one embodiment of the present application, as shown in fig. 3, step S102 includes:
and step A1, injecting environment variables into the configuration file, wherein the environment variables are used for loading the dynamic link library.
In the embodiment of the application, when the admission controller (Mutating Admission Webhook) receives a creation request for creating a container group Pod for an application container, the admission controller checks the annotation information of the container group to be created in the resource object request of the container group Pod to be created, to determine whether the annotation information matches the requirements of the dynamic link library. When the annotation information does not match the requirements of the dynamic link library, a new environment variable is injected to cause the dynamic link library file on the shared storage volume to replace the library file in the image package (the library file in the image package is not loaded into memory during process startup); the end effect is that one more environment variable is added to the running process, such as LD_PRELOAD=/tmp/libssl.so.
According to the embodiment of the application, the configuration file of the container group to be created is modified based on the annotation information filled with the library file address field of the dynamic link library, and the container group to be created is then scheduled to the corresponding target node, so that the container group to be created calls the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the container group Pod is deployed, which avoids the problem that the Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
In one embodiment of the present application, before performing step S103, as shown in fig. 4, the method further includes step B1 and step B2:
in step B1, the target node of the cluster nodes capable of meeting the resource demand is detected.
In the embodiment of the present application, on the premise of meeting the resource requirement, an optimal node among the cluster nodes may be selected as the target node, for example, the node for which the number of hops traversed when Map-stage intermediate data is transmitted over the network links in the cluster is smallest.
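As an illustration of where such a resource requirement can be expressed, a minimal sketch of the resource requests and limits in a Pod's container specification is shown below; the container name, image and the specific values are hypothetical.
spec:
  containers:
  - name: edge-app                      # hypothetical container
    image: registry.example.com/edge-app:1.0
    resources:
      requests:                         # minimum resources the scheduler must find on a node
        cpu: "500m"
        memory: 256Mi
      limits:                           # upper bound enforced at runtime
        cpu: "1"
        memory: 512Mi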
In step B2, the address field of the target node is written into the configuration file.
In the K8S cluster, each container group Pod is assigned a unique Ip address, and each container in the container group Pod shares a network namespace including an Ip address and a network port. The containers in the same container group Pod may communicate with each other through localhost. When a container in the container group Pod needs to communicate with an entity outside the container group Pod, communication is required through a network resource shared by ports or the like. All containers in the group of containers Pod have access to the shared storage volume, allowing them to share data.
In this embodiment, after the target node is determined, the configuration file of the container group Pod to be created may be modified using object data of the JSON Patch type, so that the container group Pod to be created can be bound to the target node according to the configuration file of the Pod to be created.
Here, JSON Patch is a JSON-format document containing a series of patch operations; the patch operations supported by JSON Patch are "add", "remove", "replace", "move", "copy", "test", and the like. When only a portion of a document is altered, using JSON Patch avoids sending the entire document. For example, when the configuration file of a container group Pod needs to be modified, an "add" operation in a JSON Patch document can be used to modify only a portion of the configuration file instead of modifying the entire configuration file.
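For illustration only, the patch operations might look like the following sketch, written in YAML form for readability (in the admission response they are serialized as JSON); the paths and the node name are assumptions and not part of the original disclosure.
# add the environment variable used to load the dynamic link library
- op: add
  path: /spec/containers/0/env/-        # assumes the env array already exists in the Pod spec
  value:
    name: LD_PRELOAD
    value: /tmp/libssl.so:/tmp/libcrypto.so
# write the address field of the target node into the configuration file (step B2)
- op: add
  path: /spec/nodeName
  value: edge-node-01                    # hypothetical target node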
In one embodiment of the present application, as shown in fig. 5, step S103 includes step C1, step C2, and step C3:
in step C1, the to-be-created container group is received in a case where it is monitored that the to-be-created container group has been created.
K8s uses the etcd component, a database that provides distributed key-value storage for sharing information on the overall state of the cluster; etcd is also used for configuration sharing and for the registration and discovery of services. Therefore, when there is a newly created container group Pod in the cluster, it needs to be registered through the etcd component. In the embodiment of the present application, the scheduler can monitor whether there is a newly created container group Pod in the cluster by watching the etcd component.
In step C2, the target node is determined according to the configuration file of the container group to be created.
Since the address field of the target node has been written to the configuration file before step S103 is performed, the target node can be determined from the record in the configuration file of the container group to be created.
In step C3, the set of containers to be created is bound to the target node.
In this embodiment, after the target node is determined, the configuration file of the container group Pod to be created may be modified using object data of the JSON Patch type, so that the container group Pod to be created can be bound to the target node according to the configuration file of the container group Pod to be created.
Here, JSON Patch is a JSON-format document containing a series of patch operations; the patch operations supported by JSON Patch are "add", "remove", "replace", "move", "copy", "test", and the like. When only a portion of a document is altered, using JSON Patch avoids sending the entire document. For example, when the configuration file of a container group Pod needs to be modified, an "add" operation in a JSON Patch document can be used to modify only a portion of the configuration file instead of modifying the entire configuration file.
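In a standard Kubernetes cluster, binding a Pod to a node is performed through the apiserver's binding subresource; a minimal sketch of such a Binding object is shown below, with the Pod and node names hypothetical.
apiVersion: v1
kind: Binding
metadata:
  name: edge-app          # name of the Pod being bound (hypothetical)
target:
  apiVersion: v1
  kind: Node
  name: edge-node-01      # target node selected by the scheduler (hypothetical)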
In one embodiment of the present application, as shown in fig. 6, the method further includes a step D1 and a step D2:
in step D1, a storage space in the target node for storing the dynamically linked library is detected.
For example, the available storage space of the storage volumes mounted under each node in the cluster is determined, and it is determined which storage spaces can be scheduled for storing the dynamic link library. When there is enough storage space, the library file of the dynamic link library can be stored directly in that storage space, and the corresponding original library file address in the configuration file is modified according to the library file address of the dynamic link library. When the storage space is insufficient, a data storage volume needs to be mounted on the target node.
In step D2, in case there is not enough storage space for the target node, the data storage volume is mounted on the target node.
In general, all containers in a group Pod are able to access a shared storage volume, allowing them to share data. When the storage space in the cluster needs to be expanded, the component kubelet in the K8S cluster also adds or removes the mount of the storage volume for the container group Pod and its container.
Kubelet is a component of the working node. The task of the Kubelet component is to track the operational status of the container group Pod and its containers. It works with the YAML or JSON description file of the container group Pod: Kubelet examines the specification of the container group Pod and determines whether the container group Pod is available.
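For illustration, a minimal sketch of how such a data storage volume could be declared and mounted in the Pod specification is shown below; a hostPath volume is used purely as an example (a PersistentVolumeClaim or other shared storage could equally be used), and the names and paths other than the /tmp mount point mentioned in this description are hypothetical.
spec:
  containers:
  - name: edge-app
    image: registry.example.com/edge-app:1.0
    volumeMounts:
    - name: shared-libs
      mountPath: /tmp                  # directory where the dynamic link libraries are placed
  volumes:
  - name: shared-libs
    hostPath:                          # illustrative choice of volume type
      path: /opt/shared-libs           # hypothetical path on the target node
      type: DirectoryOrCreate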
In one embodiment of the present application, as shown in fig. 7, the method further includes step E1: and adopting environment variables to inject the library file address of the dynamic link library into the target node.
In a K8S cluster, the container environment provides several important resources to the container, such as a file system (which contains an image and one or more volumes), information about the container itself, and information about other objects in the cluster. In one specific example, the dynamic link library may be loaded through the environment variable LD_PRELOAD. The environment variable LD_PRELOAD affects the runtime linker of a program: it allows dynamic link libraries to be specified that are loaded preferentially before the program executes, so that the same function can be selectively loaded from different dynamic link libraries. Through this environment variable, the dynamic link library required by the user can be loaded between the main program and its own dynamic link libraries, and even a normal function library can be overridden.
In the embodiment of the application, when the container group to be created runs, the original dynamic link library in the image is updated to the dynamic link library. For example, after the target node monitors that the container group to be created has been bound to it, a preconfigured initialization container is run, and the initialization container pulls the dynamic link library according to the configuration file so as to replace the dynamic link library.
And after the to-be-created container group is operated, updating the state information of the to-be-created container group. For example, the container group Pod is in an available state or a non-available state.
According to the embodiment of the application, the configuration file of the container group to be created is modified based on the annotation information filled with the library file address field of the dynamic link library, and the container group to be created is then scheduled to the corresponding target node, so that the container group to be created calls the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the container group Pod is deployed, which avoids the problem that the container group Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
In a specific example of step S101, a custom Mutating Admission Webhook admission controller (which may, for example, be named io.name.admission-registry) is implemented using the admission extension mechanism of the K8S cluster and deployed in the K8S cluster, and rules are defined so that the interface call service (apiserver) sends requests to the admission controller when a container group (Pod) is created.
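For illustration only, such a rule could be expressed as a MutatingWebhookConfiguration, sketched below; the webhook name follows the example name above (io.name.admission-registry), while the service namespace, service name, path and caBundle placeholder are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: io.name.admission-registry
webhooks:
- name: io.name.admission-registry
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: default               # hypothetical namespace
      name: admission-registry         # hypothetical Service exposing the webhook
      path: /mutate
    caBundle: "<base64-encoded CA certificate>"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]             # send Pod creation requests to the webhook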
The apiserver in the embodiment of the application is one of the most important core components in the Kubernetes cluster, and mainly provides the following functions: (1) Providing a REST API interface for cluster management, including authentication authorization, data verification, cluster state change and the like; (2) A hub that provides data interaction and communication between other modules (other modules must query or modify data through an apiserver, must operate etcd through an apiserver).
In a specific example of step S102, the annotation information includes the library file address of a dynamic link library (or shared object file) that needs to be adapted automatically, where the library file address field is: io.name.transmission-region/subsystem: lib=libssl.so:libcrypto.so, version=optimal, node=optimal. This library file address replaces the address field of the original dynamic link library in the configuration file of the Pod to be created.
According to the embodiment of the application, the configuration file of the container group to be created is modified based on the annotation information filled with the library file address field of the dynamic link library, and the container group to be created is then scheduled to the corresponding target node, so that the container group to be created calls the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the Pod is deployed, which avoids the problem that the Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
In another embodiment of the present application, a timing diagram of a method for scheduling a dynamically linked library of container groups of the present application is shown in fig. 2. The process of the dynamic link library scheduling method for a container group of the present application is described below in conjunction with fig. 2.
First, as shown in the figure, a custom Mutating Admission Webhook admission controller is implemented using the webhook extension mechanism of the k8s cluster and deployed in the k8s cluster, and the rules according to which the apiserver sends requests to the webhook when the container group Pod is created are defined.
In the K8s cluster, Kubectl is the command line tool of the K8s cluster. It is used to deploy applications, monitor and control cluster resources, and view logs. From the user's perspective, Kubectl corresponds to the control panel of K8s: it enables the user to perform all K8s operations. From a technical point of view, Kubectl is a client of the K8s API. The user may initiate a request to create a container group Pod directly through Kubectl.
When a user (user) wants to create a container group Pod (as shown in the figure by create Pod), the user sends a request to create a Pod through the component kubectl of the k8s cluster. kubectl sends the request to apiserver, and the request type is create Pod (as shown by post ("/pods") in the figure). The request is sent to webhook by apiserver.
In the k8s cluster, in order to make any admission control decision, a generic admission webhook may reject a creation or update request or modify the incoming request data before it is further processed by Kubernetes (as shown by modify Pod spec in the figure). Therefore, when the user creates a Pod for the application container, the creation request is forwarded to the custom webhook. The webhook checks whether the resource object request for creating the Pod contains a library file of a dynamic link library to be adapted automatically; suppose the annotation of the Pod contains the library file of the dynamic link library (or shared object file): io.name.transmission-region/subsystem: lib=libssl.so:libcrypto.so, version=optimal, node=optimal. Then the optimal libssl.so and libcrypto.so are selected, and the library file is scheduled to the most appropriate node.
Second, before the library file is scheduled to the most appropriate node, it is checked whether the cluster has a target node that meets the user's request. If such a target node exists, a response is sent to the apiserver to modify the configuration file of the container group Pod, and the response contains object data of the JSON Patch type. The apiserver may use this JSON Patch type of object data to modify the configuration of the Pod (as shown by add Pod object in the figure). The information of the target node is then injected into the custom scheduler (shown as scheduler in fig. 2) via the apiserver. For example, the custom scheduler may be named schedulerName: io.name.scheduler.
Before the library file is scheduled to the most suitable node, it is also necessary to check whether there is enough storage space in the cluster. If so, the library file is stored directly in the existing storage space. If there is insufficient storage space, a storage volume may be mounted on the target node (e.g., under a /tmp directory). The LD_PRELOAD environment variable is then injected in the target node to load the dynamic link library.
For example, the code for injecting the LD_PRELOAD environment variable is as follows:
env:
- name: LD_PRELOAD
  value: /tmp/libssl.so:/tmp/libcrypto.so
Again, after the apiserver writes the webhook-modified resource object (i.e., the Pod with the modified configuration file) to the etcd component of the k8s cluster, the scheduler monitors the creation of the resource object (as shown by watch (new Pod) in the figure). The Pod is matched to the scheduler for scheduling according to the Pod's configuration file (as shown by pick node in the figure). The scheduler finds the most suitable target node according to the configuration file information of the Pod and calls the apiserver to bind the Pod to the target node (as shown by bind Pod and add binding object in the figure). K8s uses etcd, a database that provides distributed key-value storage for sharing information on the overall state of the cluster; etcd is also used for configuration sharing and for the registration and discovery of services.
After the kubelet on the selected target node also observes the binding (as shown by watch (bound Pod) in the figure), the kubelet component of the k8s cluster performs environment preparation (as shown by prepare lib in the figure) before the Pod runs, and preferentially runs the initialization container (Init Pod) injected by the admission controller before starting the application. Here, the initialization rule of the Init Pod may be customized according to requirements.
Here, an Init Pod is a special container that runs before the application containers within the container group Pod start. An Init Pod may include utilities and installation scripts that are not present in some application images. Init Pods run one after another, and all of them must complete successfully before the main container of the Pod starts. An Init Pod can be used in the following scenarios:
(1) Preparing files in the volumes used by the main container, including retrieving the certificates and private keys used by the main container from a secure certificate store, generating configuration files, downloading data, and the like.
(2) Setting up the Pod's network. Because all containers of a Pod share the same network namespace, any changes the Init Pod makes to the network interfaces and configuration will also affect the main container.
(3) Delaying the start of the Pod's main container until preconditions are met. For example, if the main container relies on another service being available before it starts, the Init Pod may block until that service is ready.
(4) Notifying an external service that the Pod is about to start running. In the special case where an external system must be notified when a new instance of an application is started, the Init Pod may be used to deliver this notification.
In the manifest of the container group Pod, the definition of an Init Pod is very simple: it is defined in the corresponding field of the Pod's configuration file (the initContainers field), just as a regular container is defined in the containers field.
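By way of illustration, a minimal sketch of such a definition is given below. The target paths /tmp/libssl.so and /tmp/libcrypto.so follow this description, while the container names, images and the download command are hypothetical.
spec:
  initContainers:
  - name: prepare-lib                     # hypothetical Init Pod that prepares the libraries
    image: registry.example.com/lib-fetcher:1.0
    command: ["sh", "-c"]
    args:
    - wget -O /tmp/libssl.so http://lib-repo.example.com/optimal/libssl.so &&
      wget -O /tmp/libcrypto.so http://lib-repo.example.com/optimal/libcrypto.so
    volumeMounts:
    - name: shared-libs
      mountPath: /tmp
  containers:
  - name: edge-app                        # main application container
    image: registry.example.com/edge-app:1.0
    env:
    - name: LD_PRELOAD                    # load the pulled libraries ahead of those in the image
      value: /tmp/libssl.so:/tmp/libcrypto.so
    volumeMounts:
    - name: shared-libs
      mountPath: /tmp
  volumes:
  - name: shared-libs
    emptyDir: {}                          # illustrative shared volume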
During the initialization process, the Init Pod pulls the dynamic link library according to the configuration file injected into the Pod and stores it under the specified directory, for example /tmp/libssl.so and /tmp/libcrypto.so.
Then, when initialization is completed, control returns to the kubelet component. The Pod then starts to run (run) and loads the specified dynamic link library to replace the dynamic link library in the image, thereby completing the update of the dynamic link library. Finally, control returns to the kubelet component, and the apiserver is called to update the state information of the container group Pod (as shown by update Pod status).
Through the technical scheme of the embodiment of the application, the following technical effects can be realized:
(1) Through the custom admission controller and the custom scheduler, the user can update the dynamic link library of the application container according to the requirements of the actual deployment environment, automatically adapt the container to the target node, and select suitable hardware resources to execute tasks, thereby meeting user needs in a flexible and convenient way;
(2) Loading of the dynamic link library can be realized using the environment variable LD_PRELOAD, so that the files in the container image are modified non-invasively;
(3) According to the actual resources and the environment of the application container deployment, targeted updating is performed, and the management process of software release is simplified.
According to the embodiment of the application, the configuration file of the container group to be created is modified based on the annotation information filled with the library file address field of the dynamic link library, and the container group to be created is then scheduled to the corresponding target node, so that the container group to be created calls the dynamic link library based on the configuration information. On the one hand, the dynamic link library can be updated according to the needs of a user when the Pod is deployed, which avoids the problem that the Pod needs to generate multiple different image packages for different running environments and reduces the management work of application release; on the other hand, refined cluster resource scheduling can be realized, which improves the utilization efficiency of resources and better meets user needs.
The following describes a dynamically linked library scheduling apparatus for a container group of the present application with reference to fig. 8, where fig. 8 shows a schematic block diagram of a dynamically linked library scheduling apparatus 800 for a container group according to an embodiment of the present application.
As shown in fig. 8, the dynamic link library scheduling apparatus 800 of a container group includes:
the receiving module 801 receives a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and a dynamic link library field is filled in the annotation information.
A modifying module 802, configured to modify the configuration file of the container group to be created according to the annotation information.
Wherein the modifying module 802 is further configured to modify an address field of an original dynamic link library in the configuration file into a library file address field of the dynamic link library.
A scheduling module 803, configured to schedule the to-be-created container group to a corresponding target node, where the to-be-created container group can invoke the dynamic link library based on the configuration information.
Wherein the scheduling module 803 is further configured to receive the to-be-created container group if it is monitored that the to-be-created container group is already created; determining the target node according to the configuration file of the container group to be created; and binding the to-be-created container group with the target node.
Furthermore, according to an embodiment of the present application, there is also provided a storage medium on which program instructions are stored, which program instructions, when executed by a computer or a processor, are adapted to carry out the respective steps of the method for scheduling a dynamically linked library of container groups of the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media.
Since the device and the storage medium for scheduling the dynamic link library of the container group can implement the aforementioned method for scheduling the dynamic link library of the container group, they have the same advantages as the method.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the application and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, the method of this application should not be construed to reflect the following intent: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for scheduling a dynamic link library of a container group, the method comprising:
receiving a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and the annotation information is filled with a library file address field of a dynamic link library;
modifying the configuration file of the container group to be created according to the annotation information;
and dispatching the container group to be created to a corresponding target node, wherein the container group to be created can call the dynamic link library based on the configuration information.
2. The method according to claim 1, wherein the method further comprises:
detecting, among the cluster nodes, the target node that can meet the resource requirement;
and writing the address field of the target node into the configuration file.
3. The method of claim 1, wherein modifying the configuration file of the container group to be created according to the annotation information comprises:
and injecting environment variables into the configuration file, wherein the environment variables are used for loading the dynamic link library.
4. The method according to claim 1, wherein the method further comprises:
detecting a storage space in the target node used for storing the dynamic link library;
and mounting a data storage volume on the target node in the case that the target node does not have enough storage space.
5. The method according to claim 1, wherein the method further comprises:
and using environment variables to inject the library file address of the dynamic link library into the target node.
6. The method of claim 1, wherein scheduling the container group to be created to the corresponding target node comprises:
receiving the container group to be created upon monitoring that the container group to be created is created;
determining the target node according to the configuration file of the container group to be created;
and binding the container group to be created to the target node.
7. The method of claim 1, wherein the container group to be created updates, at run time, an original dynamic link library in the image to the dynamic link library.
8. The method of claim 1, wherein the state information of the container group to be created is updated after the container group to be created is run.
9. A dynamic link library scheduling apparatus for a container group, the apparatus comprising:
the receiving module is used for receiving a request for creating a container group to be created, wherein the request comprises annotation information of the container group to be created, and the annotation information is filled with a dynamic link library field;
the modification module is used for modifying the configuration file of the container group to be created according to the annotation information;
and the scheduling module is used for scheduling the container group to be created to a corresponding target node, and the container group to be created can call the dynamic link library based on the configuration information.
10. A storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of dynamic link library scheduling of container groups according to any one of claims 1 to 8.
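
To make the mechanism of claims 1, 3, 5 and 7 easier to follow, the Python sketch below shows one possible, purely illustrative way the configuration of a container group to be created could be modified from its annotation information. The annotation key example.com/dll-path, the environment variable LD_LIBRARY_PATH and the dict-based Pod representation are assumptions introduced only for illustration and are not taken from the patent text.

import copy

DLL_ANNOTATION = "example.com/dll-path"  # hypothetical annotation key carrying the library file address field
ENV_VAR_NAME = "LD_LIBRARY_PATH"         # common dynamic-loader search-path variable on Linux

def inject_dll_env(pod_spec: dict) -> dict:
    """Modify the configuration of a container group to be created so that its
    containers can locate the dynamic link library named in the annotation
    information (cf. claims 1, 3 and 5)."""
    annotations = pod_spec.get("metadata", {}).get("annotations", {})
    dll_path = annotations.get(DLL_ANNOTATION)
    if not dll_path:
        return pod_spec  # no library file address field filled in; nothing to modify

    patched = copy.deepcopy(pod_spec)
    for container in patched.get("spec", {}).get("containers", []):
        # The injected environment variable lets the loader pick up the requested
        # library instead of the one baked into the image (cf. claim 7).
        container.setdefault("env", []).append(
            {"name": ENV_VAR_NAME, "value": dll_path}
        )
    return patched

if __name__ == "__main__":
    pod = {
        "metadata": {
            "name": "demo-pod",
            "annotations": {DLL_ANNOTATION: "/opt/libs/custom"},
        },
        "spec": {"containers": [{"name": "app", "image": "app:1.0"}]},
    }
    print(inject_dll_env(pod)["spec"]["containers"][0]["env"])

In a real Kubernetes cluster a mutation of this kind would typically be performed by a mutating admission webhook before the container group reaches the scheduler; the sketch deliberately avoids any real client library so that the data flow from annotation to configuration stays visible.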
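Claims 2, 4 and 6 describe detecting a suitable target node, handling its storage, and binding the container group to it. The following sketch, again purely illustrative and using invented field names (free_cpu, free_mem, free_storage, address) rather than anything specified in the patent, shows one way those steps could be ordered.

def pick_target_node(nodes: list, cpu: int, mem: int) -> dict:
    """Detect a target node in the cluster that can meet the resource requirement (cf. claim 2)."""
    for node in nodes:
        if node["free_cpu"] >= cpu and node["free_mem"] >= mem:
            return node
    raise RuntimeError("no node satisfies the resource requirement")

def ensure_dll_storage(node: dict, dll_size: int) -> None:
    """Mount a data storage volume on the target node if it lacks space for the dynamic link library (cf. claim 4)."""
    if node["free_storage"] < dll_size:
        node.setdefault("volumes", []).append({"name": "dll-data-volume", "size": dll_size})
        node["free_storage"] += dll_size

def bind_pod(pod_spec: dict, nodes: list, cpu: int, mem: int, dll_size: int) -> dict:
    """Determine the target node, write its address into the configuration, and bind the container group to it (cf. claims 2 and 6)."""
    node = pick_target_node(nodes, cpu, mem)
    ensure_dll_storage(node, dll_size)
    # Writing the node's address field into the configuration is what "binds" the
    # container group to the target node in this simplified model.
    pod_spec.setdefault("spec", {})["nodeName"] = node["address"]
    return pod_spec

if __name__ == "__main__":
    cluster = [
        {"address": "node-a", "free_cpu": 2, "free_mem": 4096, "free_storage": 10},
        {"address": "node-b", "free_cpu": 8, "free_mem": 16384, "free_storage": 200},
    ]
    pod = {"metadata": {"name": "demo-pod"}, "spec": {"containers": []}}
    print(bind_pod(pod, cluster, cpu=4, mem=8192, dll_size=50)["spec"]["nodeName"])  # -> node-b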
CN202310095946.XA 2023-01-30 2023-01-30 Method, device and storage medium for scheduling dynamic link library of container group Pending CN116028163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310095946.XA CN116028163A (en) 2023-01-30 2023-01-30 Method, device and storage medium for scheduling dynamic link library of container group

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310095946.XA CN116028163A (en) 2023-01-30 2023-01-30 Method, device and storage medium for scheduling dynamic link library of container group

Publications (1)

Publication Number Publication Date
CN116028163A true CN116028163A (en) 2023-04-28

Family

ID=86072321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310095946.XA Pending CN116028163A (en) 2023-01-30 2023-01-30 Method, device and storage medium for scheduling dynamic link library of container group

Country Status (1)

Country Link
CN (1) CN116028163A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117407048A (en) * 2023-12-14 2024-01-16 江西飞尚科技有限公司 Flow configuration method and system of plug-in data processing software
CN117407048B (en) * 2023-12-14 2024-03-12 江西飞尚科技有限公司 Flow configuration method and system of plug-in data processing software
CN117811838A (en) * 2024-02-29 2024-04-02 博上(山东)网络科技有限公司 HAproxy server IP white list synchronization method and system
CN117811838B (en) * 2024-02-29 2024-05-17 博上(山东)网络科技有限公司 HAProxy server IP white list synchronization method and system

Similar Documents

Publication Publication Date Title
US10853047B2 (en) Method for virtualizing software applications
US20210349706A1 (en) Release lifecycle management system for multi-node application
US11178207B2 (en) Software version control without affecting a deployed container
CN110995473B (en) Service node control method and related equipment
US9413819B1 (en) Operating system interface implementation using network-accessible services
WO2017067016A1 (en) Extension of resource constraints for service-defined containers
US20210240489A1 (en) Firmware update patch
JP7143417B2 (en) computing device
US10594800B2 (en) Platform runtime abstraction
WO2019060228A1 (en) Systems and methods for instantiating services on top of services
CN116028163A (en) Method, device and storage medium for scheduling dynamic link library of container group
SG189385A1 (en) High availability of machines during patching
CN101326489A (en) OS mini-boot for running multiple environments
CA2637749A1 (en) Method, system, and program product for deploying a platform dependent application in a grid environment
CN111984269A (en) Method for providing application construction service and application construction platform
US20130227572A1 (en) Test device, a system, a program and a method
US20210157623A1 (en) Automated Management of Machine Images
CN111984270A (en) Application deployment method and system
US9350596B2 (en) On-demand tethered greedy virtual application appliance
CN117112122A (en) Cluster deployment method and device
WO2014145147A1 (en) Web services provided from software framework
US20180341475A1 (en) Just In Time Deployment with Package Managers
US8924963B2 (en) In-process intermediary to create virtual processes
CN113867776A (en) Method and device for publishing middle station application, electronic equipment and storage medium
Mustafa Microservices vs. Monolithic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination