CN116436968A - Service grid communication method, system, device and storage medium - Google Patents

Service grid communication method, system, device and storage medium

Info

Publication number
CN116436968A
CN116436968A (application CN202310349909.7A)
Authority
CN
China
Prior art keywords
pod
rdma
network card
information
service grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310349909.7A
Other languages
Chinese (zh)
Inventor
董善义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Data Technology Co Ltd
Original Assignee
Jinan Inspur Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Data Technology Co Ltd filed Critical Jinan Inspur Data Technology Co Ltd
Priority to CN202310349909.7A
Publication of CN116436968A
Pending legal-status Critical Current

Links

Images

Classifications

    • H Electricity
    • H04 Electric communication technique
    • H04L Transmission of digital information, e.g. telegraphic communication
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/04 Network management architectures or arrangements
    • H04L41/046 Network management architectures or arrangements comprising network management agents or mobile agents therefor
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application discloses a service grid communication method, system, device and storage medium, applied to the field of service grid communication. RDMA network card information is acquired, a corresponding number of VFs (virtual functions) is generated according to that information, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. In this way the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod; once the RDMA information is loaded, inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, without traversing the TCP/IP stack multiple times, so communication between services in the grid is accelerated and the performance loss is reduced.

Description

Service grid communication method, system, device and storage medium
Technical Field
The present invention relates to the field of service grid (service mesh) communication, and in particular to a service grid communication method, system, device and storage medium.
Background
In a service grid, traffic sent between services is intercepted by an agent: the proxy obtains the real destination IP (the pod IP) from the destination address and the traffic policy, the traffic is intercepted again by the proxy after reaching the designated pod, and only after a series of operations does it enter service B from that proxy. In this process the TCP/IP stack is actually traversed multiple times, so there is a certain performance loss compared with going directly from service A to service B; the larger the cluster, the higher this loss becomes.
In view of the above technical problems, a solution is needed by those skilled in the art.
Disclosure of Invention
The aim of the application is to provide a service grid communication method, system, device and storage medium, which acquire RDMA network card information, generate a corresponding number of VFs according to that information, acquire the pod carrying the corresponding mark and assign it to the corresponding node, and control the proxy end to allocate the VFs to the pod. In this method the existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
In order to solve the above technical problems, the present application provides a service grid communication method, which includes:
acquiring RDMA network card information;
generating a corresponding number of VFs according to the RDMA network card information;
acquiring the pod carrying the corresponding mark and assigning it to the corresponding node;
assigning the VFs to the pod.
Preferably, obtaining RDMA network card information includes:
starting a corresponding proxy end on the node;
controlling the proxy end to identify the RDMA network card and acquire the information of the RDMA network card.
Preferably, the number of agent ends corresponds to the number of nodes.
Preferably, after generating the corresponding number of VFs according to the RDMA network card information, the method further comprises:
controlling the proxy end to identify the information of the VFs according to the RDMA network card information;
and controlling the proxy end to report the information of the VFs.
Preferably, after acquiring the pod carrying the corresponding mark and assigning it to the corresponding node, the method further comprises:
judging whether the kubelet has monitored a pod creation event;
if yes, allocating a corresponding identifier to the pod, and saving the identifier and the resource information to a local file.
Preferably, controlling the proxy end to assign the VFs to the pod so that the pod can use RDMA comprises:
controlling the kubelet to establish a network connection for each pod;
controlling the proxy end to obtain the identifier of the pod according to the resource information;
assigning the VFs to the pod according to the identifier.
Preferably, after controlling the proxy end to assign the VFs to the pod so that the pod can use RDMA, the method further comprises:
judging whether the node has enabled the RDMA service;
if yes, determining that the pod of the current node carries the corresponding mark;
registering the RDMA-related instance on the current node so that each pod carrying the corresponding mark directly uses the RDMA service according to the instance.
In order to solve the above technical problem, the present application further provides a service grid communication system, which includes:
the first acquisition module is used for acquiring RDMA network card information;
the generating module is used for generating a corresponding number of VFs according to the RDMA network card information;
the first distribution module is used for acquiring the pod carrying the corresponding mark and assigning it to the corresponding node;
and the second distribution module is used for allocating the VFs to the pod.
The service grid communication system further comprises, for acquiring the RDMA network card information:
the starting module is used for starting the corresponding proxy end on the node;
the first control module is used for controlling the proxy end to identify the RDMA network card and acquire information of the RDMA network card.
The service grid communication system further includes:
the identification module is used for controlling the proxy end to identify the information of the VF according to the information of the RDMA network card;
and the second control module is used for controlling the proxy end to report the VF information.
The service grid communication system further includes:
the first judging module is used for judging whether kubelet monitors the creation event of the pod;
the storage module is used for distributing corresponding identifiers to the pod if yes; and saving the identification and the resource information to a local file.
The service grid communication system further includes:
the third control module is used for controlling kubelet to establish network connection for each pod;
the fourth control module is used for controlling the proxy end to acquire the mark of the pod according to the resource information;
and the third distribution module is used for distributing the VF to the pod according to the identification.
The service grid communication system further includes:
the second judging module is used for judging whether the node starts RDMA service or not;
the determining module is used for determining that the pod of the current node has a corresponding mark if yes;
and the registration module is used for registering the RDMA related instance of the current node so that each pod with the corresponding mark directly uses RDMA service according to the instance.
To solve the above technical problem, the present application further provides a service grid communication device, including a memory for storing a computer program;
a processor for implementing the steps of the service grid communication method as described above when executing the computer program.
To solve the above technical problem, the present application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the service grid communication method as described above.
According to the service grid communication method provided by the application, RDMA network card information is acquired, a corresponding number of VFs is generated according to that information, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. In this way the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod; inter-service communication then directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, so communication between services in the grid is accelerated and the performance loss is reduced.
The application also provides a service grid communication system, a device and a computer readable storage medium, which correspond to the method and have the same beneficial effects as the method.
Drawings
For a clearer description of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a service grid communication method provided herein;
FIG. 2 is a flow chart of RDMA usage by pod provided herein;
FIG. 3 is a block diagram of a service grid communication system provided herein;
fig. 4 is a block diagram of a service grid communication device according to another embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments herein without making any inventive effort are intended to fall within the scope of the present application.
The core of the application is to provide a service grid communication method, system, device and storage medium, which acquire RDMA network card information, generate a corresponding number of VFs according to that information, acquire the pod carrying the corresponding mark and assign it to the corresponding node, and control the proxy end to allocate the VFs to the pod. In this method the existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
In order that those skilled in the art may better understand the present application, the present application is described in further detail below with reference to the drawings and the detailed description.
Fig. 1 is a flowchart of a service grid communication method provided in the present application, as shown in fig. 1, the method includes the following steps:
s10: acquiring RDMA network card information;
it should be noted that the service grid technology representative technology Istio and the like are strongly correlated with k8 s. The RDMA technology is used in a service grid, an intelligent network card supporting RDMA in K8S is needed first, the intelligent network card is used in the RDMA scheme of K8S, the kernel is required to be loaded manually, the VF total_vfs and the number of VF num_vfs are adjusted to meet requirements, K8S is short for Kubernetes, K8S is an open source and is used for managing containerized applications on a plurality of hosts in a cloud platform, the Kubernetes aims to enable the containerized applications to be deployed simply and efficiently, the Kubernetes provides a mechanism for deploying, planning, updating and maintaining the applications, the related information of the intelligent network card supporting RDMA is required to be distributed to the service grid in a mode provided by the application, the embodiment of the application does not limit a function module for acquiring network card information specifically, the embodiment of the application does not limit information specifically, and the embodiment of the application only provides a preferred implementation mode.
S11: generating VFs with corresponding numbers according to the RDMA network card information;
It should be noted that, after the RDMA network card information is acquired, a corresponding number of VFs is generated from it. A VF (virtual function) is a virtual function attached to a physical function; each VF can share one or more physical resources with the physical function and with the other VFs associated with the same physical function, is only allowed to configure resources for its own behaviour, and has its own PCI memory space used to map its register set. Once a VF has been created, it can be assigned directly to an application. The number of VFs created is not limited here. A proxy end started on a node of the cluster filters and discovers the VFs according to the network card information and identifies and reports the VF information; the network card information of this embodiment may include, but is not limited to, the vendor, device and driver of the network card. The embodiment of the application only provides a preferred implementation, is not limited to the method described above, and may be changed according to the actual situation.
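The following sketch, under the same sysfs assumption, shows how a proxy end might collect the vendor, device and driver information of each VF before reporting it; the interface name is again illustrative and the reporting step itself is omitted.

```python
import glob
import json
import os

def collect_vf_info(iface: str) -> list:
    """Gather PCI address, vendor, device and driver for every VF of a physical NIC."""
    vfs = []
    for vf_link in sorted(glob.glob(f"/sys/class/net/{iface}/device/virtfn*")):
        info = {"pci_address": os.path.basename(os.readlink(vf_link))}  # e.g. 0000:3b:02.1
        for attr in ("vendor", "device"):
            with open(os.path.join(vf_link, attr)) as f:
                info[attr] = f.read().strip()                           # e.g. 0x15b3
        driver_link = os.path.join(vf_link, "driver")
        info["driver"] = (os.path.basename(os.readlink(driver_link))
                          if os.path.islink(driver_link) else "")
        vfs.append(info)
    return vfs

# The proxy end would report this list to the control plane; here it is only printed.
# print(json.dumps(collect_vf_info("ens1f0"), indent=2))
```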
S12: acquiring the pod with the corresponding mark and distributing the pod to the corresponding node;
it should be noted that, when a pod is created, if RDMA acceleration needs to be started, a flag is added to the pod to indicate that RDMA acceleration will be used when traffic is sent to the pod, and at this time, the K8 sscheller dispatches the pod to the VF node according to the number of the pod VFs. When a Pod creation event is monitored, a sufficient Device ID is allocated to a Pod from a memory, and the Device ID and information such as the Pod UID and Resource Name are stored in a local file, the corresponding marks of the Pod are not specifically limited in the embodiment of the present application, the number of the Pod is not specifically limited in the embodiment of the present application, and the local file storing the corresponding information is not specifically limited in the embodiment of the present application.
S13: VF is assigned to pod.
It should be noted that a VF corresponds to the RDMA function: once the VF is mounted into the corresponding pod, the pod can use RDMA. The embodiment of the application does not specifically limit how the VFs are allocated, nor how the RDMA-loaded components are deployed on the nodes of the cluster; they may, but need not, be deployed as a DaemonSet. The embodiment of the application only provides a preferred implementation, is not limited to the method described above, and may be changed according to the actual situation.
It can therefore be seen that, in the method provided by this embodiment of the application, RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. In this method the existing service grid control plane is first modified and recognition of the pod mark is added; the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the foregoing embodiments, the present application provides a preferred embodiment, where obtaining RDMA network card information includes:
starting a corresponding proxy end on the node;
the control proxy end identifies the RDMA network card and acquires information of the RDMA network card.
It should be noted that, in the process of automatically configuring the RDMA smart network card, a proxy end is started on each node of the cluster. The proxy end is used to obtain and filter the network cards of the host machine, a corresponding number of VFs is generated from the network card information, and a corresponding annotation mark is added to the pod. For a pod carrying the mark related to the network card information, the control plane can call the node scheduler of K8s to schedule it onto a node that has the smart network card device; when the control plane discovers that the pod carries the RDMA mark, it loads RDMA into the pod through the proxy end. The embodiment of the application does not specifically limit the type of RDMA network card information obtained nor the number of proxy ends started; only a preferred implementation is provided here, and it may be changed according to the actual situation.
It can therefore be seen that, in the method provided by this embodiment of the application, RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. The existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the above embodiments, the present application provides a preferred embodiment, where the number of proxy ends corresponds to the number of nodes.
It should be noted that, for the configuration of the RDMA smart network card, the configuration process starts one proxy end on each node of the cluster to obtain and filter the host network cards, so the number of proxy ends corresponds to the number of nodes.
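A minimal sketch of that deployment follows, again with the Kubernetes Python client; the image name, namespace and labels are assumptions, and the point is only that a DaemonSet yields exactly one proxy end per node.

```python
from kubernetes import client, config

daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "rdma-proxy", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "rdma-proxy"}},
        "template": {
            "metadata": {"labels": {"app": "rdma-proxy"}},
            "spec": {
                "hostNetwork": True,                          # the proxy inspects host NICs
                "containers": [{
                    "name": "proxy",
                    "image": "rdma-proxy:latest",             # hypothetical agent image
                    "securityContext": {"privileged": True},  # needed to touch sysfs/VFs
                }],
            },
        },
    },
}

config.load_kube_config()
client.AppsV1Api().create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```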
It can therefore be seen that, in the method provided by this embodiment of the application, one proxy end is started on each node of the cluster and used to obtain and filter the host network cards; RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. The existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the foregoing embodiments, the present application provides a preferred embodiment, further including, after generating a corresponding number of VFs according to RDMA network card information:
controlling the proxy end to identify the information of the VFs according to the RDMA network card information;
and controlling the proxy end to report the information of the VFs.
It should be noted that, after the corresponding number of VFs has been generated from the network card information, the proxy end filters, discovers and identifies the VFs according to that information and reports the VF information. The network card information may include, but is not limited to, the vendor, device and driver of the network card. The embodiment of the application does not specifically limit the manner of reporting the VF information; only a preferred implementation is provided here, and it may be changed according to the actual situation.
It can be seen that, in the method provided by this embodiment of the application, a proxy end is started on each node of the cluster and used to obtain and filter the host network cards; RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod, to identify the VF information according to the RDMA network card information, and to report the VF information. The existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the foregoing embodiments, the present application provides a preferred embodiment, which further comprises, after acquiring the pod carrying the corresponding mark and assigning it to the corresponding node:
judging whether the kubelet has monitored a pod creation event;
if yes, allocating a corresponding identifier to the pod, and saving the identifier and the resource information to a local file.
It should be noted that the kubelet monitors pod creation events, allocates an available Device ID to the pod from memory, and saves the Device ID together with information such as the pod UID and Resource Name to a local file. The embodiment of the application does not specifically limit the local file; only a preferred implementation is provided here, and it may be changed according to the actual situation.
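A sketch of that bookkeeping follows; the file path and record layout are assumptions, chosen only to mirror the Device ID, pod UID and Resource Name triple described above.

```python
import json
import os

CHECKPOINT = "/var/lib/rdma-proxy/allocations.json"   # assumed location of the local file

def record_allocation(pod_uid: str, resource_name: str, device_id: str) -> None:
    """Persist the Device ID allocated to a pod, keyed by pod UID and Resource Name."""
    entries = []
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            entries = json.load(f)
    entries.append({"podUID": pod_uid, "resourceName": resource_name, "deviceID": device_id})
    os.makedirs(os.path.dirname(CHECKPOINT), exist_ok=True)
    with open(CHECKPOINT, "w") as f:
        json.dump(entries, f, indent=2)
```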
It can therefore be seen that, in the method provided by this embodiment of the application, RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. The existing service grid control plane is first modified and recognition of the pod mark is added; the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the above embodiments, the present application provides a preferred embodiment, wherein controlling the proxy end to assign the VFs to the pod so that the pod can use RDMA comprises:
controlling the kubelet to establish a network connection for each pod;
controlling the proxy end to obtain the identifier of the pod according to the resource information;
assigning the VFs to the pod according to the identifier.
It should be noted that the kubelet configures the network information for the pod. The proxy end queries the Device ID according to the pod UID and Resource Name stored in the local file and, following the parameters passed in by the kubelet, calls Calico in turn to configure the network. Calico creates the main eth0 network for the pod and implements network concepts such as Service and NetworkPolicy defined by K8s; at the same time the VF is mounted into the pod, after which the pod can use the RDMA function. The embodiment of the application only provides a preferred implementation, is not limited to the method described above, and may be selected according to the actual situation.
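A sketch of the lookup step follows; it reads the assumed local file from the previous sketch and returns the VF recorded for a pod. The actual work of moving the VF into the pod's network namespace alongside the Calico-managed eth0 interface is done by the CNI chain and is not reproduced here.

```python
import json
from typing import Optional

def lookup_device(pod_uid: str, resource_name: str,
                  checkpoint: str = "/var/lib/rdma-proxy/allocations.json") -> Optional[str]:
    """Return the Device ID (VF) recorded for a pod, or None if no VF was allocated."""
    with open(checkpoint) as f:
        for entry in json.load(f):
            if entry["podUID"] == pod_uid and entry["resourceName"] == resource_name:
                return entry["deviceID"]
    return None
```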
It can be seen that, in the method provided by this embodiment of the application, a proxy end is started on each node of the cluster and used to obtain and filter the host network cards; RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, the proxy end is controlled to allocate the VFs to the pod, the VF information is identified according to the RDMA network card information, and the VF information is reported. The existing service grid control plane is first modified so that it recognises the pod mark and can call the proxy end; the proxy end, which can load RDMA information into a pod, is written and deployed on every node of the cluster as a DaemonSet. The network card with the RDMA acceleration function is thereby assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication.
Based on the above embodiments, the present application provides a preferred embodiment, wherein, after controlling the proxy end to assign the VFs to the pod so that the pod can use RDMA, the method further comprises:
judging whether the node has enabled the RDMA service;
if yes, determining that the pod of the current node carries the corresponding mark;
registering the RDMA-related instance on the current node so that each pod carrying the corresponding mark directly uses the RDMA service according to the instance.
It should be noted that, if the current node has enabled the RDMA service, it is determined that the pod of the current node carries the corresponding mark, and the RDMA-related instance is registered on the current node so that each pod carrying the corresponding mark directly uses the RDMA service according to the instance. In other words, the instance that supports RDMA is registered in the service discovery of the control plane, and when an RDMA-enabled service is requested, requests from other services to it will be accelerated by RDMA.
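The application does not specify the registration interface of the control plane, so the following is only an illustrative in-memory sketch of the idea: instances running on RDMA-enabled nodes are registered with an RDMA flag, and callers prefer such instances so that their requests can bypass the TCP/IP path.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    service: str
    address: str
    rdma: bool = False        # set when the hosting node has the RDMA service enabled

@dataclass
class Registry:
    """Toy stand-in for the control plane's service discovery."""
    instances: list = field(default_factory=list)

    def register(self, inst: Instance) -> None:
        self.instances.append(inst)

    def pick(self, service: str) -> Instance:
        candidates = [i for i in self.instances if i.service == service]
        rdma_ready = [i for i in candidates if i.rdma]   # prefer RDMA-capable endpoints
        return (rdma_ready or candidates)[0]

registry = Registry()
registry.register(Instance("service-b", "10.244.1.12", rdma=True))
print(registry.pick("service-b"))
```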
It can therefore be seen that, in the method provided by this embodiment of the application, RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. The existing service grid control plane is first modified and recognition of the pod mark is added; the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod, so inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, accelerating communication between services in the grid and reducing the performance loss. Applying the RDMA capability to communication between service grids greatly improves inter-grid service communication, while also enhancing the capability of the service grid control plane: nodes on which the RDMA function can be enabled are discovered dynamically, and services that can enable RDMA are registered automatically.
The application also provides an embodiment of an application scenario. As shown in fig. 2, which is a flowchart of a pod using RDMA, the network control plane obtains the RDMA information of each node loaded by the proxy (agent), and on each node the proxy issues the VF information to each pod so that the pod can use RDMA. The embodiment of the application does not specifically limit the number of nodes or the number of pods per node; service traffic can be transmitted directly between pods over RDMA, and the embodiment does not limit which pods perform the transmission. Only one embodiment of an application scenario is provided here, and it is not limited to the manner described above.
It can therefore be seen that, in the method provided by this embodiment of the application, RDMA network card information is acquired, a corresponding number of VFs is generated from it, the pod carrying the corresponding mark is acquired and assigned to the corresponding node, and the proxy end is controlled to allocate the VFs to the pod. In this way the network card with the RDMA acceleration function is assigned to the corresponding service grid and RDMA information can be loaded into the pod; inter-service communication directly bypasses the central processing unit (CPU) and copies data straight into the peer's memory, so communication between services in the grid is accelerated and the performance loss is reduced.
From the perspective of functional modules, the application further provides a service grid communication system, as shown in fig. 3, which is a structural diagram of the service grid communication system provided by the application; the system comprises:
a first obtaining module 30, configured to obtain RDMA network card information;
the generating module 31 is configured to generate a corresponding number of VFs according to the RDMA network card information;
a first allocation module 32 for acquiring the pod with the corresponding mark and allocating it to the corresponding node;
a second allocation module 33 for allocating VFs to pod.
From the perspective of functional modules, the service grid communication system provided by the application further comprises, for acquiring the RDMA network card information:
the starting module is used for starting the corresponding proxy end on the node;
the first control module is used for controlling the proxy end to identify the RDMA network card and acquire information of the RDMA network card.
From the perspective of functional modules, the service grid communication system provided by the application further comprises:
the identification module is used for controlling the proxy end to identify the information of the VF according to the information of the RDMA network card;
and the second control module is used for controlling the proxy end to report the VF information.
From the perspective of functional modules, the service grid communication system provided by the application further comprises:
the first judging module is used for judging whether kubelet monitors the creation event of the pod;
the storage module is used for distributing corresponding identifiers to the pod if yes; and saving the identification and the resource information to a local file.
From the perspective of functional modules, the service grid communication system provided by the application further comprises:
the third control module is used for controlling kubelet to establish network connection for each pod;
the fourth control module is used for controlling the proxy end to acquire the mark of the pod according to the resource information;
and the third distribution module is used for distributing the VF to the pod according to the identification.
From the perspective of functional modules, the service grid communication system provided by the application further comprises:
the second judging module is used for judging whether the node starts RDMA service or not;
the determining module is used for determining that the pod of the current node has a corresponding mark if yes;
and the registration module is used for registering the RDMA related instance of the current node so that each pod with the corresponding mark directly uses RDMA service according to the instance.
Since the embodiments of the system portion and the embodiments of the method portion correspond to each other, the embodiments of the system portion refer to the description of the embodiments of the method portion, which is not repeated herein.
The service grid communication system provided in the embodiment corresponds to the service grid communication method, so that the service grid communication system has the same beneficial effects as the method.
Fig. 4 is a block diagram of a service grid communication device according to another embodiment of the present application, and as shown in fig. 4, the service grid communication device includes: a memory 20 for storing a computer program;
a processor 21 for implementing the steps of the service grid communication method as mentioned in the above embodiments when executing a computer program.
The service grid communication device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in hardware in the form of at least one of a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA) and a programmable logic array (Programmable Logic Array, PLA). The processor 21 may also comprise a main processor and a coprocessor; the main processor is the processor that handles data in the awake state, also called the central processing unit (Central Processing Unit, CPU), while the coprocessor is a low-power processor for handling data in the standby state. In some embodiments, the processor 21 may be integrated with a graphics processing unit (Graphics Processing Unit, GPU), which is responsible for rendering the content that the display screen needs to display. In some embodiments, the processor 21 may also include an artificial intelligence (Artificial Intelligence, AI) processor for handling computing operations related to machine learning.
Memory 20 may include one or more computer-readable storage media, which may be non-transitory. Memory 20 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In this embodiment, the memory 20 is at least used for storing a computer program 201, which, when loaded and executed by the processor 21, is capable of implementing the relevant steps of the service grid communication method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may further include an operating system 202, data 203, and the like, where the storage manner may be transient storage or permanent storage. The operating system 202 may include Windows, unix, linux, among others. The data 203 may include, but is not limited to, data of a service grid communication method, and the like.
In some embodiments, the service grid communication device may further include a display 22, an input-output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
Those skilled in the art will appreciate that the structure shown in fig. 4 is not limiting of the service grid communication device and may include more or fewer components than shown.
The service grid communication device provided by the embodiment of the application comprises a memory and a processor, wherein the processor can realize the following method when executing a program stored in the memory: a service grid communication method.
Finally, the present application also provides a corresponding embodiment of the computer readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps as described in the method embodiments above.
It will be appreciated that the methods of the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a computer readable storage medium. With such an understanding, the technical solution of the present application, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and performs all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk.
The service grid communication method, system, device and storage medium provided by the application are described above in detail. In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section. It should be noted that it would be obvious to those skilled in the art that various improvements and modifications can be made to the present application without departing from the principles of the present application, and such improvements and modifications fall within the scope of the claims of the present application.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.

Claims (10)

1. A method of service grid communication, the method comprising:
acquiring RDMA network card information;
generating a corresponding number of VFs according to the RDMA network card information;
acquiring the pod carrying the corresponding mark and assigning the pod to the corresponding node;
assigning the VF to the pod.
2. The service grid communication method of claim 1, wherein the obtaining RDMA network card information comprises:
starting a corresponding proxy end on the node;
and controlling the proxy end to identify the RDMA network card and acquire the information of the RDMA network card.
3. The service grid communication method according to claim 2, wherein the number of proxy ends corresponds to the number of nodes.
4. The service grid communication method according to claim 3, wherein after generating the corresponding number of VFs according to the RDMA network card information, further comprising:
controlling the proxy end to identify the information of the VF according to the information of the RDMA network card;
and controlling the proxy end to report the information of the VF.
5. The service grid communication method according to claim 4, wherein, after the acquiring the pod carrying the corresponding mark and assigning the pod to the corresponding node, the method further comprises:
judging whether kubelet monitors the creation event of the pod;
if yes, corresponding identifiers are distributed to the pod; and storing the identification and the resource information into a local file.
6. The service grid communication method according to claim 5, wherein controlling the proxy end to assign the VF to the pod so that the pod can use RDMA comprises:
controlling the kubelet to establish network connection for each pod;
controlling the proxy end to acquire the identification of the pod according to the resource information;
and distributing the VF to the pod according to the identification.
7. The service grid communication method according to any one of claims 1 to 6, wherein, after controlling the proxy end to assign the VF to the pod so that the pod can use RDMA, the method further comprises:
judging whether the node starts RDMA service or not;
if yes, determining that the pod of the current node carries a corresponding mark;
registering the RDMA related instance with the current node so that each of the pod with the corresponding tag directly uses the RDMA service according to the instance.
8. A service grid communication system, the system comprising:
the acquisition module is used for acquiring RDMA network card information;
the generating module is used for generating VFs with corresponding numbers according to the RDMA network card information;
the first distribution module is used for acquiring the pod with the corresponding mark and distributing the pod to the corresponding node;
and the second distribution module is used for distributing the VF to the pod.
9. A service grid communication device comprising a memory for storing a computer program;
a processor for implementing the steps of the service grid communication method according to any one of claims 1 to 7 when executing said computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the service grid communication method according to any of claims 1 to 7.
CN202310349909.7A 2023-03-30 2023-03-30 Service grid communication method, system, device and storage medium Pending CN116436968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310349909.7A CN116436968A (en) 2023-03-30 2023-03-30 Service grid communication method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310349909.7A CN116436968A (en) 2023-03-30 2023-03-30 Service grid communication method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN116436968A true CN116436968A (en) 2023-07-14

Family

ID=87080877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310349909.7A Pending CN116436968A (en) 2023-03-30 2023-03-30 Service grid communication method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN116436968A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117061338A (en) * 2023-08-16 2023-11-14 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards
CN117061338B (en) * 2023-08-16 2024-06-07 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination