CN115665026A - Cluster networking method and device - Google Patents


Info

Publication number
CN115665026A
CN115665026A
Authority
CN
China
Prior art keywords
network card
computing unit
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211175297.6A
Other languages
Chinese (zh)
Inventor
牛丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202211175297.6A priority Critical patent/CN115665026A/en
Publication of CN115665026A publication Critical patent/CN115665026A/en
Pending legal-status Critical Current

Abstract

The invention discloses a cluster networking method and device, and relates to the field of computer technology. One embodiment of the method comprises: in response to an operation of creating a computing unit in a computing node of a cluster, taking a physical network card of a host of the computing node as a master network card and creating a unit slave network card for the computing unit; adding the unit slave network card to the computing unit; and applying for a subnet address for the computing unit and issuing the subnet address to an access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, wherein the outlet of the route is a public slave network card, i.e. a network card created based on the master network card and located on the host. This implementation is applicable to low-version systems and, without upgrading the system or changing existing services, reduces the loss on traffic forwarded between the container and the host and improves the overall forwarding performance of the network.

Description

Cluster networking method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for cluster networking.
Background
At present, when a Kubernetes (abbreviated as k8s) cluster performs network networking, the basic forwarding architectures in use mainly include: Veth (a virtual network technology in which veth-type network cards appear in pairs) + BGP (Border Gateway Protocol, a routing protocol between autonomous systems), ipvlan (a virtual network technology which, despite its name, is unrelated to VLAN) + BGP, and the like.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
in the Veth + BGP networking mode, network traffic from the container undergoes one extra forwarding step, so the forwarding path is long, the performance loss is large, and the performance is poor;
although the ipvlan + BGP networking mode improves network data forwarding performance compared with Veth + BGP, it requires a high-version kernel (stable version >= 4.2) to support the ipvlan technology, so it is not suitable for older kernel versions and its scope of application is limited.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for cluster networking, which are applicable to a low-version system, and reduce loss of forwarding traffic between a container and a host and improve overall forwarding performance of a network under the conditions that the system is not upgraded and an existing service is not changed.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method for cluster networking, including:
in response to an operation of creating a computing unit in a computing node of a cluster, taking a physical network card of a host of the computing node as a master network card, and creating a unit slave network card for the computing unit;
adding the unit slave network card into the computing unit;
and applying for a subnet address for the computing unit, and issuing the subnet address to an access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, wherein an outlet of the route is a public slave network card, and the public slave network card is a network card which is created based on the master network card and is positioned on the host.
Optionally, the unit slave network card and the public slave network card are created according to a set network mode by calling a container network interface.
Optionally, the set network mode is a network mode based on a macvlan network technology and a routing protocol between autonomous systems.
Optionally, the method further comprises: responding to the operation of data sending of the computing node, sending the data to the master network card through a unit slave network card of the first computing unit sending the data, and sending the data to an access switch corresponding to the computing node through the master network card so as to send the data.
Optionally, the method further comprises: responding to the receiving of data by an access switch of the computing node, and acquiring a second computing unit identifier of the data to be received; searching a corresponding route from the host to the second computing unit according to the second computing unit identifier; and forwarding the data to the public slave network card according to the route, and forwarding the data to the second computing unit through the public slave network card.
Optionally, the method further comprises: and detecting the subnet gateway of the computing unit at regular time, and updating gateway information to all computing units corresponding to the host.
Optionally, the method further comprises: keeping the reverse route checking function of the system in an off state.
According to another aspect of the embodiments of the present invention, there is provided a device for cluster networking, including:
a slave network card creation module, configured to, in response to an operation of creating a computing unit in a computing node of a cluster, create a unit slave network card for the computing unit by using a physical network card of a host of the computing node as a master network card;
the subordinate network card deployment module is used for adding the unit subordinate network cards into the computing unit;
and a subnet address issuing routing module, configured to apply for a subnet address for the computing unit, and issue the subnet address to an access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, where an outlet of the route is a public slave network card, and the public slave network card is a network card that is created based on the master network card and is located on the host.
According to another aspect of the embodiments of the present invention, there is provided an electronic device for cluster networking, including: one or more processors; the storage device is configured to store one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the method for cluster networking provided by the embodiment of the present invention.
According to a further aspect of the embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method for cluster networking provided by the embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits: in response to an operation of creating a computing unit in a computing node of the cluster, a physical network card of the host of the computing node is taken as a master network card and a unit slave network card is created for the computing unit; the unit slave network card is added to the computing unit; a subnet address is applied for the computing unit and issued to the access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, the outlet of the route being a public slave network card, i.e. a network card created based on the master network card and located on the host. This embodiment is applicable to low-version systems and, without upgrading the system or changing existing services, reduces the loss on traffic forwarded between the container and the host and improves the overall forwarding performance of the network.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram of main steps of a method for cluster networking according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a prior art cluster networking scheme;
FIG. 3 is a schematic diagram illustrating an implementation principle of a cluster networking scheme according to an embodiment of the present invention;
fig. 4 is a control plane implementation schematic diagram of a cluster networking scheme of an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating a data plane implementation principle of a cluster networking scheme according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main modules of an apparatus of a cluster networking according to an embodiment of the present invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the introduction of the embodiments of the invention, the terms and their definitions are as follows:
underlay: a network of an infrastructure forwarding architecture;
kubelet: the "node agent" running on each node in a Kubernetes cluster;
IPAM: IP Address Management; here, cluster IP address management;
cni: Container Network Interface; in the embodiments of the present invention, specifically the cni program;
cniserver: Container Network Interface Server; in the embodiments of the present invention, it refers to the cniserver program;
macvlan: a virtual network technology; despite the name, it is not tied to VLAN. macvlan allows a plurality of virtual network interfaces (slaves) to be configured on one network interface (master) of the host; these interfaces have independent MAC addresses and can also be configured with IP addresses for communication. Virtual machine or container networks under macvlan are in the same network segment as the host and share the same broadcast domain;
ipvlan: a virtual network technology which, despite its name, is unrelated to VLAN. Like macvlan, ipvlan virtualizes multiple virtual network interfaces from one host interface. An important difference is that all the virtual interfaces share the same MAC address and are distinguished only by their different IP addresses. ipvlan is a relatively new feature of the Linux kernel: support begins in Linux 3.19, but the recommended stable version is >= 4.2;
veth: a virtual network technology in which veth-type network cards appear in pairs;
BGP: Border Gateway Protocol, a routing protocol between autonomous systems;
docker: an open-source application container engine that lets developers package their applications and dependencies into a portable image and distribute it to any popular Linux or Windows machine; it can also be used for virtualization;
and (3) POD: refers to the smallest, deployable computing unit created and managed in a kubernets cluster. A POD is a group (one or more) of containers.
At present, the three-layer basic forwarding architecture (underlay) network technologies in a k8s cluster mainly include veth + BGP, ipvlan + BGP, and the like. The main principle of the veth + BGP scheme is as follows:
1. create a pair of veth network cards; add one card into the container namespace (docker ns) and leave the other in the host namespace (host ns);
2. publish the container IP (or the computing unit POD subnet) to the local TOR (access switch) through BGP;
3. network traffic inside the container is forwarded through the in-container veth card to its peer veth card on the host, and is then forwarded out through the TOR corresponding to the host after route (policy) lookup;
4. for network traffic received from the TOR and destined for the container, the host receives it through its veth card, looks up the route, sends it out through the host-side veth card, and the container receives the packet through its veth card.
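The steps above can be sketched as command construction. This is a hypothetical illustration: the interface and namespace names (veth_h0, veth_c0, pod-ns) are examples, and the BGP publication in step 2 is represented only by a placeholder string, not a real BGP command.

```python
# Sketch of the veth + BGP setup: create a veth pair, move one end into the
# container namespace, and record the container IP for BGP publication.
# All names are illustrative; the last entry is a placeholder, not a real CLI.

def veth_setup_cmds(host_if: str, cont_if: str, cont_ns: str, pod_ip: str):
    return [
        f"ip link add {host_if} type veth peer name {cont_if}",  # step 1: veth pair
        f"ip link set {cont_if} netns {cont_ns}",                # one end into docker ns
        f"announce-via-bgp {pod_ip}",                            # step 2 placeholder
    ]

for cmd in veth_setup_cmds("veth_h0", "veth_c0", "pod-ns", "10.0.1.2/32"):
    print(cmd)
```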
The main principle of the ipvlan + BGP scheme is:
1. based on the physical network card of the host, create the ipvlan master network card (a host virtual network card) and slave network cards, and add the slave network cards into the docker ns;
2. publish the container IP (or the computing unit POD subnet) to the local TOR (access switch) through BGP;
3. network traffic inside the container is forwarded out directly from the host virtual network card (master network card) through the ipvlan technology;
4. for network traffic received from the TOR and destined for the container, after the host's physical network card receives it, it enters the container's slave network card through the ipvlan technology to complete packet reception.
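The performance difference between the two schemes can be seen by comparing their egress forwarding paths. The following is a simplified hop-count model for illustration only, not a measurement; the hop names merely paraphrase the step lists above.

```python
# Simplified model of the egress forwarding path per scheme, illustrating
# why veth + BGP traverses more hops than ipvlan + BGP on the way out.
PATHS = {
    "veth+bgp":   ["container veth", "host veth", "route lookup",
                   "physical NIC", "TOR"],
    "ipvlan+bgp": ["container ipvlan slave", "physical NIC (master)", "TOR"],
}

def hops(scheme: str) -> int:
    return len(PATHS[scheme])

print(hops("veth+bgp"), hops("ipvlan+bgp"))
```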
Compared with veth + BGP, the ipvlan + BGP networking scheme improves network data forwarding performance, but it requires a high-version kernel to support the ipvlan technology, and some older machines and operating systems in production (e.g., Linux kernel 3.10.x) do not support ipvlan. How to improve network forwarding performance overall without upgrading the system or changing existing services is therefore the technical problem to be solved by the present invention.
Fig. 1 is a schematic diagram of the main steps of a cluster networking method according to an embodiment of the present invention. As shown in fig. 1, the method for cluster networking according to the embodiment of the present invention mainly includes the following steps S101 to S103. In the embodiment of the present invention, each cluster may include a plurality of computing nodes, each consisting of a host and the computing units (PODs) created by the host, where each computing unit may be one or more containers. For each computing node, the computing units in the node perform data interaction only with the host; the host transmits data by sending it to the access switch corresponding to the computing node (the local end). When the access switch receives data, it forwards the data through the host of the corresponding computing node to the corresponding computing unit to complete data reception.
Step S101: responding to the operation of creating a computing unit in a computing node of a cluster, taking a physical network card of a host of the computing node as a master network card, and creating a unit slave network card for the computing unit;
step S102: adding the unit slave network card into the computing unit;
step S103: and applying for a subnet address for the computing unit, and issuing the subnet address to an access switch corresponding to the computing node so that the access switch generates a route from the host to the computing unit, wherein an outlet of the route is a public slave network card, and the public slave network card is a network card which is created based on the master network card and is positioned on the host.
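Steps S101 to S103 above can be sketched as pure bookkeeping. This is an illustrative model only: the IPAM application and the BGP publication to the TOR are reduced to dictionary updates, and all names (pslave prefix, mslave, pod-a) are examples.

```python
# Sketch of steps S101-S103: create a unit slave card for a new POD (S101),
# attach it to the POD (S102), and record the TOR route to the POD IP whose
# outlet is the public slave card mslave (S103). All names are illustrative.

def create_pod(state: dict, pod: str, ip: str) -> dict:
    slave = f"pslave-{pod}"                                       # S101
    state["pods"][pod] = {"slave": slave, "ip": ip}               # S102
    state["tor_routes"][ip] = {"via": "host", "outlet": "mslave"} # S103
    return state

state = {"pods": {}, "tor_routes": {}}
create_pod(state, "pod-a", "10.0.1.2")
print(state["tor_routes"]["10.0.1.2"]["outlet"])
```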
According to one embodiment of the present invention, the unit slave network cards and the common slave network card are created according to a set network mode by calling the container network interface.
According to another embodiment of the present invention, the set network mode is a network mode based on the macvlan network technology and a routing protocol between autonomous systems (BGP). In order to improve network forwarding performance on old operating systems, the embodiment of the invention performs cluster networking based on the macvlan + BGP network mode.
According to a further embodiment of the invention, the method further comprises: responding to the operation of data transmission of the computing node, transmitting the data to the master network card through a unit slave network card of a first computing unit for transmitting the data, and transmitting the data to an access switch corresponding to the computing node through the master network card for data transmission.
According to yet another embodiment of the invention, the method further comprises: responding to the fact that an access switch of the computing node receives data, and acquiring a second computing unit identifier of the data to be received; searching a corresponding route from the host to the second computing unit according to the second computing unit identifier; and forwarding the data to the public slave network card according to the route, and forwarding the data to the second computing unit through the public slave network card.
According to yet another embodiment of the invention, the method further comprises: and detecting the subnet gateway of the computing unit at regular time, and updating gateway information to all the computing units corresponding to the host.
According to yet another embodiment of the invention, the method further comprises: keeping the reverse route checking function of the system in an off state. Since cluster networking is performed based on the macvlan + BGP network mode, the ingress and egress forwarding paths of a computing unit POD's network traffic on the host are inconsistent, so the rp_filter check (reverse route checking function) needs to be turned off.
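Disabling reverse-path filtering is ordinarily done through sysctl. The following sketch only builds the command strings involved (it does not execute them); the interface names passed in are examples, and a real deployment would cover every interface on the forwarding path.

```python
# Sketch: sysctl commands that disable the kernel's reverse-path
# (rp_filter) check, needed because ingress and egress paths differ
# under macvlan + BGP. Interface names are illustrative.

def rp_filter_off_cmds(ifaces):
    keys = ["net.ipv4.conf.all.rp_filter"] + [
        f"net.ipv4.conf.{i}.rp_filter" for i in ifaces
    ]
    return [f"sysctl -w {k}=0" for k in keys]

for c in rp_filter_off_cmds(["eth1", "mslave"]):
    print(c)
```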
The cluster networking scheme of the embodiment of the invention is described below with reference to specific embodiments.
Fig. 2 is a schematic diagram of a prior art cluster networking scheme. As shown in fig. 2, in the prior art, for an old operating system, cluster networking is performed based on a network mode of veth + bgp. The specific implementation principle is as follows:
1. the container network interface program cni adopts a network mode of veth + bgp;
2. when the host creates a computing unit POD through kubelet (the node agent running on each node in a Kubernetes cluster), the network plug-in cni is called; according to the currently set network mode veth + bgp, cni creates a veth pair, adds one end of the pair into the computing unit's POD ns, and leaves the other end in the host ns;
3. cni then calls the cluster IP address management IPAM; after the computing unit POD successfully applies for a subnet address, the computing unit's POD IP address is issued to the access switch TOR through the bgp protocol, so that the TOR can generate a route from the host to the computing unit;
4. after the POD IP is successfully issued, network traffic to the POD of the computing unit is forwarded to the corresponding computing node host through the TOR, and then routed and forwarded to the inside of the POD. The host and the access switch are directly connected through a physical network card, so that data interaction can be directly carried out.
Fig. 3 is a schematic diagram of an implementation principle of a cluster networking scheme according to an embodiment of the present invention. In the embodiment of the invention, cluster networking is performed based on a macvlan + BGP network mode, so that the method and the device are suitable for an old operating system, and meanwhile, the data forwarding efficiency can be improved. As shown in fig. 3, the implementation principle of the cluster networking scheme according to the embodiment of the present invention is as follows:
1. the container network interface program cni adopts the macvlan + bgp network mode; it takes the physical network card eth1 of the computing node's host as the macvlan master network card, creates a public slave network card (a macvlan slave) based on the master, names it mslave, and deploys it on the host;
2. when the host creates a computing unit POD through kubelet, the network plug-in cni is called; according to the currently set network mode, cni creates a unit slave network card (POD slave network card) based on the master network card, names it pslave, and adds the unit slave network card pslave into the namespace (docker ns) of each container in the computing unit;
3. calling the IPAM by the cni, issuing the POD IP address to an access switch TOR corresponding to the computing node through a BGP protocol after the POD is successfully applied for the subnet address, so that the TOR can generate a route from the host to the computing unit;
4. after the POD IP is successfully issued, network traffic destined for the computing unit is forwarded through the TOR to the host of the corresponding computing node, and then forwarded by routing from the public slave network card mslave on the host to the unit slave network card of the computing unit, so as to deliver the data into the POD.
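The setup steps above can be sketched as the commands cni would issue. The names eth1, mslave, and pslave are those used in the description; the namespace name, POD IP, and exact command sequence are illustrative assumptions (strings only, not executed).

```python
# Sketch of the macvlan + BGP setup: public slave card mslave on the host,
# one unit slave card per POD moved into the container namespace, and a
# host route to the POD IP whose outlet is mslave.

def macvlan_pod_cmds(master: str, pod_ns: str, pod_if: str, pod_ip: str):
    return [
        f"ip link add link {master} name mslave type macvlan mode bridge",
        f"ip link add link {master} name {pod_if} type macvlan mode bridge",
        f"ip link set {pod_if} netns {pod_ns}",
        f"ip route add {pod_ip}/32 dev mslave",  # outlet of the route is mslave
    ]

for c in macvlan_pod_cmds("eth1", "pod-ns", "pslave0", "10.0.1.2"):
    print(c)
```

In practice mslave would be created once at cniserver initialization, while the pslave card and the route are created per POD.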
Fig. 4 is a schematic diagram illustrating a control plane implementation principle of a cluster networking scheme according to an embodiment of the present invention. As shown in fig. 4, in the embodiment of the present invention, cluster networking is performed based on a macvlan + BGP network mode, and the implementation principle of the control plane is mainly as follows:
1. the host creates a computing unit POD through kubelet and calls the network cni plug-in (in the figure, the network cni plug-in comprises cni and cniserver);
2. when cniserver is initialized, according to the network mode (macvlan + BGP), it creates the public slave network card mslave (shared by all computing units POD) based on the physical network card eth1 as the master network card; when a computing unit POD is created, a unit slave network card (POD slave network card, one per POD), named pslave, is created based on the master network card;
3. cni adds the unit slave network card pslave to the POD (operations such as renaming may also be carried out), and at the same time adds on the host a route to the computing unit POD whose outlet is mslave. Here the outlet refers to an intermediate or transit node in a route, i.e. the node through which data passes when being forwarded from the host to the computing unit;
4. since the gateway's mac address cannot be learned inside the POD in the macvlan + BGP network mode, cniserver periodically detects the POD's subnet gateway and updates the gateway information to all container namespaces on the current host (except host-network containers). The gateway information includes, for example, the gateway's IP (network address) and the gateway's mac (hardware address). cniserver is deployed on the computing node; it periodically obtains the gateway mac from the neighbor tables in all PODs on the current node, and if an obtained gateway mac is inconsistent with the mac address of the eth1 gateway in the neighbor table on the current node, the gateway mac in the POD is modified to the mac address of the eth1 gateway on the computing node. In this way cniserver periodically detects the POD's subnet gateway.
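The periodic gateway check in point 4 can be sketched as pure logic. Neighbor tables are modeled here as dictionaries and the MAC values are placeholders; real code would read and write them via netlink or `ip neigh` inside each POD's namespace.

```python
# Sketch of cniserver's periodic gateway refresh: if a POD's neighbor table
# holds a gateway MAC different from the host's eth1 gateway MAC, rewrite it
# and report which PODs were updated.

def refresh_gateway_mac(host_gw_mac: str, pod_neigh_tables: dict) -> list:
    updated = []
    for pod, table in pod_neigh_tables.items():
        if table.get("gateway") != host_gw_mac:
            table["gateway"] = host_gw_mac  # bring POD in line with eth1 gateway
            updated.append(pod)
    return updated

pods = {"pod-a": {"gateway": "aa:aa"}, "pod-b": {"gateway": "bb:bb"}}
print(refresh_gateway_mac("aa:aa", pods))  # ['pod-b']
```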
Fig. 5 is a schematic diagram of a data plane implementation principle of a cluster networking scheme according to an embodiment of the present invention. As shown in fig. 5, in the embodiment of the present invention, cluster networking is performed based on a macvlan + BGP network mode, and the implementation principle of the data plane is mainly as follows:
1. after the subnet addresses (POD IPs) of all computing units on the current computing node, or the POD subnet of the current node, are distributed through the BGP protocol, a route from the host to each computing unit is generated on the access switch TOR, where each computing unit can be distinguished and identified by its POD IP;
2. network traffic inside the computing unit POD is sent out, via routing, from the unit slave network card pslave, and the macvlan technology sends the data directly from the host's master network card to the access switch, thereby completing data transmission, as illustrated by the outflow direction shown in fig. 5;
3. after network traffic entering a computing unit POD from the access switch TOR is received by the host's master network card eth1, because the destination mac is the mac address of the host's eth1, the message enters the host network namespace protocol stack; a route from the host to the computing unit is found according to the destination IP address, the outlet interface found is the public slave network card mslave, the message is sent out from mslave, and through the macvlan technology it again enters the destination computing unit POD via the unit slave network card pslave, as illustrated by the inflow direction shown in fig. 5;
4. since the ingress and egress forwarding paths of the computing unit POD's network traffic on the host are inconsistent, the rp_filter check needs to be turned off.
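The ingress path in points 1 and 3 can be sketched as a route lookup. The routing table here is a toy dictionary keyed by destination IP (real lookups happen in the kernel's FIB), and the IP addresses are examples.

```python
# Sketch of the host's ingress handling: a packet whose destination MAC is
# eth1's is routed by destination IP; the outlet interface found is mslave,
# through which the packet reaches the POD's unit slave card pslave.

def ingress_outlet(routes: dict, dst_ip: str) -> str:
    # Unknown destinations fall through; modeled here as "drop".
    return routes.get(dst_ip, "drop")

routes = {"10.0.1.2": "mslave", "10.0.1.3": "mslave"}
print(ingress_outlet(routes, "10.0.1.2"))  # mslave
print(ingress_outlet(routes, "10.9.9.9"))  # drop
```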
Fig. 6 is a schematic diagram of main modules of a device for cluster networking according to an embodiment of the present invention. As shown in fig. 6, the apparatus 600 for cluster networking according to the embodiment of the present invention mainly includes a slave network card creating module 601, a slave network card deploying module 602, and a subnet address publishing routing module 603.
A slave network card creating module 601, configured to, in response to an operation of creating a computing unit in a computing node of a cluster, create a unit slave network card for the computing unit by using a physical network card of a host of the computing node as a master network card;
a slave network card deployment module 602, configured to add the unit slave network card to the computing unit;
a subnet address publishing routing module 603, configured to apply for a subnet address for the computing unit, and publish the subnet address to an access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, where an outlet of the route is a public slave network card, and the public slave network card is a network card that is created based on the master network card and is located on the host.
According to one embodiment of the present invention, the unit slave network card and the common slave network card are created according to a set network mode by calling a container network interface.
According to another embodiment of the present invention, the set network mode is a network mode based on the macvlan network technology and a routing protocol between autonomous systems.
According to another embodiment of the present invention, the apparatus 600 for cluster networking further comprises a data sending module (not shown in the figure) configured to: responding to the operation of data transmission of the computing node, transmitting the data to the master network card through a unit slave network card of a first computing unit for transmitting the data, and transmitting the data to an access switch corresponding to the computing node through the master network card for data transmission.
According to another embodiment of the present invention, the apparatus 600 for cluster networking further comprises a data receiving module (not shown in the figure) for: responding to the receiving of data by an access switch of the computing node, and acquiring a second computing unit identifier of the data to be received; searching a corresponding route from the host to the second computing unit according to the second computing unit identifier; and forwarding the data to the public slave network card according to the route, and forwarding the data to the second computing unit through the public slave network card.
According to another embodiment of the present invention, the apparatus 600 for cluster networking further includes a network management information updating module (not shown in the figure) configured to: and detecting the subnet gateway of the computing unit at regular time, and updating gateway information to all the computing units corresponding to the host.
According to another embodiment of the present invention, the apparatus 600 for cluster networking further comprises a function management module (not shown in the figure) for: keeping the reverse route checking function of the system in an off state.
According to the technical scheme of the embodiment of the invention, in response to an operation of creating a computing unit in a computing node of the cluster, the physical network card of the host of the computing node is taken as a master network card and a unit slave network card is created for the computing unit; the unit slave network card is added to the computing unit; a subnet address is applied for the computing unit and issued to the access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, the outlet of the route being a public slave network card, i.e. a network card created based on the master network card and located on the host. In the embodiment of the invention, the cluster network adopts the macvlan + BGP network mode, which breaks the limitation that macvlan can only be applied to a two-layer network, replaces the inefficient three-layer veth traffic forwarding, and replaces the ipvlan technology that only high-version kernels (stable version >= 4.2) can support.
Fig. 7 shows an exemplary system architecture 700 of a method of cluster networking or a device of cluster networking to which an embodiment of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal devices 701, 702, 703 to interact with a server 705 over a network 704 to receive or send messages and the like. The terminal devices 701, 702, 703 may have installed thereon various communication client applications, such as a web browser application, a search-type application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users with the terminal devices 701, 702, 703. For received data such as a computing unit creation request, the background management server may take the physical network card of the host of the computing node as the master network card and create a unit slave network card for the computing unit; add the unit slave network card to the computing unit; apply for a subnet address for the computing unit and issue the subnet address to the access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, where the outlet of the route is a public slave network card, the public slave network card being a network card located on the host and created based on the master network card; and feed back a processing result (e.g., cluster networking information — for example only) to a terminal device.
It should be noted that the method for cluster networking provided in the embodiment of the present invention is generally executed by the server 705, and accordingly, a device for cluster networking is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device or server implementing embodiments of the present invention. The terminal device or the server shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, a computer system 800 includes a Central Processing Unit (CPU) 801 which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read therefrom is installed into the storage section 808 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. When executed by the Central Processing Unit (CPU) 801, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software or hardware. The described units or modules may also be provided in a processor, and may be described as: a processor comprises a slave network card creating module, a slave network card deploying module and a subnet address publishing routing module. The names of these units or modules do not in some cases constitute a limitation on the units or modules themselves, and for example, the slave network card creation module may also be described as "a module for creating a unit slave network card for a computing unit in response to an operation of creating a computing unit in a computing node of a cluster with a physical network card of a host of the computing node as a master network card".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: in response to an operation of creating a computing unit in a computing node of a cluster, taking a physical network card of a host of the computing node as a master network card, and creating a unit slave network card for the computing unit; adding the unit slave network card into the computing unit; and applying for a subnet address for the computing unit, and issuing the subnet address to an access switch corresponding to the computing node so that the access switch generates a route from the host to the computing unit, wherein an outlet of the route is a public slave network card, and the public slave network card is a network card which is created based on the master network card and is positioned on the host.
According to the technical solution of the embodiment of the present invention, in response to an operation of creating a computing unit in a computing node of a cluster, a unit slave network card is created for the computing unit with the physical network card of the host of the computing node as the master network card; the unit slave network card is added to the computing unit; a subnet address is applied for the computing unit and issued to the access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, the outlet of the route being a public slave network card, which is a network card located on the host and created based on the master network card.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for cluster networking, comprising:
in response to an operation of creating a computing unit in a computing node of a cluster, taking a physical network card of a host of the computing node as a master network card, and creating a unit slave network card for the computing unit;
adding the unit slave network card into the computing unit;
and applying for a subnet address for the computing unit, and issuing the subnet address to an access switch corresponding to the computing node so that the access switch generates a route from the host to the computing unit, wherein an outlet of the route is a public slave network card, and the public slave network card is a network card which is created based on the master network card and is positioned on the host.
2. The method of claim 1, wherein the unit slave network card and the common slave network card are created according to a set network mode by calling a container network interface.
3. The method according to claim 2, wherein the set network mode is a network mode based on macvlan network technology and a routing protocol between autonomous systems.
4. The method of claim 1, further comprising:
responding to the operation of data transmission of the computing node, transmitting the data to the master network card through a unit slave network card of a first computing unit for transmitting the data, and transmitting the data to an access switch corresponding to the computing node through the master network card for data transmission.
5. The method of claim 1, further comprising:
responding to the fact that an access switch of the computing node receives data, and acquiring a second computing unit identifier of the data to be received;
searching a corresponding route from the host to the second computing unit according to the second computing unit identifier;
and forwarding the data to the public slave network card according to the route, and forwarding the data to the second computing unit through the public slave network card.
6. The method of claim 1, further comprising:
and detecting the subnet gateway of the computing unit at regular time, and updating gateway information to all the computing units corresponding to the host.
7. The method of claim 1, further comprising:
keeping the reverse route checking function of the system in an off state.
8. An apparatus for cluster networking, comprising:
a slave network card creation module, configured to, in response to an operation of creating a computing unit in a computing node of a cluster, create a unit slave network card for the computing unit by using a physical network card of a host of the computing node as a master network card;
the subordinate network card deployment module is used for adding the unit subordinate network cards into the computing unit;
and a subnet address publishing routing module, configured to apply for a subnet address for the computing unit, and publish the subnet address to an access switch corresponding to the computing node, so that the access switch generates a route from the host to the computing unit, where an outlet of the route is a public slave network card, and the public slave network card is a network card that is created based on the master network card and is located on the host.
9. An electronic device for cluster networking, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202211175297.6A 2022-09-26 2022-09-26 Cluster networking method and device Pending CN115665026A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211175297.6A CN115665026A (en) 2022-09-26 2022-09-26 Cluster networking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211175297.6A CN115665026A (en) 2022-09-26 2022-09-26 Cluster networking method and device

Publications (1)

Publication Number Publication Date
CN115665026A true CN115665026A (en) 2023-01-31

Family

ID=84985715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211175297.6A Pending CN115665026A (en) 2022-09-26 2022-09-26 Cluster networking method and device

Country Status (1)

Country Link
CN (1) CN115665026A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115801733A (en) * 2023-02-02 2023-03-14 天翼云科技有限公司 Network address allocation method and device, electronic equipment and readable medium


Similar Documents

Publication Publication Date Title
CN112470436B (en) Systems, methods, and computer-readable media for providing multi-cloud connectivity
US11563669B2 (en) Method for implementing network virtualization and related apparatus and communications system
US8743894B2 (en) Bridge port between hardware LAN and virtual switch
CA2968964C (en) Source ip address transparency systems and methods
CN105610632B (en) Virtual network equipment and related method
US11398956B2 (en) Multi-Edge EtherChannel (MEEC) creation and management
CN102577256A (en) Method and apparatus for transparent cloud computing with a virtualized network infrastructure
US20220303335A1 (en) Relaying network management tasks using a multi-service receptor network
CN112398687B (en) Configuration method of cloud computing network, cloud computing network system and storage medium
WO2019100266A1 (en) Mobile edge host-machine service notification method and apparatus
CN108574613B (en) Two-layer intercommunication method and device for SDN data center
CN103631652A (en) Method and system for achieving virtual machine migration
CN110505074B (en) Application modularization integration method and device
US20220166715A1 (en) Communication system and communication method
CN115379010A (en) Container network construction method, device, equipment and storage medium
CN115665026A (en) Cluster networking method and device
CN109728926B (en) Communication method and network device
CN111130978B (en) Network traffic forwarding method and device, electronic equipment and machine-readable storage medium
EP4236270A2 (en) Software defined access fabric without subnet restriction to a virtual network
CN111866100A (en) Method, device and system for controlling data transmission rate
CN111786888B (en) Interface isolation method and device
CN108965494A (en) Data transmission method and device in data system
WO2023169364A1 (en) Routing generation method and apparatus, and data message forwarding method and apparatus
CN116319514B (en) Data processing method and related device
US11243722B2 (en) System and method of providing universal mobile internet proxy printing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination