CN112104566B - Processing method and device for load balancing - Google Patents

Processing method and device for load balancing

Info

Publication number
CN112104566B
CN112104566B (application CN202010989058.9A)
Authority
CN
China
Prior art keywords
message
instance
computing node
target
load balancing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010989058.9A
Other languages
Chinese (zh)
Other versions
CN112104566A (en)
Inventor
陈佳业
肖福龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010989058.9A
Publication of CN112104566A
Application granted
Publication of CN112104566B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Abstract

The application discloses a processing method and device for load balancing. The method comprises the following steps: acquiring a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information; when it is determined that the first message comes from the first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of an instance in the first computing node; when it is determined that the target back-end instance runs on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance. Through the method and the device, the problem of excessively high load on the load balancing network node in the related art is solved.

Description

Processing method and device for load balancing
Technical Field
The application relates to the technical field of load balancing, in particular to a processing method and device for load balancing.
Background
Current technical schemes for four-layer (L4) load balancing traffic forwarding in the cloud mainly fall into two types: first, providing the load balancing service through a centralized load balancing gateway, which processes, in a centralized manner, the traffic of in-cloud instances that need to access the load balancing service; and second, realizing distributed load balancing traffic processing through traditional physical load balancing equipment. In the centralized scheme, every instance with permission to access load balancing must steer its load balancing traffic to the corresponding centralized load balancer, so the load balancing node has to process a large amount of traffic and its load becomes very high. This degrades performance, increases latency, reduces throughput, causes packet loss and similar problems, worsens scalability, and widens the range of access affected by a failure.
Compared with the centralized scheme, the distributed scheme based on physical equipment performs load balancing in a distributed manner, which reduces the failure risk of centralized load balancing and improves processing performance. However, because traditional physical load balancers cannot be controlled centrally, setting and adjusting the forwarding policy of the whole network is very difficult, which greatly reduces the flexibility of load balancing adjustment and makes it hard to scale the gateways in and out reasonably according to traffic conditions.
No effective solution has yet been proposed for the problem of excessively high load on load balancing network nodes in the related art.
Disclosure of Invention
The main purpose of the application is to provide a processing method and device for load balancing, so as to solve the problem of excessively high load on a load balancing network node in the related art.
To achieve the above object, according to one aspect of the present application, there is provided a processing method of load balancing, the processing method being applied to at least one computing node of a distributed system, the method comprising: acquiring a first message, wherein the first message is used for executing load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to come from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
Further, after the first message is acquired, the method includes: when it is determined that the first message comes from the first computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to the back-end instance information in the link information, and storing the MAC layer information in the link information into the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
Further, determining the target back-end instance according to the first message includes: calculating a hash value of the first message; performing a calculation on the hash value to obtain a back-end instance selection value; and determining the target back-end instance according to the back-end instance selection value.
Further, determining the target back-end instance according to the back-end instance selection value comprises: if the back-end instance selection value matches a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value matches a second preset value, determining that the target back-end instance of the first message is a far-end instance.
Further, after it is determined that the target back-end instance of the first message is a far-end instance, the method includes: determining a source address and a destination address according to the target back-end instance on the far end, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
Further, after the first message is acquired, the method includes: judging whether the IP address in the first message is matched with a preset address, and if the IP address in the first message is not matched with the preset address, stopping processing the load balancing access service.
Further, after the first message is acquired, the method includes: if the first message is initiated by the second computing node and the first computing node has the link information in the first message, storing the back-end instance information in the link information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
To achieve the above object, according to another aspect of the present application, there is provided a processing apparatus for load balancing. The apparatus is applied to at least one computing node of a distributed system, and comprises: an obtaining unit, configured to obtain a first packet, where the first packet is used to execute a load balancing access service on a first computing node, and the first packet includes link information; the first determining unit is used for determining a target back-end instance according to the first message when the first message is from a first computing node and the link information does not exist on the first computing node, wherein the target back-end instance is used for responding to a request of the first computing node; the second determining unit is configured to determine, when the target backend instance runs on the first computing node, a source address and a destination address of the first message according to the target backend instance, and store the source address and the destination address in the link information to obtain a second message; and the sending unit is used for sending the second message to the target back-end instance.
To achieve the above object, according to another aspect of the present application, there is provided a computer-readable storage medium including a stored program, wherein the program performs the processing method of load balancing as set forth in any one of the above.
To achieve the above object, according to another aspect of the present application, there is provided a processor for executing a program, wherein the program executes the processing method for load balancing according to any one of the above.
Through the application, the following steps are adopted: acquiring a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information; when it is determined that the first message comes from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when it is determined that the target back-end instance runs on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance. The problem of excessively high load on the load balancing network node in the related art is thereby solved, and the effect of reducing the load on the load balancing network node is further achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a processing method for load balancing provided according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative load balancing processing method provided in accordance with an embodiment of the present application; and
FIG. 3 is a schematic diagram of a processing apparatus for load balancing according to an embodiment of the present application.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without making any inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the application, a processing method for load balancing is provided.
Fig. 1 is a flowchart of a processing method of load balancing according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
Step S101, a first message is acquired, where the first message is used to execute a load balancing access service on a first computing node, and the first message includes link information.
In a distributed network system there are multiple computing nodes, and each computing node is provided with an instance program used to serve the corresponding load balancing access requests. When a node in the network receives a load balancing access request, it acquires the message information of the request, determines whether the request was initiated from the local end or a remote end, and performs the corresponding processing accordingly.
Optionally, in the processing method for load balancing provided in the embodiment of the present application, after the first message is obtained, the method includes: when it is determined that the first message comes from the first computing node and the link information in the first message exists on the first computing node, determining a target back-end instance according to the back-end instance information in the link information, and storing the MAC layer information in the link information into the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
Because every node in the distributed network is deployed with an instance program that processes the load balancing access service, when a node receives a load balancing access request it obtains the message information of the request and determines whether the request was initiated locally or by a remote node. If the request was initiated by the local node, the node checks whether link information for the request exists locally, that is, whether the request is a new request or one that already exists in the history. If the link information of the request exists in the local node's history, the back-end instance information in the locally found link information is used as the target back-end instance information of the request, the back-end instance information in the link information is stored into the MAC layer of the request's message, and the message with the stored back-end instance information is sent to the corresponding target back-end instance. For example, suppose there are three computing nodes in a distributed network system: node 1, node 2 and node 3. When node 1 receives a load balancing access request and determines through analysis that the request was initiated by node 1 itself, node 1 first looks up locally whether the link information of the request exists. If it does, the same load balancing access request has already been recorded on node 1. In this case, node 1 uses the back-end instance information found in the local link as the target back-end instance information of the current request, stores it into the MAC layer of the request's message, and then sends the message to the corresponding target back-end instance.
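The following is a minimal sketch of this existing-link fast path, assuming an in-memory link table keyed by the connection five-tuple; the names (Packet, LinkEntry, link_table, handle_local_request) are illustrative and not part of the patent, and a real implementation would typically rely on the compute node's connection tracking state instead.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

# Five-tuple used as the link key: src IP, src port, dst IP, dst port, protocol.
FiveTuple = Tuple[str, int, str, int, str]

@dataclass
class LinkEntry:
    backend_mac: str    # MAC of the back-end instance chosen when the link was created
    backend_ip: str
    backend_port: int

@dataclass
class Packet:
    five_tuple: FiveTuple
    eth_dst: str
    dst_ip: str
    dst_port: int

# Per-node link table (illustrative stand-in for conntrack state on the compute node).
link_table: Dict[FiveTuple, LinkEntry] = {}

def handle_local_request(pkt: Packet) -> Optional[Packet]:
    """If link information for this locally initiated request already exists,
    reuse the stored back-end instance and rewrite the MAC layer (third message)."""
    entry = link_table.get(pkt.five_tuple)
    if entry is None:
        return None                 # new link: fall through to back-end selection
    pkt.eth_dst = entry.backend_mac  # load the stored back-end info into the MAC layer
    return pkt                       # the "third message", sent to the target back-end instance
```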
Optionally, in the processing method for load balancing provided in the embodiment of the present application, after obtaining the first packet, the method includes: judging whether the IP address in the first message is matched with a preset address, and if the IP address in the first message is not matched with the preset address, stopping processing the load balancing access service.
The instance program for load balancing access service processing deployed on each network node matches traffic whose destination IP is the virtual service IP. Before the instance program further parses and processes the message information of a load balancing access service request, it first judges whether the destination IP of the request is the virtual service IP preset for the instance program; if not, the received message is discarded and processing of the request stops.
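A small sketch of this pre-check is shown below; the set of virtual service IPs and the function name are illustrative assumptions.

```python
# Virtual service IPs preset for the instance program on this node (illustrative values).
VIRTUAL_SERVICE_IPS = {"10.0.0.100", "10.0.0.101"}

def should_process(dst_ip: str) -> bool:
    """Only messages whose destination IP is a preset virtual service IP are
    handled as load balancing access traffic; everything else is dropped here."""
    return dst_ip in VIRTUAL_SERVICE_IPS

# Usage: if not should_process(pkt.dst_ip), discard the message and stop processing.
```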
Optionally, in the processing method for load balancing provided in the embodiment of the present application, after the first message is obtained, the method includes: if the first message was initiated by a second computing node and the link information in the first message exists on the first computing node, determining a target back-end instance according to the back-end instance information in the link information; storing the back-end instance information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
Because the load balancing service access requests sent from the load balancing gateway or from another computing node to the local computing node are encapsulated with a vxlan header, they can be identified by this feature. If the request does not come from the local computing node but the link information of the request exists on the local computing node, the request's message is not the first message of the link and the link has already been created. The local computing node then directly performs the NAT operation, uses the back end recorded in the local link information as the target back-end instance information, stores the target back-end instance information into the request's message, and routes the resulting message to the corresponding target back-end instance.
Step S102, when it is determined that the first message is from the first computing node and no link information exists on the first computing node, determining a target back-end instance according to the first message, where the target back-end instance is configured to respond to a request of the first computing node.
When a node receives a load balancing access request, it obtains the message information of the request and, after judging that the request was initiated by the local node but that no link information for the request's message exists locally (that is, the request is a new request), the local computing node determines the target back-end instance according to the message information of the request. For example, suppose there are three computing nodes in a distributed network system: node 1, node 2 and node 3. When node 1 receives a load balancing access request and determines through analysis that the request was initiated by node 1 itself, node 1 first looks up locally whether the link information of the request exists. If it does not, the request is a new request for node 1. In this case, node 1 determines the target back-end instance information according to the message information of the current request; for example, if the determined target back-end instance information is instance information on node 2, node 1 takes the instance on node 2 as the target back-end instance of the current request.
Optionally, in the processing method for load balancing provided in the embodiment of the present application, determining the target back-end instance according to the first message includes: calculating a hash value of the first message; performing a calculation on the hash value to obtain a back-end instance selection value; and determining the target back-end instance according to the back-end instance selection value.
When the local computing node judges that no link information for the load balancing service access request exists locally, it determines the target back-end instance according to the message information of the current request: it calculates a hash value of the request's message, stores the hash value in a designated register, performs a calculation on the value in the register (for example, an AND operation on the hash value in the register), takes the result of that calculation as the back-end instance selection value, and determines the target back-end instance of the current request according to the back-end instance selection value.
Optionally, in the processing method for load balancing provided in the embodiment of the present application, determining the target back-end instance according to the back-end instance selection value includes: if the back-end instance selection value matches the first preset value, determining that the target back-end instance of the first message is a local instance; if the back-end instance selection value matches the second preset value, determining that the target back-end instance of the first message is a far-end instance.
The specific judgment is as follows: if the back-end instance selection value matches the first preset value, the target back-end instance of the current request is determined to be on the local computing node; if the back-end instance selection value matches the second preset value, the target back-end instance of the current request is determined to be a far-end instance, that is, the target back-end instance of the current request is on another computing node.
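The sketch below illustrates this scheduling step under the assumption that the selection value is the message hash ANDed with a 16-bit mask and that each candidate back-end records which compute node it runs on; the back-end list, node names, and the modulo mapping from selection value to back-end are illustrative simplifications, not the patent's actual preset values.

```python
import hashlib
from dataclasses import dataclass
from typing import List

@dataclass
class Backend:
    ip: str
    port: int
    mac: str
    node: str          # compute node the back-end instance runs on

LOCAL_NODE = "node-1"  # illustrative name of the local compute node
BACKENDS: List[Backend] = [
    Backend("192.168.1.10", 8080, "fa:16:3e:00:00:01", "node-1"),
    Backend("192.168.1.20", 8080, "fa:16:3e:00:00:02", "node-2"),
]

def selection_value(five_tuple) -> int:
    """Hash the five-tuple and AND the result with 65535, mirroring the
    'hash into a register, then AND the register value' step described above."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") & 0xFFFF

def select_backend(five_tuple) -> Backend:
    value = selection_value(five_tuple)
    # Mapping each residue to one back-end stands in for matching preset values.
    return BACKENDS[value % len(BACKENDS)]

def is_local(backend: Backend) -> bool:
    """True: local instance on this node; False: far-end instance on another node."""
    return backend.node == LOCAL_NODE
```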
Optionally, in the processing method for load balancing provided in the embodiment of the present application, after it is determined that the target back-end instance of the first message is a remote instance, the method includes: determining a source address and a destination address according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
If the target back-end instance corresponding to the request is not on the local computing node but on a remote computing node, the local computing node determines the source address and the destination address of the message according to the information of the selected target back-end instance and at the same time executes a link commit action, storing the back-end instance information on a designated label of the link so that subsequent messages of the link can use the information directly. Finally, the message in which the link information has been stored is routed to the corresponding target back-end instance.
Step S103, when it is determined that the target back-end instance runs on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message.
If the target back-end instance corresponding to the request is on the local computing node, that is, the instance that initiated the load balancing access request and the back-end instance selected by scheduling are on the same computing node, the local computing node only needs to determine the source address and the destination address of the message according to the selected target back-end instance information and set the connection information (without executing a link commit action), and stores the back-end instance information on the designated label of the link so that subsequent messages of the link can use the information directly.
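Continuing the illustrative structures from the earlier sketches, the function below contrasts the two branches: the source and destination addresses are taken from the chosen back-end instance, and the link is committed at this point only in the far-end case; treating the local-case commit as happening later, together with the DNAT step, is an assumption based on the description that follows.

```python
def forward_to_backend(pkt, backend, link_table, local: bool):
    """Build the outgoing message from the chosen back-end instance
    (second message when the back-end is local, fourth message when it is far-end).
    `pkt`, `backend`, and `link_table` reuse the illustrative structures above."""
    # Source/destination addresses of the message are determined by the back-end instance.
    pkt.eth_dst = backend.mac
    pkt.dst_ip = backend.ip
    pkt.dst_port = backend.port
    if not local:
        # Far-end back-end: execute the link commit here so subsequent messages
        # of this link can reuse the stored back-end information directly.
        link_table[pkt.five_tuple] = (backend.mac, backend.ip, backend.port)
    # Local back-end: only the connection information is set at this point; per the
    # description above, the commit is performed together with the later DNAT step.
    return pkt  # then routed/sent to the target back-end instance
```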
Step S104, the second message is sent to the target back-end instance.
When the target back-end instance is on the local computing node, the message in which the link information has been stored is sent to the target back-end instance on the local computing node.
Preferably, in the present application, the instance program deployed on each computing node is implemented with a unified multi-level flow table designed on the basis of the OpenFlow protocol. OpenFlow is a protocol for the connection between a controller and SDN devices; with a unified OpenFlow protocol there is no need to develop an additional connection component for each node, so unified centralized management and control of the network nodes is achieved. Fig. 2 is a flowchart of a load balancing access service processing method according to an embodiment of the present application. The flow is implemented with seven OpenFlow flow tables: UNICAST, EGRESS, INGRESS, VIP, MULTIPATH, DNAT and DVR. The design mainly covers the following three scenarios: (1) the source computing node initiates access to the load balancing service, and the back-end instance selected by the scheduler is on the present computing node; (2) the source computing node initiates access to the load balancing service, and the back-end instance selected by the scheduler is on a remote computing node; (3) DNAT is performed on the destination computing node according to the back-end instance selected by the source computing node, converting the virtual service IP into the IP of the specific destination back-end instance. The specific rules designed in each flow table are as follows:
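As a rough orientation before the per-table rules, the sketch below maps the seven tables to hypothetical OpenFlow table IDs and lists the table traversal implied by the three scenarios; the IDs and the exact traversal orders are illustrative assumptions derived from the walkthrough later in this description, not values fixed by the patent.

```python
# Hypothetical OpenFlow table IDs for the seven-table pipeline.
TABLES = {
    "UNICAST": 0, "EGRESS": 10, "INGRESS": 20, "VIP": 30,
    "MULTIPATH": 40, "DNAT": 50, "DVR": 60,
}

# Approximate table traversal for the three design scenarios described above.
SCENARIOS = {
    # (1) local access, scheduler picks a back-end on this node
    "local_access_local_backend":  ["UNICAST", "EGRESS", "VIP", "MULTIPATH", "DNAT", "DVR"],
    # (2) local access, scheduler picks a back-end on a remote node
    "local_access_remote_backend": ["UNICAST", "EGRESS", "VIP", "MULTIPATH", "DVR"],
    # (3) destination node performs DNAT for a back-end chosen by the source node
    "remote_arrival_dnat":         ["UNICAST", "INGRESS", "DNAT", "DVR"],
}
```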
a) Specific rules of the UNICAST flow table
Rule 1: the traffic from the remote computing node or the load balancing gateway node comes in through the encapsulation vxlan header, so that the local computing node can consider the traffic to be the traffic from the remote computing node or the load balancing gateway node only by judging that the message inlet is the vxlan port, and the traffic is directly sent to the INGRESS flow table for processing.
Rule 2: in addition to the incoming traffic from the vxlan port, other traffic is default to the load balancing access service initiated by the local virtual machine or container instance, so the default rule is to send the local traffic to the EGRESS table stream processing.
b) Specific rules of the INGRESS flow table
Rule 1: matching the destination IP in the INGRESS flow table is the flow of the virtual service IP, and executing the ct (nat) action, wherein the premise of successful execution is that a link exists locally, if no link exists locally, the action is not executed, but is submitted to the DNAT flow table to execute the DNAT operation while submitting the link, and the next time the rule can execute the ct (nat) action.
Rule 2: other processes are not discussed herein.
c) Specific rules of the EGRESS flow table
Rule 1: and if the matching destination IP in the EGRESS flow table is the flow of the preset virtual service IP, carrying out local link searching processing, and sending the searched link or the non-searched link into the VIP flow table for processing.
Rule 2: and discarding the traffic of the default non-access preset virtual service IP.
d) Specific rules of the VIP flow table
Rule 1: the specified label (ct_label) is equal to 0 and is the flow for accessing the load balancing service, which indicates that no link is found locally, the hash value is calculated according to the message information and loaded onto the specified register, and finally submitted to the multi flow table.
Rule 2: the specified tag (ct_label) is equal to 0 and traffic that is not accessing the load balancing service is abnormal traffic, and the discard action is directly performed.
Rule 3: the default rule indicates that the specified label (ct_label) is not equal to 0, that is, the linked traffic is found locally, for this part of traffic, the back-end instance is not required to be selected for scheduling, and only the back-end instance information stored on the specified label (ct_label) is required to be loaded into the MAC layer of the message and then submitted to the DVR flow table.
e) Specific rules of the MULTIPATH flow table
Rule 1: the hash value of the calculated message is stored on a designated register in the VIP flow table, and is calculated according to the designated register and 65535 in the multi flow table, if the result is equal to x, the operation of setting the source MAC address and the destination MAC address is performed, the source MAC address and the destination MAC address are stored on the link and submitted to the link, and then submitted to the DVR flow table, which is used for the rule that the target back end instance is not on the source computing node.
Rule 2: the hash value of the message is calculated in the VIP flow table and stored on the designated register, and the and calculation is performed according to the designated register and 65535 in the multi flow table, if the result is equal to y, the target back end instance is indicated to be on the current calculation node, the setting of the source MAC address and the destination MAC address is performed, and the setting is submitted to the DNAT flow table.
f) Specific rules of the DNAT flow table
Rule 1: and when the MAC address of the selected back-end instance, the accessed virtual service IP, the virtual service port and the appointed state variable (ct_state) are all met at the same time according to the scheduling, performing a dnat operation, and storing the MAC address of the selected back-end instance onto a link so that the data on an appointed label (ct_label) is directly used when a subsequent message with the link searches the link, and finally submitting the data to a DVR flow table.
Rule 2: default rules are submitted directly to the DVR stream table.
Given the rule descriptions of the flow tables above, the specific steps for a computing node to access the load balancing service within the same cluster are as follows:
for load balancing service access initiated by a local computing node, firstly, entering an EGRESS flow table from the UNICAST flow table, and executing local link searching in the EGRESS flow table, wherein the steps are divided into two cases: 1. if there is no link, indicating that the message is the first message of the link, the message is sent to the VIP flow table, the value of the calculated message Wen Haxi is stored on the designated register according to the five-tuple information of the message, then the message is sent to the multi flow table, and the specific back end example is hit according to the value on the register, wherein the back end example is divided into two types: (1) the selected back-end instance is not on the computing node, and at the far-end computing node, the source MAC address and the destination MAC address are set, the link submitting action is executed at the same time, the back-end instance information is stored on a link appointed label (ct_label), the information is convenient to be directly used by a subsequent message linked with the back-end instance information, and finally the subsequent message is sent to a distributed routing flow table DVR to carry out routing operation. (2) The selected back-end instance is just at the same computing node, and the initiated instance and the back-end instance of the scheduling selection are both described as being at the same computing node, so that the link information is not required to be submitted in the MULTIPATH flow table, only the source and destination MAC addresses are required to be set, then the back-end instance is sent to the DNAT flow table for the DNAT operation, meanwhile, the back-end instance information is stored on a linked appointed label (ct_label), and finally, the back-end instance information is also sent to the distributed routing flow table DVR for the routing operation. 2. If the link exists, the message is not the first message of the link, the link is created before, the selected back-end instance information is stored on a designated label (ct_label), the VIP flow table only needs to load the link information into source and destination MAC address fields, and finally the link information is also sent to the distributed routing flow table DVR for routing operation.
For requests from the load balancing gateway or from other computing nodes to access the load balancing service, which are vxlan-encapsulated, the traffic goes from the UNICAST flow table into the INGRESS flow table, where the ct(nat) operation is performed. There are two cases. If no link exists locally, the message is the first message of the link; it is sent to the DNAT flow table to perform the DNAT action, the back-end instance information is saved on the link, and the message is sent to the distributed routing flow table DVR for the routing operation. If a link exists locally, the message is not the first message of the link and the link was created earlier; the NAT operation is performed directly, the message hits the default rule of the DNAT flow table, and it is sent to the distributed routing flow table DVR for the routing operation.
With this distributed load balancing scheme designed as a multi-level flow table based on the OpenFlow protocol, east-west load balancing traffic in the cloud is no longer forwarded to the network nodes; load balancing service traffic is processed on the computing nodes, which reduces the load on the network nodes. Second, the load balancing DNAT links are distributed across the computing nodes on which each back-end instance resides, which narrows the impact of a failure and increases the number of connections the whole cluster can hold without increasing the latency of link table lookups. Furthermore, the scheme is significant for centralized management and control of load balancing in the cloud, since no additional set of control components or protocols needs to be developed to manage load balancing related resources. Finally, the distributed load balancing realized by the multi-level flow table can be fully controlled by the user to achieve horizontal scaling in and out.
In summary, in the processing method for load balancing provided in the embodiments of the present application, a first message is obtained, where the first message is used to execute a load balancing access service on a first computing node, and the first message includes link information; when it is determined that the first message comes from the first computing node and no link information exists on the first computing node, a target back-end instance is determined according to the first message, where the target back-end instance is used for responding to a request of the first computing node; when it is determined that the target back-end instance runs on the first computing node, a source address and a destination address of the first message are determined according to the target back-end instance, and the source address and the destination address are stored into the link information to obtain a second message; and the second message is sent to the target back-end instance. The problem that the load on the load balancing network node is too high in the related art is thus solved, thereby achieving the effect of reducing the load on the load balancing network nodes.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment of the application also provides a processing device for load balancing, and the processing device for load balancing can be used for executing the processing method for load balancing. The following describes a processing device for load balancing provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a processing device for load balancing according to an embodiment of the present application. As shown in Fig. 3, the apparatus is applied to at least one computing node of a distributed system and includes: an obtaining unit 301, a first determining unit 302, a second determining unit 303, and a sending unit 304.
Specifically, the obtaining unit 301 is configured to obtain a first packet, where the first packet is used to execute a load balancing access service on a first computing node, and the first packet includes link information;
a first determining unit 302, configured to determine, according to the first message, a target backend instance if the first message is from the first computing node and there is no link information on the first computing node, where the target backend instance is configured to respond to a request of the first computing node;
a second determining unit 303, configured to determine, when it is determined that the target back-end instance runs on the first computing node, a source address and a destination address of the first message according to the target back-end instance, and store the source address and the destination address in the link information to obtain a second message;
and a sending unit 304, configured to send the second message to the target back-end instance.
In summary, in the processing device for load balancing provided in the embodiment of the present application, a first message is obtained through the obtaining unit 301, where the first message is used to execute a load balancing access service on a first computing node, and the first message includes link information; the first determining unit 302 determines a target back-end instance according to the first message when the first message comes from the first computing node and no link information exists on the first computing node, where the target back-end instance is used for responding to a request of the first computing node; the second determining unit 303 determines a source address and a destination address of the first message according to the target back-end instance when the target back-end instance runs on the first computing node, and stores the source address and the destination address into the link information to obtain a second message; and the sending unit 304 sends the second message to the target back-end instance. The problem that the load on the load balancing network node is too high in the related art is thus solved, thereby achieving the effect of reducing the load on the load balancing network nodes.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the apparatus includes: a third determining unit, configured to determine, after the first message is acquired and when the first message comes from the first computing node and the link information in the first message exists on the first computing node, a target back-end instance according to the back-end instance information in the link information, and store the back-end instance information in the link information into the MAC layer of the first message to obtain a third message; and a second sending unit, configured to send the third message to the target back-end instance.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the first determining unit includes: the first calculation module is used for calculating the hash value of the first message; the second calculation module is used for calculating the hash value to obtain a back-end instance selection value; and the determining module is used for determining the target back-end instance according to the back-end instance selection value.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the determining module includes: a fourth determining submodule, configured to determine, when the back-end instance selection value matches the first preset value, that the target back-end instance of the first message is a local instance; and a fifth determining submodule, configured to determine, when the back-end instance selection value matches the second preset value, that the target back-end instance of the first message is a far-end instance.
Optionally, in a processing apparatus for load balancing provided in an embodiment of the present application, the apparatus includes: a sixth determining unit, configured to determine, after determining that the target backend instance of the first packet is a remote instance, a source address and a destination address according to the target backend instance, and store the source address and the destination address into link information of the first packet to obtain a fourth packet; and the third sending unit is used for sending the fourth message to the target back-end instance.
Optionally, in a processing apparatus for load balancing provided in an embodiment of the present application, the apparatus includes: and the judging unit is used for judging whether the IP address in the first message is matched with the preset address after the first message is acquired, and stopping processing of the load balancing access service if the IP address in the first message is not matched with the preset address.
Optionally, in a processing apparatus for load balancing provided in an embodiment of the present application, the apparatus includes: a seventh determining unit, configured to determine, after the first packet is acquired, a target backend instance according to backend instance information in the link information when the first packet is initiated by the second computing node and the link information in the first packet exists in the first computing node; the storage unit is used for storing the back-end instance information into the first message to obtain a fifth message; and the fourth sending unit is used for sending the fifth message to the target back-end instance.
The processing device for load balancing includes a processor and a memory, where the obtaining unit 301, the first determining unit 302, the second determining unit 303, the sending unit 304, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels may be provided, and the load on the load balancing network nodes is reduced by adjusting kernel parameters.
The memory may include forms of computer-readable media such as volatile memory, random access memory (RAM) and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a storage medium, on which a program is stored, which when executed by a processor, implements the processing method of load balancing.
The embodiment of the invention provides a processor which is used for running a program, wherein the processing method for load balancing is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program stored in the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the program: acquiring a first message, wherein the first message is used for executing load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to come from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
The processor also realizes the following steps when executing the program: after the first message is acquired, the method comprises the following steps: when the first message is determined to come from the first computing node and the first computing node has the link information in the first message, determining the target back-end instance according to the back-end instance information in the link information, and storing the back-end instance information in the link information to the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
The processor also realizes the following steps when executing the program: determining the target back-end instance according to the first message comprises: calculating a hash value of the first message; performing a calculation on the hash value to obtain a back-end instance selection value; and determining the target back-end instance according to the back-end instance selection value.
The processor also realizes the following steps when executing the program: determining the target backend instance according to the backend instance selection value comprises: if the back-end instance selection value is matched with a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with a second preset value, determining that the target back-end instance of the first message is a far-end instance.
The processor also realizes the following steps when executing the program: after determining that the target back-end instance of the first message is a remote instance, the method comprises: determining a source address and a destination address according to the target back-end instance on the remote end, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
The processor also realizes the following steps when executing the program: after the first message is acquired, the method comprises the following steps: judging whether the IP address in the first message is matched with a preset address, and if the IP address in the first message is not matched with the preset address, stopping processing the load balancing access service.
The processor also realizes the following steps when executing the program: after the first message is acquired, the method comprises the following steps: if the first message is initiated by the second computing node and the link information in the first message exists in the first computing node, determining a target back-end instance according to the back-end instance information in the link information; storing the back-end instance information in the link information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance. The device herein may be a server, PC, PAD, cell phone, etc.
The present application also provides a computer program product adapted to perform, when executed on a data processing device, a program initialized with the method steps of: acquiring a first message, wherein the first message is used for executing load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to come from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: after the first message is acquired, the method comprises the following steps: when the first message is determined to come from the first computing node and the first computing node has the link information in the first message, determining the target back-end instance according to the back-end instance information in the link information, and storing the back-end instance information in the link information to the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: determining the target back-end instance according to the first message comprises: calculating a hash value of the first message; performing a calculation on the hash value to obtain a back-end instance selection value; and determining the target back-end instance according to the back-end instance selection value.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: determining the target backend instance according to the backend instance selection value comprises: if the back-end instance selection value is matched with a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with a second preset value, determining that the target back-end instance of the first message is a far-end instance.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: after determining that the target back-end instance of the first message is a remote instance, the method comprises: determining a source address and a destination address according to the target back-end instance on the remote end, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: after the first message is acquired, the method comprises the following steps: judging whether the IP address in the first message is matched with a preset address, and if the IP address in the first message is not matched with the preset address, stopping processing the load balancing access service.
When executed on a data processing device, is further adapted to carry out a program initialized with the method steps of: after the first message is acquired, the method comprises the following steps: if the first message is initiated by the second computing node and the link information in the first message exists in the first computing node, determining a target back-end instance according to the back-end instance information in the link information; storing the back-end instance information in the link information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A processing method for load balancing, applied to at least one computing node of a distributed system, the method comprising:
acquiring a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information;
when the first message is determined to come from the first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node, the target back-end instance is determined based on a back-end instance selection value corresponding to the first message, and the back-end instance selection value is used for determining whether the target back-end instance is a local instance or a remote instance of the first computing node;
when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message;
and sending the second message to the target back-end instance.
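To make the flow of claim 1 concrete, a minimal Python sketch of the local-backend branch follows: a first message with no matching link information gets a back-end instance chosen from a selection value, and when that instance is local, the rewritten source and destination addresses are stored in the link information to form the second message. All class and field names, and the hash-based selection, are assumptions for illustration rather than a definitive implementation of the claimed method.

```python
# Illustrative sketch of the first-message, local-backend path; every name here
# (Packet, LocalLoadBalancer, select_backend, ...) is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str                                   # requesting instance address
    dst: str                                   # load-balanced service address
    link_info: dict = field(default_factory=dict)

class LocalLoadBalancer:
    def __init__(self, local_backends, remote_backends):
        self.local_backends = local_backends   # instances hosted on this node
        self.remote_backends = remote_backends # instances on other nodes
        self.link_table = {}                   # flow key -> stored link info

    def select_backend(self, pkt):
        pool = self.local_backends + self.remote_backends
        return pool[hash((pkt.src, pkt.dst)) % len(pool)]   # selection value

    def handle_first_packet(self, pkt):
        flow = (pkt.src, pkt.dst)
        if flow in self.link_table:            # link info exists: not this path
            return None
        backend = self.select_backend(pkt)
        if backend in self.local_backends:     # target runs on this node
            pkt.link_info = {"orig_src": pkt.src, "orig_dst": pkt.dst,
                             "backend": backend}
            self.link_table[flow] = dict(pkt.link_info)
            pkt.dst = backend                  # the "second message"
        return pkt

lb = LocalLoadBalancer(["10.0.0.11"], ["10.0.1.21"])
print(lb.handle_first_packet(Packet(src="10.0.0.5", dst="192.168.0.100")))
```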
2. The method of claim 1, wherein after the first message is acquired, the method comprises:
when the first message is determined to come from the first computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to the back-end instance information in the link information, and storing the MAC layer information in the link information into the MAC layer of the first message to obtain a third message;
and sending the third message to the target back-end instance.
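The fast path of claim 2 can be pictured with the short Python sketch below: because link information for the flow is already present on the node, the recorded back-end instance is reused and the stored MAC-layer information is written back into the message. Field names and the dictionary layout are assumptions, not part of the claim.

```python
# Sketch of the known-flow fast path; the link_table layout is hypothetical.
def build_third_message(first_message: dict, link_table: dict) -> dict:
    flow = (first_message["src_ip"], first_message["dst_ip"])
    link = link_table[flow]                         # stored link information
    third_message = dict(first_message)
    third_message["backend"] = link["backend"]      # reuse recorded instance
    third_message["dst_mac"] = link["backend_mac"]  # restore MAC layer info
    return third_message

link_table = {("10.0.0.5", "192.168.0.100"):
              {"backend": "10.0.0.11", "backend_mac": "02:42:ac:11:00:02"}}
msg = {"src_ip": "10.0.0.5", "dst_ip": "192.168.0.100", "dst_mac": None}
print(build_third_message(msg, link_table))
```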
3. The method of claim 1, wherein determining the target back-end instance according to the first message comprises:
calculating a hash value of the first message;
performing a calculation on the hash value to obtain the back-end instance selection value;
and determining the target back-end instance according to the back-end instance selection value.
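One way to realize this selection, sketched in Python below, hashes the message's flow fields and reduces the digest to a selection value over the back-end pool. The five-tuple fields, the SHA-256 digest, and the modulo reduction are all assumptions made for the example, not a prescribed hash construction.

```python
# Hash-based back-end selection sketch; the key fields and the modulo step are
# illustrative choices, not mandated by the claim.
import hashlib

def backend_selection_value(src_ip, dst_ip, src_port, dst_port, pool_size):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()          # hash value of the message
    return int.from_bytes(digest[:8], "big") % pool_size

backends = ["10.0.0.11", "10.0.1.21", "10.0.2.31"]
value = backend_selection_value("10.0.0.5", "192.168.0.100", 52344, 80, len(backends))
print(backends[value])                             # the target back-end instance
```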
4. The method of claim 3, wherein determining the target back-end instance according to the back-end instance selection value comprises:
if the back-end instance selection value matches a first preset value, determining that the target back-end instance of the first message is a local instance;
and if the back-end instance selection value matches a second preset value, determining that the target back-end instance of the first message is a remote instance.
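The local/remote decision can be illustrated with the tiny sketch below, where the selection value is compared against two preset value sets. The concrete preset values are invented for the example; the claim only requires that a match with the first preset value yields a local instance and a match with the second yields a remote one.

```python
# Sketch of matching the selection value against preset values; the sets below
# are hypothetical placeholders.
FIRST_PRESET = {0, 1}    # values that map to local instances
SECOND_PRESET = {2, 3}   # values that map to remote instances

def classify_backend(selection_value: int) -> str:
    if selection_value in FIRST_PRESET:
        return "local"
    if selection_value in SECOND_PRESET:
        return "remote"
    raise ValueError("selection value matches no preset value")

print(classify_backend(1))   # -> "local"
print(classify_backend(3))   # -> "remote"
```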
5. The method of claim 4, wherein after determining that the target back-end instance of the first message is a remote instance, the method comprises:
determining, on the remote instance, a source address and a destination address according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message;
and sending the fourth message to the target back-end instance.
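For the remote branch, the following sketch shows one plausible shape of the fourth message: the source and destination addresses derived from the remote back-end instance are written into the message's link information before it is sent on. The node and instance address fields are assumptions used only to make the example run.

```python
# Sketch of building the "fourth message" for a back-end on another node; all
# field names are hypothetical.
def build_fourth_message(first_message: dict, remote_backend: dict) -> dict:
    fourth_message = dict(first_message)
    fourth_message["link_info"] = {
        "src": remote_backend["node_ip"],       # source address for the flow
        "dst": remote_backend["instance_ip"],   # the target back-end instance
    }
    fourth_message["dst_ip"] = remote_backend["instance_ip"]
    return fourth_message

msg = {"src_ip": "10.0.0.5", "dst_ip": "192.168.0.100"}
backend = {"node_ip": "10.0.1.1", "instance_ip": "10.0.1.21"}
print(build_fourth_message(msg, backend))
```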
6. The method of claim 1, wherein after the first message is acquired, the method comprises:
judging whether the IP address in the first message matches a preset address, and if the IP address in the first message does not match the preset address, stopping processing of the load balancing access service.
7. The method of claim 1, wherein after the first message is acquired, the method comprises:
if the first message is initiated by a second computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to the back-end instance information in the link information;
storing the back-end instance information in the link information into the first message to obtain a fifth message;
and sending the fifth message to the target back-end instance.
8. A processing apparatus for load balancing, applied to at least one computing node of a distributed system, the apparatus comprising:
an obtaining unit, configured to obtain a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information;
a first determining unit, configured to determine, when the first message is determined to come from the first computing node and the link information does not exist on the first computing node, a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node, the target back-end instance is determined based on a back-end instance selection value corresponding to the first message, and the back-end instance selection value is used for determining whether the target back-end instance is a local instance or a remote instance of the first computing node;
a second determining unit, configured to determine, when the target back-end instance runs on the first computing node, a source address and a destination address of the first message according to the target back-end instance, and store the source address and the destination address into the link information to obtain a second message;
and a sending unit, configured to send the second message to the target back-end instance.
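Structurally, the four units of claim 8 can be pictured as the cooperating callables in the sketch below; the class name, the placeholder backend address, and the print-based send are assumptions for illustration only, not the apparatus itself.

```python
# Structural sketch of the apparatus: obtaining, first determining, second
# determining, and sending units modeled as methods; details are hypothetical.
class LoadBalancingApparatus:
    def obtain(self, raw):                       # obtaining unit
        return {"src": raw["src"], "dst": raw["dst"], "link_info": {}}

    def determine_backend(self, msg):            # first determining unit
        msg["backend"] = "10.0.0.11"             # placeholder selection result
        return msg

    def determine_addresses(self, msg):          # second determining unit
        msg["link_info"] = {"src": msg["src"], "dst": msg["backend"]}
        return msg

    def send(self, msg):                         # sending unit
        print("forwarding second message to", msg["backend"])

apparatus = LoadBalancingApparatus()
msg = apparatus.obtain({"src": "10.0.0.5", "dst": "192.168.0.100"})
apparatus.send(apparatus.determine_addresses(apparatus.determine_backend(msg)))
```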
9. A computer-readable storage medium, characterized in that the storage medium includes a stored program, wherein the program, when run, performs the load balancing processing method according to any one of claims 1 to 7.
10. A processor, characterized in that the processor is configured to run a program, wherein the program, when run, performs the load balancing processing method according to any one of claims 1 to 7.
CN202010989058.9A 2020-09-18 2020-09-18 Processing method and device for load balancing Active CN112104566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010989058.9A CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010989058.9A CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Publications (2)

Publication Number Publication Date
CN112104566A (en) 2020-12-18
CN112104566B (en) 2024-02-27

Family

ID=73758886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010989058.9A Active CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Country Status (1)

Country Link
CN (1) CN112104566B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237883A (en) * 2021-12-10 2022-03-25 北京天融信网络安全技术有限公司 Security service chain creation method, message transmission method, device and equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016180188A1 (en) * 2015-10-09 2016-11-17 中兴通讯股份有限公司 Distributed link establishment method, apparatus and system
CN106797405A (en) * 2016-12-14 2017-05-31 华为技术有限公司 Distributed load equalizing system, health examination method and service node
CN107846364A (en) * 2016-09-19 2018-03-27 阿里巴巴集团控股有限公司 A kind for the treatment of method and apparatus of message
WO2018077184A1 (en) * 2016-10-26 2018-05-03 新华三技术有限公司 Traffic scheduling
CN108449282A (en) * 2018-05-29 2018-08-24 华为技术有限公司 A kind of load-balancing method and its device
CN108476243A (en) * 2016-01-21 2018-08-31 华为技术有限公司 For the distributed load equalizing of network service function link
CN109587062A (en) * 2018-12-07 2019-04-05 北京金山云网络技术有限公司 Load-balancing information synchronous method, apparatus and processing equipment
CN110753072A (en) * 2018-07-24 2020-02-04 阿里巴巴集团控股有限公司 Load balancing system, method, device and equipment
US10554604B1 (en) * 2017-01-04 2020-02-04 Sprint Communications Company L.P. Low-load message queue scaling using ephemeral logical message topics
CN110995656A (en) * 2019-11-06 2020-04-10 深信服科技股份有限公司 Load balancing method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10491704B2 (en) * 2016-11-07 2019-11-26 General Electric Company Automatic provisioning of cloud services

Also Published As

Publication number Publication date
CN112104566A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
US11962501B2 (en) Extensible control plane for network management in a virtual infrastructure environment
US11843657B2 (en) Distributed load balancer
US10999184B2 (en) Health checking in a distributed load balancer
EP3355553B1 (en) Reliable load-balancer using segment routing and real-time application monitoring
US11463511B2 (en) Model-based load balancing for network data plane
US9432245B1 (en) Distributed load balancer node architecture
US9485183B2 (en) System and method for efectuating packet distribution among servers in a network
US9559961B1 (en) Message bus for testing distributed load balancers
US20140310417A1 (en) Connection publishing in a distributed load balancer
US20140310390A1 (en) Asymmetric packet flow in a distributed load balancer
US9871712B1 (en) Health checking in a distributed load balancer
WO2015073190A1 (en) Shortening of service paths in service chains in a communications network
EP2316206A2 (en) Distributed load balancer
WO2015114473A1 (en) Method and apparatus for locality sensitive hash-based load balancing
US20190312811A1 (en) Stateless distributed load-balancing
US20190173790A1 (en) Method and system for forwarding data, virtual load balancer, and readable storage medium
JP2018518925A (en) Packet forwarding
CN112104566B (en) Processing method and device for load balancing
WO2021083375A1 (en) Method and apparatus for detecting link states
CN108124021B (en) Method, device and system for obtaining Internet Protocol (IP) address and accessing website
CN112087382B (en) Service routing method and device
CN116016448A (en) Service network access method, device, equipment and storage medium
CN114866544A (en) Containerized micro-service load balancing method for CPU heterogeneous cluster in cloud edge environment
CN113765805B (en) Calling-based communication method, device, storage medium and equipment
KR20200051196A (en) Electronic device providing fast packet forwarding with reference to additional network address translation table

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant