CN113886072A - Load balancing system, method and equipment - Google Patents

Load balancing system, method and equipment

Info

Publication number
CN113886072A
Authority
CN
China
Prior art keywords: node, computing node, identifier, forwarding message, mapping table
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111101891.6A
Other languages
Chinese (zh)
Inventor
刘宏俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Alibaba Cloud Feitian Information Technology Co ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202111101891.6A
Publication of CN113886072A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the load

Abstract

The embodiments of the present application provide a load balancing system, method and device. The load balancing system comprises a node manager and a plurality of computing nodes for carrying BBU instances. A first computing node among the plurality of computing nodes is used to connect to the RRU, and a first kernel-state program that holds a mapping table runs on the first computing node. The node manager updates the mapping table according to the resource occupancy rates of the computing nodes. The first kernel-state program intercepts messages entering from the network card of the first computing node, filters out the forward messages sent from the RRU to the BBU, determines the balance dimension identifier of each forward message, and performs load balancing according to that identifier and the mapping table so as to determine, from the plurality of computing nodes, a target computing node for processing the forward message. The load balancing system saves cost and offers higher flexibility.

Description

Load balancing system, method and equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a load balancing system, method, and device.
Background
A cloud radio access network (Cloud RAN) carries software protocol stacks such as LTE (Long Term Evolution) and 5G (fifth-generation mobile communication) on general-purpose servers in a virtualized manner, and uses the elasticity of cloud computing to dynamically scale the protocol-stack software instances in or out (scale in/out) according to the load, so as to maximize resource utilization. A Cloud RAN may comprise a clouded BBU and an RRU, and the eCPRI protocol may be adopted between the clouded BBU and the RRU.
At present, when the computing power of a single computing node cannot meet the requirement, multiple instances providing BBU functions need to be distributed across multiple computing nodes, so the fronthaul traffic from the RRU to the computing nodes needs to be load-balanced. Generally, this is done by placing a specially customized load balancer device supporting the eCPRI protocol between the RRU and the computing nodes. However, this approach is costly and inflexible.
Disclosure of Invention
The embodiment of the application provides a load balancing system, method and device, which are used for solving the problems of high cost and poor flexibility when load balancing is performed on a computing node in the prior art.
In a first aspect, an embodiment of the present application provides a load balancing system, including a node manager and a plurality of computing nodes, wherein the plurality of computing nodes are used for carrying BBU instances, a first computing node among the plurality of computing nodes is used for connecting to the RRU, a first kernel-state program runs on the first computing node, the first kernel-state program has a mapping table, and the mapping table records the mapping relationship among node identifiers, node resource occupancy rates and balance dimension identifiers;
the node manager is used for updating a mapping table according to the resource occupancy rates of the plurality of computing nodes;
the first kernel program is used for intercepting messages entering from a network card of the first computing node, filtering forward messages from the RRU to the BBU from the messages, and determining a balance dimension identifier of the forward messages;
the first kernel mode program is further configured to perform load balancing processing according to the balancing dimension identifier of the forwarding message and the mapping table, so as to determine a target computing node for processing the forwarding message from the plurality of computing nodes.
In a second aspect, an embodiment of the present application provides a load balancing method, applied to a first computing node, including:
intercepting a message entering from a network card of the first computing node, filtering a forwarding message from the RRU to the BBU from the message, and determining a balanced dimension identifier of the forwarding message;
and performing load balancing processing according to the balance dimension identification of the forwarding message and a mapping table to determine a target computing node for processing the forwarding message from the plurality of computing nodes, wherein the mapping table records the mapping relationship among the node identification, the node resource occupancy rate and the balance dimension identification.
In a third aspect, an embodiment of the present application provides a load balancing apparatus, applied to a first computing node, including:
the interception filtering module is used for intercepting messages entering from the network card of the first computing node, filtering a forwarding message from the RRU to the BBU from the messages, and determining a balanced dimension identifier of the forwarding message;
and the load balancing module is used for carrying out load balancing processing according to the balancing dimension identification of the forwarding message and a mapping table so as to determine a target computing node for processing the forwarding message from the plurality of computing nodes, wherein the mapping table records the mapping relation among the node identification, the node resource occupancy rate and the balancing dimension identification.
In a fourth aspect, an embodiment of the present application provides a computer device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of the second aspects.
In a fifth aspect, the present application provides a computer program comprising computer program instructions which, when executed by a processor, implement the method according to any one of the second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the method according to any one of the second aspects is implemented.
In the embodiment of the application, a first computing node in a plurality of computing nodes is used for being connected with the RRU, other computing nodes are not used for being connected with the RRU, and load balancing of the computing nodes is performed through a first kernel program running on the first computing node, so that load balancing among the computing nodes is achieved through software newly added on the first computing node in the computing nodes.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic architecture diagram of a Cloud RAN provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an architecture for implementing load balancing in a Cloud RAN according to the related art of the present application;
fig. 3 is a schematic structural diagram of a load balancing system according to an embodiment of the present application;
fig. 4 is a schematic diagram of forwarding a forward message by a first kernel-state program according to an embodiment of the present application;
fig. 5 is a schematic diagram of an architecture for implementing load balancing in the Cloud RAN according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a load balancing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a load balancing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the embodiments of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "multiple" or "a plurality of" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the article or system that comprises the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
For the convenience of those skilled in the art to understand the technical solutions provided in the embodiments of the present application, a technical environment for implementing the technical solutions is described below.
In a wireless communication system, a terminal may communicate with one or more Core networks (Core networks, abbreviated as CNs) through a Radio Access Network (RAN), which may include a base station.
A terminal may also be referred to as a user equipment (UE), a mobile station (MS), a mobile terminal, an access terminal, a terminal device, a subscriber unit, a subscriber station, a remote terminal, a mobile device, a user terminal, a wireless communication device, or a user agent. The terminal may be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computer device, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, and the like.
A base station, i.e. a public mobile communication base station, is a radio transceiver station that carries information between a mobile communication switching center and the terminals within a certain radio coverage area. The baseband part and the radio frequency part of the base station can be separated: a baseband signal is transmitted between them, and the baseband optical signal is converted into a radio frequency signal at the remote end, where it is amplified and transmitted. The baseband part may be referred to as a baseband processing unit (BBU) and the radio frequency part as a remote radio unit (RRU); the two may be connected by optical fiber, and one baseband processing unit may support multiple remote radio units. The remote radio unit can also receive radio frequency signals through its antenna, convert them into baseband signals, and send them to the baseband processing unit.
It should be understood that different communication systems may split the baseband processing unit and the remote radio unit differently. For example, in a 5th generation (5G) communication system, the base station may be divided into a central unit (CU), a distributed unit (DU), and a radio unit (RU), where the central unit plus the distributed unit can be understood as the baseband processing unit, and the radio unit can be understood as the remote radio unit.
In the Cloud RAN, the BBU may be clouded to obtain a clouded BBU. As shown in fig. 1, in the wireless communication system, the clouded BBU 11 may be connected to the RRU 12 in the downlink direction and to the core network 13 in the uplink direction, and the RRU may communicate with the terminal 14 through radio frequency signals. In the embodiments of this application, a message sent from the RRU to the BBU may be understood as a fronthaul message, also referred to herein as a forwarding message. In an embodiment, the communication interface between the RRU and the BBU may specifically use the enhanced Common Public Radio Interface (eCPRI) protocol, and the forwarding message may specifically be an eCPRI message.
In the clouded BBU11, BBU functionality may be provided by an instance carried on a computing node, such instance may be referred to as a BBU instance, the computing node carrying the BBU instance may be considered a BBU server, and one rectangular box in fig. 1 may represent one BBU instance. In order to achieve maximum utilization of resources, dynamic capacity expansion and reduction processing can also be performed on BBU instances, for example, in fig. 1, when the load is low, BBU instances can be reduced from 3 to 2.
In practical application, when the computing power of a single computing node cannot meet the requirement, BBU instances can be dispersed to a plurality of computing nodes, and when a plurality of BBU instances providing the same function are dispersed to a plurality of computing nodes, load balancing of the plurality of computing nodes can be performed.
Generally, load balancing can be implemented as shown in fig. 2: a specially customized load balancer device 15 supporting the eCPRI protocol is placed between the RRU 12 and the computing nodes of the clouded BBU 11, and load balancing of the computing nodes is performed by the load balancer device 15. However, this approach requires additional equipment and is therefore costly, and because customized hardware has a long change cycle the balancing method cannot be adjusted in time, so flexibility is poor. It should be noted that a rectangular box directly above a computing node in fig. 2 may represent a BBU instance carried on that computing node, and the three ellipses extending outward from the RRU 12 may represent three sectors, with the terminal 14 located in any one of them.
To solve the technical problems of high cost and poor flexibility when load balancing the computing nodes, in the embodiments of this application only a first computing node among the plurality of computing nodes is connected to the RRU, the other computing nodes are not, and load balancing of the computing nodes is performed by a first kernel-state program running on the first computing node. Load balancing among the plurality of computing nodes is thus achieved purely through software added to the first computing node.
It should be noted that because a kernel-state program processes packets very quickly, performing load balancing among the plurality of computing nodes in the first kernel-state program makes the balancing fast and keeps the delay introduced by load balancing very small.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 3 is a schematic structural diagram of a load balancing system according to an embodiment of the present application. As shown in fig. 3, the load balancing system may include a node manager 31 and multiple computing nodes 32. The multiple computing nodes 32 are used to carry BBU instances, a first computing node 321 among them is used to connect to the remote radio unit x, and a first kernel-state program runs on the first computing node 321. The first kernel-state program holds a mapping table that records the mapping relationship among node identifiers, node resource occupancy rates, and balance dimension identifiers. It should be noted that the number of remote radio units x in fig. 3 is merely an example, and the node manager 31 may run on any one of the multiple computing nodes 32, or on any node other than the multiple computing nodes 32.
The node identifier is the identifier of a computing node, for example its IP address. The node resource occupancy rate is the resource occupancy rate of a computing node; it reflects the node's load, and a higher occupancy indicates a higher load. The balance dimension identifier identifies the dimension along which load balancing is performed, and it can be chosen flexibly according to the balancing requirement.
Illustratively, when the forwarding messages from the RRU to the BBU for the same cell need to be processed by the same computing node while the forwarding messages of different cells can be dispersed across different computing nodes, the balance dimension may be the cell dimension and the balance dimension identifier may be a cell identifier. Likewise, when the forwarding messages of the same sector need to be processed by the same computing node while those of different sectors can be dispersed across different computing nodes, the balance dimension may be the sector dimension and the balance dimension identifier may be a sector identifier. Similarly, when the forwarding messages of the same carrier need to be processed by the same computing node while those of different carriers can be dispersed across different computing nodes, the balance dimension may be the carrier dimension and the balance dimension identifier may be a carrier identifier.
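For illustration only, one row of the mapping table described above could be modeled as a small C structure keyed by these three fields; the field names and widths below are assumptions made for the sketch, not taken from the application.

```c
#include <stdint.h>

/* Hypothetical layout of one mapping-table entry: the node identifier is an
 * IPv4 address, the node resource occupancy rate is a percentage, and the
 * balance dimension identifier is e.g. a cell, sector or carrier identifier. */
struct lb_map_entry {
    uint32_t node_ip;         /* node identifier (IPv4 address)       */
    uint8_t  occupancy;       /* node resource occupancy rate, 0..100 */
    uint16_t balance_dim_id;  /* balance dimension identifier         */
};
```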
Taking the balance dimension identifier to be the sector identifier as an example, the mapping table at a certain moment may be as shown in Table 1 below.
TABLE 1
| Node identifier | Node resource occupancy rate | Sector identifier |
|---|---|---|
| IP address of computing node 1 | *** | Sector 1 |
| IP address of computing node 2 | *** | Sector 1 |
| IP address of computing node 3 | *** | Sector 1 |
It should be noted that showing only one sector identifier in Table 1 is merely an example; in other embodiments the sector identifiers may also include other identifiers. The computing nodes corresponding to those other identifiers may include any one or more of computing nodes 1 to 3, and/or computing nodes other than computing nodes 1 to 3.
In the embodiment of the present application, the node manager 31 may be configured to update the mapping table according to the resource occupancy rates of the plurality of computing nodes 32. For example, the node manager 31 may obtain resource occupancy rates from a plurality of computing nodes, respectively, and update the obtained resource occupancy rates into the mapping table.
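A minimal sketch of the occupancy refresh just described, assuming a polling-style management interface; the query_node_occupancy helper and the array layout are hypothetical, not part of the application.

```c
#include <stddef.h>
#include <stdint.h>

struct lb_map_entry {
    uint32_t node_ip;         /* node identifier (IPv4 address)       */
    uint8_t  occupancy;       /* node resource occupancy rate, 0..100 */
    uint16_t balance_dim_id;  /* balance dimension identifier         */
};

/* Assumed helper: asks one computing node for its current resource
 * occupancy rate over some management channel; returns 0..100. */
extern uint8_t query_node_occupancy(uint32_t node_ip);

/* Refresh the occupancy column of every mapping-table entry. */
void update_mapping_table(struct lb_map_entry *table, size_t n_entries)
{
    for (size_t i = 0; i < n_entries; i++)
        table[i].occupancy = query_node_occupancy(table[i].node_ip);
}
```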
Optionally, the node manager 31 may also perform dynamic scale-out and scale-in processing on the BBU instances carried on the computing nodes 32; in this case, the node manager may specifically be understood as an elastic manager. It should be understood that when the BBU instances are dynamically scaled, if the scaling requires adding a new computing node or releasing an existing one, the mapping table needs to be updated accordingly. For how the node manager dynamically scales BBU instances, reference may be made to the detailed description in the related art, which is not repeated here.
For example, assume that the resource occupancy rate of computing node 3 in the mapping shown in Table 1 falls below the threshold TH_min. The node manager may then evaluate whether computing node 1 and computing node 2 can absorb the load of computing node 3; if so, the node manager may release the BBU instance carried on computing node 3 to free that node's resources, and at the same time update the mapping table accordingly. The updated mapping table may be as shown in Table 2 below.
TABLE 2
| Node identifier | Node resource occupancy rate | Sector identifier |
|---|---|---|
| IP address of computing node 1 | *** | Sector 1 |
| IP address of computing node 2 | *** | Sector 1 |
For another example, assume that the resource occupancy rate of computing node 1 in the mapping shown in Table 2 exceeds the threshold TH_max. The node manager may then evaluate whether a new BBU instance needs to be created; if so, and computing node 2 has no spare capacity to carry the new BBU instance, a computing node may be added and the new BBU instance created on it. Taking the added node to be computing node 3 as an example, the updated mapping table may again be as shown in Table 1.
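The scale-in and scale-out evaluations in the two examples above could be sketched as follows; the thresholds TH_MIN and TH_MAX and the helper functions are placeholders for policy that the application leaves open.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TH_MIN 20  /* illustrative lower occupancy threshold, percent */
#define TH_MAX 80  /* illustrative upper occupancy threshold, percent */

struct node_load {
    uint32_t node_ip;    /* node identifier            */
    uint8_t  occupancy;  /* resource occupancy, 0..100 */
};

/* Assumed helpers implemented by the node manager (elastic manager). */
extern bool peers_can_absorb_load(const struct node_load *nodes, size_t n, size_t victim);
extern void release_bbu_instance(uint32_t node_ip);
extern void create_bbu_instance_on_new_node(void);

void evaluate_scaling(const struct node_load *nodes, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (nodes[i].occupancy < TH_MIN && peers_can_absorb_load(nodes, n, i)) {
            /* Scale in: release the BBU instance carried on this node and
             * remove its row from the mapping table (table update not shown). */
            release_bbu_instance(nodes[i].node_ip);
        } else if (nodes[i].occupancy > TH_MAX) {
            /* Scale out: add a computing node, create a new BBU instance on
             * it, and add a mapping-table row for it (not shown). */
            create_bbu_instance_on_new_node();
        }
    }
}
```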
In this embodiment, the mapping table may be used by the first kernel-state program for load balancing. The first kernel-state program is a program located in the operating system kernel of the first computing node 321 and is used to provide the load balancing function for the computing nodes 32. This load balancing function can be understood as functionality added by a user to the operating system kernel of the first computing node 321; depending on the specific technology used to add functionality to the kernel, the type of the first kernel-state program may differ, and illustratively the first kernel-state program may be an eBPF program.
As shown in fig. 3, the first kernel-state program can be used to intercept messages entering from the network card of the first computing node 321. It should be understood that, since the first computing node 321 is connected to the remote radio unit x, the messages entering from the network card of the first computing node 321 include the forwarding messages from the remote radio unit x to the BBU; therefore, by intercepting the messages entering from the network card of the first computing node 321, the forwarding messages from the remote radio unit x to the BBU can be obtained. Illustratively, the messages entering from the network card of the first computing node 321 may be intercepted by hooking a specific event.
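The application does not fix a particular hook; as one common possibility, an eBPF program attached at the XDP hook of the network card sees every ingress packet before the normal network stack. The skeleton below (restricted C compiled for the BPF target) is only meant to show where the interception happens and is not the application's actual implementation.

```c
/* Minimal eBPF/XDP skeleton: every packet arriving on the first computing
 * node's network card passes through this hook. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int intercept_ingress(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Fronthaul filtering, balance dimension extraction and the load
     * balancing decision would be performed here (see later sketches). */
    (void)data;
    (void)data_end;

    return XDP_PASS;  /* let traffic continue to the local stack by default */
}

char LICENSE[] SEC("license") = "GPL";
```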
Since the messages entering from the network card of the first computing node 321 may include other types of messages besides the forwarding messages, such as synchronization messages, after interception the first kernel-state program may also filter the forwarding messages out of the intercepted messages, as shown in fig. 3. Illustratively, the forwarding messages may be filtered from the intercepted messages according to the message header structure of the eCPRI protocol.
After filtering out a forwarding message, the first kernel-state program may also determine the balance dimension identifier of the forwarding message, as shown in fig. 3. Illustratively, the balance dimension identifier may be determined from information carried in the forwarding message. For example, assuming that the balance dimension identifier is a carrier identifier and the forwarding message is an eCPRI message, since the eCPRI message carries a CC_ID subfield, the content of that CC_ID subfield can be used as the carrier identifier.
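Purely as an illustration of the filtering and identifier extraction steps: eCPRI over Ethernet uses Ethertype 0xAEFE, and the sketch below assumes the CC_ID byte sits at a fixed illustrative offset after the 4-byte eCPRI common header. The real offset depends on the eCPRI message type and the deployment, so the offsets here are assumptions, not the application's implementation.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define ETH_P_ECPRI 0xAEFE  /* Ethertype assigned to eCPRI */

/* Returns the assumed balance dimension identifier (here a carrier id read
 * from the packet), or -1 if the packet is not an eCPRI forwarding message
 * (for example a synchronization message or ordinary IP traffic). */
static __always_inline int ecpri_balance_dim(void *data, void *data_end)
{
    struct ethhdr *eth = data;

    if ((void *)(eth + 1) > data_end)
        return -1;
    if (eth->h_proto != bpf_htons(ETH_P_ECPRI))
        return -1;  /* not an eCPRI fronthaul message: filter it out */

    /* Assumed: 4-byte eCPRI common header, then one byte carrying the CC_ID
     * subfield. The offset is illustrative only. */
    unsigned char *ccid = (unsigned char *)(eth + 1) + 4;
    if ((void *)(ccid + 1) > data_end)
        return -1;

    return *ccid;
}
```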
In this embodiment of the application, after the first kernel mode program determines the balance dimension identifier of the forwarding message, as shown in fig. 3, the first kernel mode program may further perform load balancing processing according to the balance dimension identifier of the forwarding message and the mapping table, so as to determine a target computing node for processing the forwarding message from the plurality of computing nodes 32.
Optionally, the first kernel-state program may first determine candidate computing nodes and then select the target computing node from among them. Based on this, in an embodiment, the first kernel-state program may determine, from the multiple computing nodes, at least two candidate computing nodes capable of processing the fronthaul message according to the balance dimension identifier of the fronthaul message and the mapping between node identifiers and balance dimension identifiers in the mapping table, and then perform load balancing according to the mapping between node identifiers and node resource occupancy rates in the mapping table, so as to select the target computing node for processing the fronthaul message from the at least two candidate computing nodes. The load balancing policy adopted by the first kernel-state program can be chosen flexibly according to requirements, and the present application does not limit it.
For example, assuming that the balance dimension identifier of the forwarding message is sector 1 and the mapping table is as shown in Table 1, the first kernel-state program may determine, from the balance dimension identifier of the forwarding message and the mapping between node identifiers and balance dimension identifiers in the mapping table, that the candidate computing nodes capable of processing the forwarding message are computing node 1, computing node 2 and computing node 3. Further, assuming that the load balancing policy is a round-robin policy, the polling order is computing node 1 → computing node 2 → computing node 3, and the previous poll selected computing node 2, then computing node 3 may be selected this time, i.e., computing node 3 is taken as the target computing node for processing the forwarding message.
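A plain C sketch of the candidate selection plus round-robin pick used in this example follows; the table layout, the 16-candidate cap and the shared round-robin cursor are assumptions for illustration, and a real implementation could equally use any other balancing policy.

```c
#include <stddef.h>
#include <stdint.h>

struct lb_map_entry {
    uint32_t node_ip;         /* node identifier (IPv4 address)       */
    uint8_t  occupancy;       /* node resource occupancy rate, 0..100 */
    uint16_t balance_dim_id;  /* balance dimension identifier         */
};

/* Pick the target computing node for a forwarding message: collect the
 * candidates whose balance dimension identifier matches the message, then
 * rotate among them (round robin). Returns 0 if there is no candidate. */
uint32_t pick_target_node(const struct lb_map_entry *table, size_t n_entries,
                          uint16_t msg_dim_id, size_t *rr_cursor)
{
    uint32_t candidates[16];
    size_t   n_cand = 0;

    for (size_t i = 0; i < n_entries && n_cand < 16; i++)
        if (table[i].balance_dim_id == msg_dim_id)
            candidates[n_cand++] = table[i].node_ip;

    if (n_cand == 0)
        return 0;              /* no candidate for this dimension       */
    if (n_cand == 1)
        return candidates[0];  /* single candidate: no balancing needed */

    *rr_cursor = (*rr_cursor + 1) % n_cand;  /* round-robin policy */
    return candidates[*rr_cursor];
}
```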
Optionally, the load balancing policy may be set by the node manager 31, and based on this, in an embodiment, the node manager 31 may also be configured to set a load balancing policy that is used when the first kernel-state program performs load balancing.
It should be understood that when only one candidate computing node can process the forwarding message, load balancing need not be performed, and the forwarding message may be processed by that candidate computing node.
In this embodiment of the application, after the first kernel mode program determines the target computing node, the first kernel mode program may further forward the forwarding message. Based on this, in an embodiment, the first kernel-state program may be further configured to, when the target computing node is a node other than the first computing node, forward the forward message to the target computing node according to the identifier of the target computing node in the mapping table, so that the BBU instance carried on the target computing node processes the forward message; and when the target computing node is the first computing node, forwarding the forwarding message to the BBU instance carried on the first computing node so as to process the forwarding message by the BBU instance carried on the first computing node.
It should be understood that the program corresponding to the BBU instance carried on the compute node is a User mode program, the User mode program is located in a User Space (User Space) outside an operating system kernel of the first compute node 321, and the User mode program corresponding to the BBU instance carried on the first compute node may be denoted as the first User mode program.
Assume the first computing node is computing node 1 in Table 1. As shown in fig. 4, if the target computing node selected by the first kernel-state program in the operating system kernel of computing node 1 is computing node 1 itself, the fronthaul message may be forwarded to the first user-state program of computing node 1 through path 1; if the selected target computing node is computing node 2, the fronthaul message may be forwarded to computing node 2 through path 2, so that it is processed by the BBU instance carried on computing node 2; and if the selected target computing node is computing node 3, the fronthaul message may be forwarded to computing node 3 through path 3, so that it is processed by the BBU instance carried on computing node 3.
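Inside an XDP-style program, the path 1 / path 2 / path 3 decision of fig. 4 could be sketched as below. Rewriting the destination MAC and calling bpf_redirect() is only one possible forwarding mechanism, and the peer_table map and its contents are assumptions, not part of the application.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

struct peer_info {
    unsigned char mac[ETH_ALEN];  /* next-hop MAC of the peer computing node */
    __u32         ifindex;        /* egress interface towards that node      */
};

/* Assumed neighbour map, filled from user space: target node IP -> peer info. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 64);
    __type(key, __u32);
    __type(value, struct peer_info);
} peer_table SEC(".maps");

static __always_inline int forward_to_target(struct xdp_md *ctx,
                                             __u32 target_ip, __u32 local_ip)
{
    if (target_ip == local_ip)
        return XDP_PASS;  /* path 1: up to the local BBU instance in user space */

    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;

    if ((void *)(eth + 1) > data_end)
        return XDP_DROP;

    struct peer_info *peer = bpf_map_lookup_elem(&peer_table, &target_ip);
    if (!peer)
        return XDP_DROP;

    __builtin_memcpy(eth->h_dest, peer->mac, ETH_ALEN);  /* retarget the frame */
    return bpf_redirect(peer->ifindex, 0);               /* paths 2 and 3      */
}
```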
In the embodiment of the present application, the number of BBU instances carried on the same compute node may be multiple, and the equalization dimension identifications corresponding to the multiple BBU instances may be the same or different. When the balance dimension identifiers corresponding to the multiple BBU instances are the same, forwarding the forwarding message may not consider the correspondence between the balance dimension identifiers and the BBU instances. When the balance dimension identifications corresponding to the multiple BBU instances are different, the forwarding of the forwarding message may consider a correspondence between the balance dimension identifications and the BBU instances.
Optionally, information for distinguishing a correspondence between the balance dimension identifier and the BBU instance may be recorded in the mapping table. Based on this, in an embodiment, for a case where the same computing node carries multiple BBU instances corresponding to different balanced dimension identifiers, a mapping relationship between the instances and the identifiers related to the instances may also be recorded in the mapping table. The instance related identifier may specifically be any type of identifier that can be used to forward the forwarding message to the BBU instance corresponding to the balance dimension identifier thereof, and illustratively, the instance related identifier includes a port number or a virtual network card address.
The forwarding, by the first kernel-state program, the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table may specifically include: and forwarding the forwarding message to the target computing node according to the identifier of the target computing node and the target instance related identifier in the mapping table, so that the forwarding message is processed by the BBU instance which is loaded on the target computing node and corresponds to the target instance related identifier, wherein the target instance related identifier is an instance related identifier corresponding to the identifier of the target computing node and the equilibrium dimension identifier of the forwarding message.
Take as an example that the instance-related identifier is a port number and that, among computing nodes 1 to 3, computing node 3 carries multiple BBU instances corresponding to different balance dimension identifiers; the mapping table may then be as shown in Table 3 below.
TABLE 3
| Node identifier | Node resource occupancy rate | Sector identifier | Port number |
|---|---|---|---|
| IP address of computing node 1 | *** | Sector 1 | 20 |
| IP address of computing node 2 | *** | Sector 1 | 20 |
| IP address of computing node 3 | *** | Sector 1 | 20 |
| IP address of computing node 3 | *** | Sector 2 | 30 |
For example, assuming that the sector identifier of the forwarding message is sector 1, and the target computing node selected by the first kernel-state program from the computing node 1 to the computing node 3 for the forwarding message is the computing node 3, the first kernel-state program may forward the forwarding message to the computing node 3 according to the IP address and the port number 20 of the computing node 3, so as to process the forwarding message by the BBU instance corresponding to the sector 1 carried on the computing node 3.
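One way to read a Table 3 style mapping as used in this example, again an assumption about the data layout rather than the application's format, is to key the lookup on the pair of target node identifier and balance dimension identifier and return the instance-related identifier (here a port number) alongside the node address:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct lb_map_entry {
    uint32_t node_ip;         /* node identifier (IPv4 address)         */
    uint8_t  occupancy;       /* node resource occupancy rate, 0..100   */
    uint16_t balance_dim_id;  /* balance dimension identifier           */
    uint16_t instance_port;   /* instance-related identifier (port no.) */
};

/* Find the instance-related identifier for the chosen target node and the
 * message's balance dimension identifier, so that the forwarding message
 * reaches the BBU instance serving that dimension on the target node. */
bool lookup_instance_port(const struct lb_map_entry *table, size_t n_entries,
                          uint32_t target_ip, uint16_t msg_dim_id,
                          uint16_t *port_out)
{
    for (size_t i = 0; i < n_entries; i++) {
        if (table[i].node_ip == target_ip &&
            table[i].balance_dim_id == msg_dim_id) {
            *port_out = table[i].instance_port;
            return true;
        }
    }
    return false;
}
```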
Taking as an example that the first kernel-state program is an eBPF program, that there are three computing nodes, and that load balancing is performed along the sector dimension, the architecture for load balancing in the Cloud RAN using the load balancing system provided by this embodiment may be as shown in fig. 5. In fig. 5, the node manager 31 may acquire the node occupancy rates of computing node 1, computing node 2 and computing node 3, and update the mapping table according to those occupancy rates. The rectangular boxes directly above computing node 1, computing node 2 and computing node 3 represent the BBU instances carried on them.
Computing node 1 is connected to the RRUx, which may correspond to three sectors (sector 1, sector 2 and sector 3), and the terminal may be located in any one of them. The eCPRI traffic from the RRUx to the BBU is sent to computing node 1 over the connection between the RRUx and computing node 1; the eBPF program running in computing node 1 can intercept messages from the network card, filter the eCPRI messages out of the intercepted messages, and select a target computing node for processing each eCPRI message. It should be understood that before computing node 3 is scaled in, the target computing node may be computing node 1, computing node 2 or computing node 3; after computing node 3 is scaled in, the target computing node may be computing node 1 or computing node 2.
Computing node 1 may also be connected to computing node 2 and computing node 3 respectively, so that when the target computing node selected for an eCPRI message is computing node 2 or computing node 3, the eBPF program can forward the eCPRI message over the connection to that node. It should be understood that the path from the eBPF program to computing node 3 exists before computing node 3 is scaled in and no longer exists after it is scaled in.
The load balancing system provided by this embodiment includes a node manager and a plurality of computing nodes for carrying BBU instances, where a first computing node among the plurality of computing nodes is used to connect to the remote radio unit and runs a first kernel-state program holding a mapping table. The node manager updates the mapping table according to the resource occupancy rates of the plurality of computing nodes. The first kernel-state program intercepts the messages entering from the network card of the first computing node, filters the fronthaul messages out of them, determines the balance dimension identifier of each fronthaul message, and performs load balancing according to that identifier and the mapping table so as to determine, from the plurality of computing nodes, a target computing node for processing the fronthaul message. Load balancing among the plurality of computing nodes is thus achieved through software added to the first computing node; because no additional customized load balancer device needs to be purchased, cost is saved, and because software on a computing node is easier to change than customized hardware, flexibility is higher.
Fig. 6 is a schematic flow diagram of a load balancing method according to an embodiment of the present application, where the load balancing method may be applied to the load balancing system described in the foregoing embodiment, and specifically may be applied to a first computing node in the load balancing system, as shown in fig. 6, the method provided in this embodiment may include:
step 61, intercepting the message entering from the network card of the first computing node, filtering the forwarding message from the RRU to the BBU from the message, and determining the equilibrium dimension identification of the forwarding message;
and 62, performing load balancing processing according to the balance dimension identifier of the forwarding message and a mapping table to determine a target computing node for processing the forwarding message from a plurality of computing nodes, wherein the mapping table records a mapping relation among the node identifier, the node resource occupancy rate and the balance dimension identifier.
Optionally, the determining, according to the equilibrium dimension identifier of the forwarding message and the mapping table, a target computing node for processing the forwarding message from the plurality of computing nodes may specifically include:
determining at least two candidate computing nodes capable of processing the forwarding message from the plurality of computing nodes according to the balance dimension identification of the forwarding message and the mapping relation between the node identification and the balance dimension identification in the mapping table, and performing load balance processing according to the mapping relation between the node identification and the node resource occupancy rate in the mapping table to select a target computing node for processing the forwarding message from the at least two candidate computing nodes.
Optionally, the method provided in the embodiment of the present application may further include: when the target computing node is a node other than the first computing node, forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, so that the BBU instance carried on the target computing node processes the forwarding message; and when the target computing node is the first computing node, forwarding the forwarding message to the BBU instance carried on the first computing node, so that the BBU instance processes the forwarding message.
Optionally, a plurality of BBU instances corresponding to different balanced dimension identifiers are loaded on the same computing node, and a mapping relationship between identifiers related to the instances is also recorded in the mapping table; in step 62, forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, which may specifically include: and forwarding the forwarding message to the target computing node according to the identifier of the target computing node and the target instance related identifier in the mapping table, so that the forwarding message is processed by a BBU instance which is loaded on the target computing node and corresponds to the target instance related identifier, wherein the target instance related identifier is an instance related identifier corresponding to the identifier of the target computing node and the equilibrium dimension identifier of the forwarding message.
Optionally, the instance related identifier includes a port number or a virtual network card address.
Optionally, the equalization dimension identifier includes a cell identifier, a sector identifier, or a carrier identifier.
It should be noted that, regarding the specific manner of load balancing performed by the first computing node, reference may be made to the relevant description in the embodiment shown in fig. 3, and details are not described here again.
In the load balancing method provided by the embodiments of this application, the messages entering from the network card of the first computing node are intercepted, the forwarding messages are filtered out of them, the balance dimension identifier of each forwarding message is determined, and load balancing is performed according to that identifier and the mapping table to determine, from the plurality of computing nodes, a target computing node for processing the forwarding message. Load balancing among the plurality of computing nodes is thus realized by software added to the first computing node, and no specially customized load balancer device needs to be purchased, so the cost of load balancing is reduced and flexibility is high.
Fig. 7 is a schematic structural diagram of a load balancing apparatus according to an embodiment of the present application; referring to fig. 7, the present embodiment provides a load balancing apparatus, which may perform the load balancing method shown in fig. 6, and specifically, the apparatus may include:
an interception and filtering module 71, configured to intercept a message entering from the network card of the first computing node, filter a fronthaul message from the RRU to the BBU from the message, and determine a balanced dimension identifier of the fronthaul message;
and a load balancing module 72, configured to perform load balancing processing according to the balanced dimension identifier of the forwarding message and a mapping table, so as to determine a target computing node for processing the forwarding message from the multiple computing nodes, where a mapping relationship among the node identifier, the node resource occupancy rate, and the balanced dimension identifier is recorded in the mapping table.
Optionally, the load balancing module 72 may be specifically configured to: determining at least two candidate computing nodes capable of processing the forwarding message from the plurality of computing nodes according to the balance dimension identification of the forwarding message and the mapping relation between the node identification and the balance dimension identification in the mapping table, and performing load balance processing according to the mapping relation between the node identification and the node resource occupancy rate in the mapping table to select a target computing node for processing the forwarding message from the at least two candidate computing nodes.
Optionally, the load balancing module 72 may be further configured to: when the target computing node is a node other than the first computing node, forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, so that the BBU instance carried on the target computing node processes the forwarding message; and when the target computing node is the first computing node, forwarding the forwarding message to the BBU instance carried on the first computing node, so that the BBU instance processes the forwarding message.
Optionally, a plurality of BBU instances corresponding to different balanced dimension identifiers are loaded on the same computing node, and a mapping relationship between identifiers related to the instances is also recorded in the mapping table; the load balancing module 72 is configured to forward the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, and specifically may include: and forwarding the forwarding message to the target computing node according to the identifier of the target computing node and the target instance related identifier in the mapping table, so that the forwarding message is processed by a BBU instance which is loaded on the target computing node and corresponds to the target instance related identifier, wherein the target instance related identifier is an instance related identifier corresponding to the identifier of the target computing node and the equilibrium dimension identifier of the forwarding message.
Optionally, the instance related identifier includes a port number or a virtual network card address.
Optionally, the equalization dimension identifier includes a cell identifier, a sector identifier, or a carrier identifier.
The apparatus shown in fig. 7 can perform the method of the embodiment shown in fig. 6, and reference may be made to the related description of the embodiment shown in fig. 6 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 6, and are not described herein again.
In one possible implementation, the structure of the apparatus shown in FIG. 7 may be implemented as a computer device. As shown in fig. 8, the computer apparatus may include: a processor 81 and a memory 82. Wherein the memory 82 is used for storing a program for supporting a computer device to execute the method provided in the embodiment shown in fig. 6, and the processor 81 is configured for executing the program stored in the memory 82.
The program comprises one or more computer instructions which, when executed by the processor 81, are capable of performing the steps of:
intercepting a message entering from a network card of the first computing node, filtering a forwarding message from the RRU to the BBU from the message, and determining a balanced dimension identifier of the forwarding message;
and carrying out load balancing processing according to the balance dimension identification of the forward message and a mapping table to determine a target computing node for processing the forward message from a plurality of computing nodes, wherein the mapping table records the mapping relation among the node identification, the node resource occupancy rate and the balance dimension identification.
Optionally, the processor 81 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 6.
The computer device may further include a communication interface 83 for the computer device to communicate with other devices or a communication network.
In addition, the embodiment of the present application also provides a computer program, which includes computer program instructions, and when the instructions are executed by a processor, the method provided by the embodiment shown in fig. 6 is implemented.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the method provided by the embodiment shown in fig. 6 is implemented.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement such a technique without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A load balancing system, comprising: a node manager and a plurality of computing nodes, wherein the computing nodes are used for bearing BBU instances, a first computing node in the computing nodes is used for being connected with an RRU, a first kernel-state program runs on the first computing node, the first kernel-state program has a mapping table, and the mapping table records mapping relations among node identifiers, node resource occupancy rates and balance dimension identifiers;
the node manager is used for updating the mapping table according to the resource occupancy rates of the plurality of computing nodes;
the first kernel-state program is used for intercepting messages entering from a network card of the first computing node, filtering a forwarding message from the RRU to the BBU out of the messages, and determining a balance dimension identifier of the forwarding message;
the first kernel mode program is further configured to perform load balancing processing according to the balancing dimension identifier of the forwarding message and the mapping table, so as to determine a target computing node for processing the forwarding message from the plurality of computing nodes.
2. The system according to claim 1, wherein the performing, by the first kernel-state program, load balancing processing according to the balance dimension identifier of the forwarding message and the mapping table so as to determine a target computing node for processing the forwarding message from the plurality of computing nodes specifically comprises:
determining, from the plurality of computing nodes, at least two candidate computing nodes capable of processing the forwarding message according to the balance dimension identifier of the forwarding message and the mapping relation between the node identifier and the balance dimension identifier in the mapping table, and performing load balancing processing according to the mapping relation between the node identifier and the node resource occupancy rate in the mapping table to select, from the at least two candidate computing nodes, a target computing node for processing the forwarding message.
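As a rough illustration of the two-step selection in claim 2 (filter candidates by balance dimension identifier, then choose by node resource occupancy rate), the sketch below builds on the hypothetical MappingTable above. Picking the least-occupied candidate is only one possible policy, not the one the claim fixes.

    def select_target_node(table: MappingTable, balance_dim_id: int) -> str:
        """Pick the computing node that will process a forwarding message."""
        candidates = table.candidates(balance_dim_id)
        if not candidates:
            raise LookupError(f"no node serves balance dimension {balance_dim_id}")
        # Example policy: choose the candidate with the lowest resource occupancy.
        return min(candidates, key=lambda e: e.occupancy).node_id

    # Example usage with the hypothetical table type defined earlier.
    table = MappingTable()
    table.add_entry(MappingEntry("node-a", 0.72, balance_dim_id=101))
    table.add_entry(MappingEntry("node-b", 0.35, balance_dim_id=101))
    print(select_target_node(table, 101))  # prints "node-b"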
3. The system of claim 1, wherein the first kernel-state program is further configured to:
when the target computing node is a node other than the first computing node, forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, so that the BBU instance carried on the target computing node processes the forwarding message;
and when the target computing node is the first computing node, forwarding the forwarding message to the BBU instance carried on the first computing node, so that the BBU instance processes the forwarding message.
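The branch in claim 3 (forward to a remote target node, or hand over to the local BBU instance) could be pictured as below; forward_to_node() and deliver_to_local_bbu() are hypothetical placeholders standing in for the actual packet-forwarding mechanics, which the claim does not specify.

    def forward_to_node(node_id: str, message: bytes) -> None:
        # Placeholder: a real system would re-emit the packet toward node_id.
        print(f"forwarding {len(message)} bytes to {node_id}")

    def deliver_to_local_bbu(message: bytes) -> None:
        # Placeholder: a real system would hand the packet to the local BBU instance.
        print(f"delivering {len(message)} bytes to the local BBU instance")

    def dispatch(message: bytes, target_node_id: str, first_node_id: str) -> None:
        """Route a forwarding message to the BBU instance chosen by load balancing."""
        if target_node_id != first_node_id:
            forward_to_node(target_node_id, message)
        else:
            deliver_to_local_bbu(message)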
4. The system according to claim 3, wherein a plurality of BBU instances corresponding to different balance dimension identifiers are carried on a same computing node, and mapping relations involving instance-related identifiers are further recorded in the mapping table;
the forwarding, by the first kernel-state program, of the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table specifically comprises: forwarding the forwarding message to the target computing node according to the identifier of the target computing node and a target instance-related identifier in the mapping table, so that the forwarding message is processed by a BBU instance which is carried on the target computing node and corresponds to the target instance-related identifier, wherein the target instance-related identifier is the instance-related identifier corresponding to the identifier of the target computing node and the balance dimension identifier of the forwarding message.
5. The system according to any of claims 1-4, wherein the node manager is further configured to set a load balancing policy to be used by the first kernel-state program for load balancing.
6. The system according to any of claims 1-4, wherein the balance dimension identifier comprises a cell identifier, a sector identifier, or a carrier identifier.
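Claim 6 only enumerates the kinds of balance dimension identifier; a hypothetical tagged representation, purely for concreteness, might be:

    from enum import Enum

    class BalanceDimensionKind(Enum):
        CELL = "cell"
        SECTOR = "sector"
        CARRIER = "carrier"

    # A balance dimension identifier could then be modelled as a (kind, value) pair,
    # e.g. (BalanceDimensionKind.CELL, 101) for cell 101. This is an illustration only.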
7. A load balancing method, applied to a first computing node, comprising:
intercepting messages entering from a network card of the first computing node, filtering a forwarding message from an RRU to a BBU out of the messages, and determining a balance dimension identifier of the forwarding message;
and performing load balancing processing according to the balance dimension identifier of the forwarding message and a mapping table to determine, from a plurality of computing nodes, a target computing node for processing the forwarding message, wherein the mapping table records mapping relations among node identifiers, node resource occupancy rates and balance dimension identifiers.
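Tying the method of claim 7 together, the sketch below strings the earlier hypothetical pieces into one packet-handling path. extract_balance_dim_id() is a stand-in for however the balance dimension identifier is actually parsed from the fronthaul message; the claim does not fix a frame format, and filtering of non-fronthaul traffic is omitted here.

    def extract_balance_dim_id(message: bytes) -> int:
        # Placeholder parser: assume the identifier sits in the first two bytes.
        return int.from_bytes(message[:2], "big")

    def handle_packet(message: bytes, table: MappingTable, first_node_id: str) -> None:
        """Claim-7-style flow: classify, balance and dispatch one forwarding message."""
        balance_dim_id = extract_balance_dim_id(message)
        target_node_id = select_target_node(table, balance_dim_id)
        dispatch(message, target_node_id, first_node_id)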
8. The method of claim 7, wherein the determining, from the plurality of computing nodes, a target computing node for processing the forwarding message according to the balance dimension identifier of the forwarding message and a mapping table comprises:
determining, from the plurality of computing nodes, at least two candidate computing nodes capable of processing the forwarding message according to the balance dimension identifier of the forwarding message and the mapping relation between the node identifier and the balance dimension identifier in the mapping table, and performing load balancing processing according to the mapping relation between the node identifier and the node resource occupancy rate in the mapping table to select, from the at least two candidate computing nodes, a target computing node for processing the forwarding message.
9. The method of claim 7, further comprising:
when the target computing node is a node other than the first computing node, forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table, so that the BBU instance carried on the target computing node processes the forwarding message;
and when the target computing node is the first computing node, forwarding the forwarding message to the BBU instance carried on the first computing node, so that the BBU instance processes the forwarding message.
10. The method according to claim 9, wherein a plurality of BBU instances corresponding to different balance dimension identifiers are carried on a same computing node, and mapping relations involving instance-related identifiers are further recorded in the mapping table;
the forwarding the forwarding message to the target computing node according to the identifier of the target computing node in the mapping table comprises: forwarding the forwarding message to the target computing node according to the identifier of the target computing node and a target instance-related identifier in the mapping table, so that the forwarding message is processed by a BBU instance which is carried on the target computing node and corresponds to the target instance-related identifier, wherein the target instance-related identifier is the instance-related identifier corresponding to the identifier of the target computing node and the balance dimension identifier of the forwarding message.
11. The method according to any of claims 7-10, wherein the balance dimension identifier comprises a cell identifier, a sector identifier, or a carrier identifier.
12. A computer device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of claims 7 to 11.
13. A computer program comprising computer program instructions which, when executed by a processor, implement the method of any one of claims 7 to 11.
14. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the method of any of claims 7 to 11.
CN202111101891.6A 2021-09-18 2021-09-18 Load balancing system, method and equipment Pending CN113886072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111101891.6A CN113886072A (en) 2021-09-18 2021-09-18 Load balancing system, method and equipment


Publications (1)

Publication Number Publication Date
CN113886072A true CN113886072A (en) 2022-01-04

Family

ID=79009989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101891.6A Pending CN113886072A (en) 2021-09-18 2021-09-18 Load balancing system, method and equipment

Country Status (1)

Country Link
CN (1) CN113886072A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242798A (en) * 2022-06-30 2022-10-25 阿里巴巴(中国)有限公司 Task scheduling method based on edge cloud, electronic equipment and storage medium
CN115242798B (en) * 2022-06-30 2023-09-26 阿里巴巴(中国)有限公司 Task scheduling method based on edge cloud, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240130
Address after: Room 553, 5th Floor, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121
Applicant after: Hangzhou Alibaba Cloud Feitian Information Technology Co.,Ltd.
Country or region after: China
Address before: 310023 Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province
Applicant before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.
Country or region before: China