WO2023155550A1 - Message sending methods, message sending apparatus and storage medium - Google Patents

Message sending methods, message sending apparatus and storage medium

Info

Publication number
WO2023155550A1
Authority
WO
WIPO (PCT)
Prior art keywords
pfu
omu
active
load
message sending
Prior art date
Application number
PCT/CN2022/136961
Other languages
French (fr)
Chinese (zh)
Inventor
黄亮亮
杨延亮
张秀丽
葛昊
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2023155550A1 publication Critical patent/WO2023155550A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/082 Load balancing or load distribution among bearers or channels
    • H04W 28/09 Management thereof
    • H04W 28/0925 Management thereof using policies
    • H04W 28/0933 Management thereof using policies based on load-splitting ratios

Definitions

  • the present application relates to but not limited to the technical field of communication, and in particular relates to a method for sending a message, a device for sending a message and a storage medium.
  • the load balancing strategy currently used in the core network is mainly based on the IP (Internet Protocol) address of the UE (User Equipment): the IP address, port and so on are hashed and the traffic is then distributed to the back-end loads, so that the load is relatively balanced among the service processors of the back-end loads.
  • this load balancing method works reasonably well when traffic is small and does not fluctuate sharply.
  • however, with the rapid development of 5G (5th Generation Mobile Communication Technology), users' demand for traffic is growing day by day.
  • the traffic of the entire core network grows geometrically, and the cluster capacity of the entire core network increases dramatically.
  • the currently used load balancing strategy can therefore no longer meet the needs of the current core network.
  • in real applications, the working status of a given back-end load often changes abruptly.
  • in severe cases the service processor restarts abnormally, causing service switchover and affecting users.
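  • For illustration only, the conventional scheme described above can be sketched as a static hash over the UE IP address and port; the hash function and the back-end names below are assumptions, since the application does not prescribe them.

```python
import hashlib

def pick_backend(ue_ip: str, ue_port: int, backends: list[str]) -> str:
    """Statically map a flow to a back-end by hashing the UE IP address and port.

    This is the conventional scheme described above: the mapping never changes,
    regardless of how loaded each back-end currently is.
    """
    key = f"{ue_ip}:{ue_port}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["load-1", "load-2", "load-3"]
print(pick_backend("10.0.0.7", 40312, backends))  # always the same back-end for this flow
```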
  • Embodiments of the present application provide a message sending method, a message sending device, and a storage medium.
  • the embodiments of the present application provide a message sending method applied to a direct route load balancer (DRLB), where the DRLB includes an operation and management unit (OMU) and a plurality of packet forwarding units (PFUs); the OMU is configured to manage the PFUs, and each PFU is configured to send messages to a corresponding back-end load. The message sending method includes: the PFU obtains first status information of the back-end load, dynamically adjusts, according to the first status information, a load balancing policy for distributing traffic to the back-end load, and sends messages to the back-end load according to the load balancing policy.
  • the embodiments of the present application also provide a message sending method applied to the active PFU, where the active PFU is one of a plurality of PFUs and each PFU is configured to send messages to the corresponding back-end load.
  • the message sending method includes: sending a first status query request to all the back-end loads according to a preset request sending period; receiving the first status information sent by the back-end loads according to the first status query request, and determining the load balancing result of the back-end loads according to the first status information; determining a first load balancing policy according to the load balancing result, and sending messages to the corresponding back-end loads according to the first load balancing policy; and synchronizing the load balancing result to the non-active packet forwarding units, so that the non-active packet forwarding units determine a second load balancing policy according to the load balancing result and send messages to the corresponding back-end loads according to the second load balancing policy.
  • the embodiments of the present application also provide a message sending device, including a memory and a processor, where the memory stores a computer program, and the processor implements the message sending method of the first aspect or the second aspect when executing the computer program.
  • the embodiments of the present application also provide a computer-readable storage medium, where the storage medium stores a program, and when the program is executed by a processor, the message sending method of the first aspect or the second aspect is implemented.
  • FIG. 1 is a schematic diagram of an overall network architecture provided by an embodiment of the present application.
  • FIG. 2 is a partial structural schematic diagram of the DRLB and the back-end load provided by the embodiment of the present application;
  • FIG. 3 is a schematic flow diagram of a message sending method provided in an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of a message sending method provided in an embodiment of the present application.
  • FIG. 5 is a schematic flow diagram of determining the active PFU provided by the embodiment of the present application.
  • FIG. 6 is another schematic flow diagram of determining the active PFU provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of the main PFU election process provided by the embodiment of the present application.
  • FIG. 8 is a schematic diagram of the primary PFU election process provided by the embodiment of the present application when a PFU is abnormal
  • FIG. 9 is a schematic diagram of the workflow of the DRLB implementing the back-end load balancing strategy provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of the uplink and downlink traffic workflow of network elements based on the DRLB core network service chain provided by the embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a message sending device provided by an embodiment of the present application.
  • orientation descriptions, such as the orientations or positional relationships indicated by "up", "down" and the like, are based on the orientations or positional relationships shown in the drawings and are only for the convenience of describing the application and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the application.
  • the load balancing strategy currently used in the core network is mainly based on the IP (Internet Protocol) address of the UE (User Equipment): the IP address, port and so on are hashed and the traffic is then distributed to the back-end loads, so that the load is relatively balanced among the service processors of the back-end loads.
  • this load balancing method works reasonably well when traffic is small and does not fluctuate sharply.
  • however, with the rapid development of 5G (5th Generation Mobile Communication Technology), users' demand for traffic is growing day by day.
  • the traffic of the entire core network grows geometrically, and the cluster capacity of the entire core network increases dramatically.
  • the currently used load balancing strategy can therefore no longer meet the needs of the current core network.
  • in real applications, the working status of a given back-end load often changes abruptly.
  • in severe cases the service processor restarts abnormally, causing service switchover and affecting users.
  • the embodiments of the present application provide a message sending method, a message sending device, and a storage medium, which can reduce the impact of a sudden change in the working state of the back-end load and improve the reliability of message sending.
  • FIG. 1 is a schematic diagram of the overall network architecture provided by the embodiment of the present application, wherein:
  • the xGW can represent the core network media plane network element, such as UPF (User Port Function, user port function).
  • SW1 and SW2 may represent gateway switches in the data center.
  • the DRLB (Direct Route Load Balancer) is mainly used for load balancing of back-end loads in service chain scenarios.
  • LOAD may represent a back-end load device, for example a virtualized load of the current mobile network such as a virtual firewall (vFW) or a TCP optimizer (TCPO).
  • the PDN may represent the Internet, for example, a website where users usually use wireless mobile networks to watch videos and search for information.
  • Fig. 2 is a partial structural schematic diagram of the DRLB and the back-end load provided by the embodiment of the present application, in which:
  • DRLB: composed of an OMU (Operation and Management Unit) and PFUs (Packet Forwarding Units).
  • the OMU may be a dual-machine setup or an N+1 cluster and is mainly responsible for the management of the DRLB;
  • the PFUs form an N+1 cluster and are mainly responsible for the service processing of the DRLB, where N is a positive integer.
  • LOAD: mainly refers to the virtualized load of the current mobile network and is composed of an OMU and loads.
  • the OMU may be a dual-machine setup or an N+1 cluster and is mainly responsible for the management of the LOAD; the loads form an N+1 cluster and are mainly responsible for the service processing of the LOAD.
  • multiple PFUs can be integrated in the same device or installed in different discrete devices.
  • the OMU and PFU can be integrated in the same device or installed in different discrete devices.
  • the OMU and multiple PFUs are integrated in the same device to form a DRLB.
  • in the embodiments of the present application, a DRLB network element is added to the core network link; the DRLB network element performs load detection on the back-end loads and dynamically adjusts the load conditions among the loads according to the detection results, so as to achieve dynamic load balancing.
  • the embodiments of the present application provide a message sending method applied to the DRLB, where the DRLB includes an OMU and multiple PFUs; the OMU is configured to manage the PFUs, and each PFU is configured to send messages to the corresponding back-end load; the message sending method includes:
  • the PFU obtains the first state information of the backend load, dynamically adjusts the load balancing strategy for distributing traffic to the backend load according to the first state information, and sends packets to the backend load according to the load balancing strategy.
  • the PFU dynamically adjusts the load balancing policy for distributing traffic to the back-end loads according to the first status information and sends messages to the back-end loads according to the load balancing policy; the load conditions among the back-end loads can thus be dynamically adjusted according to their status to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working status of the back-end loads and improving the reliability of message sending.
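  • As a minimal, hypothetical sketch of this idea (not the application's mandated algorithm), the reported CPU occupancy, session proportion and throughput can be folded into per-back-end distribution weights so that heavily loaded back-ends receive less new traffic; the weighting formula and field names below are illustrative assumptions.

```python
import random

def compute_weights(state: dict[str, dict[str, float]]) -> dict[str, float]:
    """Derive distribution weights from the first status information of each back-end load.

    state maps a back-end name to its reported metrics, e.g.
    {"load-1": {"cpu": 0.35, "sessions": 0.2, "throughput": 0.3}, ...}
    Lower combined utilisation yields a higher weight (illustrative formula).
    """
    weights = {}
    for name, m in state.items():
        utilisation = 0.5 * m["cpu"] + 0.3 * m["sessions"] + 0.2 * m["throughput"]
        weights[name] = max(0.0, 1.0 - utilisation)
    total = sum(weights.values()) or 1.0   # avoid division by zero if everything is saturated
    return {name: w / total for name, w in weights.items()}

def pick_backend(weights: dict[str, float]) -> str:
    """Pick a back-end for a new flow according to the dynamically adjusted weights."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

state = {
    "load-1": {"cpu": 0.90, "sessions": 0.80, "throughput": 0.70},  # heavily loaded
    "load-2": {"cpu": 0.20, "sessions": 0.10, "throughput": 0.15},
}
weights = compute_weights(state)
print(weights)             # load-2 gets most of the new traffic
print(pick_backend(weights))
```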
  • FIG. 3 is a schematic flowchart of a message sending method provided by an embodiment of the present application.
  • the message sending method is applied to DRLB.
  • the message sending method includes but is not limited to the following steps 301 to 303 .
  • Step 301 The active OMU determines the active PFU from multiple PFUs
  • Step 302 The active PFU sends a first status query request to all back-end loads according to the preset request sending period, receives the first status information sent by the back-end loads according to the first status query request, determines the load balancing result of the back-end loads according to the first status information, and synchronizes the load balancing result to the non-active PFUs;
  • Step 303 Each PFU determines a load balancing strategy according to the load balancing result, and sends a message to the corresponding backend load according to the load balancing strategy.
  • non-active PFU refers to other PFUs in the plurality of PFUs except the active PFU.
  • the preset request sending period may be set according to the actual situation, for example, it may be 1 minute, 5 minutes, 10 minutes, etc., which is not limited in this embodiment of the present application.
  • the backend load may be a virtualized load of the current mobile network, such as a virtual firewall (vFW), a TCP optimizer (TCPO), etc., which is not limited in this embodiment of the present application.
  • the first status query request may be an ICMP (Internet Control Message Protocol) query request, etc., which is not limited in the embodiments of the present application.
  • the first state information may be the CPU (Central Processing Unit) occupancy rate of the back-end load, session proportion, throughput, etc., which are not limited in this embodiment of the present application.
  • the load balancing policy is used to realize the traffic load balancing of message sending.
  • the first load balancing policy is the load balancing policy corresponding to the active PFU,
  • and the second load balancing policy is the load balancing policy corresponding to the non-active PFUs.
  • each PFU executes its own corresponding load balancing policy.
  • the DRLB network element is obtained by integrating the primary OMU and multiple PFUs.
  • the DRLB network element is introduced based on the service chain scenario, and the DRLB network element sends the first status query request to all back-end loads according to the preset request sending period, so that load balancing of the back-end loads can be detected and processed, and the load conditions among the back-end loads can be dynamically adjusted according to the detection results to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working status of the back-end loads and improving the reliability of message sending.
  • the load balancing result is synchronized to multiple PFUs, and each PFU sends a message to the corresponding back-end load, which can better adapt to the message sending in a large traffic scenario and improve the stability of message sending.
  • FIG. 4 is another schematic flow chart of the message sending method provided by the embodiment of the present application.
  • the message sending method is applied to the active PFU; there are multiple PFUs in the entire network architecture, each PFU is configured to send messages to the corresponding back-end load, and the active PFU is one of the multiple PFUs.
  • the above message sending method includes the following steps 401 to 404.
  • Step 401 Send a first status query request to all backend loads according to a preset request sending cycle
  • Step 402 Receive the first state information sent by the backend load according to the first state query request, and determine the load balancing result of the backend load according to the first state information;
  • Step 403 Determine the first load balancing strategy according to the load balancing result, and send the message to the corresponding backend load according to the first load balancing strategy;
  • Step 404 Synchronize the load balancing result to the non-active PFU, so that the non-active PFU determines a second load balancing strategy according to the load balancing result and sends a message to the corresponding backend load according to the second load balancing strategy.
  • multiple PFUs may be integrated in the same device, or may be set in different discrete devices, which is not limited in this embodiment of the present application.
  • the above steps 401 to 404 send the first status query request to all backend loads according to the preset request sending cycle, so that the backend load can be detected and processed for load balancing, and each backend can be dynamically adjusted according to the detection results Load conditions between loads to achieve the effect of dynamic load balancing, thereby reducing the impact of sudden changes in the working status of back-end loads and improving the reliability of message sending.
  • the load balancing result is synchronized to multiple PFUs, and each PFU sends a message to the corresponding back-end load, which can better adapt to the message sending in a large traffic scenario and improve the stability of message sending.
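  • The loop below is a rough sketch of steps 401 to 404 on the active PFU: periodic query, result computation, local (first) policy, and synchronization to the non-active PFUs. The transport of the query and synchronization messages is abstracted behind illustrative callables, and the period and the result formula are assumptions.

```python
import time
from typing import Callable

def active_pfu_loop(
    backends: list[str],
    query_backend: Callable[[str], dict],        # sends the first status query request (e.g. ICMP)
    sync_to_non_active: Callable[[dict], None],  # pushes the load balancing result to non-active PFUs
    apply_policy: Callable[[dict], None],        # installs the first load balancing policy locally
    period_s: float = 60.0,
    rounds: int = 3,
) -> None:
    """Steps 401-404: periodic query, result computation, local policy, synchronization."""
    for _ in range(rounds):                       # a real PFU would loop indefinitely
        state = {b: query_backend(b) for b in backends}                   # steps 401/402
        result = {b: 1.0 - s.get("cpu", 0.0) for b, s in state.items()}  # illustrative result
        apply_policy(result)                                              # step 403
        sync_to_non_active(result)                                        # step 404
        time.sleep(period_s)

# Illustrative stand-ins for the real query/sync/apply mechanisms:
active_pfu_loop(
    backends=["load-1", "load-2"],
    query_backend=lambda b: {"cpu": 0.4},
    sync_to_non_active=lambda result: print("sync to non-active PFUs:", result),
    apply_policy=lambda result: print("apply first policy:", result),
    period_s=0.01,
    rounds=1,
)
```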
  • the active PFU is set with a target flag bit. Since there are multiple PFUs, before the above step 401, the active PFU is first determined from the multiple PFUs. For determining the active PFU, refer to FIG. 5, which is a schematic flowchart of determining the active PFU provided by an embodiment of the present application.
  • the above message sending method may further include the following steps 501 to 503 .
  • Step 501 receiving a second status query request sent by the OMU
  • Step 502 Send the second status information to the OMU according to the second status query request, so that the OMU can determine the active PFU from multiple PFUs according to the second status information;
  • Step 503 Receive the determination result of the active PFU sent by the OMU, set the target flag bit according to the determination result, and send the setting result to the OMU.
  • the OMU is set to manage the PFU, and the OMU and the PFU may be integrated in the same device, or may be set in different separate devices, which is not limited in this embodiment of the present application.
  • the second state information may be the power-on time and the virtual machine ID.
  • before becoming the active PFU, a PFU receives the second status query request from the OMU; at this time it needs to report its own status, namely the power-on time and the virtual machine ID, to the OMU.
  • the OMU determines the active PFU according to the power-on times of all PFUs and records the corresponding virtual machine ID; for example, the PFU with the earliest power-on time may be used as the active PFU, or the PFU with the latest power-on time may be used as the active PFU, which is not limited in this embodiment of the present application.
  • the active PFU will receive the determination result notified from the OMU, set the target flag according to the determination result, and return the setting result to the OMU, and the OMU determines that the setting of the active PFU is completed according to the setting result.
  • the identity of the active PFU can be clarified by setting the target flag bit.
  • in some cases a PFU may fail.
  • when one or more of the multiple PFUs are in an abnormal state, if the active PFU is currently in a normal state, it receives a third status query request sent by the OMU; the active PFU then sends the third status information to the OMU according to the third status query request, so that the OMU can determine the PFUs in a normal state from the multiple PFUs according to the reception result of the third status information and re-determine the active PFU from the PFUs in a normal state.
  • the third status information may be the virtual machine identifier and the status of the virtual machine, and the status of the virtual machine may be the virtual machine CPU usage rate, memory usage rate, hard disk status, number and level of virtual machine alarms, etc., which are not limited in the embodiment of the present application .
  • the OMU can determine the PFUs in a normal state and then re-determine the active PFU from them, so as to maintain the dynamic adjustment effect of load balancing and improve the stability of message sending.
  • before the load balancing result of the back-end loads is determined according to the first status information, the accuracy of the first status information may be verified.
  • when the verification indicates that the first status information is inaccurate, the first status information is discarded and the first status query request is re-sent to the back-end loads.
  • when the verification indicates that the first status information is accurate, combined calculation processing is performed on the first status information, and the load balancing result of the back-end loads is determined according to the first status information after the combined calculation processing.
  • the first status information may include multiple different types of parameters; performing combined calculation on the first status information means combining the multiple different types of parameters involved in the first status information through preset calculation rules.
  • the pre-stored basic status information of the back-end loads may be obtained, the basic status information may be compared with the first status information, and the accuracy of the first status information may be determined according to the comparison result.
  • the basic status information may be the basic data of each back-end load saved when the active PFU is initialized.
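  • A hypothetical sketch of the verification and combined calculation described above: the reply is compared with the basic data saved at initialization, discarded (and re-queried) if implausible, and otherwise folded into a single combined load figure. The plausibility rules and the combination formula are illustrative assumptions.

```python
def is_plausible(reported: dict, baseline: dict) -> bool:
    """Compare the reported first status information with the pre-stored basic data."""
    if reported.get("vm_id") != baseline.get("vm_id"):
        return False
    # Reject obviously impossible ratios (illustrative check).
    return all(0.0 <= reported.get(k, -1.0) <= 1.0 for k in ("cpu", "sessions", "throughput"))

def combine(reported: dict) -> float:
    """Combined calculation: fold several metric types into one load figure (illustrative)."""
    return (reported["cpu"] + reported["sessions"] + reported["throughput"]) / 3.0

baseline = {"vm_id": "rs-7"}
reported = {"vm_id": "rs-7", "cpu": 0.42, "sessions": 0.30, "throughput": 0.25}

if is_plausible(reported, baseline):
    print("combined load:", combine(reported))
else:
    print("discard and re-send the first status query request")
```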
  • FIG. 6 is a schematic flowchart of another determination of the active PFU provided by the embodiment of the present application.
  • the active OMU determines the active PFU from the multiple PFUs, which may include the following steps 601 to 603.
  • Step 601 The active OMU sends a second status query request to each PFU;
  • Step 602 The PFU sends the second status information to the active OMU according to the second status query request;
  • Step 603 The active OMU determines the active PFU from the multiple PFUs according to the second state information.
  • the DRLB also includes a standby OMU, and the PFU is set with a target flag.
  • after the active OMU determines the active PFU from the multiple PFUs according to the second status information, it also sends the determination result of the active PFU to each PFU; each PFU sets its target flag bit and sends the setting result to the active OMU; the active OMU then synchronizes the setting result to the standby OMU.
  • in this way, a backup of the OMU is achieved.
  • if the active OMU fails, the standby OMU can quickly take over the management of each PFU, improving the reliability and stability of the DRLB and thereby improving the stability of message sending.
  • the active OMU regularly detects the fault status of the PFUs.
  • the active OMU detects that one or more PFUs are in an abnormal state, it sends a third status query request to each PFU;
  • each PFU in a normal state sends the third status information to the active OMU according to the third status query request;
  • the active OMU determines the PFU in the normal state from the multiple PFUs according to the receiving result of the third state information, and re-determines the active PFU from the PFUs in the normal state.
  • the active OMU when the active OMU detects that the PFU in an abnormal state returns to normal, it sends a determination result of re-determining the active PFU to the PFU that returns to normal.
  • by sending the determination result of re-determining the active PFU to the PFU that has returned to normal, that PFU performs the flag-setting operation, thereby determining its current identity and re-joining the message sending architecture.
  • sending the message to the corresponding backend load according to the load balancing policy may be receiving the first media plane message sent from the uplink, and sending the first media plane message according to the load balancing policy to the corresponding backend load; or receive the second media plane message sent from the downlink, and send the second media plane message to the corresponding backend load according to the load balancing policy. That is, the packets of the uplink flow and the packets of the downlink flow can both be sent through the load balancing policy, which is beneficial to improving the overall reliability and stability of sending the packets.
  • FIG. 7 is a schematic diagram of the active PFU election process provided by the embodiment of the present application, where OMU (1+1) indicates that the OMU adopts a 1+1 active/standby mode, with one OMU active and the other standby; the process mainly includes the following steps:
  • Step 701 The active OMU virtual machine sends a query power-on request to all PFUs;
  • Step 702 After receiving the request from the active OMU, each PFU sends its system power-on time and virtual machine ID to the active OMU;
  • Step 703 The active OMU enters the data returned by each PFU into the database, elects the active PFU according to the power-on time, and synchronizes the judgment result to the standby OMU;
  • Step 704 The active OMU sends the election result to each PFU;
  • Step 705 After receiving the election result, each PFU sets its active/standby flag bit, and at the same time returns the set result to the active OMU;
  • Step 706 After receiving the setting results of each PFU, the active OMU updates the database, and synchronizes the updated results to the standby OMU.
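  • The sketch below condenses the election in steps 701 to 706: the active OMU collects the power-on time and virtual machine ID from every PFU, elects the active PFU by power-on time (the earliest wins here, one of the options mentioned above), and would then distribute the result to the PFUs and the standby OMU. All data values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PfuReport:
    vm_id: str
    power_on_time: float  # seconds since epoch, as reported by the PFU (step 702)

def elect_active_pfu(reports: list[PfuReport]) -> str:
    """Elect the active PFU from the power-on times returned by all PFUs (step 703).

    The earliest power-on time wins in this sketch; the application also allows
    the latest to be chosen instead.
    """
    return min(reports, key=lambda r: r.power_on_time).vm_id

reports = [
    PfuReport(vm_id="pfu-1", power_on_time=1700000100.0),
    PfuReport(vm_id="pfu-2", power_on_time=1700000040.0),  # powered on earliest
    PfuReport(vm_id="pfu-3", power_on_time=1700000200.0),
]
active = elect_active_pfu(reports)
print("elected active PFU:", active)  # the result would be synced to the standby OMU (step 703),
# announced to every PFU, and each PFU would set its active/standby flag bit (steps 704-705).
```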
  • Figure 8 is a schematic diagram of the primary PFU election process provided by the embodiment of the present application when a PFU is abnormal, which mainly includes the following steps:
  • Step 801 The active OMU detects that one or more PFUs have virtual machine abnormalities
  • Step 802 The active OMU sends a virtual machine status query request to each PFU, and confirms the status of each virtual machine for a second time;
  • Step 803 Each PFU returns the virtual machine ID and virtual machine status, and the abnormal virtual machine does not respond to the request and does not return data;
  • Step 804 The active OMU updates the PFU virtual machine data in the database according to the data returned by each PFU; for the PFU virtual machines that did not return a status, the active OMU directly enters the detected abnormal state, and synchronizes the result to the standby OMU;
  • Step 805 The active OMU sends a request to the normal PFU to query the identification of each virtual machine, the CPU usage rate of the virtual machine, the memory, the hard disk status, the number and level of virtual machine alarms;
  • Step 806 After receiving the request from the active OMU, each PFU sends its virtual machine ID, virtual machine CPU usage, memory, hard disk status, number and level of virtual machine alarms to the active OMU;
  • Step 807 The active OMU enters the data returned by each PFU into the database, selects the active PFU according to the returned PFU indicators, and synchronizes the judgment result to the standby OMU;
  • Step 808 The active OMU sends the election result to each PFU
  • Step 809 After each PFU receives the election result, its active and standby flag is set, and the set result is returned to the active OMU;
  • Step 810 After the active OMU receives the result of setting each PFU, it updates the database, and synchronizes the updated result to the standby OMU;
  • Step 811 When the active OMU detects that the previous abnormal PFU returns to normal, the previously saved election result is sent to the PFU;
  • Step 812 After the normal PFU receives the election result message sent by the active OMU, the active and standby flags are set, and at the same time, the set result is returned to the active OMU;
  • Step 813 After receiving the PFU setting result of returning to normal, the active OMU updates the database, and synchronizes the updated result to the standby OMU.
  • one or more PFU failures may include failures of the main PFU.
  • the active OMU's scanning and detection of the status of each PFU is a continuous process; the scan is performed at regular intervals, and the interval is configurable.
  • the active OMU may also sort the normal PFUs by power-on time and select the PFU with the second-earliest power-on time as the new active PFU, thereby improving the efficiency of re-determining the active PFU.
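  • When PFUs are abnormal, steps 805 to 807 above re-elect the active PFU from the remaining normal PFUs using virtual machine indicators. The scoring function below is a hypothetical way to rank those indicators; the application does not fix a formula.

```python
def score_pfu(metrics: dict) -> float:
    """Rank a normal PFU by its virtual machine indicators (lower is better, illustrative)."""
    return (
        metrics["cpu"] * 0.4
        + metrics["memory"] * 0.3
        + (0.0 if metrics["disk_ok"] else 1.0) * 0.2
        + min(metrics["alarm_count"], 10) / 10 * 0.1
    )

def re_elect_active(pfu_metrics: dict[str, dict]) -> str:
    """Re-elect the active PFU among the PFUs that answered the status query (steps 806-807)."""
    return min(pfu_metrics, key=lambda vm_id: score_pfu(pfu_metrics[vm_id]))

pfu_metrics = {
    # pfu-2 is abnormal and did not return data, so it simply does not appear here (step 803).
    "pfu-1": {"cpu": 0.70, "memory": 0.60, "disk_ok": True, "alarm_count": 2},
    "pfu-3": {"cpu": 0.30, "memory": 0.40, "disk_ok": True, "alarm_count": 0},
}
print("new active PFU:", re_elect_active(pfu_metrics))  # result then sent to each PFU (step 808)
```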
  • FIG. 9 is a schematic diagram of the workflow in which the DRLB implements the back-end load balancing policy provided by the embodiment of the present application, where the server and the plug-in are both working components in the active PFU; the workflow mainly includes the following steps:
  • Step 901 the server sends the current CPU usage, session proportion and throughput query request of each backend load to the plug-in;
  • Step 902 the plug-in sends an ICMP query request to each backend load
  • Step 903 After receiving the request, each backend load sends the virtual machine status of each load to the plug-in;
  • Step 904 The plug-in returns the received load virtual machine ID, CPU usage rate, session number ratio, throughput ratio and other information to the server;
  • Step 905 After receiving the response, the server first checks the correctness of the returned data against the previously saved load information (mainly by comparing the basic data of each back-end virtual machine saved when the server was initialized with the data received in the response); if the received response data is judged to be incorrect, the received data is discarded and the server repeats step 901; if the received response data is judged to be correct, the response data is calculated and combined, and the combined data can be used to determine the load balance of each back-end;
  • Step 906 the server sends the load balancing judgment result to the plug-in;
  • Step 907 The plug-in updates the backend load data
  • Step 908 The update result is returned to the server
  • Step 909 the server enters the update result flag returned by the plug-in into the database for update
  • Step 910 The active PFU server sends the load balancing result to each normal non-active PFU;
  • Step 911 The non-active PFU updates the load balancing result into the database
  • Step 912 The non-active PFU returns the update result to the active PFU server
  • Step 913 The active PFU server sends an adjustment load balancing policy request message to the non-active PFU;
  • Step 914 The active PFU server sends a load balancing policy adjustment request message to its own plug-in;
  • Step 915 The plug-in adjusts the load balancing strategy for sending media plane packets to the backend load
  • Step 916 The non-active PFU adjusts the load balancing strategy for sending media plane packets to the backend load
  • Step 917 The server updates its load balancing sending policy data in the database
  • Step 918 The non-active PFU updates the data sent by its load balancing policy in the database.
  • in step 902, the ICMP request carries special characters agreed upon with each back-end load;
  • step 902 is a periodic query: after the timed query interval is set, the query is sent to each back-end load at regular intervals.
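  • To make the exchange in steps 902 to 904 concrete, the sketch below assumes a hypothetical payload format for the marked ICMP request and its reply carrying CPU usage, session ratio and throughput ratio; the marker bytes and the JSON encoding are illustrative assumptions, not part of the application.

```python
import json

MARKER = b"DRLB-PROBE"  # hypothetical special characters agreed with the back-end loads

def build_probe_payload() -> bytes:
    """Payload placed inside the ICMP echo request sent by the plug-in (step 902)."""
    return MARKER

def build_reply_payload(cpu: float, sessions: float, throughput: float) -> bytes:
    """Payload a back-end load would place inside the ICMP echo reply (step 903)."""
    return MARKER + json.dumps(
        {"cpu": cpu, "sessions": sessions, "throughput": throughput}
    ).encode()

def parse_reply_payload(payload: bytes) -> dict | None:
    """Extract the metrics on the active PFU; ignore replies without the agreed marker."""
    if not payload.startswith(MARKER):
        return None
    return json.loads(payload[len(MARKER):].decode())

reply = build_reply_payload(cpu=0.55, sessions=0.40, throughput=0.35)
print(parse_reply_payload(reply))
```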
  • Fig. 10 is a schematic diagram of the uplink and downlink traffic workflow of network elements based on the DRLB core network service chain provided by the embodiment of the present application, where the back-end load takes a vFW as an example, xGW represents the core network media plane network element, VR1 and VR2 represent Router 1 and Router 2, SSC represents the service chain network element part of the core network, and PDN represents the Internet service that the user browses.
  • steps 1001 to 1005 represent the uplink process;
  • steps 1011 to 1015 represent the downlink process, wherein:
  • Step 1001 xGW pushes the received TCP or UDP traffic to VR1 according to the pre-set routing rules
  • Step 1002 VR1 pushes the media plane message to DRLB according to the routing rules
  • Step 1003 DRLB sends the media plane message to the vFW according to the load balancing policy
  • Step 1004 vFW processes the media plane message and sends it to VR2 according to the routing rules
  • Step 1005 VR2 sends the media plane message to the PDN according to the routing rules; at this point, the uplink process ends;
  • Step 1011 The PDN receives the user's media plane Internet access request, and returns a response message to VR2 after processing the request;
  • Step 1012 VR2 sends the received media plane message to DRLB according to routing rules
  • Step 1013 DRLB sends the received media plane message to the corresponding vFW according to the uplink message distribution rule;
  • Step 1014 The vFW forwards the received media plane message to VR1 according to the routing rules
  • Step 1015 VR1 forwards the received media plane message to xGW according to routing rules. So far, the downlink process ends.
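  • Steps 1003 and 1013 require that the downlink packet of a flow reaches the same vFW that handled its uplink packet. One way to realise such a rule, sketched below with hypothetical flow keys and packet fields, is a small session table keyed on a direction-independent 5-tuple.

```python
session_table: dict[tuple, str] = {}   # flow key -> vFW chosen for the uplink direction

def flow_key(src_ip, src_port, dst_ip, dst_port, proto) -> tuple:
    """Direction-independent key so uplink and downlink packets of one flow match."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (min(a, b), max(a, b), proto)

def send_uplink(pkt: dict, chosen_vfw: str) -> str:
    """Step 1003: record which vFW the load balancing policy selected for this flow."""
    key = flow_key(pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    session_table[key] = chosen_vfw
    return chosen_vfw

def send_downlink(pkt: dict) -> str | None:
    """Step 1013: forward the downlink packet to the vFW recorded for the uplink direction."""
    key = flow_key(pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    return session_table.get(key)

up = {"src_ip": "10.0.0.7", "src_port": 40312, "dst_ip": "93.184.216.34", "dst_port": 443, "proto": "TCP"}
down = {"src_ip": "93.184.216.34", "src_port": 443, "dst_ip": "10.0.0.7", "dst_port": 40312, "proto": "TCP"}
send_uplink(up, "vFW-2")
print(send_downlink(down))   # vFW-2: same vFW as the uplink direction
```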
  • the vFW may include multiple LOADs.
  • a DRLB network element is added to the core network traffic processing link, and the dynamic load balancing policy for the back-end network elements is adjusted through this network element, so that the media plane link of the core network service chain can realize dynamic load balancing.
  • the DRLB dynamically adjusts the distribution of traffic to each service processor of the back-end service module according to the CPU, session occupancy and throughput feedback of the back-end loads, so as to ensure that the service traffic undertaken by each service processor of the back-end network elements can be dynamically balanced in real time.
  • the DRLB network element sets the detection flag mode when the network element is instantiated and deployed, and sets the initial threshold value of the CPU of the back-end network element service processor, session ratio, and throughput.
  • the OMU sends the relevant configuration to all PFUs of the DRLB.
  • one of the PFUs is the active one, through which the load of all back-end network elements is obtained (the back-end loads are presented on the DRLB in the form of VS (Virtual Service) and RS (Real Service); one VS contains a set of service links of a back-end network element, and one RS marks a real service processor). The active PFU periodically (the period is configurable) initiates ICMP requests carrying a special mark to all RSs under the specified VS; each back-end network element service processor queries its CPU usage, session number ratio and throughput ratio, and returns them to the active PFU in the ICMP response message.
  • the active PFU summarizes the load status of the back-end network elements and notifies the non-active PFUs of the status of any RS whose load exceeds the threshold; if a certain RS is detected to exceed the threshold for N consecutive times, the active PFU judges that the RS is overloaded and synchronizes the RS status to the other non-active PFUs through TIPC messages.
  • the active PFU dynamically adjusts the DRLB distribution algorithm: newly arrived media plane traffic is no longer distributed to that RS, while the old media plane traffic already falling on that RS remains unchanged, until the active PFU detects that the RS load is below the threshold for N consecutive checks, at which point all PFUs are notified to restore the RS status.
  • for the DRLB exception handling scenario: if an RS response times out, the active PFU of the DRLB re-sends the request; if a back-end load times out for N consecutive times, the active PFU records the load data obtained before the timeouts; if there is no previous load data record on the active PFU of the DRLB (for example in the initialization phase), the record is NULL. A health check is configured on the link between the PFU and the RS on the DRLB; for an RS whose health check status is down, there is no need to send requests again, and its load data is recorded as NULL.
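  • The threshold handling in the three preceding paragraphs can be viewed as a small per-RS state machine: N consecutive over-threshold readings mark the RS overloaded (new flows avoid it, existing flows stay), N consecutive under-threshold readings restore it, and timeouts keep the last known load (or NULL). The sketch below uses a single scalar load value for brevity; this simplification is an assumption.

```python
class RsMonitor:
    """Per-RS overload/restore tracking as described for the active PFU (illustrative)."""

    def __init__(self, threshold: float, n: int):
        self.threshold = threshold
        self.n = n                      # consecutive readings required to change state
        self.over_count = 0
        self.under_count = 0
        self.timeout_count = 0
        self.overloaded = False
        self.last_load: float | None = None   # NULL while nothing has ever been recorded

    def on_reading(self, load: float) -> None:
        self.timeout_count = 0
        self.last_load = load
        if load > self.threshold:
            self.over_count += 1
            self.under_count = 0
            if self.over_count >= self.n:
                self.overloaded = True          # new traffic no longer distributed to this RS
        else:
            self.under_count += 1
            self.over_count = 0
            if self.under_count >= self.n:
                self.overloaded = False         # all PFUs notified to restore the RS

    def on_timeout(self) -> None:
        """RS did not answer: after N timeouts the last known load (or NULL) is kept as the record."""
        self.timeout_count += 1
        # self.last_load deliberately keeps the value obtained before the timeouts (NULL if none).

mon = RsMonitor(threshold=0.8, n=3)
for load in (0.9, 0.95, 0.85):       # three consecutive over-threshold readings
    mon.on_reading(load)
print(mon.overloaded)                 # True: exclude this RS from newly distributed traffic
for load in (0.5, 0.4, 0.3):
    mon.on_reading(load)
print(mon.overloaded)                 # False: status restored
```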
  • a DRLB network element is introduced based on the service chain scenario.
  • by sending a special ICMP request to the back-end load service processors, comparing the obtained data with the initial threshold values, and dynamically adjusting, according to the comparison result, how the DRLB establishes sessions and distributes traffic to each back-end service processor, dynamic balance of the back-end loads in the overall service chain is achieved.
  • FIG. 11 is a schematic structural diagram of a message sending device provided by an embodiment of the present application.
  • the message sending device 1100 includes: a memory 1101, a processor 1102, and a computer program stored in the memory 1101 and operable on the processor 1102.
  • the computer program is used to execute the above message sending method when running.
  • the processor 1102 and the memory 1101 may be connected through a bus or in other ways.
  • the memory 1101, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs and non-transitory computer-executable programs, such as those implementing the message sending method described in the embodiments of the present application.
  • the processor 1102 executes the non-transitory software programs and instructions stored in the memory 1101 to implement the above message sending method.
  • the memory 1101 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created when executing the above message sending method.
  • the memory 1101 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one disk storage device, flash memory device or other non-transitory solid-state storage device.
  • the memory 1101 may include memories remotely located relative to the processor 1102, and these remote memories may be connected to the message sending device 1100 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the non-transitory software programs and instructions required to implement the above message sending method are stored in the memory 1101, and when executed by one or more processors 1102, the above message sending method is executed.
  • the embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions, and the computer-executable instructions are used to execute the above method for sending a message.
  • the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more control processors, so as to realize the above-mentioned message sending method.
  • the embodiments of the present application have at least the following beneficial effects: the first status information of the back-end loads is obtained through the PFU, the load balancing policy for distributing traffic to the back-end loads is dynamically adjusted according to the first status information, and messages are sent to the back-end loads according to the load balancing policy; the load conditions among the back-end loads can be dynamically adjusted according to their status to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working status of the back-end loads and improving the reliability of message sending.
  • the device embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Abstract

Provided in the embodiments of the present application are message sending methods, a message sending apparatus and a storage medium. A message sending method comprises: a PFU acquiring first status information of back-end loads; dynamically adjusting, according to the first status information, a load balancing policy for distributing traffic to the back-end loads; and sending messages to the back-end loads according to the load balancing policy.

Description

Message sending method, message sending device and storage medium
Cross-Reference to Related Applications
This application is based on a Chinese patent application with application number 202210150418.5 and a filing date of February 18, 2022, and claims the priority of this Chinese patent application. The entire content of this Chinese patent application is hereby incorporated by reference into this application.
Technical Field
The present application relates to, but is not limited to, the field of communication technology, and in particular to a message sending method, a message sending device and a storage medium.
Background
The load balancing strategy currently used in the core network is mainly based on the IP (Internet Protocol) address of the UE (User Equipment): the IP address, port and so on are hashed and the traffic is then distributed to the back-end loads, so that the load is relatively balanced among the service processors of the back-end loads. This load balancing method works reasonably well when traffic is small and does not fluctuate sharply. However, with the rapid development of 5G (5th Generation Mobile Communication Technology), users' demand for traffic is growing day by day, the traffic of the entire core network grows geometrically, and the cluster capacity of the entire core network increases dramatically.
Based on this, the currently used load balancing strategy can no longer meet the needs of the current core network. In real applications, the working status of a given back-end load often changes abruptly; in severe cases the service processor restarts abnormally, causing service switchover and affecting users.
Summary
The following is an overview of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the present application provide a message sending method, a message sending device and a storage medium.
In a first aspect, the embodiments of the present application provide a message sending method applied to a direct route load balancer (DRLB), where the DRLB includes an operation and management unit (OMU) and a plurality of packet forwarding units (PFUs); the OMU is configured to manage the PFUs, and each PFU is configured to send messages to a corresponding back-end load. The message sending method includes: the PFU obtains first status information of the back-end load, dynamically adjusts, according to the first status information, a load balancing policy for distributing traffic to the back-end load, and sends messages to the back-end load according to the load balancing policy.
In a second aspect, the embodiments of the present application also provide a message sending method applied to an active PFU, where the active PFU is one of a plurality of PFUs and each PFU is configured to send messages to a corresponding back-end load. The message sending method includes: sending a first status query request to all the back-end loads according to a preset request sending period; receiving the first status information sent by the back-end loads according to the first status query request, and determining the load balancing result of the back-end loads according to the first status information; determining a first load balancing policy according to the load balancing result, and sending messages to the corresponding back-end loads according to the first load balancing policy; and synchronizing the load balancing result to the non-active packet forwarding units, so that the non-active packet forwarding units determine a second load balancing policy according to the load balancing result and send messages to the corresponding back-end loads according to the second load balancing policy, where the non-active packet forwarding units are the packet forwarding units other than the active packet forwarding unit among the plurality of packet forwarding units.
In a third aspect, the embodiments of the present application also provide a message sending device, including a memory and a processor, where the memory stores a computer program, and the processor implements the message sending method of the first aspect or the second aspect when executing the computer program.
In a fourth aspect, the embodiments of the present application also provide a computer-readable storage medium, where the storage medium stores a program, and when the program is executed by a processor, the message sending method of the first aspect or the second aspect is implemented.
Other features and advantages of the application will be set forth in the description that follows and in part will be apparent from the description, or may be learned by practice of the application. The objectives and other advantages of the application can be realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings are used to provide a further understanding of the technical solution of the present application and constitute a part of the specification; together with the embodiments of the present application they are used to explain the technical solution of the present application and do not constitute a limitation on it.
FIG. 1 is a schematic diagram of the overall network architecture provided by an embodiment of the present application;
FIG. 2 is a partial structural schematic diagram of the DRLB and the back-end load provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a message sending method provided by an embodiment of the present application;
FIG. 4 is another schematic flowchart of a message sending method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of determining the active PFU provided by an embodiment of the present application;
FIG. 6 is another schematic flowchart of determining the active PFU provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of the active PFU election process provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the active PFU election process when a PFU is abnormal, provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the workflow in which the DRLB implements the back-end load balancing policy, provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the uplink and downlink traffic workflow of network elements based on the DRLB core network service chain, provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a message sending device provided by an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are only used to explain the present application, not to limit it.
In the description of the present application, it should be understood that orientation descriptions, such as the orientations or positional relationships indicated by "up", "down" and the like, are based on the orientations or positional relationships shown in the drawings, are only for the convenience of describing the application and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore should not be construed as limiting the application.
It should be noted that although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed with a module division different from that in the device, or in an order different from that in the flowcharts. The terms "first", "second" and the like are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. In the description of the present application, unless otherwise specified, "plurality" means two or more.
In the description of this application, it should be noted that, unless otherwise clearly defined, terms such as "installation" and "connection" should be understood in a broad sense, and those skilled in the art can reasonably determine the meaning of the above terms in this application in light of the content of the technical solution.
The load balancing strategy currently used in the core network is mainly based on the IP (Internet Protocol) address of the UE (User Equipment): the IP address, port and so on are hashed and the traffic is then distributed to the back-end loads, so that the load is relatively balanced among the service processors of the back-end loads. This load balancing method works reasonably well when traffic is small and does not fluctuate sharply. However, with the rapid development of 5G (5th Generation Mobile Communication Technology), users' demand for traffic is growing day by day, the traffic of the entire core network grows geometrically, and the cluster capacity of the entire core network increases dramatically.
Based on this, the currently used load balancing strategy can no longer meet the needs of the current core network. In real applications, the working status of a given back-end load often changes abruptly; in severe cases the service processor restarts abnormally, causing service switchover and affecting users.
基于此,本申请实施例提供了一种报文发送方法、报文发送装置及存储介质,能够降低后端负载的工作状态出现突变所带来的影响,提高报文发送的可靠性。Based on this, the embodiments of the present application provide a message sending method, a message sending device, and a storage medium, which can reduce the impact of a sudden change in the working state of the back-end load and improve the reliability of message sending.
参照图1,图1为本申请实施例提供的总体网络架构示意图,其中:Referring to FIG. 1, FIG. 1 is a schematic diagram of the overall network architecture provided by the embodiment of the present application, wherein:
xGW可以表示核心网媒体面网元,如UPF(User Plane Function,用户面功能)等。The xGW may represent a core network media plane network element, such as a UPF (User Plane Function).
SW1、SW2可以表示数据中心中的网关交换机。SW1 and SW2 may represent gateway switches in the data center.
DRLB(Direct Route Load Balancer,直连路由负载均衡器)主要用于业务链场景中对后端负载进行负载均衡使用。DRLB (Direct Route Load Balancer, Direct Route Load Balancer) is mainly used for load balancing of back-end loads in business chain scenarios.
LOAD可以表示后端负载设备,例如当前移动网络的虚拟化负载,如虚拟防火墙(vFW)、TCP优化器(TCPO)等。LOAD can represent a backend load device, such as a virtualized load of a current mobile network, such as a virtual firewall (vFW), a TCP optimizer (TCPO), and the like.
PDN可以表示Internet,例如平时用户使用无线移动网络进行看视频、查资料的网站。The PDN may represent the Internet, for example, a website where users usually use wireless mobile networks to watch videos and search for information.
参照图2,图2为本申请实施例提供的DRLB和后端负载的部分结构示意图,其中:Referring to Fig. 2, Fig. 2 is a partial structural schematic diagram of the DRLB and the back-end load provided by the embodiment of the present application, in which:
DRLB:由OMU(Operation and Management Unit,操作管理单元)和PFU(Packet forwarding Unit,数据包转发单元)组成,OMU可以是双机,也可以是N+1集群,主要负责DRLB的管理;PFU由N+1集群组成,主要负责DRLB的业务处理,N为正整数。DRLB: composed of an OMU (Operation and Management Unit) and PFUs (Packet Forwarding Unit). The OMU can be a dual machine or an N+1 cluster and is mainly responsible for the management of the DRLB; the PFUs form an N+1 cluster and are mainly responsible for the service processing of the DRLB, where N is a positive integer.
LOAD:主要是指当前移动网络的虚拟化负载,由OMU和load组成,OMU可以是双机,也可以是N+1集群,主要负责LOAD的管理;load由N+1集群组成,主要负责LOAD的业务处理。LOAD: mainly refers to the virtualized load of the current mobile network, which is composed of OMU and load. OMU can be a dual machine or an N+1 cluster, and is mainly responsible for the management of LOAD; load is composed of an N+1 cluster, which is mainly responsible for LOAD business processing.
在一种实现方式中,多个PFU可以集成于同一设备中,也可以设置于不同的分立设备中,OMU与PFU可以集成于同一设备中,也可以设置于不同的分立设备中,而在一种实现方式中,OMU与多个PFU集成于同一设备中,构成DRLB,本申请实施例是在核心网链路中增加一个DRLB网元,通过该DRLB网元对后端负载进行负载检测处理并通过检测结果进行动态调整各负载之间的负荷情况,以达到动态负载均衡的效果。In one implementation, multiple PFUs may be integrated in the same device or deployed in different discrete devices, and the OMU and the PFUs may likewise be integrated in the same device or deployed in different discrete devices. In one implementation, the OMU and multiple PFUs are integrated in the same device to form the DRLB. In the embodiments of the present application, a DRLB network element is added to the core network link; this DRLB network element performs load detection on the back-end loads and dynamically adjusts the load among the loads according to the detection results, so as to achieve dynamic load balancing.
基于图1和图2所示的网络架构,本申请实施例提供了一种报文发送方法,应用于DRLB,DRLB包括OMU和多个PFU;OMU被设置为对PFU进行管理,各个PFU被设置为向对应的后端负载进行报文发送;报文发送方法包括:Based on the network architecture shown in Figure 1 and Figure 2, the embodiments of the present application provide a message sending method applied to the DRLB, where the DRLB includes an OMU and multiple PFUs; the OMU is configured to manage the PFUs, and each PFU is configured to send messages to its corresponding back-end load. The message sending method includes:
PFU获取后端负载的第一状态信息,根据第一状态信息动态调整向后端负载分发流量的负载均衡策略,根据负载均衡策略向后端负载进行报文发送。The PFU obtains the first state information of the backend load, dynamically adjusts the load balancing strategy for distributing traffic to the backend load according to the first state information, and sends packets to the backend load according to the load balancing strategy.
通过PFU获取后端负载的第一状态信息,根据第一状态信息动态调整向后端负载分发流量的负载均衡策略,根据负载均衡策略向后端负载进行报文发送,可以根据后端负载的状态动态调整各后端负载之间的负荷情况,以达到动态负载均衡的效果,从而降低后端负载的工作状态出现突变所带来的影响,提高报文发送的可靠性。By obtaining the first state information of the back-end load through the PFU, dynamically adjusting the load balancing policy for distributing traffic to the back-end load according to the first state information, and sending messages to the back-end load according to the load balancing policy, the load among the back-end loads can be dynamically adjusted according to their states to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working state of a back-end load and improving the reliability of message sending.
参照图3,图3为本申请实施例提供的报文发送方法的一种流程示意图,该报文发送方法应用于DRLB,该报文发送方法包括但不限于以下步骤301至步骤303。Referring to FIG. 3 , FIG. 3 is a schematic flowchart of a message sending method provided by an embodiment of the present application. The message sending method is applied to DRLB. The message sending method includes but is not limited to the following steps 301 to 303 .
步骤301:主用OMU从多个PFU中确定主用PFU;Step 301: The active OMU determines the active PFU from multiple PFUs;
步骤302:主用PFU根据预设的请求发送周期向所有的后端负载发送第一状态查询请求,接收后端负载根据第一状态查询请求发送的第一状态信息,根据第一状态信息确定后端负载的负载均衡结果,将负载均衡结果同步至非主用PFU;Step 302: The active PFU sends a first status query request to all back-end loads according to a preset request sending period, receives the first status information sent by the back-end loads in response to the first status query request, determines the load balancing result of the back-end loads according to the first status information, and synchronizes the load balancing result to the non-active PFUs;
步骤303:各个PFU根据负载均衡结果确定负载均衡策略,根据负载均衡策略向对应的后端负载进行报文发送。Step 303: Each PFU determines a load balancing strategy according to the load balancing result, and sends a message to the corresponding backend load according to the load balancing strategy.
其中,非主用PFU为多个PFU中除了主用PFU以外其余的PFU。Wherein, the non-active PFU refers to other PFUs in the plurality of PFUs except the active PFU.
其中,预设的请求发送周期可以根据实际情况设置,例如可以是1分钟、5分钟、10分钟等等,本申请实施例不做限定。Wherein, the preset request sending period may be set according to the actual situation, for example, it may be 1 minute, 5 minutes, 10 minutes, etc., which is not limited in this embodiment of the present application.
其中,后端负载可以是当前移动网络的虚拟化负载,如虚拟防火墙(vFW)、TCP优化器(TCPO)等等,本申请实施例不做限定。Wherein, the backend load may be a virtualized load of the current mobile network, such as a virtual firewall (vFW), a TCP optimizer (TCPO), etc., which is not limited in this embodiment of the present application.
其中,第一状态查询请求可以是ICMP(Internet Control Message Protocol,网络控制报文协议)查询请求等等,本申请实施例不做限定。Wherein, the first status query request may be an ICMP (Internet Control Message Protocol, Internet Control Message Protocol) query request, etc., which is not limited in the embodiment of the present application.
其中,第一状态信息可以是后端负载的CPU(Central Processing Unit)占用率、会话占比以及吞吐量等等,本申请实施例不做限定。Wherein, the first state information may be the CPU (Central Processing Unit) occupancy rate of the back-end load, session proportion, throughput, etc., which are not limited in this embodiment of the present application.
其中,负载均衡策略用于实现报文发送的流量负载均衡,负载均衡策略即主用PFU对应的负载均衡策略,第二负载均衡策略即非主用PFU对应的负载均衡策略,不同的PFU都会执行对应自身的负载均衡策略。可以通过将第一状态信息与预设的门限值进行对比,通过对比结果来动态调整主用PFU向后端负载建立会话和分发流量的情况,来达到整体业务链中各后端负载的动态均衡。The load balancing policy is used to achieve traffic load balancing for message sending: the load balancing policy here is the policy corresponding to the active PFU, the second load balancing policy is the policy corresponding to the non-active PFUs, and each PFU executes its own load balancing policy. By comparing the first state information with preset threshold values, and using the comparison results to dynamically adjust how the active PFU establishes sessions with and distributes traffic to the back-end loads, dynamic balance of the back-end loads in the overall service chain can be achieved.
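A minimal sketch of the threshold comparison just described, assuming the first state information carries CPU usage, session share and throughput share for each back-end. The threshold values below are invented for illustration; in the described system the initial thresholds are configured when the network element is deployed.

```python
# Hypothetical thresholds; the real initial threshold values are configured
# when the DRLB network element is instantiated and deployed.
THRESHOLDS = {"cpu": 80.0, "sessions": 0.4, "throughput": 0.4}

def backend_overloaded(state: dict) -> bool:
    """Return True if any reported metric of a back-end exceeds its threshold."""
    return any(state[k] > THRESHOLDS[k] for k in THRESHOLDS)

def adjust_policy(states: dict) -> dict:
    """Keep distributing new sessions only to back-ends that are under threshold."""
    eligible = [name for name, s in states.items() if not backend_overloaded(s)]
    return {"eligible_backends": eligible or list(states)}

states = {
    "rs-1": {"cpu": 92.0, "sessions": 0.5, "throughput": 0.45},
    "rs-2": {"cpu": 35.0, "sessions": 0.2, "throughput": 0.25},
}
print(adjust_policy(states))  # rs-1 is excluded from new traffic
```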
本申请实施例通过集成主用OMU和多个PFU得到DRLB网元,在当前传统核心网媒体面基础上,基于业务链场景引入该DRLB网元,通过根据预设的请求发送周期向所有的后端负载发送第一状态查询请求,可以对后端负载进行负载均衡的检测处理,并可以通过检测结果进行动态调整各后端负载之间的负荷情况,以达到动态负载均衡的效果,从而降低后端负载的工作状态出现突变所带来的影响,提高报文发送的可靠性。并且,将负载均衡结果同步至多个PFU,各个PFU向对应的后端负载进行报文发送,能够较好地适应大流量场景下的报文发送,提高报文发送的稳定性。In the embodiments of the present application, a DRLB network element is obtained by integrating the active OMU and multiple PFUs, and this DRLB network element is introduced into the existing core network media plane based on the service chain scenario. By sending the first status query request to all back-end loads according to the preset request sending period, load balancing detection can be performed on the back-end loads, and the load among the back-end loads can be dynamically adjusted according to the detection results to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working state of a back-end load and improving the reliability of message sending. In addition, the load balancing result is synchronized to multiple PFUs, and each PFU sends messages to its corresponding back-end load, which better suits message sending in high-traffic scenarios and improves the stability of message sending.
参照图4,图4为本申请实施例提供的报文发送方法的另一种流程示意图,该报文发送方法应用于主用PFU,并且,在整个网络架构中存在多个PFU,各个PFU被设置为向对应的后端负载进行报文发送,主用PFU为多个PFU中的其中一个PFU,上述报文发送方法包括以下步骤401至步骤404。Referring to FIG. 4, FIG. 4 is another schematic flowchart of the message sending method provided by an embodiment of the present application. The message sending method is applied to the active PFU; there are multiple PFUs in the overall network architecture, each PFU is configured to send messages to its corresponding back-end load, and the active PFU is one of the multiple PFUs. The message sending method includes the following steps 401 to 404.
步骤401:根据预设的请求发送周期向所有的后端负载发送第一状态查询请求;Step 401: Send a first status query request to all backend loads according to a preset request sending cycle;
步骤402:接收后端负载根据第一状态查询请求发送的第一状态信息,根据第一状态信息确定后端负载的负载均衡结果;Step 402: Receive the first state information sent by the backend load according to the first state query request, and determine the load balancing result of the backend load according to the first state information;
步骤403:根据负载均衡结果确定第一负载均衡策略,根据第一负载均衡策略向对应的 后端负载进行报文发送;Step 403: Determine the first load balancing strategy according to the load balancing result, and send the message to the corresponding backend load according to the first load balancing strategy;
步骤404:将负载均衡结果同步至非主用PFU,以使非主用PFU根据负载均衡结果确定第二负载均衡策略并根据第二负载均衡策略向对应的后端负载进行报文发送。Step 404: Synchronize the load balancing result to the non-active PFU, so that the non-active PFU determines a second load balancing strategy according to the load balancing result and sends a message to the corresponding backend load according to the second load balancing strategy.
其中,多个PFU可以集成于同一设备中,也可以设置于不同的分立设备中,本申请实施例不做限定。Wherein, multiple PFUs may be integrated in the same device, or may be set in different discrete devices, which is not limited in this embodiment of the present application.
上述步骤401至步骤404通过根据预设的请求发送周期向所有的后端负载发送第一状态查询请求,可以对后端负载进行负载均衡的检测处理,并可以通过检测结果进行动态调整各后端负载之间的负荷情况,以达到动态负载均衡的效果,从而降低后端负载的工作状态出现突变所带来的影响,提高报文发送的可靠性。并且,将负载均衡结果同步至多个PFU,各个PFU向对应的后端负载进行报文发送,能够较好地适应大流量场景下的报文发送,提高报文发送的稳定性。In steps 401 to 404 above, by sending the first status query request to all back-end loads according to the preset request sending period, load balancing detection can be performed on the back-end loads, and the load among the back-end loads can be dynamically adjusted according to the detection results to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working state of a back-end load and improving the reliability of message sending. In addition, the load balancing result is synchronized to multiple PFUs, and each PFU sends messages to its corresponding back-end load, which better suits message sending in high-traffic scenarios and improves the stability of message sending.
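Steps 401 to 404 can be pictured as one periodic cycle on the active PFU. The sketch below is a rough model under stated assumptions: the probe, the overload rule and the synchronization call are placeholders for the marked ICMP query, the threshold comparison and the inter-PFU messaging described elsewhere in this text, not an actual API of the system.

```python
class ActivePFU:
    def __init__(self, backends, peers, period_s=60):
        self.backends = backends      # back-end loads to probe
        self.peers = peers            # non-active PFUs to synchronize
        self.period_s = period_s      # preset request sending period (seconds)

    def query_backend(self, backend):
        # Placeholder for the first status query (the marked ICMP request);
        # fixed numbers stand in for the real reply.
        return {"cpu": 50.0, "sessions": 0.3, "throughput": 0.3}

    def run_once(self):
        # Steps 401/402: query every back-end and derive the balancing result.
        states = {b: self.query_backend(b) for b in self.backends}
        result = {b: ("overloaded" if s["cpu"] > 80.0 else "normal")
                  for b, s in states.items()}
        # Step 403: apply the first load balancing policy locally (omitted here).
        # Step 404: synchronize the result so non-active PFUs derive theirs.
        for peer in self.peers:
            print(f"sync to {peer}: {result}")
        return result

pfu = ActivePFU(["rs-1", "rs-2"], ["pfu-2", "pfu-3"], period_s=60)
pfu.run_once()   # in practice this cycle repeats every period_s seconds
```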
在一种实现方式中,主用PFU设置有目标标志位,由于PFU的数量有多个,因此,在上述步骤401之前,先从多个PFU中确定出主用PFU,对于主用PFU来说,参照图5,图5为本申请实施例提供的确定主用PFU的流程示意图,上述报文发送方法还可以包括以下步骤501至步骤503。In one implementation, the active PFU is provided with a target flag bit. Since there are multiple PFUs, the active PFU is first determined from the multiple PFUs before step 401. For the active PFU, referring to FIG. 5, which is a schematic flowchart of determining the active PFU provided by an embodiment of the present application, the message sending method may further include the following steps 501 to 503.
步骤501:接收OMU发送第二状态查询请求;Step 501: receiving a second status query request sent by the OMU;
步骤502:根据第二状态查询请求向OMU发送第二状态信息,以供OMU根据第二状态信息从多个PFU中确定主用PFU;Step 502: Send the second status information to the OMU according to the second status query request, so that the OMU can determine the active PFU from multiple PFUs according to the second status information;
步骤503:接收OMU发送的主用PFU的确定结果,根据确定结果对目标标志位进行置位,将置位结果发送至OMU。Step 503: Receive the determination result of the active PFU sent by the OMU, set the target flag bit according to the determination result, and send the setting result to the OMU.
其中,OMU被设置为对PFU进行管理,OMU与PFU可以集成于同一设备中,也可以设置于不同的分立设备中,本申请实施例不做限定。Wherein, the OMU is set to manage the PFU, and the OMU and the PFU may be integrated in the same device, or may be set in different separate devices, which is not limited in this embodiment of the present application.
其中,第二状态信息可以是上电时间和虚机标识,对于主用PFU来说,在成为主用PFU之前,会接收到OMU的第二状态查询请求,此时需要向OMU上报自身的上电时间和虚机标识,OMU会根据所有的PFU的上电时间来确定出主用PFU,并记录相应的虚机标识,例如,可以将上电时间最早的PFU作为主用PFU,也可以将上电时间最晚的PFU作为主用PFU,本申请实施例不做限定。然后,主用PFU会接收到来自OMU通知的确定结果,根据确定结果对目标标志位进行置位,将置位结果返回至OMU,OMU根据置位结果确定主用PFU设置完毕。通过设置目标标志位进行置位,可以明确主用PFU的身份。The second state information may be the power-on time and the virtual machine identifier. Before becoming the active PFU, a PFU receives the second state query request from the OMU and needs to report its own power-on time and virtual machine identifier to the OMU. The OMU determines the active PFU according to the power-on times of all PFUs and records the corresponding virtual machine identifier; for example, the PFU with the earliest power-on time may be used as the active PFU, or the PFU with the latest power-on time may be used as the active PFU, which is not limited in the embodiments of the present application. Then, the active PFU receives the determination result notified by the OMU, sets the target flag bit according to the determination result, and returns the setting result to the OMU, and the OMU determines from the setting result that the active PFU has been configured. Setting the target flag bit makes the identity of the active PFU explicit.
在一种实现方式中,PFU可能会出现故障,当多个PFU中的一个或者多个处于异常状态,若主用PFU当前处于正常状态,则会收到OMU发送的第三状态查询请求,然后主用PFU根据第三状态查询请求向OMU发送第三状态信息,以供OMU根据第三状态信息的接收结果从多个PFU中确定处于正常状态的PFU,并从处于正常状态的PFU中重新确定主用PFU。In one implementation, a PFU may fail. When one or more of the multiple PFUs is in an abnormal state, if the active PFU is currently in a normal state, it receives a third state query request sent by the OMU, and then the active PFU sends third state information to the OMU according to the third state query request, so that the OMU can determine the PFUs in a normal state from the multiple PFUs according to the reception of the third state information and re-determine the active PFU from the PFUs in the normal state.
其中,第三状态信息可以是虚机标识和虚机状态,虚机状态可以是虚机CPU占用率、内存占用率、硬盘状态、虚机告警数量及级别等等,本申请实施例不做限定。通过向OMU发送第三状态信息,以便于OMU判断出处于正常状态的PFU,进而从处于正常状态的PFU中重新确定主用PFU,以保持负载均衡的动态调整效果,提高报文发送的稳定性。The third state information may be the virtual machine identifier and the virtual machine state, and the virtual machine state may be the virtual machine CPU usage, memory usage, hard disk state, number and level of virtual machine alarms, etc., which are not limited in the embodiments of the present application. By sending the third state information to the OMU, the OMU can determine the PFUs in a normal state and then re-determine the active PFU from the PFUs in the normal state, so as to maintain the dynamic adjustment effect of load balancing and improve the stability of message sending.
在一种实现方式中,根据第一状态信息确定后端负载的负载均衡结果,可以校验第一状态信息的准确性,当准确性表征第一状态信息不准确,将第一状态信息丢弃,重新向后端负载发送第一状态查询请求;当准确性表征第一状态信息准确,对第一状态信息进行合并计算处理,根据合并计算处理后的第一状态信息确定后端负载的负载均衡结果。In one implementation, in determining the load balancing result of the back-end load according to the first state information, the accuracy of the first state information may be verified. When the accuracy indicates that the first state information is inaccurate, the first state information is discarded and the first status query request is re-sent to the back-end load; when the accuracy indicates that the first state information is accurate, the first state information is subjected to merged calculation processing, and the load balancing result of the back-end load is determined according to the merged first state information.
通过校验第一状态信息的准确性,当第一状态信息准确时,再根据第一状态信息确定后端负载的负载均衡结果,可以提高负载均衡结果的可靠性。其中,第一状态信息可以包含多种不同类型的参数,对第一状态信息进行合并计算处理,可以通过预设的计算规则来合并第一状态信息涉及的多种不同类型的参数。By verifying the accuracy of the first state information, when the first state information is accurate, then determining the load balancing result of the backend load according to the first state information, the reliability of the load balancing result can be improved. Wherein, the first status information may include multiple different types of parameters, and the first status information is combined and calculated, and multiple different types of parameters involved in the first status information may be combined through preset calculation rules.
在一种实现方式中,校验第一状态信息的准确性,可以获取预先存储的后端负载的基础状态信息,将基础状态信息与第一状态信息进行比对,根据比对结果确定第一状态信息的准确性。基础状态信息可以是主用PFU初始化时保存的各后端负载的基本数据,通过将基础状态信息与第一状态信息进行比对,有利于提高校验第一状态信息的准确性的效率。In one implementation, to verify the accuracy of the first state information, pre-stored basic state information of the back-end load may be obtained, the basic state information is compared with the first state information, and the accuracy of the first state information is determined according to the comparison result. The basic state information may be the basic data of each back-end load saved when the active PFU is initialized; comparing the basic state information with the first state information helps improve the efficiency of verifying the accuracy of the first state information.
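One possible reading of the verification and merge step is sketched below. The base information, the plausibility rule and the equal weighting of the three metrics are assumptions made for illustration; the text only requires that the report be checked against the data saved at initialization and then merged by preset calculation rules.

```python
# Base information saved when the active PFU initialized (hypothetical content).
BASE_INFO = {"rs-1": {"vm_id": "rs-1"}, "rs-2": {"vm_id": "rs-2"}}

def is_accurate(report: dict) -> bool:
    """Small plausibility check against the stored base information."""
    return (report.get("vm_id") in BASE_INFO
            and 0.0 <= report.get("cpu", -1) <= 100.0)

def merge_metrics(report: dict) -> float:
    """Combine CPU usage, session share and throughput share into one score.
    The equal weighting here is only an assumption for illustration."""
    return (report["cpu"] / 100.0 + report["sessions"] + report["throughput"]) / 3.0

report = {"vm_id": "rs-1", "cpu": 90.0, "sessions": 0.5, "throughput": 0.6}
if is_accurate(report):
    print("load score:", round(merge_metrics(report), 3))
else:
    print("discard report and re-send the first status query request")
```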
在一种实现方式中,参照图6,图6为本申请实施例提供的另一种确定主用PFU的流程示意图,主用OMU从多个PFU中确定主用PFU,可以包括以下步骤601至步骤603。In one implementation, referring to FIG. 6, which is another schematic flowchart of determining the active PFU provided by an embodiment of the present application, the active OMU determining the active PFU from the multiple PFUs may include the following steps 601 to 603.
步骤601:主用OMU向各个PFU发送第二状态查询请求;Step 601: The active OMU sends a second status query request to each PFU;
步骤602:PFU根据第二状态查询请求向主用OMU发送第二状态信息;Step 602: The PFU sends the second status information to the active OMU according to the second status query request;
步骤603:主用OMU根据第二状态信息从多个PFU中确定主用PFU。Step 603: The active OMU determines the active PFU from the multiple PFUs according to the second state information.
可以理解的是,上述步骤601至步骤603的原理可以参照图5所示的流程,在此不再赘述。It can be understood that, for the principles of the foregoing steps 601 to 603, reference may be made to the process shown in FIG. 5 , which will not be repeated here.
DRLB还包括备用OMU,PFU设置有目标标志位,主用OMU根据第二状态信息从多个PFU中确定主用PFU后,还将主用PFU的确定结果发送至各个PFU;各个PFU根据确定结果对目标标志位进行置位,将置位结果发送至主用OMU;主用OMU将置位结果同步至备用OMU。The DRLB further includes a standby OMU, and each PFU is provided with a target flag bit. After the active OMU determines the active PFU from the multiple PFUs according to the second state information, it also sends the determination result of the active PFU to each PFU; each PFU sets its target flag bit according to the determination result and sends the setting result to the active OMU; the active OMU synchronizes the setting result to the standby OMU.
通过设置备用OMU,并且将各个PFU置位结果同步至备用OMU,达到OMU的备份效果,当主用OMU出现故障时,可以快速切换至备用OMU对各个PFU进行管理,提高DRLB工作的可靠性与稳定性,从而提高报文发送的稳定性。By providing a standby OMU and synchronizing the setting results of each PFU to the standby OMU, an OMU backup is achieved. When the active OMU fails, the system can quickly switch to the standby OMU to manage the PFUs, improving the reliability and stability of DRLB operation and thereby the stability of message sending.
在一种实现方式中,主用OMU会定时检测PFU的故障状态,当主用OMU检测到一个或者多个PFU处于异常状态,向各个PFU发送第三状态查询请求;PFU根据第三状态查询请求向主用OMU发送第三状态信息;主用OMU根据第三状态信息的接收结果从多个PFU中确定处于正常状态的PFU,从处于正常状态的PFU中重新确定主用PFU。In one implementation, the active OMU periodically detects the fault state of the PFUs. When the active OMU detects that one or more PFUs are in an abnormal state, it sends a third state query request to each PFU; each PFU sends third state information to the active OMU according to the third state query request; the active OMU determines the PFUs in a normal state from the multiple PFUs according to the reception of the third state information and re-determines the active PFU from the PFUs in the normal state.
通过从处于正常状态的PFU中重新确定主用PFU,以保持负载均衡的动态调整效果,提高报文发送的稳定性。第三状态信息可参见前述的解释,在此不再赘述。By re-determining the active PFU from the PFUs in the normal state, the dynamic adjustment effect of load balancing is maintained and the stability of message sending is improved. For the third state information, refer to the foregoing explanation; details are not repeated here.
在一种实现方式中,当主用OMU检测到处于异常状态的PFU恢复正常,将重新确定主用PFU的确定结果发送至恢复正常的PFU。通过将重新确定主用PFU的确定结果发送至恢复正常的PFU,以便恢复正常的PFU执行置位操作,从而确定恢复正常的PFU当前的身份,重新加入至报文发送的架构中。In an implementation manner, when the active OMU detects that the PFU in an abnormal state returns to normal, it sends a determination result of re-determining the active PFU to the PFU that returns to normal. By sending the determination result of re-determining the active PFU to the normal PFU, so that the normal PFU performs a bit setting operation, thereby determining the current identity of the normal PFU, and re-joining the message sending architecture.
在一种实现方式中,根据负载均衡策略向对应的后端负载进行报文发送,可以是接收来自上行链路发送的第一媒体面报文,根据负载均衡策略将第一媒体面报文发送至对应的后端负载;也可以是接收来自下行链路发送的第二媒体面报文,根据负载均衡策略将第二媒体面报文发送至对应的后端负载。即上行流量的报文和下行流量的报文可以均通过负载均衡策略进行发送,有利于提高报文发送的总体的可靠性与稳定性。In one implementation, sending messages to the corresponding back-end load according to the load balancing policy may be: receiving a first media plane message sent over the uplink and sending the first media plane message to the corresponding back-end load according to the load balancing policy; or receiving a second media plane message sent over the downlink and sending the second media plane message to the corresponding back-end load according to the load balancing policy. That is, both uplink and downlink packets can be sent according to the load balancing policy, which helps improve the overall reliability and stability of message sending.
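Because the downlink packets of a flow should reach the same back-end that handled its uplink packets, an implementation typically keeps a per-flow table. The sketch below is an illustrative assumption, not the patented mechanism: the flow-key normalization and the round-robin choice over eligible back-ends are invented for the example.

```python
from typing import Dict, Tuple

def flow_key(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> Tuple:
    # Normalize so the uplink and downlink packets of one flow share a key.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return tuple(sorted([a, b]))

class FlowStickyBalancer:
    def __init__(self, eligible_backends):
        self.eligible = list(eligible_backends)
        self.table: Dict[Tuple, str] = {}
        self._rr = 0

    def backend_for(self, src_ip, dst_ip, src_port, dst_port) -> str:
        key = flow_key(src_ip, dst_ip, src_port, dst_port)
        if key not in self.table:
            # New flows follow the current load balancing policy (simple round
            # robin over eligible back-ends here, purely for illustration).
            self.table[key] = self.eligible[self._rr % len(self.eligible)]
            self._rr += 1
        return self.table[key]

lb = FlowStickyBalancer(["vFW-1", "vFW-2"])
print(lb.backend_for("10.0.0.8", "203.0.113.7", 40000, 443))   # uplink packet
print(lb.backend_for("203.0.113.7", "10.0.0.8", 443, 40000))   # downlink, same vFW
```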
下面通过实际例子说明本申请实施例提供的基于图3所示架构的报文发送方法的详细流 程。The detailed flow of the message sending method based on the architecture shown in FIG. 3 provided by the embodiment of the present application is described below through practical examples.
参照图7,图7为本申请实施例提供的主用PFU选举流程示意图,其中OMU(1+1)表示OMU采用1+1主备模式,其中一个为主用,一个为备用,主要包括以下步骤:Referring to FIG. 7, FIG. 7 is a schematic diagram of the active PFU election process provided by an embodiment of the present application, where OMU(1+1) indicates that the OMU adopts a 1+1 active/standby mode, with one OMU active and one standby. The process mainly includes the following steps:
步骤701:主用OMU虚机向所有PFU发送查询上电请求;Step 701: The active OMU virtual machine sends a query power-on request to all PFUs;
步骤702:各PFU收到主用OMU请求后,将其***上电时间、虚机标识发送给主用OMU;Step 702: After receiving the request from the active OMU, each PFU sends its system power-on time and virtual machine ID to the active OMU;
步骤703:主用OMU将各PFU返回数据入数据库,并根据上电时间选举出主用PFU,并将判断结果同步给备用OMU;Step 703: The active OMU enters the data returned by each PFU into the database, elects the active PFU according to the power-on time, and synchronizes the judgment result to the standby OMU;
步骤704:主用OMU将选举结果发送给各PFU;Step 704: The active OMU sends the election result to each PFU;
步骤705:各PFU收到选举结果后将自身主备标志位进行置位,同时将置位结果返回给主用OMU;Step 705: After receiving the election result, each PFU sets its active/standby flag bit, and at the same time returns the set result to the active OMU;
步骤706:主用OMU收到各PFU置位结果后,进行数据库更新,并将更新后的结果同步给备用OMU。Step 706: After receiving the setting results of each PFU, the active OMU updates the database, and synchronizes the updated results to the standby OMU.
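A compact sketch of the election sequence in steps 701 to 706, with the OMU-to-PFU message exchange reduced to direct function calls. Electing the PFU with the earliest power-on time is one of the options the text allows; the class and field names are illustrative only.

```python
class PFUNode:
    def __init__(self, vm_id, power_on_time):
        self.vm_id, self.power_on_time = vm_id, power_on_time
        self.is_active = False                  # active/standby (target) flag bit

    def report(self):                           # steps 701/702
        return {"vm_id": self.vm_id, "power_on_time": self.power_on_time}

    def set_flag(self, elected_vm_id):          # steps 704/705
        self.is_active = (elected_vm_id == self.vm_id)
        return self.is_active

def elect_active_pfu(pfus):
    reports = [p.report() for p in pfus]                                 # step 702
    elected = min(reports, key=lambda r: r["power_on_time"])["vm_id"]    # step 703
    results = {p.vm_id: p.set_flag(elected) for p in pfus}               # steps 704/705
    # Step 706: the active OMU would now update its database and
    # synchronize the result to the standby OMU (not modelled here).
    return elected, results

pfus = [PFUNode("pfu-1", 100.0), PFUNode("pfu-2", 95.0), PFUNode("pfu-3", 120.0)]
print(elect_active_pfu(pfus))   # pfu-2 has the earliest power-on time
```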
参照图8,图8为本申请实施例提供的出现PFU异常时主用PFU选举流程示意图,主要包括以下步骤:Referring to Figure 8, Figure 8 is a schematic diagram of the primary PFU election process provided by the embodiment of the present application when a PFU is abnormal, which mainly includes the following steps:
步骤801:主用OMU检测到一个或者多个PFU出现虚机异常;Step 801: The active OMU detects that one or more PFUs have virtual machine abnormalities;
步骤802:主用OMU向各个PFU发送虚机状态查询请求,二次确认各虚机状态;Step 802: The active OMU sends a virtual machine status query request to each PFU, and confirms the status of each virtual machine for a second time;
步骤803:各个PFU返回虚机标识、虚机状态,异常虚机不对请求做响应,不返回数据;Step 803: Each PFU returns the virtual machine ID and virtual machine status, and the abnormal virtual machine does not respond to the request and does not return data;
步骤804:主用OMU根据各PFU返回的数据更新PFU虚机数据入数据库,对于未返回状态的PFU虚机,主用OMU直接按检测到的PFU虚机状态数据入数据库,并将更新后的结果同步给备用OMU;Step 804: The active OMU updates the PFU virtual machine data in the database according to the data returned by each PFU. For the PFU virtual machines that did not return their state, the active OMU directly records the detected PFU virtual machine state in the database, and synchronizes the updated result to the standby OMU;
步骤805:主用OMU向正常的PFU发送查询各虚机标识、虚机CPU使用率、内存、硬盘状态、虚机告警数量及级别的请求;Step 805: The active OMU sends a request to the normal PFU to query the identification of each virtual machine, the CPU usage rate of the virtual machine, the memory, the hard disk status, the number and level of virtual machine alarms;
步骤806:各PFU收到主用OMU请求后,将其虚机标识、虚机CPU使用率、内存、硬盘状态、虚机告警数量及级别发送给主用OMU;Step 806: After receiving the request from the active OMU, each PFU sends its virtual machine ID, virtual machine CPU usage, memory, hard disk status, number and level of virtual machine alarms to the active OMU;
步骤807:主用OMU将各PFU返回数据入数据库,根据返回的各PFU指标选举出主用PFU,并将判断结果同步给备用OMU;Step 807: The active OMU enters the data returned by each PFU into the database, selects the active PFU according to the returned PFU indicators, and synchronizes the judgment result to the standby OMU;
步骤808:主用OMU将选举结果发送给各PFU;Step 808: The active OMU sends the election result to each PFU;
步骤809:各PFU收到选举结果后将自身主备标志位进行置位,同时将置位结果返回给主用OMU;Step 809: After each PFU receives the election result, its active and standby flag is set, and the set result is returned to the active OMU;
步骤810:主用OMU收到各PFU置位结果后,进行数据库更新,并将更新后的结果同步给备用OMU;Step 810: After the active OMU receives the result of setting each PFU, it updates the database, and synchronizes the updated result to the standby OMU;
步骤811:当主用OMU检测到之前异常的PFU恢复正常时,将之前保存的选举结果发送给该PFU;Step 811: When the active OMU detects that the previous abnormal PFU returns to normal, the previously saved election result is sent to the PFU;
步骤812:恢复正常的PFU收到主用OMU发送的选举结果消息后,主备标志位进行置位,同时将置位结果返回给主用OMU;Step 812: After the normal PFU receives the election result message sent by the active OMU, the active and standby flags are set, and at the same time, the set result is returned to the active OMU;
步骤813:主用OMU收到恢复正常的PFU置位结果后,进行数据库更新,并将更新后的结果同步给备用OMU。Step 813: After receiving the PFU setting result of returning to normal, the active OMU updates the database, and synchronizes the updated result to the standby OMU.
其中,一个或者多个PFU出现故障,可以包括主用PFU出现故障的情况,主用OMU扫描检测各PFU状态是一个持续的过程,定时进行扫描,定时时间可配置。主用OMU除了根据正常的PFU的虚机标识、虚机CPU使用率、内存、硬盘状态、虚机告警数量及级别来重新确定主用PFU以外,也可以将正常的PFU的上电时间进行排序,选择一个上电时间第二的PFU选举为主用PFU,从而提高重新确定主用PFU的效率。The failure of one or more PFUs may include failure of the active PFU. The active OMU's scanning and detection of the state of each PFU is a continuous process: scanning is performed periodically, and the timing is configurable. Besides re-determining the active PFU according to the virtual machine identifier, virtual machine CPU usage, memory, hard disk state, and number and level of virtual machine alarms of the normal PFUs, the active OMU may also sort the power-on times of the normal PFUs and elect the PFU with the second-earliest power-on time as the active PFU, thereby improving the efficiency of re-determining the active PFU.
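The re-election after a PFU failure can be pictured as filtering out the PFUs that did not answer or look unhealthy before electing again. In the sketch below the health rule and its thresholds are placeholders for the virtual machine CPU, memory, disk and alarm data mentioned above.

```python
def healthy(report: dict) -> bool:
    # Placeholder health rule built from the metrics named in the text
    # (CPU usage, alarms, response); the thresholds are invented.
    return (report["responded"]
            and report["cpu"] < 90.0
            and report["alarms"] == 0)

def re_elect(reports):
    """Pick a new active PFU among the PFUs still in a normal state,
    here simply by earliest power-on time among the healthy ones."""
    candidates = [r for r in reports if healthy(r)]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["power_on_time"])["vm_id"]

reports = [
    {"vm_id": "pfu-1", "responded": False, "cpu": 0.0, "alarms": 0, "power_on_time": 95.0},
    {"vm_id": "pfu-2", "responded": True, "cpu": 40.0, "alarms": 0, "power_on_time": 100.0},
    {"vm_id": "pfu-3", "responded": True, "cpu": 95.0, "alarms": 2, "power_on_time": 90.0},
]
print(re_elect(reports))   # pfu-2: pfu-1 did not answer, pfu-3 is overloaded
```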
参照图9,图9为本申请实施例提供的DRLB进行后端负载均衡策略工作流程示意图,其中服务端和插件都是主用PFU中的工作组件,主要包括以下步骤:Referring to FIG. 9, FIG. 9 is a schematic diagram of the workflow of the DRLB implementation of the back-end load balancing strategy provided by the embodiment of the present application, wherein the server and the plug-in are both working components in the active PFU, and mainly include the following steps:
步骤901:服务端向插件发送当前各后端负载的CPU使用率、会话占比以及吞吐量查询请求;Step 901: the server sends the current CPU usage, session proportion and throughput query request of each backend load to the plug-in;
步骤902:插件向各后端负载发送ICMP查询请求;Step 902: the plug-in sends an ICMP query request to each backend load;
步骤903:各后端负载收到请求后将各负载虚机情况发送给插件;Step 903: After receiving the request, each backend load sends the virtual machine status of each load to the plug-in;
步骤904:插件将收到的负载虚机标识、CPU使用率、会话数占比、吞吐量占比等信息返回给服务端;Step 904: The plug-in returns the received load virtual machine ID, CPU usage rate, session number ratio, throughput ratio and other information to the server;
步骤905:服务端收到响应后,根据之前保存的各负载信息首先校验返回数据的正确性(主要是使用服务端之前初始化保存的各后端虚机基本数据,与响应中收到的数据作对比),若收到响应数据被判断为不正确,则将收到的数据丢弃,服务端重复进行步骤901;若收到响应数据被判定为正确,则对响应数据进行计算合并,通过合并后的数据判断各后端负载均衡情况;Step 905: After receiving the responses, the server first verifies the correctness of the returned data against the previously saved load information (mainly by comparing the basic data of each back-end virtual machine initialized and saved earlier by the server with the data received in the responses). If the received response data is judged to be incorrect, the received data is discarded and the server repeats step 901; if the received response data is judged to be correct, the response data is calculated and merged, and the merged data is used to judge the load balance of each back-end load;
步骤906:服务端将负载均衡判断结果发送给插件;Step 906: the server sends the load balancing judgment result to the plug-in;
步骤907:插件进行后端负载数据更新;Step 907: The plug-in updates the backend load data;
步骤908:更新结果返回给服务端;Step 908: The update result is returned to the server;
步骤909:服务端将插件返回的更新结果标志进行入数据库更新;Step 909: the server enters the update result flag returned by the plug-in into the database for update;
步骤910:主用PFU服务端将负载均衡结果发送给各正常非主用PFU;Step 910: The active PFU server sends the load balancing result to each normal non-active PFU;
步骤911:非主用PFU更新负载均衡结果入数据库;Step 911: The non-active PFU updates the load balancing result into the database;
步骤912:非主用PFU将更新结果返回给主用PFU服务端;Step 912: The non-active PFU returns the update result to the active PFU server;
步骤913:主用PFU服务端向非主用PFU发送调整负载均衡策略请求消息;Step 913: The active PFU server sends an adjustment load balancing policy request message to the non-active PFU;
步骤914:主用PFU服务端向自身插件发送调整负载均衡策略请求消息;Step 914: The active PFU server sends a load balancing policy adjustment request message to its own plug-in;
步骤915:插件调整向后端负载发媒体面报文负载均衡策略;Step 915: The plug-in adjusts the load balancing strategy for sending media plane packets to the backend load;
步骤916:非主用PFU调整向后端负载发媒体面报文负载均衡策略;Step 916: The non-active PFU adjusts the load balancing strategy for sending media plane packets to the backend load;
步骤917:服务端更新数据库中其负载均衡发送策略数据;Step 917: The server updates its load balancing sending policy data in the database;
步骤918:非主用PFU更新数据库中其负载均衡策略发送数据。Step 918: The non-active PFU updates the data sent by its load balancing policy in the database.
其中,步骤902中,该ICMP请求带有跟各后端负载约定好的特殊字符,步骤902是个定时发送查询步骤,设定好定时查询时间后,向各后端负载定时发送。Wherein, in step 902, the ICMP request carries special characters agreed upon with each backend load, and step 902 is a step of sending a query regularly, after setting the timing query time, it is sent to each backend load regularly.
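As a rough illustration of the marked ICMP probe in step 902, the sketch below builds an ICMP echo request whose payload carries an agreed marker and parses a reply that appends the load metrics after the marker. The marker value and the reply format are assumptions for the example; actually sending the packet would additionally require a raw socket and the appropriate privileges.

```python
import struct

MARKER = b"DRLB-PROBE"   # the "special characters" agreed with the back-end loads (hypothetical value)

def checksum(data: bytes) -> int:
    """Standard Internet checksum over the ICMP header and payload."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def build_probe(ident: int, seq: int) -> bytes:
    """Build an ICMP echo request whose payload carries the agreed marker,
    so the back-end load treats it as a load query rather than a plain ping."""
    payload = MARKER
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)                    # type 8 = echo request
    header = struct.pack("!BBHHH", 8, 0, checksum(header + payload), ident, seq)
    return header + payload

def parse_reply(payload: bytes) -> dict:
    """Hypothetical reply format: 'cpu=..;sessions=..;throughput=..'
    appended to the marker by the back-end service processor."""
    text = payload.split(MARKER, 1)[-1].decode(errors="ignore").strip(";")
    return dict(kv.split("=") for kv in text.split(";") if "=" in kv)

pkt = build_probe(ident=0x1234, seq=1)       # sending requires a raw socket (root)
print(len(pkt), "bytes")
print(parse_reply(MARKER + b"cpu=72;sessions=0.35;throughput=0.4"))
```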
参照图10,图10为本申请实施例提供的基于DRLB核心网业务链网元上下行流量工作流程示意图,其中后端负载以vFW为例,xGW表示核心网媒体面网元,VR1和VR2表示路由1和路由2,SSC表示核心网业务链网元部分,PDN表示用户浏览的internet服务。其中步骤1001-步骤1005表示上行流程,步骤1011-步骤1015表示下行流程,其中:Referring to Fig. 10, Fig. 10 is a schematic diagram of the uplink and downlink traffic workflow of network elements based on the DRLB core network service chain provided by the embodiment of the present application, where the back-end load takes vFW as an example, xGW represents the core network media plane network element, and VR1 and VR2 represent Route 1 and Route 2, SSC represents the network element part of the service chain of the core network, and PDN represents the internet service that the user browses. Wherein step 1001-step 1005 represents the upstream process, and step 1011-step 1015 represents the downstream process, wherein:
步骤1001:xGW将收到的TCP或者UDP流量根据提前设置好的路由规则推送到VR1;Step 1001: xGW pushes the received TCP or UDP traffic to VR1 according to the pre-set routing rules;
步骤1002:VR1根据路由规则将媒体面报文推送给DRLB;Step 1002: VR1 pushes the media plane message to DRLB according to the routing rules;
步骤1003:DRLB根据负载均衡策略将媒体面报文发送给vFW;Step 1003: DRLB sends the media plane message to the vFW according to the load balancing policy;
步骤1004:vFW将媒体面报文进行处理后根据路由规则发送给VR2;Step 1004: vFW processes the media plane message and sends it to VR2 according to the routing rules;
步骤1005:VR2根据路由规则将媒体面报文发送给PDN;至此,上行流程结束;Step 1005: VR2 sends the media plane message to the PDN according to the routing rules; at this point, the uplink process ends;
步骤1011:PDN收到用户媒体面上网请求,处理完请求后将响应消息返回给VR2;Step 1011: The PDN receives the user's media surface Internet access request, and returns a response message to VR2 after processing the request;
步骤1012:VR2将收到的媒体面报文根据路由规则发送给DRLB;Step 1012: VR2 sends the received media plane message to DRLB according to routing rules;
步骤1013:DRLB根据上行报文分发规则将收到的媒体面报文发送给对应的vFW;Step 1013: The DRLB sends the received media plane message to the corresponding vFW according to the uplink packet distribution rules;
步骤1014:vFW根据路由规则将收到的媒体面报文转发给VR1;Step 1014: The vFW forwards the received media plane message to VR1 according to the routing rules;
步骤1015:VR1根据路由规则将收到的媒体面报文转发给xGW。至此,下行流程结束。Step 1015: VR1 forwards the received media plane message to xGW according to routing rules. So far, the downlink process ends.
其中,vFW可以包括多个LOAD。本申请实施例提供的报文发送方法,在核心网流量处理链路中增加DRLB网元,通过该网元对后端网元进行动态负载均衡策略调整,使得核心网业务链媒体面链路能够实现动态负载均衡。DRLB依据后端负载的CPU、会话占用以及吞吐量情况反馈来动态调整向后端业务模块的各业务处理机分发流量,以保证后端网元各业务处理机上所承接的业务流量能够实时动态均衡。DRLB网元在网元实例化部署时设置检测标志模式,并设置后端网元业务处理机CPU、会话占比以及吞吐量的初始门限值,OMU将相关配置下发到DRLB所有PFU,选举其中一个PFU为主用,通过该PFU获取所有后端负载(后端负载在DRLB上以VS(Virtual Service,虚拟服务)和RS(Routing Service,路由服务)形式展示,一个VS标志包含一个后端网元的业务链路的集合,一个RS标志后端网元的一个真实的业务处理机)负荷情况,并且该PFU周期性(可配置)的向指定VS下所有RS发起ICMP请求,ICMP有特殊标记;所有后端网元业务处理机查询CPU使用率、会话数占比、吞吐量占比,并在ICMP应答消息中返回主用PFU,主用PFU汇总后端网元的负载情况,并将负载状态超出门限的RS状态通知到非主用的PFU;若某个RS连续N次检测超出门限后,主用PFU判断该RS为超负荷状态,并将该RS状态通过tipc消息同步到其它非主用PFU,动态调整DRLB分发算法,新分发的媒体面流量不会再往该RS分发,落在该RS上的老的媒体面流量保持不变,直到主用PFU检查该RS负荷连续N次低于门限值时,通知所有PFU状态恢复。DRLB设置异常情况处理场景:如果RS返回响应超时,DRLB主用PFU需要重新发送请求,若后端负载连续N次返回超时,则主用PFU记录返回超时的前一次获取的负荷数据;如果初始化阶段DRLB主用PFU上没有前一次的负荷数据记录,则记录为NULL;DRLB上PFU跟RS之间的链路关系设置健康检查,对于健康检查状态为down的RS,不需要再发送请求,负荷数据均按照NULL记录。Wherein, the vFW may include multiple LOADs. In the message sending method provided by the embodiment of the present application, a DRLB network element is added to the core network traffic processing link, and the dynamic load balancing strategy is adjusted for the back-end network element through this network element, so that the core network service chain media plane link can Realize dynamic load balancing. DRLB dynamically adjusts the distribution of traffic to each service processor of the back-end service module according to the CPU of the back-end load, session occupation and throughput feedback, so as to ensure that the service traffic undertaken by each service processor of the back-end network element can be dynamically balanced in real time . The DRLB network element sets the detection flag mode when the network element is instantiated and deployed, and sets the initial threshold value of the CPU of the back-end network element service processor, session ratio, and throughput. The OMU sends the relevant configuration to all PFUs of the DRLB. One of the PFUs is the main one, through which all back-end loads are obtained (the back-end loads are displayed on the DRLB in the form of VS (Virtual Service, virtual service) and RS (Routing Service, routing service), and a VS symbol contains a back-end A set of service links of network elements, one RS marks a real service processor) load of backend network elements, and the PFU periodically (configurable) initiates ICMP requests to all RSs under the specified VS, ICMP has special mark; all back-end network element service processors query the CPU usage, session number ratio, and throughput ratio, and return the active PFU in the ICMP response message. The active PFU summarizes the load status of the back-end network elements, and The status of the RS whose load status exceeds the threshold is notified to the non-active PFU; if a certain RS detects that the threshold exceeds the threshold for N consecutive times, the active PFU judges that the RS is in an overloaded state, and synchronizes the RS status to other non-active PFUs through tipc messages. The active PFU dynamically adjusts the DRLB distribution algorithm. The newly distributed media plane traffic will not be distributed to the RS, and the old media plane traffic falling on the RS remains unchanged until the active PFU checks the RS load for N consecutive times. When the value is lower than the threshold, all PFUs are notified to restore their status. DRLB setting exception handling scenario: If RS returns a response timeout, the active PFU of DRLB needs to resend the request. 
If the back-end load times out N consecutive times, the active PFU records the load data obtained before the timeouts; if, during the initialization phase, there is no previous load data record on the active PFU of the DRLB, the record is NULL. A health check is configured for the link between the PFU and each RS on the DRLB; for an RS whose health check state is down, no further requests need to be sent, and its load data is recorded as NULL.
本申请实施例中在当前传统核心网媒体面基础上,基于业务链场景引入一个DRLB网元,该网元可以通过检测其后端负载的CPU、会话数占比以及吞吐量来动态的调整向后端各负载业务处理机建立的会话数和分发的流量数,以保证后端负载能够动态的实现负载均衡,不会出现因负载的瞬时突变增加而导致后端业务处理机出现重启等故障,影响正常业务处理。通过在DRLB的主用PFU发送特殊ICMP请求向后端负载业务处理机动态查询其CPU、会话数占比以及吞吐量情况,并把获取的数据跟初始门限值对比,通过对比结果来动态调整DRLB向后端各业务处理机建立会话和分发流量的情况,来达到整体业务链中各后端负载的动态均衡。In the embodiments of the present application, on the basis of the existing traditional core network media plane, a DRLB network element is introduced based on the service chain scenario. By detecting the CPU usage, session share and throughput of its back-end loads, this network element can dynamically adjust the number of sessions established with and the amount of traffic distributed to each back-end load service processor, ensuring that the back-end loads are balanced dynamically and that failures such as restarts of back-end service processors caused by instantaneous load surges, which would affect normal service processing, do not occur. The active PFU of the DRLB sends special ICMP requests to dynamically query the back-end load service processors for their CPU usage, session share and throughput, compares the obtained data with the initial threshold values, and uses the comparison results to dynamically adjust how the DRLB establishes sessions with and distributes traffic to each back-end service processor, so as to achieve dynamic balance of the back-end loads in the overall service chain.
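The "N consecutive detections" rule described above behaves like a small hysteresis state machine per RS, sketched below. The threshold, the value of N and the points where the other PFUs would be notified (over TIPC in the text) are placeholders; only the counting logic is illustrated.

```python
class RSOverloadTracker:
    """Marks an RS overloaded after N consecutive over-threshold probes and
    clears the mark only after N consecutive under-threshold probes."""

    def __init__(self, n: int = 3, threshold: float = 0.8):
        self.n, self.threshold = n, threshold
        self.over_count = 0
        self.under_count = 0
        self.overloaded = False

    def update(self, load: float) -> bool:
        if load > self.threshold:
            self.over_count += 1
            self.under_count = 0
        else:
            self.under_count += 1
            self.over_count = 0
        if not self.overloaded and self.over_count >= self.n:
            self.overloaded = True      # notify non-active PFUs: stop new traffic to this RS
        elif self.overloaded and self.under_count >= self.n:
            self.overloaded = False     # notify non-active PFUs: RS restored
        return self.overloaded

rs = RSOverloadTracker(n=3, threshold=0.8)
for sample in [0.85, 0.9, 0.95, 0.7, 0.6, 0.5]:
    print(sample, rs.update(sample))
```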
可以理解的是,虽然上述各个流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本实施例中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,上述流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。It can be understood that, although the steps in the above flowcharts are displayed sequentially as indicated by the arrows, these steps are not necessarily executed sequentially in the order indicated by the arrows. Unless otherwise specified in this embodiment, the execution of these steps is not strictly limited in order, and these steps can be executed in other orders. Moreover, at least some of the steps in the flowchart above may include multiple steps or stages, these steps or stages are not necessarily executed at the same time, but may be executed at different times, the execution order of these steps or stages It does not necessarily have to be performed sequentially, but can be performed alternately or alternately with other steps or at least a part of steps or stages in other steps.
参照图11,图11为本申请实施例提供的报文发送装置的结构示意图。报文发送装置1100包括:存储器1101、处理器1102及存储在存储器1101上并可在处理器1102上运行的计算机程序,计算机程序运行时用于执行上述的报文发送方法。Referring to FIG. 11 , FIG. 11 is a schematic structural diagram of a message sending device provided by an embodiment of the present application. The message sending device 1100 includes: a memory 1101, a processor 1102, and a computer program stored in the memory 1101 and operable on the processor 1102. The computer program is used to execute the above message sending method when running.
处理器1102和存储器1101可以通过总线或者其他方式连接。The processor 1102 and the memory 1101 may be connected through a bus or in other ways.
存储器1101作为一种非暂态计算机可读存储介质,可用于存储非暂态软件程序以及非暂态性计算机可执行程序,如本申请实施例描述的报文发送方法。处理器1102通过运行存储在存储器1101中的非暂态软件程序以及指令,从而实现上述的报文发送方法。The memory 1101, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs and non-transitory computer-executable programs, such as the message sending method described in the embodiment of the present application. The processor 1102 executes the non-transitory software programs and instructions stored in the memory 1101 to implement the above message sending method.
存储器1101可以包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需要的应用程序;存储数据区可存储执行上述的报文发送方法。此外,存储器1101可以包括高速随机存取存储器1101,还可以包括非暂态存储器1101,例如至少一个储存设备存储器件、闪存器件或其他非暂态固态存储器件。在一些实施方式中,存储器1101可包括相对于处理器1102远程设置的存储器1101,这些远程存储器1101可以通过网络连接至该报文发送装置1100。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 1101 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store and execute the above message sending method. In addition, the memory 1101 may include a high-speed random access memory 1101, and may also include a non-transitory memory 1101, such as at least one storage device, a flash memory device or other non-transitory solid-state storage devices. In some implementations, the memory 1101 may include memory 1101 remotely located relative to the processor 1102, and these remote memories 1101 may be connected to the message sending device 1100 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
实现上述的报文发送方法所需的非暂态软件程序以及指令存储在存储器1101中,当被一个或者多个处理器1102执行时,执行上述的报文发送方法。The non-transitory software programs and instructions required to implement the above message sending method are stored in the memory 1101, and when executed by one or more processors 1102, the above message sending method is executed.
本申请实施例还提供了计算机可读存储介质,存储有计算机可执行指令,计算机可执行指令用于执行上述的报文发送方法。The embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions, and the computer-executable instructions are used to execute the above method for sending a message.
在一实施例中,该计算机可读存储介质存储有计算机可执行指令,该计算机可执行指令被一个或多个控制处理器执行,可以实现上述的报文发送方法。In an embodiment, the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more control processors, so as to realize the above-mentioned message sending method.
本申请实施例至少具有以下有益效果:通过所述PFU获取所述后端负载的第一状态信息,根据所述第一状态信息动态调整向所述后端负载分发流量的负载均衡策略,根据所述负载均衡策略向所述后端负载进行报文发送,可以根据后端负载的状态动态调整各后端负载之间的负荷情况,以达到动态负载均衡的效果,从而降低后端负载的工作状态出现突变所带来的影响,提高报文发送的可靠性。The embodiments of the present application have at least the following beneficial effects: by obtaining the first state information of the back-end load through the PFU, dynamically adjusting the load balancing policy for distributing traffic to the back-end load according to the first state information, and sending messages to the back-end load according to the load balancing policy, the load among the back-end loads can be dynamically adjusted according to their states to achieve dynamic load balancing, thereby reducing the impact of sudden changes in the working state of a back-end load and improving the reliability of message sending.
以上所描述的装置实施例仅仅是示意性的,其中作为分离部件说明的单元可以是或者也可以不是物理上分开的,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。The device embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、***可以被实施为软件、固件、硬件及其适当的组合。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、储存设备存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域普通技术人员公知的是,通信介质通常包括计算机可读指令、数据结构、程序模块或者诸如载波或其 他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。Those of ordinary skill in the art can understand that all or some of the steps and systems in the methods disclosed above can be implemented as software, firmware, hardware and an appropriate combination thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit . Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As known to those of ordinary skill in the art, the term computer storage media includes both volatile and nonvolatile media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. permanent, removable and non-removable media. Computer storage media including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridges, tape, storage device storage or other magnetic storage devices, or Any other medium that can be used to store desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media .
还应了解,本申请实施例提供的各种实施方式可以任意进行组合,以实现不同的技术效果。It should also be understood that the various implementation manners provided in the embodiments of the present application may be combined arbitrarily to achieve different technical effects.
以上是对本申请的若干实施方式进行了说明,但本申请并不局限于上述实施方式,熟悉本领域的技术人员在不违背本申请精神的前提下还可作出种种的等同变形或替换,这些等同的变形或替换均包含在本申请权利要求所限定的范围内。The above is a description of several embodiments of the present application, but the present application is not limited to the above-mentioned embodiments. Those skilled in the art can also make various equivalent deformations or replacements without violating the spirit of the present application. Any modification or substitution is included within the scope defined by the claims of the present application.

Claims (12)

  1. 一种报文发送方法,应用于直连路由负载均衡器DRLB,所述DRLB包括操作管理单元OMU和多个数据包转发单元PFU;所述OMU被设置为对所述PFU进行管理,各个所述PFU被设置为向对应的后端负载进行报文发送;所述报文发送方法包括:A message sending method, applied to a direct route load balancer (DRLB), wherein the DRLB comprises an operation and management unit (OMU) and a plurality of packet forwarding units (PFUs); the OMU is configured to manage the PFUs, and each PFU is configured to send messages to a corresponding back-end load; the message sending method comprises:
    所述PFU获取所述后端负载的第一状态信息,根据所述第一状态信息动态调整向所述后端负载分发流量的负载均衡策略,根据所述负载均衡策略向所述后端负载进行报文发送。The PFU obtains first state information of the back-end load, dynamically adjusts, according to the first state information, a load balancing policy for distributing traffic to the back-end load, and sends messages to the back-end load according to the load balancing policy.
  2. 根据权利要求1所述的报文发送方法,其中,所述PFU获取对应的所述后端负载的第一状态信息之前,所述报文发送方法还包括:所述OMU从多个所述PFU中确定主用PFU;The message sending method according to claim 1, wherein before the PFU obtains the first state information of the corresponding back-end load, the message sending method further comprises: the OMU determining an active PFU from the plurality of PFUs;
    所述PFU获取对应的所述后端负载的第一状态信息,根据所述第一状态信息动态调整向对应的所述后端负载分发流量的负载均衡策略,包括:The PFU acquires the first status information of the corresponding back-end load, and dynamically adjusts the load balancing strategy for distributing traffic to the corresponding back-end load according to the first status information, including:
    所述主用PFU根据预设的请求发送周期向所有的所述后端负载发送第一状态查询请求,接收所述后端负载根据所述第一状态查询请求发送的第一状态信息,根据所述第一状态信息确定所述后端负载的负载均衡结果,将所述负载均衡结果同步至非主用PFU,各个所述PFU根据所述负载均衡结果确定负载均衡策略;the active PFU sends a first status query request to all the back-end loads according to a preset request sending period, receives the first state information sent by the back-end loads in response to the first status query request, determines a load balancing result of the back-end loads according to the first state information, and synchronizes the load balancing result to non-active PFUs, and each PFU determines a load balancing policy according to the load balancing result;
    其中,所述非主用PFU为多个所述PFU中除了所述主用PFU以外的其他PFU。Wherein, the non-active PFU is other PFUs in the multiple PFUs except the active PFU.
  3. 根据权利要求2所述的报文发送方法,其中,所述OMU从多个所述PFU中确定主用PFU,包括:The message sending method according to claim 2, wherein the OMU determines the active PFU from a plurality of the PFUs, comprising:
    所述OMU向各个所述PFU发送第二状态查询请求;The OMU sends a second status query request to each of the PFUs;
    所述PFU根据所述第二状态查询请求向所述OMU发送第二状态信息;The PFU sends second status information to the OMU according to the second status query request;
    所述OMU根据所述第二状态信息从多个所述PFU中确定主用PFU。The OMU determines the active PFU from the multiple PFUs according to the second state information.
  4. 根据权利要求3所述的报文发送方法,其中,所述OMU的数量为多个,从多个所述PFU中确定主用PFU的OMU为主用OMU,其余的所述OMU为备用OMU,所述PFU设置有目标标志位,所述OMU从多个所述PFU中确定主用PFU,还包括:The message sending method according to claim 3, wherein there are a plurality of OMUs, the OMU that determines the active PFU from the plurality of PFUs is an active OMU and the remaining OMUs are standby OMUs, each PFU is provided with a target flag bit, and the OMU determining the active PFU from the plurality of PFUs further comprises:
    所述主用OMU将所述主用PFU的确定结果发送至各个所述PFU;The active OMU sends the determination result of the active PFU to each of the PFUs;
    各个所述PFU根据所述确定结果对所述目标标志位进行置位,将所述置位结果发送至所述主用OMU;Each of the PFUs sets the target flag bit according to the determination result, and sends the setting result to the active OMU;
    所述主用OMU将所述置位结果同步至所述备用OMU。The active OMU synchronizes the setting result to the standby OMU.
  5. 根据权利要求1所述的报文发送方法,其中,所述报文发送方法还包括:The message sending method according to claim 1, wherein the message sending method further comprises:
    当所述OMU检测到一个或者多个PFU处于异常状态,向各个所述PFU发送第三状态查询请求;When the OMU detects that one or more PFUs are in an abnormal state, send a third state query request to each of the PFUs;
    所述PFU根据所述第三状态查询请求向所述OMU发送第三状态信息;The PFU sends third state information to the OMU according to the third state query request;
    所述OMU根据所述第三状态信息的接收结果从多个所述PFU中确定处于正常状态的PFU,从处于正常状态的PFU中重新确定所述主用PFU。The OMU determines the PFU in the normal state from the multiple PFUs according to the receiving result of the third state information, and re-determines the active PFU from the PFUs in the normal state.
  6. 根据权利要求5所述的报文发送方法,其中,所述报文发送方法还包括:The message sending method according to claim 5, wherein the message sending method further comprises:
    当所述主用OMU检测到处于异常状态的PFU恢复正常,将重新确定所述主用PFU的确定结果发送至恢复正常的PFU。When the active OMU detects that the PFU in the abnormal state returns to normal, it sends a determination result of re-determining the active PFU to the PFU that returns to normal.
  7. 一种报文发送方法,应用于主用PFU,所述主用PFU为多个PFU中的其中一个,各个所述PFU被设置为向对应的后端负载进行报文发送,所述报文发送方法包括:A message sending method, applied to an active PFU, wherein the active PFU is one of a plurality of PFUs, each PFU is configured to send messages to a corresponding back-end load, and the message sending method comprises:
    根据预设的请求发送周期向所有的所述后端负载发送第一状态查询请求;sending a first status query request to all the backend loads according to a preset request sending period;
    接收所述后端负载根据所述第一状态查询请求发送的第一状态信息,根据所述第一状态信息确定所述后端负载的负载均衡结果;receiving the first status information sent by the backend load according to the first status query request, and determining the load balancing result of the backend load according to the first status information;
    根据所述负载均衡结果确定第一负载均衡策略,根据所述第一负载均衡策略向对应的所述后端负载进行报文发送;Determine a first load balancing strategy according to the load balancing result, and send a message to the corresponding backend load according to the first load balancing strategy;
    将所述负载均衡结果同步至非主用PFU,以使所述非主用PFU根据所述负载均衡结果确定第二负载均衡策略并根据所述第二负载均衡策略向对应的所述后端负载进行报文发送,其中,所述非主用PFU为多个PFU中除了所述主用PFU以外其余的PFU。Synchronizing the load balancing result to non-active PFUs, so that the non-active PFUs determine a second load balancing policy according to the load balancing result and send messages to the corresponding back-end loads according to the second load balancing policy, wherein the non-active PFUs are the PFUs other than the active PFU among the plurality of PFUs.
  8. 根据权利要求7所述的报文发送方法,其中,所述主用PFU设置有目标标志位,所述根据预设的请求发送周期向所有的所述后端负载发送第一状态查询请求之前,所述报文发送方法还包括:The message sending method according to claim 7, wherein the active PFU is provided with a target flag bit, and before sending the first status query request to all the back-end loads according to the preset request sending cycle, The message sending method also includes:
    接收OMU发送第二状态查询请求;receiving a second status query request sent by the OMU;
    根据所述第二状态查询请求向所述OMU发送第二状态信息,以供所述OMU根据所述第二状态信息从多个所述PFU中确定所述主用PFU;sending second status information to the OMU according to the second status query request, so that the OMU can determine the active PFU from a plurality of PFUs according to the second status information;
    接收所述OMU发送的所述主用PFU的确定结果,根据所述确定结果对所述目标标志位进行置位,将置位结果发送至所述OMU。receiving the determination result of the active PFU sent by the OMU, setting the target flag bit according to the determination result, and sending the setting result to the OMU.
  9. 根据权利要求7所述的报文发送方法,其中,所述根据所述第一状态信息确定所述后端负载的负载均衡结果,包括:The message sending method according to claim 7, wherein said determining the load balancing result of said backend load according to said first state information comprises:
    校验所述第一状态信息的准确性;verifying the accuracy of the first state information;
    当所述准确性表征所述第一状态信息不准确,将所述第一状态信息丢弃,重新向后端负载发送所述第一状态查询请求;When the accuracy indicates that the first state information is inaccurate, discarding the first state information, and resending the first state query request to the backend load;
    当所述准确性表征所述第一状态信息准确,对所述第一状态信息进行合并计算处理,根据合并计算处理后的所述第一状态信息确定所述后端负载的负载均衡结果。When the accuracy indicates that the first state information is accurate, performing combined calculation processing on the first state information, and determining a load balancing result of the backend load according to the combined calculated first state information.
  10. 根据权利要求9所述的报文发送方法,其中,所述校验所述第一状态信息的准确性,包括:The message sending method according to claim 9, wherein said verifying the accuracy of said first state information comprises:
    获取预先存储的所述后端负载的基础状态信息;Obtain pre-stored basic state information of the backend load;
    将所述基础状态信息与所述第一状态信息进行比对,根据比对结果确定所述第一状态信息的准确性。The basic state information is compared with the first state information, and the accuracy of the first state information is determined according to the comparison result.
  11. A message sending apparatus, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the message sending method according to any one of claims 1 to 10.
  12. A computer-readable storage medium, wherein the storage medium stores a program which, when executed by a processor, implements the message sending method according to any one of claims 1 to 10.
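To make the claimed procedure easier to follow, the sketch below walks through claims 7 to 10 from the side of the active PFU. It is a minimal illustration under stated assumptions: the class and method names, the message fields, the spare-capacity weighting used for the merge calculation, and the simplification of the re-query step are all hypothetical and are not taken from the application.

```python
# Illustrative sketch only: active-PFU side of the flow in claims 7-10.
from dataclasses import dataclass, field


@dataclass
class BackendLoad:
    """A back-end load that answers status query requests with its current usage."""
    name: str
    cpu: float                                       # current utilisation, 0.0 - 1.0
    base_status: dict = field(default_factory=dict)  # pre-stored basic status information

    def on_first_status_query(self) -> dict:
        return {"name": self.name, "cpu": self.cpu}  # first status information


class PFU:
    """A forwarding unit; the OMU marks exactly one of them as active."""

    def __init__(self, backends: list):
        self.backends = backends
        self.active_flag = False         # target flag bit (claim 8)
        self.balancing_result: dict = {}

    # claim 8: answer the OMU's status query, then set the flag bit as determined
    def on_second_status_query(self) -> dict:
        return {"healthy": True}         # second status information

    def on_omu_determination(self, chosen: bool) -> bool:
        self.active_flag = chosen        # set the target flag bit
        return self.active_flag          # setting result reported back to the OMU

    # claim 7: query all back-end loads, balance, send, synchronize
    def run_period(self, non_active_pfus: list) -> None:
        assert self.active_flag, "only the active PFU queries the back-end loads"
        replies = [(b, b.on_first_status_query()) for b in self.backends]
        # claims 9-10: keep only accurate replies (re-query simplified to a drop here)
        accurate = [s for b, s in replies if self._is_accurate(b, s)]
        self.balancing_result = self._merge_calculate(accurate)
        self._send_by_policy(self.balancing_result)              # first policy
        for pfu in non_active_pfus:                              # synchronize the result
            pfu.balancing_result = dict(self.balancing_result)
            pfu._send_by_policy(pfu.balancing_result)            # second policy

    # claim 10: compare against the pre-stored basic status information
    def _is_accurate(self, backend: BackendLoad, status: dict) -> bool:
        return all(status.get(k) == v for k, v in backend.base_status.items())

    # claim 9: merge calculation -> per-backend sending weights (assumed scheme)
    def _merge_calculate(self, statuses: list) -> dict:
        spare = {s["name"]: max(0.0, 1.0 - s["cpu"]) for s in statuses}
        total = sum(spare.values()) or 1.0
        return {name: free / total for name, free in spare.items()}

    def _send_by_policy(self, weights: dict) -> None:
        for name, weight in weights.items():
            print(f"send {weight:.0%} of messages to back-end load {name}")


if __name__ == "__main__":
    loads = [BackendLoad("lb-0", cpu=0.2, base_status={"name": "lb-0"}),
             BackendLoad("lb-1", cpu=0.8, base_status={"name": "lb-1"})]
    active, standby = PFU(loads), PFU(loads)
    active.on_omu_determination(True)    # OMU determination result: this PFU is active
    active.run_period([standby])
```

In a real deployment the status queries, the OMU interaction and the synchronization of the balancing result to the non-active PFUs would travel over inter-unit messaging rather than direct method calls; the sketch collapses that transport layer to keep the claimed steps visible.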
PCT/CN2022/136961 2022-02-18 2022-12-06 Message sending methods, message sending apparatus and storage medium WO2023155550A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210150418.5A CN116669108A (en) 2022-02-18 2022-02-18 Message sending method, message sending device and storage medium
CN202210150418.5 2022-02-18

Publications (1)

Publication Number Publication Date
WO2023155550A1 (en)

Family

ID=87577478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136961 WO2023155550A1 (en) 2022-02-18 2022-12-06 Message sending methods, message sending apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN116669108A (en)
WO (1) WO2023155550A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113399A (en) * 2019-04-24 2019-08-09 华为技术有限公司 Load balancing management method and relevant apparatus
CN110740164A (en) * 2019-09-04 2020-01-31 无锡华云数据技术服务有限公司 Server determination method, regulation and control method, device, equipment and storage medium
US20200314693A1 (en) * 2017-12-20 2020-10-01 Nokia Solutions And Networks Oy Method and Apparatus for User Transfer in a Cloud-Radio Access Network
CN112929408A (en) * 2021-01-19 2021-06-08 郑州阿帕斯数云信息科技有限公司 Dynamic load balancing method and device

Also Published As

Publication number Publication date
CN116669108A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
US10499279B2 (en) Method and apparatus for dynamic association of terminal nodes with aggregation nodes and load balancing
US10257265B2 (en) Redundancy network protocol system
US11734138B2 (en) Hot standby method, apparatus, and system
US6983294B2 (en) Redundancy systems and methods in communications systems
US9270536B2 (en) BGP slow peer detection
US8521884B2 (en) Network system and method of address resolution
WO2018077238A1 (en) Switch-based load balancing system and method
US7848338B2 (en) Network-based reliability of mobility gateways
US7257731B2 (en) System and method for managing protocol network failures in a cluster system
US20060182033A1 (en) Fast multicast path switching
EP3371940B1 (en) System and method for handling link loss in a network
WO2021004517A1 (en) Method, device and system for implementing core network sub-slice disaster recovery
US11252267B2 (en) Content stream integrity and redundancy system
US11546215B2 (en) Method, system, and device for data flow metric adjustment based on communication link state
CN112039710A (en) Service fault processing method, terminal device and readable storage medium
CN114389972B (en) Packet loss detection method and device and storage medium
CN110674096B (en) Node troubleshooting method, device and equipment and computer readable storage medium
US10205630B2 (en) Fault tolerance method for distributed stream processing system
WO2023155550A1 (en) Message sending methods, message sending apparatus and storage medium
CN109428814B (en) Multicast traffic transmission method, related equipment and computer readable storage medium
CN112104531B (en) Backup implementation method and device
US8437274B2 (en) Discovery of disconnected components in a distributed communication network
CN115277379B (en) Distributed lock disaster recovery processing method and device, electronic equipment and storage medium
US11563638B1 (en) Methods, systems, and computer readable media for optimizing network bandwidth utilization through intelligent updating of network function (NF) profiles with NF repository function
CN110336750B (en) Data forwarding method and device and service providing side edge equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22926853
    Country of ref document: EP
    Kind code of ref document: A1