CN114500544A - Method, system, equipment and medium for load balancing among nodes


Info

Publication number: CN114500544A
Application number: CN202210076080.3A
Authority: CN (China)
Prior art keywords: node, message, size, unprocessed data, unprocessed
Other languages: Chinese (zh)
Inventors: 刘同强, 王朝辉, 刘刚, 周玉龙
Current/Original Assignee: Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority/Filing date: 2022-01-23 (CN202210076080.3A)
Publication date: 2022-05-13 (CN114500544A)
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an inter-node load balancing method comprising the following steps: in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, sending a first message carrying the size of the unprocessed data to the second node; in response to the second node receiving the first message, determining the size of the unprocessed data carried in the first message; and in response to the size of the unprocessed data being in a first preset interval, the second node sending, to the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time. The invention also discloses a system, a computer device and a readable storage medium. The scheme provided by the invention uses the first message and the second message to regulate load according to the congestion condition of each node.

Description

Method, system, equipment and medium for load balancing among nodes
Technical Field
The present invention relates to the field of load balancing, and in particular, to a method, a system, a device, and a storage medium for load balancing between nodes.
Background
With the rapid development of information technology and application scenarios, data volumes are growing explosively, and processing this data quickly has become a prominent problem. At the same time, the requirement for real-time data processing places higher demands on computing performance, and cooperative processing of massive data by multiple CPUs has been a research hotspot in recent years.
In existing NUMA (Non-Uniform Memory Access) technology, CPUs are grouped: for example, with 4 CPUs per group and 16 CPUs in total, the system is divided into 4 groups, each of which is called a node. Nodes are connected to one another through an interconnection module. Each node has its own resources, such as independent local memory, but each node can also access the memory and other resources of other nodes through the interconnection module.
However, although existing hardware NUMA technology offers high performance, it is transparent to upper-layer software, so the load on the individual CPUs easily becomes unbalanced.
Disclosure of Invention
In view of the above, in order to overcome at least one aspect of the above problems, an embodiment of the present invention provides an inter-node load balancing method, including the following steps:
in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, sending a first message carrying the size of the unprocessed data to the second node;
in response to the second node receiving the first message, determining the size of the unprocessed data carried in the first message; and
in response to the size of the unprocessed data being in a first preset interval, the second node sending, to all the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time.
In some embodiments, the method further includes:
in response to the size of the unprocessed data being in a second preset interval, sending a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
In some embodiments, the method further includes:
in response to the size of the unprocessed data being in a third preset interval, raising the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
In some embodiments, the method further includes:
in response to the second node receiving second messages sent by other nodes, determining the priority of the unprocessed requests corresponding to each such node according to the size of the unprocessed data carried in its second message.
In some embodiments, sending, in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, a first message carrying the size of the unprocessed data to the second node further includes:
mapping the size of the unprocessed data to a state value corresponding to a preset interval; and
converting the state value to binary and writing it into a preset position of the first message.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides an inter-node load balancing system, including:
the first message module is configured to send, in response to the first node detecting that the size of unprocessed data belonging to the second node is greater than a threshold, a first message carrying the size of the unprocessed data to the second node;
the judging module is configured to determine, in response to the second node receiving the first message, the size of the unprocessed data carried in the first message; and
the second message module is configured to, in response to the size of the unprocessed data being in a first preset interval, cause the second node to send, to all the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time.
In some embodiments, the second message module is further configured to:
send, in response to the size of the unprocessed data being in a second preset interval, a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
In some embodiments, the second message module is further configured to:
raise, in response to the size of the unprocessed data being in a third preset interval, the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program executable on the processor, wherein the processor, when executing the program, performs the steps of any of the inter-node load balancing methods described above.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, performs the steps of any one of the above-described methods for load balancing between nodes.
The invention has the following beneficial technical effect: the provided scheme uses the first message and the second message to regulate load according to the congestion condition of each node.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for load balancing between nodes according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a message structure provided in the embodiment of the present invention;
FIG. 3 is a schematic diagram of a node connection structure provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an inter-node load balancing system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computer device provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share a name but are not identical; "first" and "second" are used merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this is not repeated in the following embodiments.
According to an aspect of the present invention, an embodiment of the present invention provides an inter-node load balancing method, as shown in fig. 1, which may include the steps of:
s1, responding to the fact that the size of unprocessed data belonging to a second node is larger than a threshold value when the first node detects that the size of the unprocessed data is larger than the threshold value, and sending a first message carrying the size of the unprocessed data to the second node;
s2, responding to the first message received by the second node, and judging the size of the unprocessed data carried in the first message;
s3, responding to the size of the unprocessed data in a first preset interval, the second node sends a second message carrying the message sending delay time to all the rest nodes so that all the rest nodes send data messages to the second node according to the corresponding message sending delay time.
The scheme provided by the invention uses the first message and the second message to regulate load according to the congestion condition of each node.
In the embodiment of the invention, the first message may use the Req channel, with its opcode taken from the reserved range 3B-3F; at present only the single opcode 3B is used. The second message may use the Snp channel, with its opcode taken from the reserved range 18-1F; at present only the single opcode 18 is used. The message format may be as shown in fig. 2, where TgtID denotes the target ID, SrcID denotes the source (requester) ID, Opcode denotes the request type, State denotes the status, and Reserved denotes reserved bits. In the first message, the state field represents the node's resource usage; in the second message, it represents how long the node receiving the Snp message should delay sending messages.
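As a concrete illustration, the following C sketch models the fields named above; the field widths and the struct layout are assumptions for illustration only, since the embodiment fixes only the field names and the reserved opcode ranges.

    #include <stdint.h>

    /* Illustrative layout of the first/second message described above.
     * Field widths are assumed; the embodiment only names the fields. */
    #define OPCODE_FIRST_MSG  0x3B  /* Req channel, reserved range 3B-3F */
    #define OPCODE_SECOND_MSG 0x18  /* Snp channel, reserved range 18-1F */

    typedef struct {
        uint8_t  tgt_id;    /* TgtID: target node ID */
        uint8_t  src_id;    /* SrcID: source (requester) node ID */
        uint8_t  opcode;    /* Opcode: request type */
        uint8_t  state;     /* State: resource usage (first message) or delay period (second message) */
        uint32_t reserved;  /* Reserved bits */
    } lb_message_t;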
In some embodiments, in step S1, sending, in response to the first node detecting that the size of the unprocessed data belonging to the second node is greater than the threshold, a first message carrying the size of the unprocessed data to the second node further includes:
mapping the size of the unprocessed data to a state value corresponding to a preset interval; and
converting the state value to binary and writing it into a preset position of the first message.
Specifically, when a node detects that the size of unprocessed data belonging to another node reaches a threshold, or that the resource occupancy exceeds the threshold, it sends a first message.
For example, in the structure shown in fig. 3, when node chip 0 detects that the occupancy of its internal resources by data belonging to node chip 3 exceeds 50%, it sends a first message to node chip 3, with the state field mapped to a value in 0-15 according to the occupancy rate.
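A minimal sketch of this detection step, assuming a hypothetical send_first_message() transport helper and treating the occupancy as a fraction between 0 and 1; the 50% threshold and the 0-15 mapping follow the example above.

    #include <stdint.h>

    /* Hypothetical transport hook: emits a first message (Req channel, opcode 3B)
     * carrying the state value to the target node. */
    void send_first_message(uint8_t tgt_id, uint8_t src_id, uint8_t state);

    /* Map a resource occupancy in [0.0, 1.0] onto the state field (0-15). */
    static uint8_t occupancy_to_state(double occupancy)
    {
        int state = (int)(occupancy * 16.0);
        return (uint8_t)(state > 15 ? 15 : state);
    }

    /* Node chip 0 monitors the resources occupied by data belonging to node chip 3;
     * once occupancy passes the 50% threshold it notifies node chip 3. */
    void check_and_notify(uint8_t self_id, uint8_t peer_id, double occupancy)
    {
        if (occupancy > 0.5)
            send_first_message(peer_id, self_id, occupancy_to_state(occupancy));
    }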
In some embodiments, the method further includes:
in response to the size of the unprocessed data being in a second preset interval, sending a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
In some embodiments, the method further includes:
in response to the size of the unprocessed data being in a third preset interval, raising the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
Specifically, after the second node receives the first message, it can decide whether to send degrade messages to the other nodes according to the value of the state field in the first message. For example, when the state value is in the first preset interval, degrade messages may be sent to all other nodes; when it is in the second preset interval, a degrade message may be sent only to the node with the smaller number of pending requests; and when it is in the third preset interval, no degrade message is sent, and only the processing priority of the first node's requests is raised. The delay time carried in the degrade message may be the state value of the first message multiplied by a preset value.
For example, as shown in fig. 3, node chip 3 receives a first message sent by node 0 and checks whether nodes 1 and 2 have also sent first messages. If they have not, the chip processes node 0's data request messages preferentially and takes measures according to the state value in node 0's first message: if the state value is 0-7, no Snp degrade message is sent to the other nodes; if the state value is 8-11, the numbers of data request messages from chips 1 and 2 held inside the node chip are compared, and a degrade message is sent to the chip with the smaller share; if the state value is greater than 11, degrade messages are sent to both node 1 and node 2. The state value carried in the degrade message may be 4.
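A sketch of that decision, assuming hypothetical send_degrade_message() and raise_request_priority() hooks and a per-node count of pending requests; the interval boundaries (0-7, 8-11, above 11) and the fixed state value 4 in the degrade message follow the example above.

    #include <stdint.h>

    #define DEGRADE_STATE 4  /* state value carried in the degrade (second) message */

    /* Hypothetical transport hook for the second message (Snp channel, opcode 18). */
    void send_degrade_message(uint8_t tgt_id, uint8_t src_id, uint8_t state);
    /* Hypothetical scheduler hook: raise the priority of pending requests from a node. */
    void raise_request_priority(uint8_t node_id);

    /* The congested node (self) reacts to a first message from 'requester';
     * node_a and node_b are the two remaining nodes, and pending[] counts the
     * data request messages queued from each node. */
    void handle_first_message(uint8_t self, uint8_t requester, uint8_t state,
                              uint8_t node_a, uint8_t node_b,
                              const unsigned pending[4])
    {
        raise_request_priority(requester);  /* favour the sender of the first message */

        if (state <= 7) {
            /* third case: no degrade messages, the priority boost is enough */
        } else if (state <= 11) {
            /* second case: throttle only the node with fewer pending requests */
            uint8_t lighter = (pending[node_a] <= pending[node_b]) ? node_a : node_b;
            send_degrade_message(lighter, self, DEGRADE_STATE);
        } else {
            /* first case: throttle both remaining nodes */
            send_degrade_message(node_a, self, DEGRADE_STATE);
            send_degrade_message(node_b, self, DEGRADE_STATE);
        }
    }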
In some embodiments, the method further includes:
in response to the second node receiving second messages sent by other nodes, determining the priority of the unprocessed requests corresponding to each such node according to the size of the unprocessed data carried in its second message.
Specifically, when other nodes have also sent first messages to the second node, the processing priority is determined by the state value in each first message: the higher the state value, the higher the priority.
For example, as shown in fig. 3, if nodes 1 and 2 have also sent first messages, priorities are sorted from high to low by the state value in each first message, and requests from nodes that did not send a first message have the lowest priority. Measures are then taken in the same way according to the state value: if the state value is 0-7, no Snp degrade message is sent to the other nodes; if the state value is 8-11, a degrade message is sent to the node chips that did not send a request. The state value carried in the degrade message may be 4.
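A sketch of this priority rule, assuming the most recent first-message state value is tracked per peer (0 when no first message has been received, which sorts last); peers are then served in descending state order.

    #include <stdint.h>
    #include <stdlib.h>

    #define NUM_NODES 4

    /* Most recent first-message state value per peer; 0 means no first message
     * has been received, so such peers end up with the lowest priority. */
    static uint8_t peer_state[NUM_NODES];

    typedef struct {
        uint8_t node_id;
        uint8_t state;
    } peer_prio_t;

    static int by_state_desc(const void *a, const void *b)
    {
        const peer_prio_t *pa = a, *pb = b;
        return (int)pb->state - (int)pa->state;
    }

    /* Fill 'order' with the peer node IDs, highest state value first. */
    void rank_peers(uint8_t self, uint8_t order[NUM_NODES - 1])
    {
        peer_prio_t prio[NUM_NODES - 1];
        int n = 0;
        for (uint8_t id = 0; id < NUM_NODES; id++)
            if (id != self)
                prio[n++] = (peer_prio_t){ .node_id = id, .state = peer_state[id] };
        qsort(prio, (size_t)n, sizeof(prio[0]), by_state_desc);
        for (int i = 0; i < n; i++)
            order[i] = prio[i].node_id;
    }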
The other nodes receive the degrade messages and delay sending their data transmission messages to node chip 3 according to the delay carried in the Snp messages.
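A sketch of that receiving side, assuming hypothetical wait_cycles() and send_data_message() platform hooks; the delay is taken as the state value in the Snp (degrade) message multiplied by an assumed preset unit, as described above.

    #include <stdint.h>

    #define DELAY_UNIT_CYCLES 100  /* assumed preset value; the embodiment leaves it open */

    /* Hypothetical platform hooks. */
    void wait_cycles(unsigned cycles);
    void send_data_message(uint8_t tgt_id, const void *payload, unsigned len);

    /* On receiving a degrade (second) message, hold back the next data message
     * to the congested node for state * DELAY_UNIT_CYCLES cycles, then send it. */
    void handle_degrade_message(uint8_t congested_node, uint8_t state,
                                const void *payload, unsigned len)
    {
        wait_cycles((unsigned)state * DELAY_UNIT_CYCLES);
        send_data_message(congested_node, payload, len);
    }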
The scheme provided by the invention uses the first message and the second message to regulate load according to the congestion condition of each node.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides an inter-node load balancing system 400, as shown in fig. 4, including:
the first message module 401 is configured to send, in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, a first message carrying the size of the unprocessed data to the second node;
the judging module 402 is configured to determine, in response to the second node receiving the first message, the size of the unprocessed data carried in the first message; and
the second message module 403 is configured to, in response to the size of the unprocessed data being in a first preset interval, cause the second node to send, to all the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time.
In some embodiments, the second message module 403 is further configured to:
send, in response to the size of the unprocessed data being in a second preset interval, a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
In some embodiments, the second message module 403 is further configured to:
raise, in response to the size of the unprocessed data being in a third preset interval, the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
In some embodiments, the second message module 403 is further configured to:
determine, in response to the second node receiving second messages sent by other nodes, the priority of the unprocessed requests corresponding to each such node according to the size of the unprocessed data carried in its second message.
In some embodiments, the first message module 401 is further configured to:
map the size of the unprocessed data to a state value corresponding to a preset interval; and
convert the state value to binary and write it into a preset position of the first message.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 5, an embodiment of the present invention further provides a computer apparatus 501, comprising:
at least one processor 520; and
a memory 510, which stores a computer program 511 executable on the processor; the processor 520 executes the program to perform the steps of any of the above inter-node load balancing methods.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 6, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610, and the computer program instructions 610, when executed by a processor, perform the steps of any one of the above methods for load balancing between nodes.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the embodiments of the invention, technical features of the above embodiments or of different embodiments may also be combined, and many other variations of the different aspects of the embodiments exist which are not described in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for balancing load among nodes is characterized by comprising the following steps:
in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, sending a first message carrying the size of the unprocessed data to the second node;
in response to the second node receiving the first message, determining the size of the unprocessed data carried in the first message; and
in response to the size of the unprocessed data being in a first preset interval, the second node sending, to all the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time.
2. The method of claim 1, further comprising:
in response to the size of the unprocessed data being in a second preset interval, sending a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
3. The method of claim 1, further comprising:
in response to the size of the unprocessed data being in a third preset interval, raising the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
4. The method of claim 1, further comprising:
in response to the second node receiving second messages sent by other nodes, determining the priority of the unprocessed requests corresponding to each such node according to the size of the unprocessed data carried in its second message.
5. The method of claim 1, wherein sending, in response to a first node detecting that the size of unprocessed data belonging to a second node is greater than a threshold, a first message carrying the size of the unprocessed data to the second node further comprises:
mapping the size of the unprocessed data to a state value corresponding to a preset interval; and
converting the state value to binary and writing it into a preset position of the first message.
6. An inter-node load balancing system, comprising:
the first message module is configured to send, in response to the first node detecting that the size of unprocessed data belonging to the second node is greater than a threshold, a first message carrying the size of the unprocessed data to the second node;
the judging module is configured to determine, in response to the second node receiving the first message, the size of the unprocessed data carried in the first message; and
the second message module is configured to, in response to the size of the unprocessed data being in a first preset interval, cause the second node to send, to all the remaining nodes, second messages carrying a message-sending delay time, so that the remaining nodes delay the data messages they send to the second node by the corresponding delay time.
7. The system of claim 6, wherein the second messaging module is further configured to:
send, in response to the size of the unprocessed data being in a second preset interval, a second message carrying a message-sending delay time to a corresponding node, selected according to the remaining unprocessed requests in the second node, so that the corresponding node delays the data messages it sends to the second node by the corresponding delay time.
8. The system of claim 6, wherein the second messaging module is further configured to:
raise, in response to the size of the unprocessed data being in a third preset interval, the priority, among the remaining unprocessed requests in the second node, of the requests sent by the first node, so that the second node processes the requests sent by the first node preferentially.
9. A computer device, comprising:
at least one processor; and
a memory storing a computer program executable on the processor, characterized in that the processor, when executing the program, performs the steps of the method according to any one of claims 1-4.
10. A computer-readable storage medium in which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1-4.
CN202210076080.3A (priority date 2022-01-23, filing date 2022-01-23): Method, system, equipment and medium for load balancing among nodes, published as CN114500544A (en), status Pending

Priority Applications (1)

CN202210076080.3A, priority date 2022-01-23, filing date 2022-01-23: Method, system, equipment and medium for load balancing among nodes


Publications (1)

CN114500544A, published 2022-05-13

Family

ID=81472813

Family Applications (1)

CN202210076080.3A (pending, published as CN114500544A), priority date 2022-01-23, filing date 2022-01-23: Method, system, equipment and medium for load balancing among nodes

Country Status (1)

CN: CN114500544A (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090023455A1 (en) * 2007-07-16 2009-01-22 Shishir Gupta Independent Load Balancing for Servers
CN103560947A (en) * 2013-10-25 2014-02-05 福建星网锐捷网络有限公司 Method and device for avoiding vibration of neighboring relationship of intermediate system
US20160092258A1 (en) * 2014-09-30 2016-03-31 Vmware, Inc. Numa i/o aware network queue assignments
CN107547321A (en) * 2017-06-28 2018-01-05 新华三技术有限公司 Message processing method, device, associated electronic device and readable storage medium storing program for executing
US20190079805A1 (en) * 2017-09-08 2019-03-14 Fujitsu Limited Execution node selection method and information processing apparatus
WO2019242638A1 (en) * 2018-06-20 2019-12-26 华为技术有限公司 Load balancing method and device
US20210314837A1 (en) * 2018-08-08 2021-10-07 Zte Corporation Load Balancing Method and Device, Storage Medium, and Electronic Device
CN111078363A (en) * 2019-12-18 2020-04-28 深信服科技股份有限公司 NUMA node scheduling method, device, equipment and medium for virtual machine
CN113411263A (en) * 2021-06-18 2021-09-17 中国工商银行股份有限公司 Data transmission method, device, equipment and storage medium
CN113806045A (en) * 2021-09-12 2021-12-17 济南浪潮数据技术有限公司 Task allocation method, system, device and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733710A (en) * 2022-11-18 2023-03-03 苏州挚途科技有限公司 Message sending method, target node, non-target node and message transmission system
CN115733710B (en) * 2022-11-18 2024-04-26 苏州挚途科技有限公司 Message sending method, target node, non-target node and message transmission system

Similar Documents

Publication Publication Date Title
TWI477127B (en) Computer-implemented method,machine-readable medium and client device for scheduling packet transmission
CN101616083B (en) Message forwarding method and device
US8121035B2 (en) Apparatus and method for packet buffer management in IP network system
CN105812287A (en) Effective circuits in packet-switched networks
US20090213732A1 (en) Preserving packet order when migrating network flows between cores
CN110995616B (en) Management method and device for large-flow server and readable medium
CN111835562B (en) Log uploading method, log inquiring method and system
CN102891809B (en) Multi-core network device message presses interface order-preserving method and system
CN114500544A (en) Method, system, equipment and medium for load balancing among nodes
WO2021197128A1 (en) Traffic rate-limiting method and apparatus
WO2021259321A1 (en) Storage scheduling method, device, and storage medium
CN116414534A (en) Task scheduling method, device, integrated circuit, network equipment and storage medium
CN113453315A (en) Terminal access method, device and storage medium
CN104038441A (en) Method and system for transmitting data
CN117032977A (en) Mixed part application resource allocation method and device, computer equipment and storage medium
CN115955447A (en) Data transmission method, switch and switch system
JP2011091711A (en) Node, method for distributing transmission frame, and program
CN111131083B (en) Method, device and equipment for data transmission between nodes and computer readable storage medium
CN114499762A (en) Communication system, multi-path forwarding method under 5G network and communication equipment
CN114531401A (en) Token bucket flow limiting method based on priority
CN113098763A (en) Instant communication message sending method, device, storage medium and equipment
CN112019589A (en) Multi-level load balancing data packet processing method
CN105072047A (en) Message transmitting and processing method
WO2023184979A1 (en) Base station energy-saving method for ultra-dense network, energy-saving device, and readable storage medium
RU2772710C2 (en) Method for processing request and corresponding object

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination