CN107135278A - An efficient load balancer and load balancing system - Google Patents
- Publication number: CN107135278A (application CN201710548144.4A)
- Authority: CN (China)
- Prior art keywords: load balancer, packet, module, network interface card
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1014—Server selection for load balancing based on the content of a request
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
The present invention provides an efficient load balancer and a load balancing system, belonging to the field of communications control. The load balancer of the present invention is arranged on a network interface card (NIC); the input and output ends of the load balancer are connected to the NIC's input and output queues respectively. The load balancer is provided with a receive-and-parse module, a balancing module and a sending module: the parsing module receives packets from the NIC input queue and parses each packet's forwarding information; the balancing module evenly distributes the parsed packets into send queues; the sending module reads packets from the send queues and distributes them evenly to back-end servers through the NIC's output queues. The beneficial effect of the present invention is that packets need not undergo a complete TCP/IP protocol-stack parse: only the first few bytes need to be parsed, which greatly improves data-processing performance.
Description
Technical field
The present invention relates to the field of communications control, in particular to an efficient load balancer, and further to a load balancing system comprising said load balancer.
Background art
Load balancing usually involves a trade-off between flexibility and performance. At the user-space software level there are long-established load balancers such as HAProxy and Nginx, which are comparatively easy to configure and use; but because every packet must travel up layer by layer from the NIC through the kernel to the software, and then back down layer by layer to be forwarded, the stack is deep and single-machine performance is generally mediocre.
To improve single-machine performance by reducing the stack depth there is LVS (Linux Virtual Server), which is essentially a load balancer working in the kernel layer. Its performance is an order of magnitude better, but it is comparatively more complicated to configure and places special requirements on the network environment. In addition, before reaching LVS a packet still passes through the complete TCP/IP protocol stack: it traverses the NIC driver layer, the link layer, the IP layer, the TCP layer, the application layer and a series of kernel filter modules, most of which are unnecessary for mere forwarding.
Summary of the invention
To solve the problems in the prior art, the present invention provides an efficient load balancer, and also provides a load balancing system comprising the load balancer.
The efficient load balancer of the present invention is arranged on a NIC; the input and output ends of the load balancer are connected to the NIC's input and output queues respectively. The load balancer is provided with a receive-and-parse module, a balancing module and a sending module: the parsing module is used to receive packets from the NIC input queue and parse each packet's forwarding information; the balancing module is used to evenly distribute the parsed packets into send queues; the sending module is used to read packets from the send queues and distribute them evenly to back-end servers through the NIC's output queues.
As a further improvement, the receive-and-parse module parses only the first few bytes of each packet to extract the forwarding information, the forwarding information comprising the source address, source port, destination address, destination port and protocol number.
As a further improvement, the invention also includes a detection module, arranged between the receive-and-parse module and the balancing module, for inspecting packets before they enter the balancing module; if the inspection result for a packet is "discard", the packet does not enter the balancing module.
As a further improvement, the load balancer shares memory with the NIC: the load balancer and the NIC share one data-pool space, and the load balancer can read the NIC's memory directly.
As a further improvement, DPDK technology is used, with UIO support providing driver support in user space.
As a further improvement, a memory pool is included, and the memory pool uses lock-free ring-buffer management.
As a further improvement, when the load balancer is deployed on a server running the Linux operating system, the control-plane thread and each data-plane thread are bound to different CPU cores.
The present invention also provides a load balancing system comprising the load balancer, including a client, a domain-name resolution (DNS) server, routers and multiple servers; the client is connected to a router and to the DNS server respectively, and the load balancing system is arranged between the routers and the multiple servers.
As a further improvement, when a client sends a request packet, the DNS server returns, according to the position of the requesting client, the VIP address geographically nearest to the client; the request packet reaches the router corresponding to that VIP address; each router evenly distributes request packets across the multiple load balancers connected to it; and the balancing module of each load balancer in turn distributes the request packets evenly to the multiple servers connected to it.
As a further improvement, when forwarding a request packet to a server, the load balancer keeps the source address of the request packet as the address from which the user made the request; when the server responds, the source address of the response packet is set to the user's VIP address, and the response packet is sent directly to the router connected to the client without passing back through the load balancer.
Compared with the prior art, the beneficial effects of the invention are: a packet only needs to be picked up by the load balancer at the NIC driver layer; it does not need to enter the IP layer, the TCP layer or the application layer for a complete TCP/IP protocol-stack parse, and the invention parses only the first few bytes of each packet, greatly improving data-processing performance. The load balancer can be scaled both horizontally and vertically, effectively accelerating the flow of network packets.
Brief description of the drawings
Fig. 1 is a structural schematic of the load balancer of the present invention;
Fig. 2 is a schematic of load-balancer data processing;
Fig. 3 is a schematic of memory-data sharing between the load balancer and the NIC;
Fig. 4 is a structural schematic of the load balancing system of the present invention.
Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the present invention provides an efficient load balancer arranged on a NIC. The input and output ends of the load balancer are connected to the NIC's input and output queues respectively. The load balancer is provided with a receive-and-parse module, a balancing module and a sending module: the parsing module receives packets from the NIC input queue and parses each packet's forwarding information; the balancing module evenly distributes the parsed packets into send queues; the sending module reads packets from the send queues and distributes them evenly to back-end servers through the NIC's output queues.
In this embodiment the receive-and-parse module consists of the receive queues and a packet-header rewriting unit, while the balancing module includes a receive-IP matching unit, a connection/session management unit, a packet-reassembly unit, and so on. After a packet has been repackaged, it is sent out through the packet-sending unit.
With this arrangement, network data can bypass the kernel and need not pass through the kernel's series of modules; moreover, a packet need not traverse the NIC driver layer, link layer, IP layer, TCP layer and application layer for a complete TCP/IP protocol-stack parse. A packet in this embodiment only needs to be picked up by the load balancer at the NIC driver layer; the receive-and-parse module parses its first few bytes and extracts the forwarding information, effectively saving processing time and improving data-processing efficiency. The forwarding information of this embodiment is the five-tuple of source address, source port, destination address, destination port and protocol number, which is sufficient for forwarding; everything else, such as the payload and sequence numbers, is ignored entirely, and the packet is pushed straight out of the NIC output port to the back-end server.
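The "first few bytes" parse described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes an untagged Ethernet II frame carrying IPv4 with TCP or UDP, and reads the five-tuple from fixed header offsets.

```python
import struct

def parse_five_tuple(frame: bytes):
    """Extract (src_addr, src_port, dst_addr, dst_port, proto) from the
    leading bytes of an Ethernet II / IPv4 / TCP-or-UDP frame."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:              # only IPv4 in this sketch
        return None
    ihl = (frame[14] & 0x0F) * 4         # IPv4 header length in bytes
    proto = frame[23]                    # 6 = TCP, 17 = UDP
    src_addr = frame[26:30]
    dst_addr = frame[30:34]
    l4 = 14 + ihl                        # start of the transport header
    src_port, dst_port = struct.unpack_from("!HH", frame, l4)
    return (src_addr, src_port, dst_addr, dst_port, proto)
```

For a typical frame this touches only the first 38 bytes, which is the point the patent makes: the payload and the rest of the TCP state never need to be examined for forwarding.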
As shown in Fig. 2, in one embodiment of the present invention the packets are messages, and each stage of the load balancer is implemented by a corresponding thread. The data-processing flow of the load balancer of this embodiment is:
1. Receive: the NIC uses DMA (direct memory access) to place each received message into its own hardware receive ring buffer. The receive thread takes messages out of the hardware ring buffer, determines each message's type, and writes the type identification into the management structure rte_mbuf->pk.Hash.sched; this identification is in fact the message's subport, pipe, traffic_class, queue and color fields. Finally the message is staged in a receive queue, where it waits for the balancing/scheduling thread to dispatch it to the corresponding queue.
2. Enqueue: the balancing/scheduling thread takes a batch of messages out of the receive queue, finds the queue corresponding to each message's identification, assigns the message to that queue, and then activates the queue in the BIT map (a bitmap structure for processing large volumes of data).
Note that this embodiment also includes a detection module, arranged between the receive-and-parse module and the balancing module, for inspecting packets before they enter the balancing module; if the inspection result for a packet is "discard", the packet does not enter the balancing module.
For example, if a detection module is configured, packets must first be inspected before enqueueing. Typical conditions are that the data in a receive queue must not exceed a set value, or that the number of queued packets must not exceed a certain threshold; when these limits are reached while packets are being transmitted, the excess packets are simply discarded in order to preserve overall transmission efficiency.
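The threshold check above can be sketched as a bounded receive queue; the depth limit and the drop policy are illustrative assumptions, not values taken from the patent.

```python
from collections import deque

class BoundedReceiveQueue:
    """Receive queue that discards packets once a configured depth is
    reached, mirroring the detection module's decision before enqueue."""
    def __init__(self, max_depth: int):
        self.max_depth = max_depth
        self.queue = deque()
        self.dropped = 0

    def offer(self, pkt) -> bool:
        if len(self.queue) >= self.max_depth:   # inspection result: discard
            self.dropped += 1
            return False
        self.queue.append(pkt)
        return True
```

Dropping at the boundary keeps the balancing module from ever seeing packets the system has no capacity to forward.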
3. Dequeue: after the balancing/scheduling thread has enqueued a certain number of messages, it uses the configured balancing algorithm to select a queue, takes messages out of the selected queue, and saves them into the send queue to wait for the sending thread to send them.
The load-balancing algorithms of this embodiment include random selection, round-robin, weighted round-robin, least-connections, fastest-response and predictive algorithms. This embodiment preferably uses round-robin, which assigns packets to the connected servers evenly, in turn.
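The preferred round-robin choice can be sketched in a few lines (the server names are hypothetical placeholders):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Assign each packet to the next back-end server in fixed rotation."""
    def __init__(self, servers):
        self._next = cycle(servers).__next__

    def pick(self):
        return self._next()
```

With servers ["srv-a", "srv-b", "srv-c"], six consecutive picks cycle through the list twice, giving each server an equal share of the traffic.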
4. Send: the sending thread reads messages from the send queue and places them into the buffer of the outgoing NIC.
As shown in Fig. 3, to improve data-processing efficiency the load balancer of this embodiment shares memory with the NIC, so there is no need to copy a packet from the NIC to the load balancer and then copy it back from the load balancer to the NIC. The NIC input queue, the load balancer and the NIC output queue share one data-pool space; the load balancer can read the NIC's memory, and the three pointers of the load balancer's receive queue and send queue simply move through the shared memory pool as packets are processed, reducing the overhead of copying data.
The load balancer of this embodiment can be scaled vertically using DPDK (Data Plane Development Kit) technology. DPDK is an application development kit provided by Intel to speed up data-plane message processing; this load balancer mainly uses the following DPDK features to accelerate network packets:
1. Huge-page support, which improves memory-access efficiency;
2. UIO (Userspace I/O, an I/O technique running in user space) support, which provides driver support in user space; that is, the NIC driver runs in user space, eliminating repeated copies of each message between kernel space and application space;
3. Linux affinity support, which binds the control-plane thread and each data-plane thread to different CPU cores and saves the cost of scheduling threads back and forth between cores;
4. A memory pool managed as a lock-free ring buffer, which speeds up memory access.
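The lock-free ring in point 4 can be sketched as a single-producer/single-consumer ring. This is a simplification: DPDK's rte_ring also supports multi-producer and multi-consumer modes, and the power-of-two capacity with free-running indices used here is a standard SPSC technique rather than DPDK's exact code.

```python
class SpscRing:
    """Single-producer single-consumer ring: one thread enqueues, another
    dequeues, and neither takes a lock because each index has exactly one
    writer (tail for the producer, head for the consumer)."""
    def __init__(self, capacity_pow2: int):
        assert capacity_pow2 & (capacity_pow2 - 1) == 0, "capacity must be a power of two"
        self.mask = capacity_pow2 - 1
        self.slots = [None] * capacity_pow2
        self.head = 0   # advanced only by the consumer
        self.tail = 0   # advanced only by the producer

    def enqueue(self, item) -> bool:
        if self.tail - self.head > self.mask:   # ring is full
            return False
        self.slots[self.tail & self.mask] = item
        self.tail += 1                          # publish after the write
        return True

    def dequeue(self):
        if self.head == self.tail:              # ring is empty
            return None
        item = self.slots[self.head & self.mask]
        self.head += 1
        return item
```

The power-of-two capacity lets `index & mask` replace a modulo, and letting head/tail run freely distinguishes "full" from "empty" without a separate count.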
As shown in Fig. 4, this embodiment also provides a load balancing system comprising the load balancer, including a client, a DNS server (domain-name resolution server), routers and multiple servers; the client is connected to a router and to the DNS server respectively, and the load balancing system is arranged between the routers and the multiple servers. This realizes horizontal scaling of the load balancers and balanced distribution across the system as a whole, further improving data-processing efficiency. The client and the DNS server exchange DNS messages; and besides forwarding network traffic, each load balancer also sends BGP signalling to its router, BGP (Border Gateway Protocol) being an inter-autonomous-system routing protocol that runs over TCP.
In general, a load balancer is itself a horizontally scalable access point for back-end services, and for an ordinary website one load balancer is enough; but to handle very high traffic, the load balancer itself must also scale horizontally, and its high availability must be addressed. In this embodiment more than one load balancer is connected to each router, and each load balancer is connected to multiple servers. With multiple load balancers, the processing flow of this embodiment is:
When a client sends a request, the DNS server returns, according to the position of the requesting client, the VIP address geographically nearest to the request; this provides a first level of horizontal scaling at the DNS layer.
Then the request packet reaches the router corresponding to the VIP address. Via the ECMP (equal-cost multi-path) protocol, the router evenly distributes requests across the equivalent load balancers below it; this provides load balancing at the router layer and makes the load balancers themselves horizontally scalable. Finally, the balancing module of each load balancer distributes the request packets evenly to the servers connected to it.
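ECMP spreads traffic per flow, typically by hashing a packet's five-tuple modulo the number of equal-cost next hops, so every packet of one flow takes the same path. A minimal sketch (the CRC32 hash and balancer names are illustrative assumptions; real ECMP hashing is router-specific):

```python
import zlib

def ecmp_pick(five_tuple, balancers):
    """Pick a next-hop load balancer for a flow by hashing its five-tuple,
    so all packets of the same flow reach the same balancer."""
    key = repr(five_tuple).encode()
    return balancers[zlib.crc32(key) % len(balancers)]
```

Keeping a flow pinned to one balancer matters here because the balancer holds the connection/session state for that flow.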
Preferably, in the load-balancer processing stage, when forwarding a request packet to a server the load balancer of this embodiment keeps the source address of the request packet as the address from which the user made the request; when the server responds, the source address of the response packet is set to the user's VIP address, and the response packet is sent directly to the router connected to the client without passing back through the load balancer, thereby reducing the load on the balancer.
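The asymmetric return path above can be sketched with two address-rewrite steps (the addresses, the `Packet` model and the function names are hypothetical; a real implementation rewrites IP headers, not Python objects):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    payload: str

VIP = "203.0.113.9"          # illustrative VIP address

def balancer_forward(pkt: Packet, server: str) -> Packet:
    """Forward to the chosen back-end, keeping the user's address as the
    source so the server can answer the user directly."""
    return replace(pkt, dst=server)

def server_respond(req: Packet, body: str) -> Packet:
    """Respond with the VIP as source; the response goes straight toward
    the client-side router, bypassing the load balancer."""
    return Packet(src=VIP, dst=req.src, payload=body)
```

Because responses are usually much larger than requests, letting them bypass the balancer removes most of its traffic load.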
By scaling the load balancer both horizontally and vertically, the present invention effectively accelerates the flow of network packets.
The embodiments described above are preferred embodiments of the present invention and do not limit its specific scope of practice. The scope of the present invention includes, but is not limited to, these embodiments, and all equivalent variations made in accordance with the present invention fall within its scope of protection.
Claims (10)
1. An efficient load balancer, characterised in that: the load balancer is arranged on a network interface card (NIC); the input and output ends of the load balancer are connected to the NIC's input and output queues respectively; the load balancer is provided with a receive-and-parse module, a balancing module and a sending module; the parsing module is used to receive packets from the NIC input queue and parse each packet's forwarding information; the balancing module is used to evenly distribute the parsed packets into send queues; and the sending module is used to read packets from the send queues and distribute them evenly to back-end servers through the NIC's output queues.
2. The load balancer according to claim 1, characterised in that: the receive-and-parse module parses only the first few bytes of each packet to extract the forwarding information, the forwarding information comprising the source address, source port, destination address, destination port and protocol number.
3. The load balancer according to claim 1, characterised in that: it further includes a detection module, arranged between the receive-and-parse module and the balancing module, for inspecting packets entering the balancing module; if the inspection result for a packet is "discard", the packet does not enter the balancing module.
4. The load balancer according to any one of claims 1-3, characterised in that: the load balancer shares memory with the NIC; the load balancer and the NIC share one data-pool space, and the load balancer can read the NIC's memory.
5. The load balancer according to claim 4, characterised in that: DPDK technology is used, with UIO support providing driver support in user space.
6. The load balancer according to claim 5, characterised in that: it includes a memory pool, and the memory pool uses lock-free ring-buffer management.
7. The load balancer according to claim 5, characterised in that: when the load balancer is deployed on a server running the Linux operating system, the control-plane thread and each data-plane thread are bound to different CPU cores.
8. A load balancing system comprising the load balancer of any one of claims 1-7, characterised in that: it includes a client, a domain-name resolution server, routers and multiple servers; the client is connected to a router and to the domain-name resolution server respectively, and the load balancing system is arranged between the routers and the multiple servers.
9. The load balancing system according to claim 8, characterised in that: when a client sends a request packet, the domain-name resolution server returns, according to the position of the requesting client, the VIP address geographically nearest to the client; the request packet reaches the router corresponding to the VIP address; the router evenly distributes request packets across the multiple load balancers connected to it; and the balancing module of each load balancer distributes the request packets evenly to the multiple servers connected to it.
10. The load balancing system according to claim 9, characterised in that: when forwarding a request packet to a server, the load balancer keeps the source address of the request packet as the address from which the user made the request; when the server responds, the source address of the response packet is set to the user's VIP address, and the response packet is sent directly to the router connected to the client without passing back through the load balancer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710548144.4A CN107135278A (en) | 2017-07-06 | 2017-07-06 | An efficient load balancer and load balancing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107135278A true CN107135278A (en) | 2017-09-05 |
Family
ID=59737518
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108536535A (en) * | 2018-01-24 | 2018-09-14 | 北京奇艺世纪科技有限公司 | A kind of dns server and its thread control method and device |
CN110611622A (en) * | 2018-06-15 | 2019-12-24 | 伊姆西Ip控股有限责任公司 | Method for load balancing, network interface card and computer program product |
CN110768994A (en) * | 2019-10-30 | 2020-02-07 | 中电福富信息科技有限公司 | Method for improving SIP gateway performance based on DPDK technology |
CN111294220A (en) * | 2018-12-07 | 2020-06-16 | 网宿科技股份有限公司 | Network isolation configuration method and device based on nginx |
CN111371804A (en) * | 2020-03-16 | 2020-07-03 | 广州根链国际网络研究院有限公司 | DNS (Domain name Server) back-end forwarding method and system |
CN111490947A (en) * | 2019-01-25 | 2020-08-04 | 上海哔哩哔哩科技有限公司 | Data packet transmitting method, data packet receiving method, system, device and medium |
CN111984415A (en) * | 2020-08-24 | 2020-11-24 | 北京亚鸿世纪科技发展有限公司 | Load balancing method and device based on pipeline forwarding model |
CN112688878A (en) * | 2020-11-25 | 2021-04-20 | 紫光云技术有限公司 | Method for achieving tcp load balancing of transparent transmission source ip based on haproxy |
CN112737966A (en) * | 2020-12-23 | 2021-04-30 | 北京浪潮数据技术有限公司 | Load balancing method and related device |
CN113765785A (en) * | 2021-08-19 | 2021-12-07 | 东北大学 | DPDK-based novel multi-path transmission scheme |
CN114095251A (en) * | 2021-11-19 | 2022-02-25 | 南瑞集团有限公司 | SSLVPN realization method based on DPDK and VPP |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101778050A (en) * | 2010-03-11 | 2010-07-14 | 浪潮(北京)电子信息产业有限公司 | Load balancing method, device and system |
CN102497322A (en) * | 2011-12-19 | 2012-06-13 | 曙光信息产业(北京)有限公司 | High-speed packet filtering device and method realized based on shunting network card and multi-core CPU (Central Processing Unit) |
CN102769575A (en) * | 2012-08-08 | 2012-11-07 | 南京中兴特种软件有限责任公司 | Flow load balancing method for intelligent network card |
CN103853617A (en) * | 2014-02-28 | 2014-06-11 | 中国人民解放军91655部队 | Method for improving data processing performance of Linux system application layer |
Non-Patent Citations (1)
Title |
---|
Jiang Wenxu: "Design and Implementation of a Web Acceleration Scheme Based on an Nginx Deployment Environment", China Masters' Theses Full-text Database *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |

Application publication date: 20170905