CN109729017B - Load balancing method based on DPI prediction - Google Patents


Info

Publication number
CN109729017B
Authority
CN
China
Prior art keywords
data
flow
load balancing
node
time
Prior art date
Legal status
Active
Application number
CN201910196102.8A
Other languages
Chinese (zh)
Other versions
CN109729017A (en)
Inventor
玄世昌
杨武
王巍
苘大鹏
吕继光
杨茂深
于成鑫
王还红
袁玉同
任天朋
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910196102.8A priority Critical patent/CN109729017B/en
Publication of CN109729017A publication Critical patent/CN109729017A/en
Application granted granted Critical
Publication of CN109729017B publication Critical patent/CN109729017B/en

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the field of network technology research, and particularly relates to a load balancing method based on DPI prediction, which comprises the following steps: dividing the input traffic into individual flows according to the five-tuple and sequence number, and determining which application the data belongs to by analyzing the protocol and application characteristics with DPI; determining which application's characteristics the flow matches, and selecting the correct application feature library model for matching; computing a hash value from the data flow information to query the history log, and estimating the current traffic to obtain a predicted traffic value. The invention uses a dynamic monitoring system to obtain the information of the subordinate nodes and to control the server state in real time. By pre-allocating the traffic with the predicted value and the real-time node information, the complexity of the load balancing strategy is reduced and the computation is shifted to load balancing prediction, so that an overall optimization of the load balancing strategy is achieved.

Description

Load balancing method based on DPI prediction
Technical Field
The invention belongs to the field of network technology research, and particularly relates to a load balancing method based on DPI prediction.
Background
The balancing effect of load balancing at the network nodes and the length of the processing time largely determine the performance of the whole network system. With the extensive research on load balancing technology, many different methods have been proposed. For example, a load balancing strategy based on a DNS proxy achieves a balancing effect by substituting the responding server, but it does not consider the state of each server, cannot respond quickly to unexpected situations, and does not take the data request itself into account. Another approach connects a device in series in the line to obtain all traffic passing through it, parses the data link layer, network layer, transport layer, application layer, and so on of each packet with matching techniques, and then forwards the traffic according to a configured distribution strategy. Because this method analyzes and evaluates the data content, its distribution effect is better, but owing to the serial structure the data processing time affects the overall network performance.
Many patent documents on load balancing methods have their own advantages and disadvantages. For example, in the "application layer traffic load balancing method based on DPI" disclosed in patent document No. 201710968835.X, the device is connected in series between the client and the server, analyzes the traffic by splitting or mirroring it, outputs a dynamic priority as one basis for load balancing, and selects the optimal service server in combination with a static priority. However, analyzing the traffic by splitting or mirroring takes too long, and the serial part needs the dynamic priority as a parameter, so a waiting dependency arises between the two modules and the overall performance suffers. Furthermore, what content is parsed with the DPI technique is not made clear.
Disclosure of Invention
The invention aims to provide a load balancing method with better performance.
A load balancing method based on DPI prediction comprises the following steps:
(1) Divide the input traffic into individual flows according to the five-tuple and sequence number, and determine which application the data belongs to by analyzing the protocol and application characteristics with DPI;
(2) Determine which application's characteristics the flow matches, and select the correct application feature library model for matching;
(3) Compute a hash value from the data flow information, query the history log, and estimate the current traffic to obtain a predicted traffic value;
(4) Monitor the subordinate nodes of the load balancing device in real time with the dynamic node monitor, and feed back the current memory occupancy of each node;
(5) Analyze the feedback result, perform pre-allocation with the predicted value obtained in step (3), and select a suitable node;
(6) Send the data stream, and update the actual traffic size, resource name, source address, and other information into the application feature library;
(7) Repeat steps (1)-(6) until all traffic has been sent.
The dividing of the input traffic into individual flows according to the five-tuple and sequence number, and the determining of which application the data belongs to by analyzing the protocol and application characteristics with DPI, comprise:
accepting the traffic input to the load balancing device; distinguishing flows according to the five-tuple of source IP, destination IP, source port, destination port, and protocol; dividing different flows of the same IP/port pair by sequence number; using the five-tuple and sequence number as the hash key value; and storing the record between the network layer header and the transport layer header. A data flow that satisfies the feature model R is defined as predictable data e, with the feature model R:
R = {r_1, r_2, …, r_{n-1}, r_n}
where r_n is a characteristic element.
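As an illustrative, non-authoritative sketch of this step, the following Python fragment derives a hash key from the five-tuple and sequence number; the FlowKey field names and the use of MD5 are assumptions for the example only.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """Hypothetical flow key: the five-tuple plus the sequence number (step 1)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    seq: int

def flow_hash(key: FlowKey) -> str:
    """Derive the hash key value for the flow record from the five-tuple and sequence number."""
    raw = f"{key.src_ip}|{key.dst_ip}|{key.src_port}|{key.dst_port}|{key.protocol}|{key.seq}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# Two flows on the same IP/port pair but with different sequence numbers
# are kept apart, as described above.
k1 = flow_hash(FlowKey("10.0.0.1", "10.0.0.2", 40000, 80, "TCP", 1))
k2 = flow_hash(FlowKey("10.0.0.1", "10.0.0.2", 40000, 80, "TCP", 2))
assert k1 != k2
```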
The determining of which application's characteristics the flow matches, and the selecting of the correct application feature library model for matching, comprise:
when the predictable data e satisfies the features of application A under multi-pattern matching, G(e, A) is determined to be associated, that is
[formula shown as an image in the original patent: definition of the association G(e, A)]
where a_i is one of the traffic models of application A, i.e., application A traffic that satisfies characteristic i, and e belongs to the application A traffic.
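The patent does not fix a particular multi-pattern matching algorithm, so the following minimal Python sketch simply checks that every feature element of a model a_i appears in the payload; the function names and example feature strings are hypothetical, and a production implementation would more likely use an Aho-Corasick automaton to match all patterns in one pass.

```python
def matches_feature_model(payload: bytes, feature_model: list[bytes]) -> bool:
    """True when every feature element of the model appears in the payload."""
    return all(feature in payload for feature in feature_model)

def associate(payload: bytes, app_models: dict[str, list[bytes]]) -> str | None:
    """Hypothetical G(e, A): return the first traffic model a_i of application A
    satisfied by the data e, or None when e is not predictable for A."""
    for model_name, features in app_models.items():
        if matches_feature_model(payload, features):
            return model_name
    return None

# Made-up feature strings for an application "A".
models_a = {"a_1": [b"GET ", b"Host:", b".mp4"], "a_2": [b"POST ", b"upload"]}
print(associate(b"GET /video/x.mp4 HTTP/1.1\r\nHost: cdn.example\r\n", models_a))  # -> a_1
```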
The computing of a hash value from the data flow information to query the history log, and the estimating of the current traffic to obtain a predicted traffic value, comprise:
using a_i as the feature model, acquiring the key information in the data e, including the resource name, the resource features L = {l_1, l_2, …, l_{n-1}, l_n}, the resource source name, the resource source IP sip, the resource type, the resource size, the resource remark note, and the resource acquisition time t; computing a hash value to query the contents of the history log LOG, obtaining the information of resources of the same kind, and performing predictive analysis of the current traffic to obtain the predicted traffic value Y:
[formula shown as an image in the original patent: predicted value Y derived from the size_i of the matching history records]
where size_i is the resource size corresponding to characteristic i.
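A minimal sketch of the history-log lookup, under the assumption that the predicted value Y is taken as the mean size of previously observed resources with the same key information; the HistoryLog class, its key fields, and the use of MD5 are illustrative only.

```python
import hashlib
from collections import defaultdict

class HistoryLog:
    """Hypothetical history log LOG, keyed by a hash of the resource's key information."""

    def __init__(self) -> None:
        self._records: dict[str, list[int]] = defaultdict(list)

    @staticmethod
    def key(resource_name: str, source_name: str, source_ip: str, resource_type: str) -> str:
        raw = "|".join((resource_name, source_name, source_ip, resource_type))
        return hashlib.md5(raw.encode("utf-8")).hexdigest()

    def update(self, key: str, actual_size: int) -> None:
        """Record the actually observed size of a resource after it has been sent."""
        self._records[key].append(actual_size)

    def estimate(self, key: str, default: float = 0.0) -> float:
        """Predicted traffic value Y, taken here as the mean size_i of matching records
        (an assumption; the patent only states that Y is derived from those sizes)."""
        sizes = self._records.get(key)
        return sum(sizes) / len(sizes) if sizes else default
```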
The real-time monitoring of the subordinate nodes of the load balancing device by the dynamic node monitor, and the feeding back of the current memory occupancy of each node, comprise:
monitoring each subordinate node j of the load balancing device in real time. Each node j has one of two traffic processing modes: a reassembly mode, in which all data of a flow must be collected and reassembled before processing, and a forwarding mode, in which the data flow is processed as a pipeline. The real-time monitoring module monitors each node to obtain the current memory occupancy σ, the number of connections M, the traffic processing mode τ, and the task scheduling mode ρ of node j, and feeds this back to the load balancing processing module.
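The following Python sketch shows one possible shape for the feedback record carrying σ, M, τ, and ρ; all field and function names, including the node.report() probe, are assumptions rather than the patent's interface.

```python
from dataclasses import dataclass
from enum import Enum

class TrafficMode(Enum):
    REASSEMBLY = "reassembly"   # collect and reassemble the whole flow before processing
    FORWARDING = "forwarding"   # process the data flow as a pipeline

@dataclass
class NodeState:
    """Feedback reported for a subordinate node j (field names are assumptions)."""
    node_id: int
    memory_used: int            # sigma: current memory occupancy
    connections: int            # M: number of connections
    traffic_mode: TrafficMode   # tau: traffic processing mode
    scheduling: str             # rho: task scheduling mode, e.g. "time-sharing" or "short-task-first"

def poll_nodes(nodes) -> list[NodeState]:
    """Collect the current state of every subordinate node for the load balancing
    processing module; node.report() is a hypothetical probe call."""
    return [node.report() for node in nodes]
```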
The analyzing of the feedback result, the pre-allocation with the predicted value obtained in step (3), and the selection of a suitable node comprise:
first analyzing the state of the load node:
if the traffic processing mode τ is the reassembly mode, the memory occupancy of the load node is the predicted value Y;
if the traffic processing mode τ is the forwarding mode, the memory occupancy of the load node is smaller than the predicted value Y of the whole flow.
The memory that node j can bear is:
[formula shown as an image in the original patent: bearable memory of node j, expressed in terms of Y, t_c, M, t_d, t_w, t_k, t_s, α, and β]
where t_c is the processing time of a single packet, M is the number of connections, t_d is the time-sharing interval, α and β are adjustment parameters, t_w is the waiting time, t_k is the memory-to-external-storage copy time, and t_s is the time a packet takes to travel from the data sender to the data receiver.
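Since the exact bearable-memory formula appears only as an image, the sketch below (reusing the NodeState and TrafficMode types from the monitoring sketch above) illustrates only the qualitative rule stated here: a reassembly-mode node is charged the full predicted value Y, while a forwarding-mode node is charged part of Y. The scaling by t_c / t_s is an assumption for illustration, not the patent's formula.

```python
def expected_occupancy(state: NodeState, predicted_y: float, t_c: float, t_s: float) -> float:
    """Rough pre-allocation estimate of the memory a flow will occupy on node j.

    A reassembly-mode node must buffer the whole flow, so it is charged the full
    predicted value Y.  A forwarding-mode node releases packets as it processes
    them, so it is charged only a fraction of Y; capping t_c / t_s at 1 is an
    illustrative assumption, not the patent's exact expression.
    """
    if state.traffic_mode is TrafficMode.REASSEMBLY:
        return predicted_y
    fraction = min(1.0, t_c / t_s) if t_s > 0 else 1.0
    return predicted_y * fraction
```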
The invention has the beneficial effects that:
the method uses DPI analysis to divide the input flow by feature matching, extracts key information and estimates the whole flow to obtain a predicted value. And a dynamic monitoring system is used for acquiring the information of subordinate nodes and controlling the state of the server in real time. By using the pre-estimated value and the node real-time information to pre-distribute the flow, the complexity of the load balancing strategy is reduced, and the calculation is transferred to the load balancing prediction, so that the whole optimization of the load balancing strategy is realized.
Drawings
Fig. 1 is a flowchart of the DPI-prediction-based load balancing method;
Fig. 2 is the physical topology;
Fig. 3 is a block diagram of load balancing based on DPI prediction.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The balancing effect of load balancing at the network nodes and the length of the processing time largely determine the performance of the whole network system. With the extensive research on load balancing technology, many different methods have been proposed. For example, a load balancing strategy based on a DNS proxy achieves a balancing effect by substituting the responding server, but it does not consider the state of each server, cannot respond quickly to unexpected situations, and does not take the data request itself into account. Another approach connects a device in series in the line to obtain all traffic passing through it, parses the data link layer, network layer, transport layer, application layer, and so on of each packet with matching techniques, and then forwards the traffic according to a configured distribution strategy. Because this method analyzes and evaluates the data content, its distribution effect is better, but owing to the serial structure the data processing time affects the overall network performance.
Many patent documents on load balancing methods have their own advantages and disadvantages. For example, in the "application layer traffic load balancing method based on DPI" disclosed in patent document No. 201710968835.X, the device is connected in series between the client and the server, analyzes the traffic by splitting or mirroring it, outputs a dynamic priority as one basis for load balancing, and selects the optimal service server in combination with a static priority. However, analyzing the traffic by splitting or mirroring takes too long, and the serial part needs the dynamic priority as a parameter, so a waiting dependency arises between the two modules and the overall performance suffers. Furthermore, what content is parsed with the DPI technique is not made clear.
The purpose of the invention is realized as follows:
1) Divide the input traffic into individual flows according to the five-tuple and sequence number, and determine which application the data belongs to by analyzing the protocol and application characteristics with DPI;
2) Determine which application's characteristics the flow matches, and select the correct application feature library model for matching;
3) Compute a hash value from the data flow information, query the history log, and estimate the current traffic to obtain a predicted traffic value;
4) Monitor the subordinate nodes of the load balancing device in real time with the dynamic node monitor, and feed back the current memory occupancy of each node;
5) Analyze the feedback result, perform pre-allocation with the predicted value obtained in step 3), and select a suitable node;
6) Send the data stream, and update the actual traffic size, resource name, source address, and other information into the application feature library;
7) Repeat steps 1)-6) until all traffic has been sent.
Each application carries its own data, and the data content is not random but follows certain characteristic rules, so the data content can be detected and distinguished with packet inspection techniques. By collecting the application layer data characteristics, a feature string can be built, and all matched content can be stored to build a feature library. As the feature library keeps improving, the prediction based on it becomes more and more accurate and the load balancing effect gradually improves. A dynamic load balancing strategy gives the system real-time control over the server nodes, allowing real-time adjustment of the servers, so the load balancing effect is better and more stable.
The method divides the input traffic by DPI analysis and feature matching, extracts the key information, and estimates the whole flow to obtain a predicted value. A dynamic monitoring system acquires the information of the subordinate nodes and controls the server state in real time. By pre-allocating traffic with the predicted value and the real-time node information, the complexity of the load balancing strategy is reduced and the computation is shifted to load balancing prediction, so that an overall optimization of the load balancing strategy is achieved.
The invention is described in detail below by way of example with reference to the accompanying drawings:
1) The traffic input to the load balancing device is received, and flows are distinguished according to the five-tuple of source IP, destination IP, source port, destination port, and protocol. Different flows of the same IP/port pair are divided by sequence number, the five-tuple and sequence number are used as the hash key value, and the record is stored between the network layer header and the transport layer header, where it can be retrieved with a structure pointer. The application layer data characteristics are obtained with deep packet inspection, and a data flow that satisfies the feature model R is defined as predictable data e.
R = {r_1, r_2, …, r_{n-1}, r_n}
Before feature matching, feature analysis must be performed on the application layer data to build the feature model R. A large amount of communication data of application A is collected first, the data content is cluster-analyzed, and the data are analyzed with DBSCAN density clustering. For example, if data e_1 contains the data segments p_1 p_2 p_3 p_4 p_5, data e_2 contains p_1 p_6 p_7 p_4 p_5, and data e_3 contains p_1 p_8 p_9 p_4 p_5, then p_1, p_4, p_5 are taken as the feature string.
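A toy illustration of how the shared segments p_1, p_4, p_5 of this example could be extracted once DBSCAN has grouped the samples of application A together; the function below simply intersects the segment sets of the samples and is not the patent's clustering procedure.

```python
def common_segments(samples: list[list[str]]) -> list[str]:
    """Keep the data segments shared by every sample of application A's traffic
    (the p_1, p_4, p_5 of the example); DBSCAN is assumed to have already grouped
    these samples into one cluster."""
    if not samples:
        return []
    shared = set(samples[0]).intersection(*map(set, samples[1:]))
    return [seg for seg in samples[0] if seg in shared]  # preserve the order of the first sample

e1 = ["p1", "p2", "p3", "p4", "p5"]
e2 = ["p1", "p6", "p7", "p4", "p5"]
e3 = ["p1", "p8", "p9", "p4", "p5"]
print(common_segments([e1, e2, e3]))  # ['p1', 'p4', 'p5']
```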
2) When the predictable data e satisfies the features of application A under multi-pattern matching, G(e, A) can be determined to be associated, that is
[formula shown as an image in the original patent: definition of the association G(e, A)]
where a_i is one of the traffic models of application A, i.e., application A traffic that satisfies characteristic i, and e belongs to the application A traffic;
3) With a_i as the feature model, the key information in the data e is acquired: the resource name, the resource features L = {l_1, l_2, …, l_{n-1}, l_n}, the resource source name, the resource source IP sip, the resource type, the resource size, the resource remark note, the resource acquisition time t, and the like. A hash value is computed to query the contents of the history log LOG, the information of resources of the same kind is obtained, and predictive analysis of the current traffic yields the predicted traffic value Y;
[formula shown as an image in the original patent: predicted value Y derived from the size_i of the matching history records]
4) The subordinate nodes j of the load balancing device are monitored in real time with the dynamic nodes. Each node j has one of two traffic processing modes: a reassembly mode, in which all data of a flow must be collected and reassembled before processing, and a forwarding mode, in which the data flow is processed as a pipeline. The real-time monitoring module monitors each node to obtain the current memory occupancy σ, the number of connections M, the traffic processing mode τ, and the task scheduling mode ρ of node j, and feeds this back to the load balancing processing module;
5) The feedback result is analyzed and pre-allocation is performed with the predicted value obtained in step 3). For pre-allocation, the state of the load node is analyzed first: when the traffic processing mode τ is the reassembly mode, the memory occupancy of the load node is the predicted value Y; when the traffic processing mode τ is the forwarding mode, the memory occupancy of the load node is smaller than the predicted value Y of the whole flow:
when the data stream e is divided into e 1 e 2 e 3 ……e n-1 e n Sending to a load balancing node, and sending data e by a data sending end 1 The receiving end receives data e 1 Time is over t s . The receiving end finishes processing 1 While receiving data e i Then the time between the receipt of 1 and i is the processing time t of a single data packet c . The actual memory usage of the load node is about
Figure BDA0001995304460000053
The above is the single-thread, single-CPU case. The multithreaded case differs with the form of the task scheduling mode ρ: under time-sharing scheduling the number of connections M and the time-sharing interval t_d each take up some time, during which the data stream e keeps being sent and therefore occupies more space; under a shortest-task-first strategy the waiting time is t_w, and if the data are swapped to external storage while waiting, the memory-to-external-storage copy time is defined as t_k. The memory that node j is estimated to bear is then:
[formula shown as an image in the original patent: bearable memory of node j in terms of Y, t_c, M, t_d, t_w, t_k, t_s, α, and β]
where α and β are adjustment parameters, adjusted according to factors such as the actual task scheduling mode ρ. The memory occupancy is at most the predicted value Y, i.e., when the waiting time is too long the data are swapped out, or stored directly in external storage, and the occupied size is Y.
For pre-allocation, the actual memory occupancy σ of node j is checked against the data stream e, and the node j with the smallest memory occupancy is selected with a min-heap. If sending the flow e to node j would cause excessive memory use, another node is selected; otherwise node j is chosen;
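A possible reading of this min-heap selection step, sketched in Python; it reuses the NodeState sketch from step 4), and the bearable mapping stands in for the bearable-memory value computed above (an assumed interface).

```python
import heapq

def pick_node(states: list[NodeState], predicted_y: float, bearable: dict[int, float]):
    """Choose the node with the smallest current memory occupancy using a min-heap.
    If placing the flow there would exceed that node's bearable memory, fall back
    to the next candidate."""
    heap = [(s.memory_used, s.node_id) for s in states]
    heapq.heapify(heap)
    while heap:
        used, node_id = heapq.heappop(heap)
        if used + predicted_y <= bearable.get(node_id, float("inf")):
            return node_id
    return None  # no node can currently take the flow
```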
6) The data stream e is sent, a hash value of the actual traffic size, resource name, source address, and other information is computed, and it is updated into the application's history log LOG for the next traffic query;
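A short usage sketch of this history update, reusing the hypothetical HistoryLog from step 3); the concrete key fields and the byte count are illustrative.

```python
# After the data stream e has been sent, record the actual traffic so the next
# query for the same kind of resource sees the updated history.
log = HistoryLog()
k = HistoryLog.key("x.mp4", "cdn.example", "10.0.0.2", "video")
log.update(k, actual_size=7_340_032)
print(log.estimate(k))  # used as the predicted value Y for the next matching flow
```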
7) Steps 1)-6) are repeated until all traffic has been sent.
The above is one implementation of the algorithm proposed by the invention, but some steps can be adapted to the needs of specific situations. For example, when the feature strings are built in step 1), the feature string and feature library may be established with other cluster analysis methods, since each application differs in pattern, category, and perspective, and the features may equally be obtained in other ways. The multi-pattern matching algorithm in step 2) can be any clear and effective multi-pattern matching algorithm chosen according to the data characteristics. In step 5), the load each node can bear is derived from the individual items of node information; alternatively, to compress the amount of computation, the adaptation analysis can be performed directly from the predicted value Y.
The method greatly compresses the amount of computation. In the load balancing stage, selecting the node with the smallest memory occupancy needs only O(1) time; the dynamic node monitoring module and the feedback processing module run in parallel and add no delay to the overall system. The prediction part parses the packet content with DPI and matches features with multi-pattern matching, with a minimum time complexity of O(m), where m is the length of the pattern string. The history records are then used to compute the predicted value; this part accumulates over N records, the average can be maintained at storage time, and lookups of the resource name, source, and related information use a hash structure with O(1) complexity, so the overall time complexity is O(N).
The invention takes DPI analysis of the data as its starting point and designs a load balancing algorithm structure with prediction as the main body, compressing the amount of computation of the load balancing model; the inventors therefore consider the method proposed in this patent to be essentially different from the application layer traffic load balancing method based on DPI, since although the same technique is used, the emphasis differs. The invention combines multi-pattern matching with cluster analysis; DPI alone cannot achieve a sufficiently good effect. Depending on the content of the cluster analysis, the carried application layer data structures differ, and the matching method and analysis process can be adjusted to the actual situation to achieve a suitable effect.

Claims (1)

1. A load balancing method based on DPI prediction is characterized by comprising the following steps:
step 1: receiving the traffic input to the load balancing device, distinguishing flows according to the five-tuple, namely source IP, destination IP, source port, destination port, and protocol, dividing different flows of the same IP/port pair by sequence number, and using the five-tuple and sequence number as the hash key value; obtaining the application layer data characteristics with deep packet inspection, and defining a data flow that satisfies the feature model R as predictable data e;
R = {r_1, r_2, …, r_{n-1}, r_n}
before feature matching, feature analysis must be performed on the application layer data to build the feature model R: a large amount of communication data of application A is collected first, the data content is cluster-analyzed, and the data are analyzed with DBSCAN density clustering;
step 2: when the predictable data e satisfies the characteristics of application A through multi-pattern matching, G(e, A) is determined to have an association, which is expressed as:
[formula shown as an image in the original patent: definition of the association G(e, A)]
where a_i is one of the traffic models of application A, namely application A traffic satisfying characteristic i; e belongs to the application A traffic;
step 3: using a_i as the feature model, acquiring the key information in the predictable data e, including the resource name, the resource features L = {l_1, l_2, …, l_{n-1}, l_n}, the resource source name, the resource source IP sip, the resource type, the resource size, the resource remark note, and the resource acquisition time t; querying the contents of the history log LOG with the hash value, obtaining the information of resources of the same kind, and performing predictive analysis of the current traffic to obtain the predicted traffic value Y;
[formula shown as an image in the original patent: predicted value Y derived from the size_i of the matching history records]
step 4: monitoring the subordinate nodes j of the load balancing device with a real-time monitoring module, acquiring the current memory occupancy σ, the number of connections M, the traffic processing mode τ, and the task scheduling mode ρ of each node j, and feeding them back to the load balancing processing module;
each node j has one of two traffic processing modes: a reassembly mode, in which all data of a flow are collected and reassembled before processing, and a forwarding mode, in which the data flow is processed as a pipeline;
step 5: analyzing the feedback result and performing pre-allocation with the predicted value obtained in step 3;
in the case of a single thread and a single CPU, the state of the load node is analyzed first: when the traffic processing mode τ is the reassembly mode, the memory occupancy of the load node is the predicted value Y; when the traffic processing mode τ is the forwarding mode, the memory occupancy of the load node is smaller than the predicted value Y of the whole flow; when the predictable data e is divided into e_1 e_2 e_3 … e_{n-1} e_n and sent to the load balancing node, the time from the sender sending data e_1 to the receiver receiving data e_1 is t_s; if the receiver finishes processing e_1 while receiving data e_i, the time between receiving e_1 and e_i is the processing time t_c of a single packet; the memory actually occupied by the load node is then
[formula shown as an image in the original patent: actual memory occupancy of the load node]
in the multithreaded case, which differs with the form of the task scheduling mode ρ, under time-sharing scheduling the number of connections M and the time-sharing interval t_d each take up some time, during which the predictable data e keeps being sent and therefore occupies more space; under the shortest-task-first strategy the waiting time is t_w, and if the data are swapped to external storage while waiting, the memory-to-external-storage copy time is defined as t_k; the memory that node j is predicted to bear is then:
[formula shown as an image in the original patent: bearable memory of node j in terms of Y, t_c, M, t_d, t_w, t_k, t_s, α, and β]
where α and β are adjustment parameters, adjusted according to the actual task scheduling mode ρ; Y is the maximum predicted memory occupancy, i.e., the size occupied after the data are swapped out when the waiting time is too long, or stored directly in external storage;
for pre-allocation, the actual memory occupancy σ of node j is checked against the predictable data e, and the node j with the smallest current memory occupancy is selected with a min-heap; if sending the predictable data e to node j would cause excessive memory use, another node is selected, otherwise node j is chosen;
step 6: sending the predictable data e, computing a hash value from the actual traffic size, resource name, and source address information, and updating it into the application's history log LOG for the next traffic query;
step 7: repeating steps 1 to 6 until all traffic has been sent.
CN201910196102.8A 2019-03-14 2019-03-14 Load balancing method based on DPI prediction Active CN109729017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910196102.8A CN109729017B (en) 2019-03-14 2019-03-14 Load balancing method based on DPI prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910196102.8A CN109729017B (en) 2019-03-14 2019-03-14 Load balancing method based on DPI prediction

Publications (2)

Publication Number Publication Date
CN109729017A CN109729017A (en) 2019-05-07
CN109729017B true CN109729017B (en) 2023-02-14

Family

ID=66302373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910196102.8A Active CN109729017B (en) 2019-03-14 2019-03-14 Load balancing method based on DPI prediction

Country Status (1)

Country Link
CN (1) CN109729017B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110601991A (en) * 2019-09-16 2019-12-20 赛尔网络有限公司 Flow packet-by-packet distribution method and device, electronic equipment and storage medium
CN110995769B (en) * 2020-02-27 2020-06-05 上海飞旗网络技术股份有限公司 Deep data packet detection method and device
CN111880926B (en) * 2020-06-30 2022-07-01 苏州浪潮智能科技有限公司 Load balancing method and device and computer storage medium
CN112491738A (en) * 2020-12-08 2021-03-12 深圳海智创科技有限公司 Real-time data stream balanced distribution method and device
CN114138463B (en) * 2021-11-04 2024-03-26 中国电力科学研究院有限公司 Method for predicting load balance of spot system application layer based on deep neural network
US20240022514A1 (en) * 2022-07-18 2024-01-18 Microsoft Technology Licensing, Llc Dynamic load balancing based on flow characteristics

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101645806A (en) * 2009-09-04 2010-02-10 东南大学 Network flow classifying system and network flow classifying method combining DPI and DFI
CN101945407A (en) * 2010-10-22 2011-01-12 东南大学 Load balancing method for content monitoring of mobile service
CN101958843A (en) * 2010-11-01 2011-01-26 南京邮电大学 Intelligent routing selection method based on flow analysis and node trust degree
CN105847078A (en) * 2016-03-17 2016-08-10 哈尔滨工程大学 HTTP (Hyper Text Transport Protocol) traffic refined identification method based on DPI (Data Processing Installation) self-study mechanism
CN106209506A (en) * 2016-06-30 2016-12-07 瑞斯康达科技发展股份有限公司 A kind of virtualization deep-packet detection flow analysis method and system
CN108632159A (en) * 2017-03-16 2018-10-09 哈尔滨英赛克信息技术有限公司 A kind of network service traffic load-balancing method based on prediction
CN107864189A (en) * 2017-10-18 2018-03-30 南京邮电大学 A kind of application layer traffic load-balancing method based on DPI

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"高性能网络业务分流技术研究";张宁;《硕士电子期刊》;20180715;全文 *

Also Published As

Publication number Publication date
CN109729017A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN109729017B (en) Load balancing method based on DPI prediction
CN108259367B (en) Service-aware flow strategy customization method based on software defined network
CN112073542B (en) Fog node scheduling method and device, computer equipment and storage medium
JP2005010970A (en) Distributed cache control method, network system, and control server or router used for network concerned
US20040098499A1 (en) Load balancing system
WO2004063928A1 (en) Database load reducing system and load reducing program
CN115277574B (en) Data center network load balancing method under SDN architecture
CN107317764A (en) The method and system of flow load balance
CN111638948A (en) Multi-channel high-availability big data real-time decision making system and decision making method
CN101341692B (en) Admission control using backup link based on access network in Ethernet
CN117149746B (en) Data warehouse management system based on cloud primordial and memory calculation separation
CN102271078A (en) Service quality guarantee oriented load balancing method
CN114490100B (en) Message queue telemetry transmission load balancing method, device and server
CN111614726B (en) Data forwarding method, cluster system and storage medium
CN113765825A (en) Planning method and system architecture for chain type service flow scheduling
CN110138684B (en) Traffic monitoring method and system based on DNS log
CN114124778B (en) Anycast service source routing method and device based on QoS constraint
CN113259263B (en) Data packet scheduling method in deep packet inspection cluster
CN101958843A (en) Intelligent routing selection method based on flow analysis and node trust degree
CN115604311A (en) Cloud fusion computing system and self-adaptive routing method for service network
Du et al. Distributed in-network coflow scheduling
Tian et al. Complex application identification and private network mining algorithm based on traffic-aware model in large-scale networks
CN109587057B (en) Intelligent routing method and system for information transmission platform
CN109067668B (en) Global network acceleration link construction method based on intelligent balanced distribution
Ishizaki et al. On-line sensitivity analysis of feedback controlled queueing systems with respect to buffer capacity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant