CN109120463B - Flow prediction method and device - Google Patents
- Publication number: CN109120463B
- Application number: CN201811195940.5A
- Authority: CN (China)
- Legal status: Active
Classifications
- H04L41/147 — Network analysis or design for predicting network behaviour
- H04L43/04 — Processing captured monitoring data, e.g. for logfile generation
- H04L43/0876 — Network utilisation, e.g. volume of load or congestion level
Abstract
The present disclosure relates to a traffic prediction method and device, including: acquiring flow data of a target network in a first time period; preprocessing the flow data to obtain preprocessed flow data; acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model; acquiring second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model; and acquiring the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data. The flow prediction method and the flow prediction device can realize accurate prediction of flow data in a future time period.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for predicting traffic.
Background
With the rapid development of network technology, the services and applications carried on networks have become increasingly rich. As the services provided over the Internet grow more diverse and complex, the load on network links keeps increasing, so relieving that load is particularly important.
To relieve the load on network links, enterprises need to know in time what services a link carries, grasp the traffic characteristics of the link, handle risks in advance, and resolve sudden network problems promptly.
Disclosure of Invention
In view of this, the present disclosure provides a traffic prediction method and apparatus, which are used for predicting traffic data of a network for a period of time in the future.
According to an aspect of the present disclosure, a traffic prediction method is provided, the method including:
acquiring flow data of a target network in a first time period;
preprocessing the flow data to obtain preprocessed flow data;
acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model;
acquiring second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model;
and acquiring the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data.
According to another aspect of the present disclosure, a flow prediction apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring the flow data of the target network in a first time period;
the preprocessing module is connected with the acquisition module and used for preprocessing the flow data to acquire the preprocessed flow data;
the first operation module is connected to the preprocessing module and used for acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model;
the second operation module is connected to the first operation module and used for acquiring second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model;
and the third operation module is connected to the second operation module and used for acquiring the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data.
According to another aspect of the present disclosure, there is provided a flow prediction system including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above-described traffic prediction method.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described flow prediction method.
According to the flow prediction method, flow data of the target network in a first time period is obtained and preprocessed; first predicted flow data in a second time period is obtained from the preprocessed flow data and the first model; second predicted flow data corresponding to the first predicted flow data is obtained from the first predicted flow data and the second model; and the predicted flow data in the second time period is obtained from the first predicted flow data and the second predicted flow data. In this way, flow data at future times can be accurately predicted.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a traffic prediction method according to an embodiment of the present disclosure.
FIG. 2 illustrates a flow diagram of data pre-processing according to an embodiment of the present disclosure.
FIG. 3 shows a schematic prediction flow diagram of a first model according to an embodiment of the present disclosure.
Fig. 4 shows a schematic flow chart of obtaining the second predicted flow according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of flow data over a first time period according to an embodiment of the present disclosure.
FIG. 6 shows a graphical representation of predicted values versus actual values according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of a flow prediction device according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of a flow prediction device according to an embodiment of the present disclosure.
FIG. 9 shows a block diagram of a flow prediction system according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein exclusively to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In order to realize intelligent management of the network, it is critical to predict link traffic.
By measuring and predicting link traffic, the traffic conditions and trends between links can be understood. This allows link optimization, routing design, and load-balancing design to be carried out more effectively, and link congestion control to be planned, reducing the information loss and delay caused by congestion, making full use of link resources, and improving quality of service.
Real-time monitoring of link traffic is an important aspect of network management. When a traffic anomaly occurs, network managers begin solving the problem only after the network management system raises an alarm; this is a purely reactive mode in which problems are handled after the fact. Handled this way, there may not be enough time to analyze and resolve the problem, and the normal operation of the network is affected. If traffic overload can instead be predicted, problems can be analyzed and solved before the overload occurs, and such proactive network management significantly improves link availability.
A traffic prediction model is the basis for link performance analysis and link planning. A good link-traffic prediction model and prediction method are of great significance for designing new-generation network protocols, for network management and diagnosis, for designing high-performance network hardware such as routers and load balancers, and for improving network quality of service. With growing network bandwidth and the emergence of diverse network services, conventional traffic prediction models struggle to accurately describe and predict present and future network traffic, so it is necessary to provide a more accurate traffic prediction model and prediction method for link traffic that addresses the limitations of the conventional ones.
Referring to fig. 1, fig. 1 is a flow chart illustrating a traffic prediction method according to an embodiment of the present disclosure.
The method can be applied to a server or a terminal, and is executed by the server or the terminal to predict the traffic of a target network. As shown in fig. 1, the method may include:
step S110, acquiring flow data of a target network in a first time period;
step S120, preprocessing the flow data to obtain preprocessed flow data;
step S130, acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model;
step S140, second predicted flow data corresponding to the first predicted flow data is obtained according to the first predicted flow data and a second model;
step S150, obtaining the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data.
Through the method, the flow data of the future time period can be accurately predicted according to the first model and the second model.
For step S110:
in one possible implementation, the target network may be a data source that obtains traffic data, for example, a data center of an enterprise network management center, or other data sources.
In one possible embodiment, the first time period may be a time period of any duration, such as 12 hours, 1 day, 1 week, etc.
In one possible implementation, the traffic data of the target network may be acquired by a network traffic collector, for example via the Simple Network Management Protocol (SNMP). SNMP is a network management standard based on the TCP/IP protocol family and is the standard protocol for managing network nodes (such as servers, workstations, routers, and switches) in an IP network.
A network managed via SNMP mainly comprises three parts: managed devices, SNMP agents, and a Network Management System (NMS). Each managed device in the network has a Management Information Base (MIB) for collecting and storing management information, and the NMS obtains this information via the SNMP protocol. Typically, SNMP collects traffic data once per minute and stores it in a database, which may be a dedicated traffic database. Traffic data of the target network is collected per link: for each link, SNMP can obtain traffic in both the inbound and outbound directions. The inbound direction counts the bytes injected into the link over a given period, i.e., the downlink traffic; the outbound direction counts the bytes leaving the link over that period, i.e., the uplink traffic.
In one possible implementation, the acquired traffic data may include a link ID, an acquisition time, a traffic value, and the like. The link ID identifies which link the traffic data belongs to; the acquisition time identifies the interval during which the traffic was collected (e.g., when SNMP collects traffic per minute, the time 2018/3/23 19:04 identifies the traffic collected during that minute); and the traffic value identifies how much traffic was collected during the acquisition time.
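As a sketch of how such per-minute traffic values can be derived from the cumulative byte counters SNMP exposes (a detail the description leaves implicit), the following assumes 32-bit octet counters polled once per minute; the function names are illustrative:

```python
WRAP = 2**32  # 32-bit SNMP octet counters wrap around at 2^32

def counter_delta(prev: int, curr: int, wrap: int = WRAP) -> int:
    """Bytes transferred between two polls of a cumulative octet counter,
    accounting for one counter wrap-around."""
    return curr - prev if curr >= prev else curr - prev + wrap

def per_minute_traffic(samples):
    """samples: list of (minute, in_octets, out_octets) cumulative readings,
    one per poll. Returns per-minute (minute, down_bytes, up_bytes) rows."""
    rows = []
    for (m0, i0, o0), (m1, i1, o1) in zip(samples, samples[1:]):
        rows.append((m1, counter_delta(i0, i1), counter_delta(o0, o1)))
    return rows
```

Each output row then corresponds to one acquisition-time record (link ID, minute, downlink bytes, uplink bytes) as described above.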
For step S120:
the traffic data obtained by the network traffic collector generally includes a plurality of link IDs, that is, the obtained traffic data is chaotic, and some fields in the obtained traffic data may have abnormal values or missing values, for example, some fields have values significantly larger or smaller than the values of the standard range, or some fields have values of negative values or null values. These clutter and even erroneous flow data interfere with the flow prediction, affect the accuracy of the flow prediction, and may even cause an error in the prediction result, so that the flow data needs to be preprocessed to improve the accuracy of the flow prediction.
Referring to fig. 2, fig. 2 is a flow chart illustrating data preprocessing according to an embodiment of the disclosure.
As shown in fig. 2, in a possible implementation, step S120 — preprocessing the traffic data to obtain the preprocessed traffic data — may include:
and step S121, performing data cleaning on the flow data.
Data cleansing refers to a procedure for finding and correcting recognizable errors in data files, including checking data consistency, processing invalid and missing values, and the like.
The invalid value may be flow data in which a flow value in the flow data is greater than a first threshold or less than a second threshold, and the missing value may be flow data in which the flow value in the flow data is missing.
For invalid values, in one possible embodiment, when only a small number of records contain invalid values, those records may be deleted directly; in other embodiments, the abnormal traffic data may instead be replaced by a reasonable substitute. For example, when the amount of abnormal traffic data exceeds a quantity threshold, replacement is preferable, since deleting too much abnormal traffic data would itself distort the prediction result.
For missing values, in one possible implementation, the missing values in the cleaned traffic data may be filled using nearest neighbor interpolation.
In nearest-neighbour interpolation, a distance between traffic records is defined that reflects their degree of similarity. After computing the distance between the record containing the missing value and every other record, the record at the smallest distance from the record containing the missing value supplies the interpolated value for the missing entry.
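The cleaning and nearest-neighbour filling steps above can be sketched as follows; the thresholds and the purely index-based notion of distance are illustrative assumptions, since the embodiment leaves the distance metric open:

```python
def clean(values, low, high):
    """Mark values outside [low, high] (including None and negatives)
    as missing (None) for the interpolation step."""
    return [v if v is not None and low <= v <= high else None for v in values]

def fill_nearest(values):
    """Fill each missing (None) entry with the value whose time index is
    nearest; ties go to the earlier neighbour."""
    known = [i for i, v in enumerate(values) if v is not None]
    filled = list(values)
    for i, v in enumerate(values):
        if v is None:
            j = min(known, key=lambda k: (abs(k - i), k))
            filled[i] = values[j]
    return filled
```

For example, `fill_nearest([1, None, None, 4])` copies each missing entry from its closest valid neighbour in time.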
And step S122, grouping the traffic data according to the link ID of the traffic data after data cleaning, and acquiring the traffic data with the same link ID.
When traffic data of a future period of time is predicted, the traffic data of a certain link is usually predicted, and when the traffic data is acquired, the acquired traffic data often includes a plurality of link IDs, so that the traffic data can be grouped according to different link IDs, and the traffic data of each link is predicted according to the traffic data corresponding to the different link IDs.
For example, Table 1 shows traffic data having link ID 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c.
TABLE 1
Serial number | Link ID | Acquisition time | Downstream traffic | Upstream traffic |
1 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:04 | 5442844 | 3697609 |
2 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:05 | 6164659 | 4120042 |
3 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:06 | 6454498 | 4404634 |
4 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:07 | 5974726 | 4481856 |
5 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:08 | 5370612 | 3462507 |
6 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:09 | 5887826 | 4155548 |
7 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:10 | 6184585 | 4297634 |
8 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:11 | 6589378 | 4558746 |
9 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:12 | 6412367 | 4351408 |
10 | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 19:13 | 5932734 | 4010300 |
… | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | … | … | … |
… | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 20:00 | 5765245 | 4053216 |
… | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | 2018/3/23 20:01 | 6305452 | 4356452 |
… | 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c | … | … | … |
It can be seen that the traffic data of this link ID includes uplink traffic, downlink traffic, and the corresponding acquisition times. It should be understood that each acquisition time denotes both a time point and the period before it; in Table 1, each acquisition time covers the one minute at the listed time point. The traffic data in Table 1 can be used to perform traffic prediction for the link with link ID 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c, and similarly the links with other IDs can be predicted from their corresponding traffic data.
Step S123, dividing the first time period into a plurality of time periods, wherein the plurality of time periods in the divided first time period are arranged in sequence.
In one possible embodiment, the time duration of each of the plurality of time periods may be any time duration such as 1 hour, 2 hours, and the like.
The time interval at which the network traffic collector acquires traffic data is usually 1 minute. At that fine granularity, predicting the 24 hours of a day at 1-minute intervals may fail to reflect the overall behaviour of the traffic, so the duration of each of the plurality of time periods may be set to 1 hour.
Step S124, obtaining traffic data corresponding to each time period in the first time period under the same link ID, where the traffic data corresponding to each time period respectively includes at least one traffic value.
Under the same link ID, each of the plurality of time periods contains the traffic data of many 1-minute intervals; these 1-minute records are merged into their corresponding time period for subsequent processing.
For example, taking the duration of each time period as 1 hour, as shown in Table 1, the 1-minute traffic records falling in each time period for link ID 51c5a002-c8cc-4f16-a45d-6daf3bd68a4c may be gathered; for instance, the traffic data from 2018/3/23 19:01 to 2018/3/23 20:00 may be taken as the traffic data of the corresponding one-hour period.
Step S125, calculating a flow average value of the same link ID in each time segment in the first time segment according to at least one flow value corresponding to each time segment in the first time segment.
In the present embodiment, for each of the plurality of time periods, the average of the 1-minute traffic values of the same link ID within that period may be computed and used as the traffic value of that period for that link ID.
In another embodiment, the sum of the 1-minute traffic values of the same link ID within each period may be used instead as the traffic value of that period.
Referring to Table 2, Table 2 shows one day of traffic data for the same link ID, aggregated into 1-hour time periods.
TABLE 2
As can be seen from table 2, the traffic data of the link ID includes uplink traffic, downlink traffic and corresponding acquisition time within 24 hours.
Step S126, using the link ID, the plurality of time periods corresponding to the link ID, and the average value of the flow rate corresponding to each of the plurality of time periods as the pre-processed flow rate data.
The accuracy of flow prediction can be improved by removing abnormal values, interpolating missing values and grouping the flow data of different link IDs from the preprocessed flow data, and using the preprocessed flow data as the flow data used for prediction.
In a possible embodiment, the pre-processed traffic data includes n pieces of traffic data (corresponding to a plurality of time periods) from the 1 st time to the nth time within the first time period.
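Steps S122 to S125 — grouping by link ID, bucketing the 1-minute records into hour-long periods, and averaging — can be sketched as below; the timestamp format and the hour-bucketing by string prefix are assumptions for illustration:

```python
from collections import defaultdict

def hourly_means(rows):
    """rows: (link_id, 'YYYY/M/D HH:MM', down, up) per-minute samples.
    Groups by link ID and hour, returning the mean down/up traffic
    of each (link_id, hour) bucket."""
    buckets = defaultdict(list)
    for link_id, ts, down, up in rows:
        hour = ts.rsplit(":", 1)[0]  # drop the minutes -> hour bucket key
        buckets[(link_id, hour)].append((down, up))
    return {
        key: (sum(d for d, _ in vals) / len(vals),
              sum(u for _, u in vals) / len(vals))
        for key, vals in buckets.items()
    }
```

The resulting mapping — link ID, time period, and per-period average traffic — is exactly the preprocessed traffic data of step S126 (replacing the mean with a sum gives the alternative embodiment).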
For step S130:
in one possible implementation, the first model may include a Long-Short Term Memory network model (LSTM).
The long short-term memory network (LSTM) is an improved recurrent neural network structure composed of multiple LSTM units that can address the long-term dependence problem. In the LSTM model, each LSTM unit contains three control gates — a forget gate, an input gate, and an output gate — which control the unit's state, input, and output. When an LSTM unit receives input, each gate operates on and filters the inputs from different sources according to its own parameters, deciding which information may pass. The forget gate lets the recurrent network discard information that is no longer useful; meanwhile the input, after a non-linear transformation, is gated by the input gate and superimposed on the retained state to form the new state. The output gate produces the unit's output at the current moment once the new state has been computed. The weights of the LSTM model are learned during training; through this gate structure, the LSTM lets information selectively influence the state at each time step of the recurrent network.
The principle of LSTM traffic data prediction for future times is as follows.

The forgetting data of the forget gate at time $t_i$ is obtained according to

$$f^{t_i} = \sigma\left(W_{xf}\,x^{t_i} + W_{hf}\,h^{t_{i-1}} + W_{cf}\,c^{t_{i-1}} + b_f\right)$$

where $x^{t_i}$ is the input parameter at time $t_i$, $h^{t_{i-1}}$ is the output result at time $t_{i-1}$, $c^{t_{i-1}}$ is the state parameter at time $t_{i-1}$, $W_{xf}$, $W_{hf}$ and $W_{cf}$ are the preset weight matrices of $x^{t_i}$, $h^{t_{i-1}}$ and $c^{t_{i-1}}$ respectively, $b_f$ is a preset first offset vector, and $f^{t_i}$ is the forgetting data at time $t_i$;

the increment parameter of the input gate at time $t_i$ is obtained according to

$$d^{t_i} = \sigma\left(W_{xd}\,x^{t_i} + W_{hd}\,h^{t_{i-1}} + W_{cd}\,c^{t_{i-1}} + b_d\right)$$

where $W_{xd}$, $W_{hd}$ and $W_{cd}$ are the preset weight matrices of $x^{t_i}$, $h^{t_{i-1}}$ and $c^{t_{i-1}}$ respectively, $b_d$ is a preset second offset vector, and $d^{t_i}$ is the increment parameter at time $t_i$;

the state parameter at time $t_i$ is obtained according to

$$c^{t_i} = f^{t_i} \odot c^{t_{i-1}} + d^{t_i} \odot \tanh\left(W_{xc}\,x^{t_i} + W_{hc}\,h^{t_{i-1}} + b_c\right)$$

where $W_{xc}$ and $W_{hc}$ are the preset weight matrices of $x^{t_i}$ and $h^{t_{i-1}}$ respectively, $b_c$ is a preset third offset vector, and $c^{t_i}$ is the state parameter at time $t_i$;

the output parameter of the output gate is obtained according to

$$o^{t_i} = \sigma\left(W_{xo}\,x^{t_i} + W_{ho}\,h^{t_{i-1}} + W_{co}\,c^{t_i} + b_o\right)$$

where $W_{xo}$, $W_{ho}$ and $W_{co}$ are the preset weight matrices of $x^{t_i}$, $h^{t_{i-1}}$ and the state parameter $c^{t_i}$ respectively, $b_o$ is a preset fourth offset vector, and $o^{t_i}$ is the output parameter of the output gate at time $t_i$.

The output result at time $t_i$ is obtained according to

$$h^{t_i} = o^{t_i} \odot \tanh\left(c^{t_i}\right)$$

where $h^{t_i}$ is the output result at time $t_i$.

The predicted flow data at time $t_{i+1}$ is then obtained from the output, for example through an output layer:

$$\hat{x}^{t_{i+1}} = W_{hy}\,h^{t_i} + b_y$$

It should be appreciated that a loss function may be used to determine the error of the LSTM model, from which the LSTM is trained by back-propagation to optimize the model parameters such as the weight matrices. In this embodiment, the loss function of the neural network can be defined, for example, as the mean squared error over the $N$ predicted times:

$$L = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{x}^{t_i} - x^{t_i}\right)^2$$
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a prediction flow of a first model according to an embodiment of the disclosure.
As shown in fig. 3, the prediction flow of the first model includes the following steps:
step S131, in the traffic at n consecutive times, obtaining the output of the LSTM at the i-th time, the state parameter of the LSTM at the i-th time, and the predicted traffic at the (i+1)-th time from the traffic data at the i-th time, the output of the LSTM at the (i-1)-th time, and the state parameter of the LSTM at the (i-1)-th time; wherein 1 ≤ i ≤ n;
when i = 1, the output of the LSTM at the (i-1)-th time and the state parameter of the LSTM at the (i-1)-th time are each a random value or 0;
when i = n, the predicted traffic at the (i+1)-th time is the first predicted traffic at the (n+1)-th time in the second time period.
Step S132, obtaining the output of the mth moment LSTM, the state parameter of the mth moment LSTM and the predicted flow of the m +1 th moment according to the predicted flow of the mth moment, the output of the m-1 th moment LSTM and the state parameter of the m-1 th moment LSTM in the second time period; wherein m is more than or equal to n +1 and less than or equal to 2 n.
The prediction process of the first predicted flow data of step S130 will be illustrated below, and it should be understood that the following example is illustrative and not intended to limit the present disclosure.
Assume that the collected data comprises the traffic at n = 3 times (t1, t2, t3) on 29 September 2018, with corresponding actual traffic values x^{t_1}, x^{t_2} and x^{t_3}.
For step S131:
step S131 may be a training process of the first model, and after the training of step S131, the first model may be used to predict the first predicted flow data.
First, the actual traffic x^{t_1} at time t_1, the model output h^{t_0} of the previous moment, and the model state parameter c^{t_0} of the previous moment are input to the LSTM model; it should be noted that at this initial step h^{t_0} and c^{t_0} may each be a random value or 0. The LSTM model then performs the gate operations described above, producing the output h^{t_1}, the state parameter c^{t_1}, and the predicted traffic for time t_2, and the training of the model begins.

Then, the actual traffic x^{t_2} at time t_2, the output h^{t_1} of the previous moment, and the state parameter c^{t_1} of the previous moment are input to the LSTM model, which likewise continues the training and produces h^{t_2}, c^{t_2}, and the predicted traffic for time t_3.

From the above process, in the LSTM model the predicted traffic of the next time can be obtained by inputting the actual traffic of a given time. Assuming the traffic at N times has been collected, the model training stage yields predicted traffic for every time except the first of the N times; these predictions form a sequence of N−1 predicted values, and the corresponding actual traffic values form a sequence of N values.
For step S132:
after the training of the LSTM model is completed, the trained LSTM model may be utilized to make predictions of traffic data for a future time period (e.g., a second time period).
Still taking the above n = 3 times (t1, t2, t3) as an example, the traffic at the 3 corresponding times of 30 September 2018 (i.e., times n+1 to 2n: t4 = 2018/9/30 00:00, t5 = 2018/9/30 08:00, t6 = 2018/9/30 16:00) is predicted from those times and their traffic data; the times t3 and t4 may be consecutive or non-consecutive.
After the LSTM model is trained, the same calculation as in the training process can be performed with the trained model to obtain the predicted traffic at time t4 (that is, m = n+1; with n = 3, m = 4). The prediction over times n+1 to 2n may include:

inputting the actual traffic x^{t_1} at time t_1, which yields the predicted traffic for time t_2, the hidden-layer output h^{t_1}, and the state parameter c^{t_1};

inputting the actual traffic x^{t_2} at time t_2, which yields the predicted traffic for time t_3, the hidden-layer output h^{t_2}, and the state parameter c^{t_2};

inputting the actual traffic x^{t_3} at time t_3, which yields the predicted traffic \hat{x}^{t_4} for time t_4, the hidden-layer output h^{t_3} of time t_3, and the state parameter c^{t_3} of time t_3.

Using the predicted traffic \hat{x}^{t_4} of time t_4, the hidden-layer output h^{t_3} of time t_3, and the state parameter c^{t_3} of time t_3, the prediction of the first predicted flow data may be achieved as follows.

First, the predicted traffic \hat{x}^{t_4} at time t_4 (the (m = n+1)-th time, where n+1 ≤ m ≤ 2n), the output h^{t_3} of the LSTM model at time t_3, and the state parameter c^{t_3} of the model at time t_3 are input to the LSTM model, which performs the same calculation as in step S131. This yields the first predicted flow data \hat{x}^{t_5} of time t_5, the model output h^{t_4} of time t_4, and the model state parameter c^{t_4} of time t_4. It should be noted that during training x^{t_i} denotes the actual traffic at time t_i, while during prediction \hat{x}^{t_i} denotes the predicted traffic at time t_i, i being an integer of 1 or more.

Then, the first predicted flow data \hat{x}^{t_5} of time t_5, the output h^{t_4} of the model at time t_4, and the state parameter c^{t_4} of the model at time t_4 are input to the LSTM model, and the same calculation is performed to obtain the first predicted flow data \hat{x}^{t_6} of time t_6, the model output h^{t_5} of time t_5, and the state parameter c^{t_5} of time t_5.
When flow prediction at other times is needed, the above steps are performed in sequence to obtain the remaining first predicted flow data.
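The autoregressive rollout described above can be sketched as follows. This is a minimal illustration using a single-unit LSTM cell written with the standard library; the weight value `w`, the helper names, and the example flows are our own placeholders, not the trained model of this disclosure — in the scheme described here the weights would come from the training phase (step S131).

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step: returns (predicted flow, hidden output, state parameter)."""
    f = _sigmoid(w * x + w * h_prev)       # forget gate
    i = _sigmoid(w * x + w * h_prev)       # input gate
    g = math.tanh(w * x + w * h_prev)      # candidate state
    o = _sigmoid(w * x + w * h_prev)       # output gate
    c = f * c_prev + i * g                 # new state parameter (cell state)
    h = o * math.tanh(c)                   # new hidden-layer output
    y = w * h                              # predicted flow for the next time
    return y, h, c

def predict_flows(actual_flows, n_ahead, w=0.5):
    """Feed actual flows for times 1..n, then feed each prediction back in
    for times n+1..2n, as in the rollout described above."""
    h = c = 0.0                            # at i = 1 the previous output/state are 0
    y = 0.0
    for x in actual_flows:                 # inputs for times t1..tn
        y, h, c = lstm_step(x, h, c, w)
    preds = []
    for _ in range(n_ahead):               # autoregressive rollout: t4, t5, t6, ...
        preds.append(y)
        y, h, c = lstm_step(y, h, c, w)
    return preds

# n = 3 actual flows for t1..t3 produce the first predicted flows for t4..t6.
first_predicted = predict_flows([1.0, 1.2, 0.9], n_ahead=3)
```

Note the design point the text makes: during training the step input is the actual flow, while during prediction the model's own output is fed back as the input for the next step.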
For step S140:
In one possible embodiment, the second model may be a second-order exponential smoothing (SES) model.
The SES model is built on the second-order exponential smoothing method, which applies exponential smoothing a second time to the first-order smoothed values; combined with first-order smoothing, it establishes a predictive mathematical model, which is then used to determine predicted values.
The SES model may smooth the first predicted flow data output by the LSTM model to obtain a second predicted flow.
First, the first predicted flows at the (n+1)-th to 2n-th times may be sorted by time to obtain a sequence containing the first predicted flows at these n times, where zi represents the first predicted flow at the i-th time in the sequence.
The operational principle of the SES model may include:
obtaining the first smoothing index by the following formula:

Ai = α·zi + (1 − α)·Ai−1

where α is a decimal between 0 and 1, Ai is the first smoothing index at the i-th time in the sequence, and Ai−1 is the first smoothing index at the (i−1)-th time in the sequence; when i = 1, the preceding index Ai−1 is taken as the first value in the sequence or the average of a plurality of values in the sequence;
obtaining the second smoothing index by the following formula:

Bi = α·Ai + (1 − α)·Bi−1

where Bi is the second smoothing index at the i-th time in the sequence and Bi−1 is the second smoothing index at the (i−1)-th time in the sequence; when i = 1, the preceding index Bi−1 is taken as the first value in the sequence or the average of a plurality of values in the sequence;
obtaining the second predicted flow data Zi at the i-th time in the sequence by the following formula:

Zi = Ai − Bi·T

where T is the number of prediction periods.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a flow of obtaining a second predicted traffic according to an embodiment of the present disclosure.
As shown in fig. 4, the step of obtaining the second predicted flow rate may include:
step S141, performing second-order exponential smoothing processing on the first predicted flow at any time from the (n + 1) th time to the 2n th time in the second time period;
step S142, determining second predicted flow data at the moment according to the second-order exponential smoothing processing result of the predicted flow at the moment;
step S143, obtaining the second predicted flow data, which includes the second predicted flows from the (n+1)-th time to the 2n-th time in the second time period.
The operation of the SES model in obtaining the second predicted flow data will be described with reference to a specific example; it should be understood that the following description is exemplary and is not intended to limit the present disclosure.
The predicted first predicted flows may be sorted by time to obtain a sequence, with zi representing the flow at the i-th time in the sequence. For example, taking the flows at times t4, t5, and t6 predicted in step S130, the flows at these 3 times form a new sequence containing the predicted flows at the 3 times, in order z1, z2, z3; then:
second-order exponential smoothing may be performed using the first predicted flow data z1 at the 1st time (i.e., 2018/9/30 00:00):

A1 = α·z1 + (1 − α)·A0

B1 = α·A1 + (1 − α)·B0

where the smoothing coefficient α takes a value in (0, 1), and in the initial phase A0 (and likewise B0) may be the first value z1 in the sequence or the average of all values in the sequence;
the second predicted flow data may be obtained by the following equation:
Z1=A1-B1T;
where T is the number of prediction periods. For example, the three first predicted flows for the 30th (the flows at times t4 to t6) are predicted from the three flows of the 29th (the flows at times t1 to t3); since times t3 and t4 are consecutive, T may be 1. If instead the three flows for the 31st were predicted from the flows at times t1 to t3, T may be 3, and so on.
The second-order exponential smoothing processing on the predicted flow at each subsequent time is the same as the above, and is not described here again.
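The smoothing procedure above can be sketched as follows. The recursions for Ai and Bi are written in the standard second-order (Brown-style) form consistent with the definitions in the text, since the original formula images are not reproduced here; the forecast Zi = Ai − Bi·T follows the text as stated. The function name, the choice A0 = B0 = z1, and the example values are illustrative assumptions.

```python
def second_order_smoothing(z, alpha, T):
    """Return the second predicted flow data Zi for each time i in sequence z."""
    A = B = z[0]                           # initial phase: first value of the sequence
    result = []
    for zi in z:
        A = alpha * zi + (1 - alpha) * A   # first smoothing index Ai
        B = alpha * A + (1 - alpha) * B    # second smoothing index Bi
        result.append(A - B * T)           # Zi = Ai - Bi * T
    return result

# Smoothing three first predicted flows z1..z3 with alpha = 0.4 and T = 1
# (times t3 and t4 consecutive):
Z = second_order_smoothing([10.0, 12.0, 11.0], alpha=0.4, T=1)
```

As in the text, α and the initialization of A0/B0 (first value versus average of the sequence) are tunable choices of the smoothing stage.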
For step S150:
In one possible embodiment, an arithmetic mean of the first predicted flow data and the second predicted flow data may be calculated, and the arithmetic mean used as the predicted flow data.
For example, after obtaining the smoothing results Z1, Z2, Z3 corresponding to z1, z2, z3, the final predicted flow data may be obtained from the predicted flows and their corresponding smoothing results; for instance, the average of z1 and Z1 is the final predicted flow data at the 1st time in the sequence. It should be appreciated that the predicted flow data described above may be flow values.
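The combination in step S150 can be sketched as follows: the final prediction at each time is the arithmetic mean of the LSTM output zi and its smoothed counterpart Zi. The numeric values below are illustrative, not taken from the examples in this disclosure.

```python
z = [10.0, 12.0, 11.0]   # first predicted flow data (LSTM output)
Z = [9.0, 12.4, 10.6]    # second predicted flow data (SES output)

# Arithmetic mean per time gives the final predicted flow data.
final = [(zi + Zi) / 2 for zi, Zi in zip(z, Z)]
```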
It should be noted that the manner of obtaining the predicted flow data in the second time period from the first predicted flow data and the second predicted flow data is only an example of the present disclosure, and the present disclosure is not limited in any way. The predicted flow data during the second time period may be obtained in other manners as desired by those skilled in the art.
The following describes the prediction effect of the flow prediction method according to the present disclosure by way of example.
Traffic data for the 24 hours of a day in a prediction time period (the second time period, comprising a plurality of time periods each 1 hour long) may be predicted; for example, the prediction time period may be March 31, 2018 (e.g., 2018/3/31 0:00 to 2018/3/31 23:00).
Traffic data of a first time period before the prediction time period is acquired by a network traffic collector; the first time period may be from March 17, 2018 to March 30, 2018. Referring to fig. 5, fig. 5 shows a schematic diagram of the traffic data in the first time period according to an embodiment of the present disclosure.
The acquired flow data is preprocessed to obtain preprocessed flow data, and the flow for the 24 hours of March 31, 2018 is predicted using the first model, the second model, and the preprocessed flow data.
Referring to both table 3 and fig. 6, table 3 shows the predicted values and the actual values, and fig. 6 shows a graph illustrating the predicted values and the actual values according to an embodiment of the disclosure.
TABLE 3
As can be seen from table 3 and fig. 6, the predicted flow data differs little from the true values; the prediction is close to the actual flow, and the accuracy is high.
The flow prediction method can accurately predict the flow data in the future time.
Referring to fig. 7, fig. 7 is a block diagram of a flow prediction apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the apparatus includes:
an obtaining module 10, configured to obtain traffic data of a target network in a first time period;
the preprocessing module 20 is connected to the obtaining module 10, and is configured to preprocess the traffic data to obtain preprocessed traffic data;
the first operation module 30 is connected to the preprocessing module 20, and configured to obtain first predicted flow data in a second time period according to the preprocessed flow data and the first model;
the second operation module 40 is connected to the first operation module 30, and configured to obtain second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model;
and a third operation module 50, connected to the second operation module 40, for obtaining the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data.
Referring to fig. 8, fig. 8 is a block diagram of a flow prediction apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the apparatus includes an obtaining module 10, a preprocessing module 20, a first operation module 30, a second operation module 40, and a third operation module 50.
In one possible embodiment, the first model is a long short-term memory network model (LSTM).

In one possible embodiment, the preprocessed traffic data includes the n flows from the 1st time to the n-th time within the first time period.

In one possible embodiment, the first predicted flow of the second time period includes the first predicted flows from the (n+1)-th time to the 2n-th time.
In a possible implementation manner, the first operation module may include:
The training submodule 310 is configured to obtain the output of the i-th time LSTM, the state parameter of the i-th time LSTM, and the predicted flow at the (i+1)-th time according to the flow data at the i-th time, the output of the (i-1)-th time LSTM, and the state parameter of the (i-1)-th time LSTM among the flows at the n consecutive times; wherein i is more than or equal to 1 and less than or equal to n;
when the i is 1, the output of the i-1 th time LSTM and the state parameter of the i-1 th time LSTM are respectively a random value or 0;
when the i is equal to n, the predicted flow rate at the (i + 1) th time is the first predicted flow rate at the (n + 1) th time in the second time period.
The first operation sub-module 320 is connected to the training sub-module 310, and configured to obtain an output of the mth time LSTM, a state parameter of the mth time LSTM, and a predicted flow of the m +1 th time according to the predicted flow of the mth time, the output of the m-1 th time LSTM, and the state parameter of the m-1 th time LSTM in the second time period; wherein m is more than or equal to n +1 and less than or equal to 2 n.
In one possible embodiment, the second model is an SES model.
In a possible implementation manner, the second operation module may include:
the smoothing submodule 410 is configured to perform second-order exponential smoothing processing on the first predicted flow at any time from the (n + 1) th time to the 2n th time within the second time period;
a determining submodule 420, connected to the smoothing submodule 410, for determining the second predicted flow data at the time according to the second-order exponential smoothing result of the predicted flow at the time;
the second operation sub-module 430 is connected to the determining sub-module 420, and is configured to use the second predicted flow data to include a second predicted flow from the (n + 1) th time point to the (2 n) th time point in the second time period.
In a possible implementation manner, the third operation module may include:
an averaging submodule 510 for calculating an arithmetic average of the first predicted flow rate and the second predicted flow rate at any time;
and a third operation sub-module 520, connected to the averaging sub-module 510, for using the arithmetic mean as the predicted flow data at that moment.
The flow prediction device can predict and analyze the flow in a period of time in the future, thereby realizing the active monitoring and intelligent management of the network flow.
For example, an approximate trend of future traffic can be obtained from the predicted result. If a threshold is given and the predicted result exceeds it, an alarm is raised, so that a network manager can check the network state in advance, discover potential attacks and intrusion behaviors, and realize network intrusion detection. In addition, prediction of the traffic data reveals the traffic conditions and trends among networks, enabling more effective network optimization and better routing design, load-balancing design, and the like.
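The threshold-based alarm use described above can be sketched in a few lines: flag the times whose predicted flow exceeds a given threshold so a network manager can check the network state in advance. The function name, the threshold, and the flow values are hypothetical.

```python
def alarm_times(predicted, threshold):
    """Return the indices of times whose predicted flow exceeds the threshold."""
    return [i for i, p in enumerate(predicted) if p > threshold]

# Final predicted flows for three future times, checked against a threshold.
alerts = alarm_times([9.5, 12.2, 10.8], threshold=11.0)
```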
Referring to fig. 9, fig. 9 shows a block diagram of a traffic prediction system 900 according to an embodiment of the present disclosure.
Referring to fig. 9, the system 900 may include a processor 901, a machine-readable storage medium 902 having stored thereon machine-executable instructions. The processor 901 and the machine-readable storage medium 902 may communicate via a system bus 903. Also, the processor 901 performs the traffic prediction method described above by reading machine-executable instructions in the machine-readable storage medium 902 corresponding to the traffic prediction logic.
The machine-readable storage medium 902 referred to herein may be any electronic, magnetic, optical, or other physical storage apparatus that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (8)
1. A method of traffic prediction, the method comprising:
acquiring flow data of a target network in a first time period;
preprocessing the flow data to obtain preprocessed flow data, wherein the preprocessed flow data comprise n flows from 1 st moment to nth moment in a first time period;
acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model, wherein the first predicted flow of the second time period comprises first predicted flow from the (n + 1) th moment to the (2 n) th moment;
acquiring second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model;
acquiring predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data;
the first model is a long-short term memory network model LSTM, and the second model is a second-order exponential smoothing model SES;
the obtaining of the first predicted flow data in the second time period according to the preprocessed flow data and the first model includes:
obtaining the output of the ith time LSTM, the state parameter of the ith time LSTM and the predicted flow of the (i + 1) th time according to the flow data of the ith time, the output of the (i-1) th time LSTM and the state parameter of the (i-1) th time LSTM in the flow of the n continuous times; wherein i is more than or equal to 1 and less than or equal to n;
when the i is 1, the output of the i-1 th time LSTM and the state parameter of the i-1 th time LSTM are respectively a random value or 0;
when the i is equal to n, the predicted flow rate at the (i + 1) th time is the first predicted flow rate at the (n + 1) th time in the second time period.
2. The method of claim 1, wherein obtaining the first predicted flow data within the second time period according to the pre-processed flow data and the first model further comprises:
obtaining the output of the m-th time LSTM, the state parameter of the m-th time LSTM, and the predicted flow of the (m+1)-th time according to the predicted flow of the m-th time, the output of the (m-1)-th time LSTM, and the state parameter of the (m-1)-th time LSTM in the second time period; wherein m is more than or equal to n+1 and less than or equal to 2n.
3. The method of claim 1,
obtaining second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model, wherein the second predicted flow data comprises:
performing second-order exponential smoothing processing on the first predicted flow at any time from the (n + 1) th time to the 2n th time in the second time period;
determining second predicted flow data at the moment according to a second-order exponential smoothing processing result of the predicted flow at the moment;
the second predicted flow data includes a second predicted flow from the (n + 1) th time to the (2 n) th time within the second time period.
4. The method of claim 3, wherein obtaining the predicted flow data for the second time period based on the first predicted flow data and the second predicted flow data comprises:
calculating the arithmetic mean value of the first predicted flow and the second predicted flow at any moment;
and taking the arithmetic mean value as the predicted flow data at the moment.
5. A flow prediction apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the flow data of the target network in a first time period;
the preprocessing module is connected with the acquisition module and used for preprocessing the flow data to acquire preprocessed flow data, and the preprocessed flow data comprise n flows from the 1 st moment to the nth moment in a first time period;
the first operation module is connected to the preprocessing module and used for acquiring first predicted flow data in a second time period according to the preprocessed flow data and the first model, wherein the first predicted flow in the second time period comprises a first predicted flow from the (n + 1) th moment to the (2 n) th moment;
the second operation module is connected to the first operation module and used for acquiring second predicted flow data corresponding to the first predicted flow data according to the first predicted flow data and a second model;
the third operation module is connected to the second operation module and used for acquiring the predicted flow data in the second time period according to the first predicted flow data and the second predicted flow data;
the first model is a long-short term memory network model LSTM, and the second model is a second-order exponential smoothing model SES;
the first operation module includes:
the training submodule is used for obtaining the output of the ith moment LSTM, the state parameter of the ith moment LSTM and the predicted flow of the (i + 1) th moment according to the flow data of the ith moment, the output of the (i-1) th moment LSTM and the state parameter of the (i-1) th moment LSTM in the flow of the n continuous moments; wherein i is more than or equal to 1 and less than or equal to n;
when the i is 1, the output of the i-1 th time LSTM and the state parameter of the i-1 th time LSTM are respectively a random value or 0;
when the i is equal to n, the predicted flow rate at the (i + 1) th time is the first predicted flow rate at the (n + 1) th time in the second time period.
6. The apparatus of claim 5, wherein the first computing module further comprises:
the first operation sub-module is connected with the training sub-module and used for obtaining the output of the mth moment LSTM, the state parameter of the mth moment LSTM and the predicted flow of the m +1 moment according to the predicted flow of the mth moment, the output of the m-1 moment LSTM and the state parameter of the m-1 moment LSTM in the second time period; wherein m is more than or equal to n +1 and less than or equal to 2 n.
7. The apparatus of claim 5,
the second operation module includes:
the smoothing submodule is used for performing second-order exponential smoothing processing on the first predicted flow from any one of the (n + 1) th time to the 2n th time in the second time period;
the determining submodule is connected with the smoothing submodule and used for determining second predicted flow data at the moment according to a second-order exponential smoothing processing result of the predicted flow at the moment;
and the second operation submodule is connected with the determining submodule and is configured to obtain the second predicted flow data, which includes the second predicted flows from the (n+1)-th time to the 2n-th time in the second time period.
8. The apparatus of claim 7, wherein the third computing module comprises:
the average submodule is used for calculating the arithmetic average value of the first predicted flow and the second predicted flow at any moment;
and the third operation submodule is connected with the averaging submodule and is used for taking the arithmetic mean value as the predicted flow data at the moment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811195940.5A CN109120463B (en) | 2018-10-15 | 2018-10-15 | Flow prediction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109120463A CN109120463A (en) | 2019-01-01 |
CN109120463B true CN109120463B (en) | 2022-01-07 |
Family
ID=64854255
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811195940.5A Active CN109120463B (en) | 2018-10-15 | 2018-10-15 | Flow prediction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109120463B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110381524B (en) * | 2019-07-15 | 2022-12-20 | 安徽理工大学 | Bi-LSTM-based large scene mobile flow online prediction method, system and storage medium |
CN110445645A (en) * | 2019-07-26 | 2019-11-12 | 新华三大数据技术有限公司 | Link flow prediction technique and device |
CN110839184B (en) * | 2019-10-15 | 2021-06-15 | 北京邮电大学 | Method and device for adjusting bandwidth of mobile fronthaul optical network based on flow prediction |
CN111327453B (en) * | 2020-01-19 | 2023-04-07 | 国网福建省电力有限公司经济技术研究院 | Communication bandwidth estimation method considering gridding dynamic and static components |
CN112087350B (en) * | 2020-09-17 | 2022-03-18 | 中国工商银行股份有限公司 | Method, device, system and medium for monitoring network access line flow |
CN114726745B (en) * | 2021-01-05 | 2024-05-17 | ***通信有限公司研究院 | Network traffic prediction method, device and computer readable storage medium |
CN113079033B (en) * | 2021-03-08 | 2022-09-27 | 南京苏宁软件技术有限公司 | Flow control method and device, electronic equipment and computer readable medium |
CN115633317B (en) * | 2022-12-21 | 2023-04-07 | 北京金楼世纪科技有限公司 | Message channel configuration method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102781081A (en) * | 2012-07-13 | 2012-11-14 | 浙江工业大学 | Energy-saving transmission for wireless sensor network based on secondary exponential smoothing forecasting |
CN103164742A (en) * | 2013-04-02 | 2013-06-19 | 南京邮电大学 | Server performance prediction method based on particle swarm optimization nerve network |
CN104301895A (en) * | 2014-09-28 | 2015-01-21 | 北京邮电大学 | Double-layer trigger intrusion detection method based on flow prediction |
CN105447594A (en) * | 2015-11-17 | 2016-03-30 | 福州大学 | Electric power system grey load prediction method based on exponential smoothing |
GB2541511A (en) * | 2015-06-23 | 2017-02-22 | Ford Global Tech Llc | Rapid traffic parameter estimation |
CN107086935A (en) * | 2017-06-16 | 2017-08-22 | 重庆邮电大学 | Flow of the people distribution forecasting method based on WIFI AP |
CN107124320A (en) * | 2017-06-30 | 2017-09-01 | 北京金山安全软件有限公司 | Traffic data monitoring method and device and server |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2347824B (en) * | 1999-03-05 | 2004-03-03 | Internat Mobile Satellite Orga | Communication methods and apparatus |
CN101453747B (en) * | 2008-10-31 | 2010-09-08 | ***通信集团北京有限公司 | Telephone traffic prediction method and apparatus |
CN104219691B (en) * | 2013-05-29 | 2018-05-18 | 华为技术有限公司 | A kind of Forecasting Methodology and system of cellular network traffic amount |
CN107171848B (en) * | 2017-05-27 | 2020-07-07 | 华为技术有限公司 | Flow prediction method and device |
CN108062561B (en) * | 2017-12-05 | 2020-01-14 | 华南理工大学 | Short-time data flow prediction method based on long-time and short-time memory network model |
CN108197739B (en) * | 2017-12-29 | 2021-03-16 | 中车工业研究院有限公司 | Urban rail transit passenger flow prediction method |
Non-Patent Citations (1)
Title |
---|
网络流量分析与预测模型研究;郝占军;《中国优秀硕士学位论文全文数据库》;20120430;全文 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||