CN113014414B - Network data prediction method, device and system and server - Google Patents


Publication number: CN113014414B
Authority: CN (China)
Prior art keywords: prediction, task, subtask, subtasks, network data
Legal status: Active (assumption, not a legal conclusion)
Application number: CN201911335634.1A
Other languages: Chinese (zh)
Other versions: CN113014414A
Inventor: 金明浩
Current Assignee: Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee: Beijing Kingsoft Cloud Network Technology Co Ltd
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN201911335634.1A
Publication of CN113014414A
Application granted
Publication of CN113014414B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/147: Network analysis or design for predicting network behaviour
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters


Abstract

The invention provides a method, a device, a system, and a server for predicting network data. The method includes: determining a prediction task of network data to be executed, where the prediction task carries a prediction unit identifier indicating the prediction unit corresponding to the task; dividing the prediction task into at least one subtask according to that prediction unit; and executing the subtasks in a distributed computing manner to obtain a prediction result for each subtask. By splitting a computation-heavy prediction task into many smaller subtasks and executing them in a distributed manner, the invention executes prediction at a finer granularity, returns prediction results more efficiently, disperses the computational load when the prediction workload is large, and improves computing efficiency and performance.

Description

Network data prediction method, device and system and server
Technical Field
The present invention relates to the field of network technologies, and in particular, to a method, an apparatus, a system, and a server for predicting network data.
Background
A CDN (Content Delivery Network) is a common information delivery network, and its network quality directly affects the efficiency and reliability of information delivery. Various network data related to network quality therefore need to be monitored, such as error codes, failure rate, bandwidth, packet loss rate, and packet-sending duration. Through monitoring, network anomalies can be found and handled in time, ensuring the normal operation of the CDN. When monitoring network quality, it is usually necessary to predict the network data to obtain a predicted value, compare the predicted value with the actual value, and determine from the comparison whether a network anomaly has occurred.
In the related art, network data is typically predicted by querying a large amount of historical network data in one batch and performing the corresponding prediction calculation on the queried data to obtain a prediction result. However, a single prediction then takes too long, and when the prediction workload is large the computational pressure is high, wasting a large amount of computing capacity.
Disclosure of Invention
The invention aims to provide a method, a device, a system and a server for predicting network data so as to improve the calculation efficiency and the calculation performance.
In a first aspect, an embodiment of the present invention provides a method for predicting network data, where the method includes: determining a prediction task of network data to be executed, wherein the prediction task carries a prediction unit identifier which is used for indicating a prediction unit corresponding to the prediction task; dividing the prediction task into at least one subtask according to the prediction unit corresponding to the prediction task; and executing at least one subtask in a distributed operation mode to obtain a prediction result of each subtask.
In an optional embodiment, the dividing the prediction task into at least one subtask according to the prediction unit corresponding to the prediction task includes: determining a prediction unit number corresponding to prediction history data for predicting network data; and dividing the prediction task into at least one subtask according to the prediction unit number, wherein the number of the subtasks is the same as the prediction unit number.
In an optional embodiment, the step of executing at least one subtask in a distributed computing manner to obtain a prediction result of each subtask includes: issuing the subtasks to a preset task table, where each issued subtask includes the prediction mode corresponding to that subtask, so that the subtasks in the task table are distributed to preset task execution machines by a preset task scheduler, and each task execution machine executes its subtasks according to their prediction modes to obtain the prediction results; wherein each task execution machine executes at least one subtask.
In an optional embodiment, the prediction mode of a subtask includes: the data address of the historical data required by the subtask for prediction, the data requirement of that historical data, the prediction algorithm corresponding to the subtask, and the storage address for the subtask's prediction result.
In an optional embodiment, the step of determining the prediction task of the network data to be executed includes: receiving a prediction task execution instruction issued by a user, and determining a prediction task of network data to be executed according to the prediction task execution instruction; or when the preset task execution time is reached, determining the predicted task of the network data to be executed.
In a second aspect, an embodiment of the present invention provides a method for predicting network data, where the method includes: acquiring at least one subtask of a prediction task of network data to be executed, wherein a preset server has divided the prediction task into the at least one subtask according to the prediction unit of the prediction task; and distributing the at least one subtask to preset task execution machines so that the subtasks are executed by the task execution machines to obtain their prediction results; wherein each task execution machine executes at least one subtask.
In an optional embodiment, the step of acquiring at least one subtask of the prediction task of network data to be executed includes: cyclically querying a preset task table to determine whether a subtask to be distributed exists in the task table, where a subtask to be distributed includes the prediction mode corresponding to that subtask. The step of distributing at least one subtask to a preset task execution machine includes: if such a subtask exists, determining the task execution machine that is to execute it according to a preset task distribution rule, and distributing the subtask to the determined task execution machine so that the machine executes it according to its prediction mode.
In a third aspect, an embodiment of the present invention provides a method for predicting network data, where the method includes: receiving a subtask of a prediction task of network data to be executed, distributed by a preset task scheduler, wherein a preset server has divided the prediction task into at least one subtask according to the prediction unit of the prediction task; and executing the subtask to obtain its prediction result.
In an optional embodiment, the subtasks also carry prediction modes corresponding to the subtasks; the step of executing the subtask to obtain the prediction result of the subtask includes: and executing the subtask according to the prediction mode of the subtask to obtain the prediction result of the subtask.
In an optional embodiment, the prediction mode of a subtask includes: the data address of the historical data required by the subtask for prediction, the data requirement of that historical data, the prediction algorithm corresponding to the subtask, and the storage address for the subtask's prediction result. The step of executing the subtask according to its prediction mode to obtain the prediction result includes: acquiring the required historical data from the data address, based on the data address and the data requirement; performing prediction processing on the historical data according to the prediction algorithm corresponding to the subtask to obtain a prediction result; and sending the prediction result to the storage address.
In a fourth aspect, an embodiment of the present invention provides an apparatus for predicting network data, where the apparatus includes: the task determination module is used for determining a prediction task of the network data to be executed, wherein the prediction task carries a prediction unit identifier which is used for indicating a prediction unit corresponding to the prediction task; the task dividing module is used for dividing the predicted task into at least one subtask according to the prediction unit corresponding to the predicted task; and the task execution module is used for executing at least one subtask in a distributed operation mode so as to obtain the prediction result of each subtask.
In a fifth aspect, an embodiment of the present invention provides an apparatus for predicting network data, where the apparatus includes: a task obtaining module, configured to obtain at least one subtask of a prediction task of network data to be executed, wherein a preset server has divided the prediction task into the at least one subtask according to the prediction unit of the prediction task; and a task distribution module, configured to distribute the at least one subtask to preset task execution machines so that the subtasks are executed by the task execution machines to obtain their prediction results; wherein each task execution machine executes at least one of the subtasks.
In a sixth aspect, an embodiment of the present invention provides an apparatus for predicting network data, where the apparatus includes: a task receiving module, configured to receive a subtask of a prediction task of network data to be executed, distributed by a preset task scheduler, wherein a preset server has divided the prediction task into at least one subtask according to the prediction unit of the prediction task; and a prediction module, configured to execute the subtask to obtain its prediction result.
In a seventh aspect, an embodiment of the present invention provides a system for predicting network data, where the system includes a task partitioning device, a task scheduler, and a task execution machine; the task dividing device is used for determining a prediction task of the network data to be executed and dividing the prediction task into at least one subtask according to a prediction unit corresponding to the prediction task; the prediction task carries a prediction unit identifier, and the prediction unit identifier is used for indicating a prediction unit corresponding to the prediction task; the task scheduling machine is used for acquiring at least one subtask of the predicted task and distributing the at least one subtask to a preset task execution machine; the task execution machine is used for executing the subtasks to obtain the prediction results of the subtasks; wherein each task execution machine executes at least one of the subtasks.
In an eighth aspect, an embodiment of the present invention provides a server, where the server includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the above-mentioned prediction method for network data.
In a ninth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to perform the above-described methods of predicting network data.
The embodiment of the invention has the following beneficial effects:
the embodiments of the invention provide a method, a device, a system, and a server for predicting network data. First, a prediction task of network data to be executed is determined; the prediction task carries a prediction unit identifier indicating the prediction unit corresponding to the task. The prediction task is then divided into at least one subtask according to that prediction unit, and the subtasks are executed in a distributed computing manner to obtain a prediction result for each subtask. By splitting a computation-heavy prediction task into many smaller subtasks and executing them in a distributed manner, the invention executes prediction at a finer granularity, returns prediction results more efficiently, disperses the computational load when the prediction workload is large, and improves computing efficiency and performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for predicting network data according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for predicting network data according to an embodiment of the present invention;
fig. 3 is a flowchart of another method for predicting network data according to an embodiment of the present invention;
fig. 4 is a flowchart of another method for predicting network data according to an embodiment of the present invention;
fig. 5 is a flowchart of another method for predicting network data according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a device for predicting network data according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another network data prediction apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another network data prediction apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a system for predicting network data according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
CDN (Content Delivery Network) technology is a commonly used network information distribution technology, and in this kind of service, network health quality is an important concern. When monitoring network quality, network indexes of various dimensions are therefore often monitored, and an alarm is raised for an abnormal index. In practice, however, the network indexes fluctuate greatly, and they may differ across dimensions, across time points, and across regions.
Based on this, the network data needs to be predicted to some degree. In the related art, network data is typically predicted by querying a large amount of historical network data in one batch and performing the corresponding prediction calculation on the queried data. However, the data volume computed at one time is then too concentrated: if the granularity of the network indexes is very fine, a single calculation may need to produce predictions for tens or even hundreds of thousands of indexes. A single prediction therefore takes too long, the computational pressure is high when the prediction workload is large, a great deal of computing capacity is wasted, and the wait for prediction results is long, making it inconvenient to obtain them in time.
In view of the above description, embodiments of the present invention provide a method, an apparatus, a system, and a server for predicting network data, where the technology may be applied to various prediction scenarios of network data, especially a prediction scenario of a CDN network. To facilitate understanding of the embodiment, a method for predicting network data disclosed in the embodiment of the present invention is first described in detail, and as shown in fig. 1, the method includes the following steps:
step S102, a prediction task of network data to be executed is determined, the prediction task carries a prediction unit identifier, and the prediction unit identifier is used for indicating a prediction unit corresponding to the prediction task.
In this embodiment, the network data may be a network parameter used to indicate network quality, such as bandwidth, error codes, failure rate, packet loss rate, or packet-sending duration.
In this step, the prediction task to be executed may be determined when a prediction task execution instruction issued by a user is received, or when a preset task execution time is reached. The prediction task carries a prediction unit identifier, where the prediction unit refers to the prediction granularity of the network data and may be the minimum prediction granularity. For example, if the prediction task is to predict bandwidth and the prediction unit is the domain name, the bandwidth corresponding to each domain name needs to be predicted; if instead the prediction unit is the node, the bandwidth corresponding to each node needs to be predicted. The prediction unit of a prediction task may be configured in advance by the user, after which the prediction task carries the identifier indicating that prediction unit.
A prediction task generally predicts network data forward in time, that is, it predicts the network data at the current time from the network data at historical times, or predicts the network data at a future time from the network data at historical times together with the network data at the current time.
And step S104, dividing the prediction task into at least one subtask according to the prediction unit corresponding to the prediction task.
In this step, the prediction task is divided into a plurality of subtasks according to the prediction unit. Generally, the prediction units corresponding to the prediction tasks are different, and the division modes and the division numbers are also different, for example, when the prediction units are nodes, the prediction tasks are generally divided according to the number of the nodes corresponding to the network data, that is, each node corresponds to one subtask; when the prediction unit is a domain name, the prediction task is usually divided according to the number of domain names of the network data, that is, each domain name corresponds to one subtask.
In the embodiment of the application, historical network data used for predicting the network data can be placed in a preset database, and then the quantity of prediction units corresponding to the historical data used for predicting the network data can be known by querying the preset database. For example, if the prediction task is to predict bandwidth, the prediction unit is a domain name, and bandwidth data corresponding to 500 domain names is stored in the database, the bandwidth prediction task is split into 500 subtasks, each domain name corresponds to one subtask, and then bandwidth data corresponding to each domain name is predicted respectively.
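The splitting described in this step can be sketched as follows. This is an illustrative sketch only: the `Subtask` record, its field names, and the task identifier are assumptions for illustration, not the patent's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    task_id: str   # identifier of the parent prediction task (hypothetical)
    unit: str      # one prediction-unit value, e.g. a single domain name
    metric: str    # the network metric to predict, e.g. "bandwidth"

def split_prediction_task(task_id, metric, units):
    """Split one prediction task into one subtask per prediction unit."""
    return [Subtask(task_id, unit, metric) for unit in units]

# 500 domain names found in the database -> 500 subtasks, one per domain
domains = [f"example{i}.com" for i in range(500)]
subtasks = split_prediction_task("bw-predict-001", "bandwidth", domains)
# len(subtasks) == 500
```

Each resulting subtask is self-describing, so it can later be stored in a task table and scheduled independently.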
During specific implementation, the divided subtasks can be stored in the task table so as to uniformly manage all subtasks and facilitate the calling of subsequent subtasks.
And step S106, executing at least one subtask in a distributed operation mode to obtain a prediction result of each subtask.
A distributed computing approach divides a problem that would otherwise require enormous computing power into many small tasks and executes those tasks in parallel across multiple machines, which saves overall computing time and greatly improves computing efficiency.
In this step, the subtasks may be sent to the distributed computing system, and the distributed computing system schedules and executes the subtasks.
Specifically, the subtasks may be distributed to multiple task execution machines for processing, each task execution machine executing at least one subtask. A task execution machine may be a virtual device, a server, or a process in the system. In a specific implementation, if the number of subtasks is small, all subtasks can be executed by a single task execution machine; if the number of subtasks is large, they can be distributed across multiple task execution machines, and as the number of subtasks keeps growing, the number of task execution machines can be increased to raise execution speed. For example, with 500 subtasks and 10 task execution machines, each machine executes 50 subtasks; if the number of subtasks grows to 600, two more task execution machines may be added.
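The distribution across execution machines can be sketched as a simple round-robin assignment. The machine names and the round-robin policy are assumptions for illustration; the patent does not fix a particular distribution rule at this point.

```python
def distribute(subtasks, machines):
    """Assign subtasks to machines round-robin, so per-machine load differs by at most one."""
    assignment = {machine: [] for machine in machines}
    for i, subtask in enumerate(subtasks):
        assignment[machines[i % len(machines)]].append(subtask)
    return assignment

machines = [f"executor-{i}" for i in range(10)]
plan = distribute(list(range(500)), machines)
# each of the 10 machines receives 50 of the 500 subtasks
```

Adding machines to the `machines` list is all that is needed to spread the same subtasks over more workers.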
After the task execution machine processes the subtasks, a prediction result corresponding to each subtask can be obtained, the prediction result includes a prediction value of the network data corresponding to the subtask, the prediction value of the network data can be further compared with an actual value, and whether the network corresponding to the subtask is abnormal or not can be determined according to the comparison result.
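The comparison between predicted and actual values can be sketched with a relative-deviation threshold. The 20% tolerance below is an assumed example; the patent does not specify the comparison rule.

```python
def is_anomalous(predicted, actual, tolerance=0.2):
    """Flag an anomaly when the actual value deviates from the prediction
    by more than `tolerance` (relative deviation); the tolerance is assumed."""
    if predicted == 0:
        return actual != 0
    return abs(actual - predicted) / abs(predicted) > tolerance

is_anomalous(100.0, 105.0)  # 5% deviation: within tolerance, no anomaly
is_anomalous(100.0, 150.0)  # 50% deviation: anomaly
```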
In the above network data prediction method, a prediction task of network data to be executed is first determined, the prediction task carrying a prediction unit identifier that indicates the prediction unit corresponding to the task; the prediction task is divided into at least one subtask according to that prediction unit; and the subtasks are then executed in a distributed computing manner to obtain a prediction result for each subtask. By splitting a computation-heavy prediction task into many smaller subtasks and executing each subtask in a distributed manner, the method executes prediction at a finer granularity, returns prediction results more efficiently, disperses the computational load when the prediction workload is large, and improves computing efficiency and performance.
The embodiment of the invention also provides another network data prediction method, which is realized on the basis of the method in the embodiment; the method mainly describes a specific process (realized by the following steps S204-S206) of dividing a prediction task into at least one subtask according to a prediction unit corresponding to the prediction task, and a specific process (realized by the following step S208) of executing at least one subtask in a distributed operation mode and acquiring a prediction result of each subtask; as shown in fig. 2, the method comprises the steps of:
step S202, determining a prediction task of network data to be executed, where the prediction task carries a prediction unit identifier, and the prediction unit identifier is used to indicate a prediction unit corresponding to the prediction task.
In a specific implementation, step S202 may be implemented in either of the following two manners:
In the first manner, a prediction task execution instruction issued by a user is received, and the prediction task of network data to be executed is determined according to that instruction. The user can issue the instruction on demand; when it is received, the prediction task is determined immediately. A prediction task in this manner may be called a temporary task.
In the second manner, when a preset task execution time is reached, the prediction task of the network data to be executed is determined. The preset task execution time may be a specific time point (for example, 13:00) or a periodic schedule, so that the prediction task is executed on schedule; a prediction task in this manner may be called a timed task.
In step S204, the number of prediction units corresponding to the prediction history data for predicting the network data is determined.
A preset database usually stores the historical data for prediction, which generally means the historical data of the network data corresponding to the prediction task. The number of prediction units can be determined from this historical data: the prediction units present in the historical data for prediction are queried from the database, and the queried count is the number of prediction units. For example, when the network data is bandwidth, the prediction task is to predict bandwidth, and the prediction unit is the domain name, the historical bandwidth data is taken as the historical data for prediction, the domain names in the historical bandwidth data are looked up in the database, and the count of domain names is taken as the number of prediction units; if there are 100 domain names in the historical bandwidth data, the number of prediction units is 100. The prediction task can then be split into 100 subtasks, each domain name corresponding to one subtask, and the bandwidth data corresponding to each domain name is subsequently predicted separately.
Typically, the database may be an ES (Elasticsearch) cluster: a distributed, highly scalable, near-real-time search and data analysis engine that conveniently provides search, analysis, and exploration capabilities over large amounts of data.
Step S206, according to the number of the prediction units, dividing the prediction task into at least one subtask, wherein the number of the subtasks is the same as the number of the prediction units.
In a specific implementation, the prediction task is divided according to the number of prediction units to obtain at least one subtask, the number of subtasks being equal to the number of prediction units. For example, if the prediction unit of the prediction task is the domain name, the domain names in the historical bandwidth data may be queried from the database and their count taken as the number of prediction units, which determines the number of subtasks; that is, the prediction task is divided into one subtask per domain name.
Step S208: the subtasks are issued to a preset task table, each issued subtask including its corresponding prediction mode, so that the subtasks in the task table are distributed to preset task execution machines by a preset task scheduler, and each task execution machine executes its subtasks according to their prediction modes to obtain the prediction results; each task execution machine executes at least one subtask.
The issued subtasks can be stored and managed in the preset task table. The preset task scheduler can distribute the subtasks in the task table to a plurality of preset task execution machines. Typically, the task execution machines used to execute the subtasks of the current prediction task, for example 10 of them, may be pre-selected, so that the task scheduler distributes the subtasks in the task table to those 10 pre-selected machines. In most cases, the number of subtasks is equal to or greater than the number of task execution machines; therefore, to ensure operating efficiency and improve the utilization of the task execution machines, at least one subtask is allocated to each task execution machine.
Usually the task execution machines run in parallel. When the number of subtasks is large or the execution speed needs to be increased, the execution resources (also referred to as computing resources) of the current prediction task can be adjusted by increasing the number of task execution machines so as to improve execution efficiency. For example, if 100,000 subtasks are to be processed by 10 task execution machines, each machine must process 10,000 subtasks; if the number of task execution machines is increased to 20, each machine needs to process only 5,000 subtasks, thereby improving execution efficiency.
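The scaling arithmetic above follows from evenly spreading subtasks across machines. A minimal sketch, assuming a simple round-robin distribution (the distribution mode itself is illustrative, not prescribed by this passage):

```python
def assign_subtasks(subtask_ids, n_machines):
    """Round-robin distribution: machine k receives subtasks k, k+n, k+2n, ..."""
    machines = [[] for _ in range(n_machines)]
    for i, sid in enumerate(subtask_ids):
        machines[i % n_machines].append(sid)
    return machines

# 100,000 subtasks over 10 machines gives 10,000 each;
# doubling to 20 machines halves the per-machine load to 5,000.
ten_machines = assign_subtasks(range(100_000), 10)
twenty_machines = assign_subtasks(range(100_000), 20)
```

Because the per-machine load is the subtask count divided by the machine count, adding execution machines directly raises parallelism and shortens total execution time.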
In a specific implementation, the prediction mode of the subtask includes: the data address of the historical data for prediction required by the subtask, the data requirement of the historical data for prediction required by the subtask, the prediction algorithm corresponding to the subtask, and the storage address of the prediction result of the subtask.
The data address of the prediction history data required for the subtask is usually the data address of the history network data in the database; the data requirements generally refer to data volume, data time granularity and the like; the prediction algorithm is generally a prediction algorithm required by the current prediction task, and a classification algorithm, a regression algorithm, a clustering algorithm and the like can be adopted; the memory address is typically an address in a database holding the prediction results.
The task execution machine executes the subtasks according to the prediction mode, and the specific process of obtaining the prediction result of the subtasks is generally as follows: acquiring historical data for prediction needed by the subtask from the data address based on the data address of the historical data for prediction needed by the subtask and the data requirement of the historical data for prediction needed by the subtask; according to a prediction algorithm corresponding to the subtask, performing prediction processing by using historical data for prediction to obtain a prediction result; and sending the prediction result to the storage address of the prediction result of the subtask.
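The three-step execution process above (fetch the historical data, run the prediction algorithm, store the result) can be sketched as follows. This is a hypothetical sketch: the dict keys of the prediction mode, the in-memory `db` standing in for the database addresses, and the averaging "algorithm" are all illustrative stand-ins:

```python
def execute_subtask(mode, fetch, store):
    """Execute one subtask according to its prediction mode (the four fields
    described above), returning the prediction result."""
    # 1. Acquire the required historical data from its data address.
    history = fetch(mode["data_address"], mode["data_requirement"])
    # 2. Run the prediction algorithm carried by the subtask.
    result = mode["algorithm"](history)
    # 3. Send the result to the storage address of the prediction result.
    store(mode["result_address"], result)
    return result

# Toy in-memory "database" standing in for the real data/storage addresses.
db = {"hist/a.example.com": [100, 110, 120]}
mode = {
    "data_address": "hist/a.example.com",
    "data_requirement": {"points": 3},           # e.g. data amount / time granularity
    "algorithm": lambda xs: sum(xs) / len(xs),   # stand-in for a real predictor
    "result_address": "pred/a.example.com",
}
execute_subtask(mode,
                fetch=lambda addr, req: db[addr][-req["points"]:],
                store=db.__setitem__)
# db["pred/a.example.com"] now holds the prediction 110.0
```

Separating the fetch and store callbacks mirrors the patent's point that the prediction mode carries addresses rather than data, so each execution machine resolves its own inputs and outputs independently.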
According to the storage address of the prediction result, the prediction result of the subtask can be stored to a corresponding position of a database, and the data state stored in the database is updated after the storage of the prediction result, wherein the database can be a distributed search engine ES or a preset prediction database.
The task execution machine can be a spark-executor in the Spark computing module. The Spark computing module is typically a fast, general-purpose computing engine designed for large-scale data processing; Spark provides a memory-resident distributed dataset that, in addition to supporting interactive queries, can optimize iterative workloads. The spark-executor is generally responsible for executing the tasks (corresponding to the above subtasks) that make up the Spark application and returning the execution results to the driver process. The spark-executor usually contains a block manager that provides in-memory storage for data the user program needs to cache, so that a subtask can make full use of the cached data to speed up its computation at runtime.
The Spark computing module generally stores all the prediction algorithms for task execution, and can be deployed either as a single task execution machine or as a multi-machine cluster computing system (equivalent to multiple task execution machines cooperating). When the Spark computing module processes the subtasks of a prediction task, the spark-driver (equivalent to the preset task scheduler) schedules the subtasks and the spark-executors execute them; meanwhile, the Spark computing module can dynamically adjust the cluster computing resources (which comprise a plurality of spark-executors) to optimize computing efficiency.
The network data prediction method comprises the steps of firstly determining a prediction task of network data to be executed, wherein the prediction task carries a prediction unit identifier and is used for indicating a prediction unit corresponding to the prediction task; determining the number of prediction units corresponding to the prediction history data for predicting the network data; dividing the prediction task into at least one subtask according to the number of the prediction units, wherein the number of the subtasks is the same as the number of the prediction units; and then, the subtasks are issued to a preset task table, the issued subtasks include prediction modes corresponding to the subtasks, so that the subtasks in the task table are distributed to a preset task execution machine through a preset task scheduling machine, and the task execution machine executes the subtasks according to the prediction modes to obtain prediction results of the subtasks. According to the method, the prediction tasks corresponding to the large-scale network data can be dispersed into the fine subtasks for calculation, so that the execution result of the prediction tasks can be obtained, the execution efficiency of the prediction tasks can be improved, and the stability and fault tolerance of prediction are improved.
Corresponding to the embodiment of the foregoing network data prediction method, another network data prediction method is further provided in the embodiments of the present invention, as shown in fig. 3, where the method includes the following steps:
step S302, at least one subtask of a prediction task of network data to be executed is obtained; and the preset server divides the prediction task into at least one subtask according to the prediction unit of the prediction task.
In a specific implementation, at least one subtask is obtained from a task table that stores the subtasks of the prediction task of the network data to be executed.
Step S304, distributing the at least one subtask to a preset task execution machine so as to execute the subtask through the task execution machine to obtain a prediction result of the subtask; wherein each task execution machine executes at least one subtask.
When distributing subtasks, at least one subtask is distributed on one task execution machine. Generally, when the number of subtasks is small, only one subtask may be allocated to one task execution machine, and when the number of subtasks is large, multiple subtasks may be allocated to one task execution machine.
The network data prediction method comprises the steps of firstly, obtaining at least one subtask of a prediction task of network data to be executed; and then distributing the at least one subtask to a preset task execution machine so as to execute the subtask through the task execution machine, thereby obtaining a prediction result of the subtask. According to the method, the calculation resources can be dispersed by dispersing the calculation granularity of the prediction task, and meanwhile, the calculation efficiency and the calculation performance are improved.
The embodiment of the invention also provides another network data prediction method, which is realized on the basis of the method shown in FIG. 3; the method mainly describes a specific process of acquiring at least one subtask of a predicted task of network data to be executed (realized by the following step S402), and a specific process of distributing the at least one subtask to a preset task execution machine (realized by the following step S404); as shown in fig. 4, the method includes the steps of:
step S402, circularly inquiring a preset task table to determine whether a subtask to be distributed exists in the task table, wherein the subtask to be distributed comprises a prediction mode corresponding to the subtask.
The preset server divides the prediction task into at least one subtask according to the prediction unit of the prediction task, stores the divided subtasks in the task table, and periodically (corresponding to the cyclic query) queries the task table for subtasks to be distributed, where a subtask to be distributed usually refers to a subtask that has not yet been executed.
And step S404, if the subtask exists, determining a task execution machine for executing the subtask to be distributed according to a preset task distribution rule, and distributing the subtask to be distributed to the determined task execution machine so that the task execution machine executes the subtask according to a prediction mode to obtain a prediction result.
The preset task distribution rule is usually set manually, and typically includes the maximum number of subtasks each task execution machine may execute, the distribution mode, and the like; the distribution mode includes distributing the subtasks to the task execution machines randomly or in sequence. In general, the task execution machines for executing the subtasks of the current prediction task may be pre-selected, for example 10 of them, and the subtasks are then distributed to these machines according to the distribution rule. In most cases, the number of subtasks is equal to or greater than the number of task execution machines; therefore, to ensure operating efficiency and improve the utilization of the task execution machines, at least one subtask is allocated to each task execution machine.
The subtasks to be distributed are searched for cyclically in the task table; if any are found, they are immediately distributed to a plurality of task execution machines according to the preset task distribution rule, and each task execution machine executes the distributed subtasks immediately upon receipt. While the task execution machines are executing subtasks, new subtasks may be added to the task table; these newly added subtasks are distributed to the corresponding task execution machines through the same cyclic query process. When the execution of a subtask is completed, the state of the corresponding subtask in the task table is usually modified, for example, the state of the completed subtask is changed to "processed" or "executed".
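One pass of this cyclic query can be sketched as below. The sketch is hypothetical: the task-table layout and the state value "distributed" (used here so a subtask is not picked up again on the next pass, before it is finally marked "processed" by the execution machine) are illustrative assumptions, not the patented data structures:

```python
def dispatch_pending(task_table, machines):
    """One pass of the cyclic query: hand every pending subtask to a machine
    and mark it so the next pass does not dispatch it again."""
    dispatched = []
    for task in task_table:
        if task["state"] == "pending":
            machine = machines[len(dispatched) % len(machines)]
            machine.append(task["subtask_id"])
            task["state"] = "distributed"   # illustrative; later set to "processed"
            dispatched.append(task["subtask_id"])
    return dispatched

table = [{"subtask_id": i, "state": "pending"} for i in range(5)]
machines = [[], []]
dispatch_pending(table, machines)

# A subtask added while the machines are busy is picked up by the next pass.
table.append({"subtask_id": 5, "state": "pending"})
dispatch_pending(table, machines)
```

Because each pass only looks at pending entries, newly inserted subtasks are naturally folded into the schedule without any special handling.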
The method for predicting the network data comprises the steps of firstly circularly inquiring a preset task table to determine whether subtasks to be distributed exist in the task table or not, wherein the subtasks to be distributed comprise prediction modes corresponding to the subtasks; if yes, determining a task execution machine for executing the subtasks to be distributed according to a preset task distribution rule, and distributing the subtasks to be distributed to the determined task execution machine so that the task execution machine executes the subtasks according to a prediction mode. According to the method, the prediction result can be obtained at the first time by dispersing the calculation granularity of the prediction task, the unified processing is not needed after all tasks are completed, the integrity of data in the prediction process can be ensured, and meanwhile, the calculation resources are dispersed, and the calculation task can not be stopped due to single machine failure.
Corresponding to the embodiment of the foregoing network data prediction method, another network data prediction method is further provided in the embodiments of the present invention, as shown in fig. 5, the method includes the following steps:
step S502, a subtask of a prediction task of network data to be executed and distributed by a preset task scheduler is received; the preset server divides the prediction task into at least one subtask according to the prediction unit of the prediction task.
Step S504, the above subtasks are executed to obtain the prediction result of the subtask.
During specific implementation, the subtasks also carry prediction modes corresponding to the subtasks; the step S504 can be implemented by: and executing the subtasks according to the prediction mode of the subtasks to obtain the prediction result of the subtasks.
The prediction mode of the subtasks comprises the following steps: the data address of the prediction history data required by the subtask, the data requirement of the prediction history data required by the subtask, the prediction algorithm corresponding to the subtask, and the storage address of the prediction result of the subtask.
The step of obtaining the prediction result of the subtask can be implemented by the following steps 10-12:
and step 10, acquiring the historical data for prediction needed by the subtask from the data address based on the data address of the historical data for prediction needed by the subtask and the data requirement of the historical data for prediction needed by the subtask.
Because the subtasks are data with the finest granularity, when each subtask reads data from the data list, the subtasks do not interfere with each other, and therefore the data acquisition efficiency can be improved.
And step 11, according to the prediction algorithm corresponding to the subtasks, performing prediction processing by using historical data for prediction to obtain a prediction result.
In a specific implementation, the historical data for prediction is usually predicted according to the prediction mode carried by the subtask; the prediction result is stored at the corresponding position in the database, the address of that position being the storage address, and the data state stored in the database is updated after the prediction result of the subtask is stored in the database.
And step 12, sending the prediction result to a storage address of the prediction result of the subtask.
The network data prediction method comprises the steps of firstly receiving subtasks of prediction tasks of network data to be executed distributed by a preset task scheduler; the method comprises the steps that a preset server divides a prediction task into at least one subtask according to a prediction unit of the prediction task; and then executing the subtasks to obtain the prediction result of the subtask. According to the method, the prediction result can be obtained at the first time by dispersing the calculation granularity of the prediction task without uniformly processing after all tasks are completed, the integrity of data in the prediction process can be ensured, the calculation resources are dispersed, the calculation task cannot be stopped due to the single machine fault, and the calculation efficiency and the calculation performance are improved.
Corresponding to the foregoing method embodiment, an embodiment of the present invention provides a device for predicting network data, and as shown in fig. 6, the device includes:
the task determining module 60 is configured to determine a prediction task of the network data to be executed, where the prediction task carries a prediction unit identifier, and the prediction unit identifier is used to indicate a prediction unit corresponding to the prediction task.
And the task dividing module 61 is configured to divide the prediction task into at least one sub task according to the prediction unit corresponding to the prediction task.
And the task execution module 62 is configured to execute at least one sub-task in a distributed computing manner to obtain a prediction result of each sub-task.
The network data prediction device firstly determines a prediction task of network data to be executed, wherein the prediction task carries a prediction unit identifier which is used for indicating a prediction unit corresponding to the prediction task; dividing the prediction task into at least one subtask according to the prediction unit corresponding to the prediction task; and then executing at least one subtask in a distributed operation mode to obtain a prediction result of each subtask. The invention divides the prediction task of the network data to be executed with larger computation amount into a plurality of subtasks with smaller computation amount, and executes each subtask by a distributed computation mode, thereby executing the prediction task with finer granularity, returning the prediction result more efficiently, dispersing the calculation pressure of the data under the condition of larger computation amount of the prediction task, and improving the calculation efficiency and the calculation performance.
Further, the task dividing module 61 is configured to: determining a prediction unit number corresponding to prediction history data for predicting network data; and dividing the prediction task into at least one subtask according to the number of the prediction units, wherein the number of the subtasks is the same as the number of the prediction units.
Further, the task execution module 62 is configured to: the method comprises the steps that subtasks are issued to a preset task table, the issued subtasks comprise prediction modes corresponding to the subtasks, the subtasks in the task table are distributed to a preset task execution machine through a preset task scheduling machine, and the task execution machine executes the subtasks according to the prediction modes to obtain prediction results of the subtasks; wherein each task execution machine executes at least one subtask.
The prediction mode of the subtasks comprises the following steps: the data address of the prediction history data required by the subtask, the data requirement of the prediction history data required by the subtask, the prediction algorithm corresponding to the subtask, and the storage address of the prediction result of the subtask.
The task determining module 60 is configured to: receiving a prediction task execution instruction issued by a user, and determining a prediction task of network data to be executed according to the prediction task execution instruction; or when the preset task execution time is reached, determining the predicted task of the network data to be executed.
The implementation principle and the generated technical effect of the prediction apparatus for network data provided by the embodiment of the present invention are the same as those of the foregoing method embodiment, and for brief description, no mention is made in the apparatus embodiment, and reference may be made to the corresponding contents in the foregoing method embodiment.
Corresponding to the foregoing method embodiment, another apparatus for predicting network data is provided in an embodiment of the present invention, and as shown in fig. 7, the apparatus includes:
a task obtaining module 70, configured to obtain at least one subtask of a predicted task of network data to be executed; the preset server divides the prediction task into at least one subtask according to the prediction unit of the prediction task.
The task distributing module 71 is configured to distribute at least one subtask to a preset task execution machine, so that the task execution machine executes the subtask to obtain a prediction result of the subtask; wherein each task execution machine executes at least one subtask.
The network data prediction device firstly obtains at least one subtask of a prediction task of network data to be executed; and then distributing the at least one subtask to a preset task execution machine so as to execute the subtask through the task execution machine to obtain a prediction result of the subtask. The method can disperse computing resources by dispersing the computing granularity of the prediction task, and simultaneously improves the computing efficiency and the computing performance.
Further, the task obtaining module 70 is configured to: circularly inquiring a preset task table to determine whether a subtask to be distributed exists in the task table, wherein the subtask to be distributed comprises a prediction mode corresponding to the subtask; the task distribution module 71 is further configured to: if yes, determining a task execution machine for executing the subtasks to be distributed according to a preset task distribution rule, and distributing the subtasks to be distributed to the determined task execution machine so that the task execution machine executes the subtasks according to a prediction mode.
The implementation principle and the generated technical effect of the prediction apparatus for network data provided by the embodiment of the present invention are the same as those of the foregoing method embodiment, and for brief description, no mention is made in the apparatus embodiment, and reference may be made to the corresponding contents in the foregoing method embodiment.
Corresponding to the above method embodiment, another prediction apparatus for network data is provided in the embodiment of the present invention, as shown in fig. 8, the apparatus includes:
a task receiving module 80, configured to receive a subtask of a predicted task of network data to be executed, which is distributed by a preset task scheduler; the method comprises the steps that a preset server divides a prediction task into at least one subtask according to a prediction unit of the prediction task;
and the prediction module 81 is used for executing the subtasks to obtain the prediction results of the subtasks.
Specifically, the subtasks also carry prediction modes corresponding to the subtasks; the prediction module 81 is further configured to: and executing the subtasks according to the prediction mode of the subtasks to obtain the prediction result of the subtasks.
The prediction mode of the subtasks includes: the data address of the historical data for prediction required by the subtask, the data requirement of the historical data for prediction required by the subtask, the prediction algorithm corresponding to the subtask, and the storage address of the prediction result of the subtask; the prediction module 81 is further configured to: acquiring historical data for prediction needed by the subtask from the data address based on the data address of the historical data for prediction needed by the subtask and the data requirement of the historical data for prediction needed by the subtask; according to a prediction algorithm corresponding to the subtask, performing prediction processing by using historical data for prediction to obtain a prediction result; and sending the prediction result to the storage address of the prediction result of the subtask.
The network data prediction device firstly receives subtasks of prediction tasks of network data to be executed distributed by a preset task scheduler; the method comprises the following steps that a preset server divides a prediction task into at least one subtask according to a prediction unit of the prediction task; and then executing the subtasks to obtain the prediction result of the subtask. According to the method, the prediction result can be obtained at the first time by dispersing the calculation granularity of the prediction task, unified processing after all tasks are completed is not required, the integrity of data in the prediction process can be ensured, meanwhile, the calculation resources are dispersed, the calculation task is not stopped due to single machine failure, and further, the calculation efficiency and the calculation performance are improved.
Corresponding to the above method embodiment, the embodiment of the present invention provides a prediction system of network data, as shown in fig. 9, the system includes a task dividing device 90, a task scheduler 91, and a task execution machine 92;
the task dividing device 90 is configured to determine a prediction task of the network data to be executed, and divide the prediction task into at least one sub-task according to a prediction unit corresponding to the prediction task; the prediction task carries a prediction unit identifier, and the prediction unit identifier is used for indicating a prediction unit corresponding to the prediction task.
The task scheduler 91 is configured to obtain at least one subtask of the predicted task, and distribute the at least one subtask to a preset task execution machine.
The task execution machine 92 is configured to execute the subtasks to obtain the prediction results of the subtasks; wherein each task execution machine executes at least one of the subtasks.
The implementation principle and the generated technical effect of the prediction system of network data provided by the embodiment of the present invention are the same as those of the foregoing method embodiment, and for brief description, no part of the system embodiment is mentioned, and reference may be made to the corresponding contents in the foregoing method embodiment.
An embodiment of the present invention further provides a server, configured to execute the network data prediction method, and as shown in fig. 10, the server includes a processor 101 and a memory 100, where the memory 100 stores machine executable instructions that can be executed by the processor 101, and the processor 101 executes the machine executable instructions to implement the network data prediction method.
Further, the server shown in fig. 10 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a Random Access Memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in FIG. 10, but this does not indicate only one bus or one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The embodiment of the present invention further provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the method for predicting network data, and specific implementation may refer to method embodiments, and is not described herein again.
The method, the apparatus, the system, and the computer program product for predicting network data provided in the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and will not be described herein again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and/or the electronic device described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, these embodiments are merely illustrative and not restrictive, and the scope of the present invention is not limited to them. Any person skilled in the art can, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be construed as falling within it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for predicting network data, the method comprising:
determining a prediction task of network data to be executed, wherein the prediction task carries a prediction unit identifier, and the prediction unit identifier is used for indicating a prediction unit corresponding to the prediction task; the network data comprises network parameters for indicating a network quality; the prediction unit is the minimum prediction granularity;
determining a prediction unit number corresponding to prediction history data for predicting the network data; dividing the prediction task into at least one subtask according to the number of the prediction units, wherein the number of the subtasks is the same as the number of the prediction units;
and executing the at least one subtask in a distributed operation mode to obtain a prediction result of each subtask.
2. The method according to claim 1, wherein the step of performing the at least one subtask through a distributed operation to obtain a predicted result of each subtask includes:
the subtasks are issued to a preset task table, the issued subtasks comprise prediction modes corresponding to the subtasks, the subtasks in the task table are distributed to a preset task execution machine through a preset task scheduling machine, and the task execution machine executes the subtasks according to the prediction modes to obtain prediction results of the subtasks; wherein each task execution machine executes at least one of the subtasks.
3. The method according to claim 2, wherein the prediction mode of a subtask comprises:
the data address of the prediction history data required by the subtask, the data requirement on that history data, the prediction algorithm corresponding to the subtask, and the storage address for the prediction result of the subtask.
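The four-part "prediction mode" of claim 3 can be modeled as a small record published alongside each subtask in the task table of claim 2. The field names and the list-based task table below are assumptions for illustration, not the patent's data layout.

```python
# Illustrative record for a subtask's "prediction mode" (claim 3): where the
# required history data lives, what is required of it, which algorithm to
# run, and where to store the result. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class PredictionMode:
    history_data_address: str      # data address of the required history data
    history_data_requirement: str  # e.g. "last 7 days at 5-minute granularity"
    algorithm: str                 # prediction algorithm for this subtask
    result_storage_address: str    # storage address for the prediction result

# Publishing to the "preset task table" of claim 2, modeled as a plain list:
task_table = []
mode = PredictionMode("hdfs://metrics/unit-1", "last 7 days", "moving_average",
                      "db://forecasts/unit-1")
task_table.append({"subtask": "unit-1", "mode": mode, "state": "pending"})
```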
4. The method according to any one of claims 1 to 3, wherein the step of determining a prediction task of network data to be executed comprises:
receiving a prediction task execution instruction issued by a user, and determining the prediction task of the network data to be executed according to the instruction; or
determining the prediction task of the network data to be executed when a preset task execution time is reached.
5. A method for predicting network data, the method comprising:
acquiring at least one subtask of a prediction task of network data to be executed, wherein a preset server determines the number of prediction units corresponding to the prediction history data used for predicting the network data and divides the prediction task into at least one subtask according to the number of prediction units, the number of subtasks being the same as the number of prediction units; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity; and
distributing the at least one subtask to preset task execution machines, so that each subtask is executed by a task execution machine to obtain the prediction result of that subtask; wherein each task execution machine executes at least one of the subtasks.
6. The method according to claim 5, wherein the step of acquiring at least one subtask of the prediction task of the network data to be executed comprises:
cyclically querying a preset task table to determine whether any subtask to be distributed exists in the task table, wherein each subtask to be distributed comprises its corresponding prediction mode;
and the step of distributing the at least one subtask to a preset task execution machine comprises:
if such a subtask exists, determining, according to a preset task distribution rule, the task execution machine that is to execute the subtask to be distributed, and distributing the subtask to the determined task execution machine, so that the task execution machine executes the subtask according to the prediction mode.
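A minimal sketch of the scheduler pass in claim 6 follows, assuming round-robin assignment as one possible "preset task distribution rule" (the claim does not fix the rule; all names are illustrative).

```python
# Hypothetical scheduler pass (claim 6): query the task table for pending
# subtasks and assign each to a task execution machine round-robin.
import itertools

def schedule_pending(task_table, executors):
    """Assign every pending task-table entry to an executor; mark it dispatched."""
    rr = itertools.cycle(executors)
    assignments = {e: [] for e in executors}
    for entry in task_table:
        if entry.get("state") == "pending":
            machine = next(rr)
            assignments[machine].append(entry)
            entry["state"] = "dispatched"
    return assignments

table = [{"subtask": f"unit-{i}", "state": "pending"} for i in range(3)]
result = schedule_pending(table, ["exec-1", "exec-2"])
assert len(result["exec-1"]) == 2 and len(result["exec-2"]) == 1
```

In a real system this pass would run in a loop (the claim's cyclic query), e.g. on a polling interval, so subtasks published later are still picked up.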
7. A method for predicting network data, the method comprising:
receiving a subtask, distributed by a preset task scheduling machine, of a prediction task of network data to be executed, wherein a preset server determines the number of prediction units corresponding to the prediction history data used for predicting the network data and divides the prediction task into at least one subtask according to the number of prediction units, the number of subtasks being the same as the number of prediction units; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity; and
executing the subtask to obtain a prediction result of the subtask.
8. The method according to claim 7, wherein the subtask further carries the prediction mode corresponding to the subtask;
and the step of executing the subtask to obtain the prediction result of the subtask comprises:
executing the subtask according to its prediction mode to obtain the prediction result of the subtask.
9. The method according to claim 8, wherein
the prediction mode of the subtask comprises: the data address of the prediction history data required by the subtask, the data requirement on that history data, the prediction algorithm corresponding to the subtask, and the storage address for the prediction result of the subtask;
the step of executing the subtask according to the prediction mode of the subtask to obtain the prediction result of the subtask includes:
acquiring the prediction history data needed by the subtask from the data address, based on the data address and the data requirement of that history data;
performing prediction processing on the acquired history data according to the prediction algorithm corresponding to the subtask to obtain the prediction result; and
sending the prediction result to the storage address for the prediction result of the subtask.
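The three execution steps of claim 9 (fetch, predict, store) can be sketched as below, with in-memory dicts standing in for the data address and storage address, and a simple moving average standing in for the unspecified prediction algorithm; everything here is an illustrative assumption.

```python
# Hypothetical execution-machine steps for claim 9: fetch the required
# history data from its data address, run the subtask's prediction
# algorithm on it, and write the result to the mode's storage address.
def execute_subtask(mode, data_store, result_store):
    history = data_store[mode["history_data_address"]]          # step 1: fetch
    window = history[-mode["window"]:]                          # honor the data requirement
    prediction = sum(window) / len(window)                      # step 2: predict (moving average)
    result_store[mode["result_storage_address"]] = prediction   # step 3: store
    return prediction

data_store = {"metrics/unit-1": [1.0, 2.0, 3.0, 4.0]}
result_store = {}
mode = {"history_data_address": "metrics/unit-1", "window": 2,
        "result_storage_address": "forecasts/unit-1"}
assert execute_subtask(mode, data_store, result_store) == 3.5
```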
10. An apparatus for predicting network data, the apparatus comprising:
the task determination module is configured to determine a prediction task of network data to be executed, wherein the prediction task carries a prediction unit identifier used for indicating the prediction unit corresponding to the prediction task; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity;
the task dividing module is configured to determine the number of prediction units corresponding to the prediction history data used for predicting the network data, and to divide the prediction task into at least one subtask according to the number of prediction units, wherein the number of subtasks is the same as the number of prediction units; and
the task execution module is configured to execute the at least one subtask in a distributed operation mode to obtain a prediction result of each subtask.
11. An apparatus for predicting network data, the apparatus comprising:
the task obtaining module is configured to acquire at least one subtask of a prediction task of network data to be executed, wherein a preset server determines the number of prediction units corresponding to the prediction history data used for predicting the network data and divides the prediction task into at least one subtask according to the number of prediction units, the number of subtasks being the same as the number of prediction units; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity; and
the task distribution module is configured to distribute the at least one subtask to preset task execution machines, so that each subtask is executed by a task execution machine to obtain the prediction result of that subtask; wherein each task execution machine executes at least one of the subtasks.
12. An apparatus for predicting network data, the apparatus comprising:
the task receiving module is configured to receive a subtask, distributed by a preset task scheduling machine, of a prediction task of network data to be executed, wherein a preset server determines the number of prediction units corresponding to the prediction history data used for predicting the network data and divides the prediction task into at least one subtask according to the number of prediction units, the number of subtasks being the same as the number of prediction units; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity; and
the prediction module is configured to execute the subtask to obtain the prediction result of the subtask.
13. A system for predicting network data, characterized by comprising a task dividing device, a task scheduling machine and task execution machines; wherein:
the task dividing device is configured to determine a prediction task of network data to be executed, determine the number of prediction units corresponding to the prediction history data used for predicting the network data, and divide the prediction task into at least one subtask according to the number of prediction units, the number of subtasks being the same as the number of prediction units; the prediction task carries a prediction unit identifier used for indicating the prediction unit corresponding to the prediction task; the network data comprises network parameters for indicating network quality; and the prediction unit is the minimum prediction granularity;
the task scheduling machine is configured to acquire at least one subtask of the prediction task and distribute the at least one subtask to preset task execution machines; and
each task execution machine is configured to execute its subtasks to obtain the prediction results of the subtasks; wherein each task execution machine executes at least one of the subtasks.
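The three roles of claim 13 compose end to end as in the sketch below: a task-dividing device creates one subtask per prediction unit, a scheduler distributes them round-robin to two execution machines, and each machine runs a stand-in prediction (averaging the unit's history). All names, the distribution rule, and the algorithm are illustrative assumptions.

```python
# Minimal end-to-end pass through claim 13's three roles; nothing here is
# taken from the patent's actual implementation.
import itertools

units = {"unit-1": [10, 12], "unit-2": [20, 22], "unit-3": [30, 32]}

# Task-dividing device: one subtask per prediction unit.
subtasks = [{"unit": u} for u in units]

# Task scheduling machine: round-robin distribution to two execution machines.
machines = {"exec-1": [], "exec-2": []}
rr = itertools.cycle(machines)
for st in subtasks:
    machines[next(rr)].append(st)

# Task execution machines: run each assigned subtask, collect results.
results = {st["unit"]: sum(units[st["unit"]]) / 2
           for assigned in machines.values() for st in assigned}
assert results["unit-1"] == 11.0
```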
14. A server comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of predicting network data of any one of claims 1 to 9.
15. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of predicting network data of any of claims 1 to 9.
CN201911335634.1A 2019-12-20 2019-12-20 Network data prediction method, device and system and server Active CN113014414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911335634.1A CN113014414B (en) 2019-12-20 2019-12-20 Network data prediction method, device and system and server


Publications (2)

Publication Number Publication Date
CN113014414A CN113014414A (en) 2021-06-22
CN113014414B true CN113014414B (en) 2023-02-24

Family

ID=76383064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911335634.1A Active CN113014414B (en) 2019-12-20 2019-12-20 Network data prediction method, device and system and server

Country Status (1)

Country Link
CN (1) CN113014414B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455518A (en) * 2012-06-04 2013-12-18 中兴通讯股份有限公司 Data processing method and device
CN104239144A (en) * 2014-09-22 2014-12-24 珠海许继芝电网自动化有限公司 Multilevel distributed task processing system
CN105719021A (en) * 2016-01-21 2016-06-29 中国铁路总公司 Railway passenger traffic predicting method and system
CN109347697A (en) * 2018-10-10 2019-02-15 南昌航空大学 Opportunistic network link prediction method, apparatus and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11496413B2 (en) * 2014-12-23 2022-11-08 Telefonaktiebolaget Lm Ericsson (Publ) Allocating cloud computing resources in a cloud computing environment based on user predictability


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
G. Zorn (Network Zen); R. Schott (Deutsche Telekom); Q. Wu et al. RTP Control Protocol (RTCP) Extended Report (XR) Blocks for Summary Statistics Metrics Reporting. IETF RFC 7004, 2013. *

Also Published As

Publication number Publication date
CN113014414A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN110908788B (en) Spark Streaming based data processing method and device, computer equipment and storage medium
CN112162865A (en) Server scheduling method and device and server
US9876703B1 (en) Computing resource testing
US8572621B2 (en) Selection of server for relocation of application program based on largest number of algorithms with identical output using selected server resource criteria
CN110750343B (en) Cluster system timing task scheduling control method and cluster system
CN112256417B (en) Data request processing method and device and computer readable storage medium
CN108762905B (en) Method and device for processing multitask events
CN109189572B (en) Resource estimation method and system, electronic equipment and storage medium
CN107430526B (en) Method and node for scheduling data processing
US9600251B1 (en) Enhancing API service schemes
CN114816709A (en) Task scheduling method, device, server and readable storage medium
CN113014414B (en) Network data prediction method, device and system and server
CN111355751A (en) Task scheduling method and device
CN110955460B (en) Service process starting method and device, electronic equipment and storage medium
CN114595075A (en) Network scheduling asynchronous task execution method based on distributed scheduling
CN110750362A (en) Method and apparatus for analyzing biological information, and storage medium
CN115712572A (en) Task testing method and device, storage medium and electronic device
CN114090268B (en) Container management method and container management system
Brondolin et al. Performance-aware load shedding for monitoring events in container based environments
CN115904729A (en) Method, device, system, equipment and medium for connection allocation
US9052952B1 (en) Adaptive backup model for optimizing backup performance
CN113986510A (en) Resource scheduling method and device and electronic equipment
CN113127289B (en) Resource management method, computer equipment and storage medium based on YARN cluster
CN114356713A (en) Thread pool monitoring method and device, electronic equipment and storage medium
CN111782688B (en) Request processing method, device, equipment and storage medium based on big data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant