CN111338921A - System performance prediction method and device, computer equipment and storage medium - Google Patents

System performance prediction method and device, computer equipment and storage medium

Info

Publication number
CN111338921A
CN111338921A (Application CN202010105712.5A)
Authority
CN
China
Prior art keywords
data
system performance
performance prediction
prediction model
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010105712.5A
Other languages
Chinese (zh)
Inventor
罗一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010105712.5A priority Critical patent/CN111338921A/en
Publication of CN111338921A publication Critical patent/CN111338921A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3447Performance evaluation by modeling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3433Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment for load management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a system performance prediction method, which comprises: in response to a system performance prediction instruction, acquiring service data to be processed and current configuration data of the system; obtaining a system performance prediction model, wherein the model is obtained by training at least one preset algorithm on multiple batches of historical parameters, the historical parameters comprising historical service data, historical system configuration data and historical system performance data; and inputting the service data to be processed and the current configuration data of the system into the system performance prediction model so that the model outputs predicted system performance data. The invention also discloses a system performance prediction apparatus, a computer device and a computer-readable storage medium.

Description

System performance prediction method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of data processing, in particular to a system performance prediction method, a system performance prediction device, computer equipment and a computer readable storage medium.
Background
In the daily operation and maintenance of a system, monitoring system performance is a very important task. For example, when the system processes data, it generates performance information such as system response time, system throughput and system resource utilization, and staff can monitor the system's performance by collecting this information.
In practice, when a system must process a very large volume of data, the high load of the processing task may crash the system and severely slow the processing progress. In the prior art, the cause of a fault can only be analyzed from the system's performance after the system has already failed; the failure cannot be avoided in advance.
Disclosure of Invention
The present invention is directed to a system performance prediction method, apparatus, computer device and computer-readable storage medium that can remedy the above drawbacks of the prior art.
One aspect of the present invention provides a system performance prediction method, including: in response to a system performance prediction instruction, acquiring service data to be processed and current configuration data of the system; obtaining a system performance prediction model, wherein the model is obtained by training at least one preset algorithm on multiple batches of historical parameters, the historical parameters comprising historical service data, historical system configuration data and historical system performance data; and inputting the service data to be processed and the current configuration data of the system into the system performance prediction model so that the model outputs predicted system performance data.
Optionally, the method further comprises: acquiring expected system performance data; determining a difference between the predicted system performance data and the expected system performance data; judging whether the difference is within a preset allowable range; and if the difference is within the preset allowable range, processing the service data to be processed by using the current configuration data of the system.
Optionally, the method further comprises: if the difference is not within the preset allowable range, removing data of a first data amount from the to-be-processed service data by using the predicted system performance data and the expected system performance data to obtain reduced service data; and inputting the cut-down service data and the current configuration data of the system into the system performance prediction model so that the system performance prediction model continuously outputs newly predicted system performance data.
Optionally, the predicted system performance data includes a plurality of types of first elements, the expected system performance data includes a plurality of types of second elements, and the step of removing data of a first data amount from the to-be-processed service data using the predicted system performance data and the expected system performance data includes: for a first element and a second element of the same type, calculating a ratio of the first element to the second element; determining the data volume of the service data to be processed; calculating the first data amount by using the ratio and the data volume of the service data to be processed; and removing the data of the first data amount from the service data to be processed.
Optionally, the step of obtaining a system performance prediction model comprises: determining an optimal model from a plurality of preliminary performance prediction models as the system performance prediction model; the performance prediction models are obtained by training the following steps: acquiring a plurality of batches of the historical parameters; and training each preset algorithm in the preset algorithms by using a plurality of batches of the historical parameters to obtain a plurality of preliminary performance prediction models, wherein each preset algorithm corresponds to one preliminary performance prediction model.
Optionally, the step of determining an optimal model from the plurality of preliminary performance prediction models as the system performance prediction model comprises: calculating a loss function of each of the plurality of preliminary performance prediction models to obtain a plurality of loss functions; determining a minimum loss function from a plurality of said loss functions; and taking a preliminary performance prediction model corresponding to the minimum loss function in the plurality of preliminary performance prediction models as the system performance prediction model.
Another aspect of the present invention provides a system performance prediction apparatus, comprising: a first obtaining module, configured to acquire to-be-processed service data and current system configuration data in response to a system performance prediction instruction; a second obtaining module, configured to obtain a system performance prediction model, wherein the model is obtained by training at least one preset algorithm on multiple batches of historical parameters, the historical parameters comprising historical service data, historical system configuration data and historical system performance data; and a prediction module, configured to input the service data to be processed and the current configuration data of the system into the system performance prediction model so that the model outputs predicted system performance data.
Optionally, the apparatus further comprises: a third obtaining module, configured to obtain expected system performance data; a determination module to determine a difference between the predicted system performance data and the expected system performance data; the judging module is used for judging whether the difference is within a preset allowable range; and the processing module is used for processing the service data to be processed by using the current configuration data of the system when the difference is within the preset allowable range.
Yet another aspect of the present invention provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the system performance prediction method according to any of the embodiments above.
Yet another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a system performance prediction method as described in any of the embodiments above.
Unlike the prior art, the system performance prediction method provided by the invention does not process the to-be-processed service data directly. Instead, before the service data is processed, a pre-trained system performance prediction model predicts the performance data of the system when the service data is processed with the system's current configuration data. A worker can then analyze, from the predicted system performance data, whether the system is likely to suffer faults such as paralysis, downtime or processing delay while processing the service data, and take measures in advance. This reduces the probability of a fault when the service data is processed, improves the processing speed and, to a certain extent, accelerates the project schedule.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 schematically illustrates a flow diagram of a system performance prediction method according to an embodiment of the invention;
FIG. 2 schematically shows a block diagram of a system performance prediction apparatus according to an embodiment of the present invention;
FIG. 3 schematically illustrates a block diagram of a computer device suitable for implementing a method of system performance prediction, in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Unlike the prior art, the system performance prediction method provided by the invention does not process the to-be-processed service data directly. Instead, before the service data is processed, a pre-trained system performance prediction model predicts the performance data of the system when the service data is processed with the system's current configuration data. A worker can then analyze, from the predicted system performance data, whether the system is likely to suffer faults such as paralysis, downtime or processing delay while processing the service data, and take measures in advance. This reduces the probability of a fault when the service data is processed, improves the processing speed and, to a certain extent, accelerates the project schedule.
Fig. 1 schematically shows a flow chart of a system performance prediction method according to an embodiment of the invention. As shown in fig. 1, the system performance prediction method may include steps S1 to S3, wherein:
step S1, in response to the system performance prediction instruction, obtaining the service data to be processed and the current configuration data of the system.
In this embodiment, before the to-be-processed service data is handled, the system's performance data generally needs to be predicted in advance to safeguard the processing progress. After a user triggers a system performance prediction instruction, the system receives the instruction, reads the tag carried in it, determines the current to-be-processed service data according to that tag, and obtains the system's configuration data at the current moment.
The to-be-processed service data may be any service data, such as web page access data, user billing data, or user purchase record, and the to-be-processed service data may include a data volume of the to-be-processed service data and a type of the to-be-processed service data. The current configuration data of the system may be current basic configuration data of the system, such as CPU load, CPU usage, memory load, memory usage, disk load and/or disk usage, etc.
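As an illustration only, the inputs gathered in step S1 could be represented as simple records; the field names and values below are assumptions for demonstration, not defined by the patent:

```python
# Sketch: one way to represent the inputs of step S1. Field names and
# values are illustrative assumptions.
pending_service_data = {
    "data_volume_gb": 10,          # amount of data awaiting processing
    "data_type": "web_access",     # e.g. web page access data
}
current_config_data = {
    "cpu_load": 0.65, "cpu_usage": 0.70,
    "memory_load": 0.55, "memory_usage": 0.60,
    "disk_load": 0.40, "disk_usage": 0.45,
}

# Concatenate selected fields into the feature vector fed to the model
feature_vector = [pending_service_data["data_volume_gb"],
                  current_config_data["cpu_load"],
                  current_config_data["memory_load"],
                  current_config_data["disk_load"]]
```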
Step S2, obtaining a system performance prediction model, wherein the system performance prediction model is obtained by training at least one preset algorithm according to a plurality of batches of historical parameters, and the historical parameters comprise historical service data, historical system configuration data and historical system performance data.
In this embodiment, the system performance prediction model may be pre-trained and can predict the system's performance data. Specifically, when the at least one preset algorithm consists of a single preset algorithm, the system performance prediction model can be obtained by directly training that algorithm with multiple batches of historical parameters. When the at least one preset algorithm includes a plurality of preset algorithms, step S2 may include: determining an optimal model from a plurality of preliminary performance prediction models as the system performance prediction model, wherein the preliminary performance prediction models are obtained through the following steps: acquiring multiple batches of the historical parameters; and training each of the preset algorithms with the multiple batches of historical parameters to obtain a plurality of preliminary performance prediction models, each preset algorithm corresponding to one preliminary performance prediction model.
In this embodiment, the historical business data may also be any data, such as web page access data, user billing data, or user purchase records. The historical traffic data may also include a data volume of the historical traffic data and/or a type of the historical traffic data. The historical configuration data of the system may be basic configuration data of the system when processing historical business data, such as CPU load, CPU usage, memory load, memory usage, disk load and/or disk usage, and the like. The system historical performance data may be real performance data of the system when historical service data is processed by using the system historical configuration data, such as log information, which may include system response time, system throughput, system resource utilization rate, and the like.
In this embodiment, for each batch of historical parameters among the multiple batches, the historical service data and the historical system configuration data are used as x values, the historical system performance data are used as y values, and the training target is set to the predicted system performance data. On this basis, the multiple batches of historical parameters are used as a training set to train each preset algorithm in turn, so that each preset algorithm automatically learns the relationship among historical business data, historical system configuration data and historical system performance data, yielding a preliminary performance prediction model for each preset algorithm; each preliminary performance prediction model can predict the system's performance data. The preset algorithm may be a Logistic Regression (LR) algorithm, a Random Forest (RF) algorithm, an XGBoost (eXtreme Gradient Boosting) algorithm, or a Support Vector Machine (SVM) algorithm; this embodiment does not limit the specific type of preset algorithm.
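As a minimal sketch of this training step (not the patent's implementation), several preset algorithms could be trained on batches of historical parameters as follows. The feature layout, the synthetic data and the choice of scikit-learn regressors are illustrative assumptions; regression variants are used here because the target (e.g. response time) is numeric:

```python
# Sketch: train one preliminary performance prediction model per preset
# algorithm. Feature/target layout is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# x values: [data volume, CPU load, memory load, disk load] per batch
X_hist = rng.random((200, 4))
# y values: system response time observed for each historical batch
y_hist = 10 * X_hist[:, 0] + 5 * X_hist[:, 1] + rng.normal(0, 0.1, 200)

preset_algorithms = {
    "linear_regression": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "svm": SVR(),
}
# One preliminary performance prediction model per preset algorithm
preliminary_models = {
    name: algo.fit(X_hist, y_hist) for name, algo in preset_algorithms.items()
}
```

Each fitted model can then predict performance data for new (service data, configuration) feature vectors.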
It should be noted that, in this embodiment, each preset algorithm has a self-learning process: guided by the set training target, the algorithm automatically learns from the training set which parameters in the historical service data and historical system configuration data contribute more strongly to the result and which contribute less. Accordingly, when the trained preliminary performance prediction model is used, at least the parameters with higher contribution degrees are input into it to ensure the accuracy of the output; alternatively, all parameters can be input directly, and the model automatically ranks their contribution degrees so as to predict the system performance data more accurately. Naturally, the more high-contribution parameters are input, the more accurate the output of the preliminary performance prediction model.
For example, a training set includes r groups of historical parameters, each group including: the data volume of the historical business data, the type of the historical business data, the CPU load, the CPU usage rate, the memory load, the memory usage rate, the disk load, the disk usage rate, and the system response time. The training set may then be {(data volume 1, type 1, CPU load 1, CPU usage 1, memory load 1, memory usage 1, disk load 1, disk usage 1, system response time 1), (data volume 2, type 2, CPU load 2, CPU usage 2, memory load 2, memory usage 2, disk load 2, disk usage 2, system response time 2), …, (data volume r, type r, CPU load r, CPU usage r, memory load r, memory usage r, disk load r, disk usage r, system response time r)}. After the preset algorithm is trained on this training set, it self-learns according to the set training target and may output the following contribution ranking: z1 data volume of historical business data, z2 CPU load, z3 memory load, z4 type of historical business data, z5 CPU usage, z6 memory usage, z7 disk load and z8 disk usage, where each z value characterizes a contribution degree and z1 + z2 + z3 + z4 + z5 + z6 + z7 + z8 = 1.
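The self-learned contribution degrees summing to 1 resemble the normalized feature importances a tree ensemble exposes. The following sketch uses a random forest on synthetic data as an illustrative assumption; the feature names mirror the example above:

```python
# Sketch: contribution degrees (z1..z8) via random-forest feature
# importances, which are non-negative and sum to 1. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
features = ["data_volume", "cpu_load", "memory_load", "business_type",
            "cpu_usage", "memory_usage", "disk_load", "disk_usage"]
X = rng.random((300, len(features)))
# Response time driven mostly by data volume and CPU load in this toy data
y = 8 * X[:, 0] + 4 * X[:, 1] + rng.normal(0, 0.05, 300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Mirrors z1 + z2 + ... + z8 = 1 from the example above
contributions = dict(zip(features, model.feature_importances_))
```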
According to the embodiment of the disclosure, through training the preset algorithm for multiple times, and selecting the model with the optimal effect from the trained preliminary performance prediction models as the system performance prediction model, the quality of the system performance prediction model is guaranteed, and the prediction accuracy is improved.
Optionally, determining an optimal model from the plurality of preliminary performance prediction models as the system performance prediction model may include steps S21 to S23, wherein:
step S21, calculating a loss function of each of the plurality of preliminary performance prediction models to obtain a plurality of loss functions;
step S22 of determining a minimum loss function from the plurality of loss functions;
step S23, using a preliminary performance prediction model corresponding to the minimum loss function among the plurality of preliminary performance prediction models as the system performance prediction model.
The loss function is used for evaluating the inconsistency degree of the predicted value and the true value of the model, and is a non-negative real value function, and the smaller the loss function is, the better the performance of the model is.
For example, for 4 preliminary performance prediction models, a loss function of each preliminary performance prediction model of the 4 preliminary performance prediction models is calculated to obtain 4 loss functions, then a minimum loss function is found from the 4 loss functions, and the preliminary performance prediction model corresponding to the minimum loss function is used as the system performance prediction model.
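Steps S21 to S23 can be sketched as follows; the model names, the prediction values and the use of mean squared error as the loss function are illustrative assumptions:

```python
# Sketch of steps S21-S23: compute a loss for each preliminary model's
# predictions and keep the model with the smallest loss.
def mse(y_true, y_pred):
    """Mean squared error: a common non-negative loss function."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [20.0, 25.0, 30.0, 35.0]  # real system performance data
# Predictions from 4 preliminary performance prediction models
predictions = {
    "model_a": [21.0, 26.0, 29.0, 36.0],
    "model_b": [25.0, 20.0, 35.0, 30.0],
    "model_c": [20.5, 25.5, 30.5, 34.5],
    "model_d": [18.0, 28.0, 27.0, 38.0],
}

# Step S21: one loss per preliminary model
losses = {name: mse(y_true, preds) for name, preds in predictions.items()}
# Steps S22-S23: the model with the minimum loss becomes the system
# performance prediction model
best_model = min(losses, key=losses.get)
```

Here `model_c`, whose predictions deviate least from the true values, would be selected.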
Step S3, inputting the to-be-processed service data and the current configuration data of the system into the system performance prediction model, so that the system performance prediction model outputs predicted system performance data.
In this embodiment, the service data to be processed and the current configuration data of the system are input into the system performance prediction model, which then outputs predicted system performance data. The predicted system performance data may include: predicted system response time, predicted system throughput, and predicted system resource utilization.
According to the system performance prediction method provided by the invention, the to-be-processed service data is not processed directly as in the prior art. Instead, before the service data is processed, the performance of the system when processing it with the current configuration data is predicted by the pre-trained system performance prediction model. A worker can then analyze, from the predicted system performance data, whether the system is likely to suffer faults such as paralysis, downtime or processing delay while processing the service data, and take measures in advance, reducing the probability of a fault, increasing the processing speed and, to a certain extent, accelerating the project schedule.
Optionally, the system performance prediction method may further include a step a1 to a step a5, where:
step A1, obtaining expected system performance data;
step a2, determining a difference between the predicted system performance data and the expected system performance data;
step A3, judging whether the difference is within a preset allowable range;
step A4, if the difference is within the preset allowable range, processing the service data to be processed by using the current configuration data of the system;
step A5, if the difference is not within the preset allowable range, removing data of a first data amount from the to-be-processed service data by using the predicted system performance data and the expected system performance data to obtain reduced service data; and inputting the cut-down service data and the current configuration data of the system into the system performance prediction model so that the system performance prediction model continuously outputs newly predicted system performance data.
In this embodiment, the expected system performance data may be the performance the user wants the system to achieve when processing the pending service data with the system's current configuration data. When the difference between the predicted and expected system performance data is within the preset allowable range, the user can accept that amount of deviation, and the system processes the service data with its current configuration data. If the difference is not within the preset allowable range, the volume of the pending service data is likely too large, so the system's processing capacity would be insufficient when processing it with the current configuration data. In that case, data of a first data amount is removed from the pending service data to reduce its volume, and the remainder is taken as the cut-down service data. The system performance for processing the cut-down service data with the current configuration data must then be predicted again: the cut-down service data and the current configuration data are input into the system performance prediction model so that it outputs newly predicted system performance data, and whether the difference between the newly predicted and expected system performance data is within the preset allowable range is judged again. These steps repeat until the difference between the predicted and expected system performance data falls within the preset allowable range.
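The predict-compare-trim loop of steps A1 to A5 can be sketched as follows. The linear response-time model and the proportional trimming rule are illustrative assumptions standing in for the trained system performance prediction model:

```python
# Sketch of steps A1-A5 as an iterative loop: predict, compare with the
# expected performance, and trim the pending data volume until the
# difference is within the preset allowable range.
def predict_response_time(data_volume_gb, minutes_per_gb=2.5):
    # Illustrative assumption: response time grows linearly with volume.
    # A real deployment would call the trained prediction model here.
    return data_volume_gb * minutes_per_gb

def trim_until_acceptable(volume_gb, expected_minutes, tolerance_minutes):
    while True:
        predicted = predict_response_time(volume_gb)          # step S3
        if abs(predicted - expected_minutes) <= tolerance_minutes:
            return volume_gb, predicted   # A3/A4: within allowed range
        # A5: cut the volume in proportion to expected/predicted time
        volume_gb *= expected_minutes / predicted

volume, predicted = trim_until_acceptable(10.0, expected_minutes=20.0,
                                          tolerance_minutes=1.0)
```

With these toy numbers, one trimming pass reduces the pending volume from 10 GB to 8 GB, after which the predicted time matches the expected 20 minutes.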
The preset allowable range may be customized by the user according to business requirements. For example, if the pending service data is an enterprise's core business data and its processing time (i.e., the system response time) is strict, a processing speed that is too slow would seriously delay the project schedule; in that case the preset allowable range may be set to a smaller range.
According to the invention, the predicted system performance data is compared with the expected system performance data; when their difference is not within the preset allowable range, the volume of the pending service data is reduced and the system performance data is predicted again with the system performance prediction model. This keeps the probability of system failure low enough for the system to operate normally.
Optionally, the predicted system performance data includes a plurality of types of first elements, the expected system performance data includes a plurality of types of second elements, and the step of removing the first data amount of data from the to-be-processed service data by using the predicted system performance data and the expected system performance data in step a5 may include steps a51 to a54, where:
step A51, for a first element and a second element of the same type, calculating the ratio of the first element to the second element;
step A52, determining the data volume of the service data to be processed;
step a53, calculating the first data volume by using the ratio and the data volume of the service data to be processed;
step a54, removing the data of the first data size from the service data to be processed.
In this embodiment, the predicted system performance data may include a plurality of first elements, such as a predicted system response time, a predicted system throughput and a predicted system resource utilization, and the expected system performance data may include a plurality of second elements, such as an expected system response time, an expected system throughput and an expected system resource utilization. The first data amount can then be calculated from any pair of first and second elements of the same type together with the data volume of the service data to be processed.
For example, for the same type of elements predicted system response time and expected system response time, suppose the predicted system response time is 25 min, the expected system response time is 20 min, and the data volume of the service data to be processed is 10 GB. This indicates the system is predicted to process 10 GB in 25 min; since the ratio of the predicted to the expected system response time is 5:4, the system is predicted to process 8 GB in 20 min, so the first data amount is 10 - 8 = 2 GB. Therefore, 2 GB of data can be removed from the service data to be processed.
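The worked example above can be reproduced with a small helper; the function name and argument names are illustrative assumptions:

```python
# Sketch of steps A51-A54 using the worked example: predicted response
# time 25 min, expected 20 min, pending volume 10 GB.
def first_data_amount(predicted_value, expected_value, pending_volume):
    """Data amount to remove so the workload matches expectations."""
    ratio = expected_value / predicted_value    # A51: 20/25 = 4/5
    processable = pending_volume * ratio        # A53: 10 GB * 4/5 = 8 GB
    return pending_volume - processable         # A54: remove 10 - 8 = 2 GB

removed_gb = first_data_amount(25.0, 20.0, 10.0)
```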
The embodiment of the present invention further provides a system performance prediction apparatus, which corresponds to the system performance prediction method provided in the above embodiment; corresponding technical features and technical effects are not described in detail here, and for relevant points reference may be made to the foregoing method embodiments. In particular, fig. 2 schematically shows a block diagram of a system performance prediction apparatus according to an embodiment of the present invention. As shown in fig. 2, the system performance prediction apparatus 200 may include a first obtaining module 201, a second obtaining module 202, and a prediction module 203, wherein:
a first obtaining module 201, configured to obtain to-be-processed service data and current configuration data of the system in response to a system performance prediction instruction;
a second obtaining module 202, configured to obtain a system performance prediction model, where the system performance prediction model is obtained by training at least one preset algorithm according to multiple batches of historical parameters, and the historical parameters include historical service data, historical system configuration data, and historical system performance data;
the prediction module 203 is configured to input the to-be-processed service data and the current configuration data of the system into the system performance prediction model, so that the system performance prediction model outputs predicted system performance data.
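As a non-limiting sketch of the prediction module, the trained system performance prediction model can be stood in for by a simple linear regression fitted on synthetic historical parameters. All numbers, the feature choices (data volume and CPU cores), and the linear-model form are illustrative assumptions, not the disclosed training method:

```python
import numpy as np

# Synthetic historical parameters: each row is (service data volume in G,
# CPU cores of the system configuration); each target is the observed
# system response time in minutes. Illustrative numbers only.
X_hist = np.array([[2.0, 4.0], [4.0, 4.0], [4.0, 8.0], [8.0, 8.0], [10.0, 4.0]])
y_hist = np.array([5.0, 10.0, 5.0, 15.0, 25.0])

# A least-squares fit stands in for "training a preset algorithm".
A = np.c_[X_hist, np.ones(len(X_hist))]          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

def predict_response_time(data_volume_gb, cpu_cores):
    """Predicted system performance for pending service data + current config."""
    return float(np.dot([data_volume_gb, cpu_cores, 1.0], coef))
```

With these synthetic targets the relation is exactly linear, so, for example, 6 G of pending data on a 4-core configuration is predicted to take about 15 min.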
Optionally, the apparatus may further include: a third obtaining module, configured to obtain expected system performance data; a determination module to determine a difference between the predicted system performance data and the expected system performance data; the judging module is used for judging whether the difference is within a preset allowable range; and the processing module is used for processing the service data to be processed by using the current configuration data of the system when the difference is within the preset allowable range.
Optionally, the apparatus may further include: a removing module, configured to remove data of a first data size from the to-be-processed service data by using the predicted system performance data and the expected system performance data when the difference is not within the preset allowable range, so as to obtain reduced service data; and the input module is used for inputting the reduction service data and the current configuration data of the system into the system performance prediction model so as to enable the system performance prediction model to continuously output newly predicted system performance data.
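The reduce-and-repredict loop described by the removing module and the input module can be sketched as follows. Here `predict` stands in for the trained system performance prediction model, and the tolerance value (the "preset allowable range") is an assumed example:

```python
def fit_within_expectation(predict, data_gb, config, expected_time, tolerance=1.0):
    """Cull data until predicted response time is within tolerance of expected.

    `predict(data_gb, config)` stands in for the trained system performance
    prediction model; all names and the tolerance are illustrative.
    """
    predicted = predict(data_gb, config)
    while predicted - expected_time > tolerance and data_gb > 0:
        # Remove the first data amount (ratio method from the description),
        # then feed the reduced service data back into the model.
        data_gb *= expected_time / predicted
        predicted = predict(data_gb, config)
    return data_gb, predicted

# Toy model: response time proportional to data volume (2.5 min per G).
reduced, t = fit_within_expectation(lambda d, c: 2.5 * d, 10.0, {"cores": 4}, 20.0)
```

Starting from 10 G predicted at 25 min against a 20 min expectation, one pass of the loop trims the workload to 8 G, matching the worked example in the description.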
Optionally, the predicted system performance data includes a plurality of types of first elements, the expected system performance data includes a plurality of types of second elements, and the culling module, when culling the data of the first data amount from the to-be-processed service data using the predicted system performance data and the expected system performance data, is further configured to: for a first element and a second element of the same type, calculate the ratio of the first element to the second element; determine the data amount of the to-be-processed service data; calculate the first data amount by using the ratio and the data amount of the to-be-processed service data; and remove the data of the first data amount from the to-be-processed service data.
Optionally, the second obtaining module is further configured to: determine an optimal model from a plurality of preliminary performance prediction models as the system performance prediction model. The preliminary performance prediction models are obtained through the following training steps: acquiring a plurality of batches of the historical parameters; and training each of the at least one preset algorithm by using the plurality of batches of historical parameters to obtain the plurality of preliminary performance prediction models, where each preset algorithm corresponds to one preliminary performance prediction model.
Optionally, when determining an optimal model from the plurality of preliminary performance prediction models as the system performance prediction model, the second obtaining module is further configured to: calculating a loss function of each of the plurality of preliminary performance prediction models to obtain a plurality of loss functions; determining a minimum loss function from a plurality of said loss functions; and taking a preliminary performance prediction model corresponding to the minimum loss function in the plurality of preliminary performance prediction models as the system performance prediction model.
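The select-by-minimum-loss step can be sketched as follows, assuming each candidate preset algorithm has already produced a fitted model. The candidate set, the held-out data, and the choice of mean squared error as the loss function are illustrative assumptions:

```python
def mse(model, X, y):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def select_best(models, X_val, y_val):
    """Return the preliminary model whose loss function value is smallest."""
    losses = [mse(m, X_val, y_val) for m in models]
    best = losses.index(min(losses))
    return models[best], losses[best]

# Two toy "preliminary performance prediction models" for response time,
# each standing in for a model trained by a different preset algorithm:
candidates = [lambda v: 2.5 * v,        # e.g. a fitted linear model
              lambda v: 0.3 * v ** 2]   # e.g. a fitted quadratic model
X_val, y_val = [2.0, 4.0, 10.0], [5.0, 10.0, 25.0]   # synthetic holdout set
best_model, best_loss = select_best(candidates, X_val, y_val)
```

On this synthetic holdout the linear candidate fits exactly (zero loss) and is selected as the system performance prediction model.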
FIG. 3 schematically illustrates a block diagram of a computer device suitable for implementing a system performance prediction method, in accordance with an embodiment of the present invention. In this embodiment, the computer device 300 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of a plurality of servers), and the like, that executes programs. As shown in fig. 3, the computer device 300 of the present embodiment includes at least, but is not limited to: a memory 301, a processor 302, and a network interface 303, which may be communicatively coupled to each other via a system bus. It is noted that FIG. 3 only shows the computer device 300 with components 301-303, but it is understood that not all of the shown components are required, and that more or fewer components may be implemented instead.
In this embodiment, the memory 301 includes at least one type of computer-readable storage medium, which includes flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 301 may be an internal storage unit of the computer device 300, such as a hard disk or a memory of the computer device 300. In other embodiments, the memory 301 may also be an external storage device of the computer device 300, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 300. Of course, the memory 301 may also include both internal and external storage devices of the computer device 300. In the present embodiment, the memory 301 is generally used for storing an operating system and various application software installed in the computer device 300, such as the program code of the system performance prediction method. In addition, the memory 301 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 302 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip in some embodiments. The processor 302 generally serves to control the overall operation of the computer device 300, such as performing control and processing related to data interaction or communication with the computer device 300, and running program code stored in the memory 301, such as the program code of the system performance prediction method.
In this embodiment, the program code of the system performance prediction method stored in the memory 301 may be further divided into one or more program modules and executed by one or more processors (the processor 302 in this embodiment) to complete the present invention.
The network interface 303 may comprise a wireless network interface or a wired network interface, and the network interface 303 is typically used to establish communication links between the computer device 300 and other computer devices. For example, the network interface 303 is used to connect the computer device 300 to an external terminal via a network, establishing a data transmission channel and a communication link between the computer device 300 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, etc.
The present embodiment also provides a computer-readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an application store (App mall), etc., on which a computer program is stored, which when executed by a processor implements the above-described system performance prediction method.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for predicting system performance, the method comprising:
responding to a system performance prediction instruction, and acquiring service data to be processed and current configuration data of the system;
obtaining a system performance prediction model, wherein the system performance prediction model is obtained by training at least one preset algorithm according to a plurality of batches of historical parameters, and the historical parameters comprise historical service data, historical system configuration data, and historical system performance data;
and inputting the service data to be processed and the current configuration data of the system into the system performance prediction model so that the system performance prediction model outputs predicted system performance data.
2. The method of claim 1, further comprising:
acquiring expected system performance data;
determining a difference between the predicted system performance data and the expected system performance data;
judging whether the difference is within a preset allowable range;
and if the difference is within the preset allowable range, processing the service data to be processed by using the current configuration data of the system.
3. The method of claim 2, further comprising:
if the difference is not within the preset allowable range, removing data of a first data amount from the to-be-processed service data by using the predicted system performance data and the expected system performance data to obtain reduced service data;
and inputting the cut-down service data and the current configuration data of the system into the system performance prediction model so that the system performance prediction model continuously outputs newly predicted system performance data.
4. The method of claim 3, wherein the predicted system performance data comprises a plurality of types of first elements, wherein the expected system performance data comprises a plurality of types of second elements, and wherein the step of culling a first amount of data from the pending traffic data using the predicted system performance data and the expected system performance data comprises:
for a first element and a second element of the same type, calculating a ratio of the first element to the second element;
determining the data volume of the service data to be processed;
calculating the first data volume by using the ratio and the data volume of the service data to be processed;
and eliminating the data of the first data amount from the service data to be processed.
5. The method of claim 1, wherein the step of obtaining a system performance prediction model comprises:
determining an optimal model from a plurality of preliminary performance prediction models as the system performance prediction model;
the performance prediction models are obtained by training the following steps:
acquiring a plurality of batches of the historical parameters;
and training each preset algorithm in the preset algorithms by using the plurality of batches of historical parameters to obtain a plurality of preliminary performance prediction models, wherein each preset algorithm corresponds to one preliminary performance prediction model.
6. The method of claim 5, wherein the step of determining an optimal model from the plurality of preliminary performance prediction models as the system performance prediction model comprises:
calculating a loss function of each of the plurality of preliminary performance prediction models to obtain a plurality of loss functions;
determining a minimum loss function from a plurality of said loss functions;
and taking a preliminary performance prediction model corresponding to the minimum loss function in the plurality of preliminary performance prediction models as the system performance prediction model.
7. An apparatus for predicting system performance, the apparatus comprising:
the first acquisition module is used for responding to a system performance prediction instruction and acquiring to-be-processed service data and system current configuration data;
the second acquisition module is used for acquiring a system performance prediction model, wherein the system performance prediction model is obtained by training at least one preset algorithm according to multiple batches of historical parameters, and the historical parameters comprise historical service data, historical system configuration data and historical system performance data;
and the prediction module is used for inputting the service data to be processed and the current configuration data of the system into the system performance prediction model so as to enable the system performance prediction model to output predicted system performance data.
8. The apparatus of claim 7, further comprising:
a third obtaining module, configured to obtain expected system performance data;
a determination module to determine a difference between the predicted system performance data and the expected system performance data;
the judging module is used for judging whether the difference is within a preset allowable range;
and the processing module is used for processing the service data to be processed by using the current configuration data of the system when the difference is within the preset allowable range.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method of any one of claims 1 to 6.
CN202010105712.5A 2020-02-21 2020-02-21 System performance prediction method and device, computer equipment and storage medium Pending CN111338921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010105712.5A CN111338921A (en) 2020-02-21 2020-02-21 System performance prediction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111338921A true CN111338921A (en) 2020-06-26

Family

ID=71183857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010105712.5A Pending CN111338921A (en) 2020-02-21 2020-02-21 System performance prediction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111338921A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502889A (en) * 2016-10-13 2017-03-15 华为技术有限公司 The method and apparatus of prediction cloud software performance
US9766996B1 (en) * 2013-11-26 2017-09-19 EMC IP Holding Company LLC Learning-based data processing job performance modeling and prediction
CN108763010A (en) * 2018-06-07 2018-11-06 厦门美图移动科技有限公司 Performance prediction method and device and data analysis equipment
CN109672795A (en) * 2018-11-14 2019-04-23 平安科技(深圳)有限公司 Call center resource management method and device, electronic equipment, storage medium
CN109829115A (en) * 2019-02-14 2019-05-31 上海晓材科技有限公司 Search engine keywords optimization method
CN110276446A (en) * 2019-06-26 2019-09-24 北京百度网讯科技有限公司 The method and apparatus of model training and selection recommendation information
CN110635952A (en) * 2019-10-14 2019-12-31 中兴通讯股份有限公司 Method, system and computer storage medium for fault root cause analysis of communication system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200626