WO2019153487A1 - Method, device, storage medium and server for measuring system performance - Google Patents

Method, device, storage medium and server for measuring system performance

Info

Publication number
WO2019153487A1
WO2019153487A1 · PCT/CN2018/082833 · CN2018082833W
Authority
WO
WIPO (PCT)
Prior art keywords
performance
service interface
server
score
allocation ratio
Prior art date
Application number
PCT/CN2018/082833
Other languages
English (en)
French (fr)
Inventor
谭智文
洪宇明
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019153487A1 publication Critical patent/WO2019153487A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; error correction; monitoring
    • G06F 11/30 — Monitoring
    • G06F 11/3003 — Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 — Monitoring arrangements where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F 11/3051 — Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • G06F 11/3065 — Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • G06F 11/34 — Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 — Recording or statistical evaluation for performance assessment
    • G06F 11/3414 — Workload generation, e.g. scripts, playback
    • G06F 11/3433 — Performance assessment for load management
    • G06F 11/3447 — Performance evaluation by modeling
    • G06F 11/3452 — Performance evaluation by statistical analysis
    • G06F 11/3466 — Performance evaluation by tracing or monitoring
    • G06F 11/3476 — Data logging

Definitions

  • the present application relates to the field of computer application systems, and in particular, to a method, device, storage medium and server for measuring system performance.
  • An application system usually consists of a large number of functional modules.
  • the performance of a single function or several individual functions cannot be used as a measure of the overall performance of the application system.
  • the impact of each version upgrade on the overall performance of the system is also difficult to measure.
  • The embodiments of the present application provide a method, a device, a storage medium, and a server for measuring system performance, so as to solve the problem that existing performance evaluation methods lack quantitative calculation and evaluation and therefore cannot guarantee accuracy.
  • a first aspect of the embodiments of the present application provides a method for measuring system performance, including:
  • a performance evaluation report is generated according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • A second aspect of the embodiments of the present application provides a server comprising a memory and a processor, the memory storing computer readable instructions executable on the processor; when the processor executes the computer readable instructions, the following steps are implemented:
  • a performance evaluation report is generated according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • a third aspect of embodiments of the present application provides a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the following steps:
  • a performance evaluation report is generated according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • A fourth aspect of the embodiments of the present application provides a system performance measurement apparatus, including:
  • a log file obtaining unit configured to obtain a log file of the server in the system
  • An allocation ratio determining unit configured to acquire usage information of the server service interface according to the log file, and determine a resource allocation ratio of the system according to the usage situation information;
  • a simulation scenario establishing unit configured to establish a simulation scenario of the current environment of the system according to the resource allocation ratio
  • a first parameter value collecting unit configured to perform a stress test based on the simulated scenario, and collect a parameter value of a performance indicator of the server service interface
  • a first indicator quantization unit configured to quantize a performance index of the collected service interface into a score according to the resource allocation ratio and a preset performance indicator score table
  • the evaluation report generating unit is configured to generate a performance evaluation report according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • In the embodiments of the present application, the log file of the server in the system is obtained; the usage information of the server's service interface is obtained from the log file, and the resource allocation ratio of the system is determined from that usage information. A simulation scenario of the system's current environment is then established according to the resource allocation ratio, a stress test is performed based on the simulation scenario, and the parameter values of the performance indicators of the server's service interface are collected.
  • The collected performance indicators of the service interface are then quantified into scores according to the resource allocation ratio and a preset performance indicator score table, which makes it convenient for different users to understand the performance of the server.
  • Finally, a performance evaluation report is generated from the quantified scores and the parameter values of the performance indicators of the server's service interface, visually displaying those parameter values.
  • This improves the accuracy of the performance evaluation: it not only helps the user quickly understand the performance of the server, but also helps the user identify the performance indicators that need optimization, improving the efficiency of server performance optimization.
  • FIG. 1 is a flowchart of an implementation of a method for measuring system performance provided by an embodiment of the present application
  • FIG. 2 is a flowchart of an implementation of a method for measuring system performance provided by another embodiment of the present application.
  • FIG. 3 is a structural block diagram of a system performance measuring apparatus according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of a device for measuring performance of a system according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a server provided by an embodiment of the present application.
  • FIG. 1 shows an implementation flow of a measurement method of system performance provided by an embodiment of the present application, where the method flow includes steps S101 to S106.
  • the specific implementation principles of each step are as follows:
  • a server reads data such as a user request through an interface.
  • The log file is a set of files recording the operation events of the server in the system.
  • S102 Obtain usage information of the server service interface according to the log file, and determine a resource allocation ratio of the system according to the usage information.
  • the service interface includes a real-time transaction type interface, such as a transaction page in a shopping website; a real-time query class interface, such as a product search page.
  • the service interface also includes a batch service function interface.
  • The usage information of the real-time transaction interface and the real-time query interface includes the request sending time, the number of concurrent requests, the average response time, the TPS (Transactions Per Second), the TPS fluctuation range, the timeout probability, and the error probability. TPS represents the number of transactions the system can process per second and is an important indicator of the performance of the system server.
  • Concurrency means that multiple users issue requests or operations to the system at the same time; these requests or operations may be the same or different.
  • The number of concurrent requests refers to the number of requests or operations issued simultaneously.
  • The response time is the time elapsed from when the client sends a request to when the client receives the response returned from the server.
  • The response time consists of the request sending time, the network transmission time, and the server processing time.
  • the usage information of the batch service interface includes the number of records processed per minute.
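The patent gives no code for extracting these usage metrics; the following minimal Python sketch, with hypothetical field names for a parsed log record, shows how per-interface TPS, average response time, and timeout/error probabilities could be aggregated from a log window:

```python
from dataclasses import dataclass

@dataclass
class Request:
    interface: str      # hypothetical interface label, e.g. "realtime_query"
    sent_at: float      # request sending time, epoch seconds
    response_ms: float  # response time in milliseconds
    timed_out: bool
    errored: bool

def usage_stats(requests, window_seconds):
    """Aggregate per-interface usage metrics over a log window."""
    totals = {}
    for r in requests:
        s = totals.setdefault(r.interface,
                              {"n": 0, "resp": 0.0, "timeouts": 0, "errors": 0})
        s["n"] += 1
        s["resp"] += r.response_ms
        s["timeouts"] += r.timed_out
        s["errors"] += r.errored
    return {
        iface: {
            "tps": s["n"] / window_seconds,
            "avg_response_ms": s["resp"] / s["n"],
            "timeout_probability": s["timeouts"] / s["n"],
            "error_probability": s["errors"] / s["n"],
        }
        for iface, s in totals.items()
    }
```

For the batch service interface, records processed per minute could be computed analogously from the batch log entries.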
  • the server includes more than one type of service interface, and the foregoing S102 specifically includes:
  • A1: Obtain historical usage information of the service interface in a specified time period according to the log file. Specifically, the historical usage information of the service interface in the specified time period is obtained from the log file, where the specified time period is the statistics period selected by the user. In order to obtain sufficient historical usage information, the specified time period is generally measured in weeks; that is, it is at least one week.
  • A2 statistically analyze the usage of various service interfaces in the specified time period based on the historical usage information.
  • The statistical analysis refers to analyzing and studying the historical usage information of the server's various service interfaces, so as to understand and reveal the usage of the various service interfaces in the specified time period and thereby correctly interpret and predict the performance of each type of service interface.
  • A3 Determine the resource allocation ratio of the system based on the statistical analysis result.
  • the resource allocation ratio refers to the distribution ratio of the usage of each service interface in the system.
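The description defines the resource allocation ratio as the distribution of usage across service interfaces but gives no formula; a minimal sketch, assuming usage is measured by per-interface request counts, is simply each interface's share of the total:

```python
def allocation_ratio(interface_counts):
    """Resource allocation ratio: each service interface's share of the total
    observed usage (here approximated by request counts from the log file)."""
    total = sum(interface_counts.values())
    if total == 0:
        raise ValueError("no usage observed")
    return {iface: count / total for iface, count in interface_counts.items()}
```

For example, 600 query requests, 300 transaction requests, and 100 batch jobs would yield a 0.6 / 0.3 / 0.1 split, which then drives both the simulation scenario and the interface weights.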
  • the cluster analysis method may be used to perform statistical analysis on the historical usage information of the service interface in a specified time period.
  • Cluster analysis is an ideal multivariate statistical technique, which is divided into two types: hierarchical clustering and iterative clustering.
  • The purpose of the cluster analysis is to cluster the historical usage information according to a certain rule; the clustering categories are not preset but are determined based on the characteristics of the historical usage information (such as the historical request time).
  • the historical usage information of the service interfaces of the service end in the past period of time is obtained according to the log file, and the peak time period and the idle time period of the service interface are determined based on the historical usage information.
  • the foregoing step A2 includes:
  • cluster analysis refers to an analysis process of grouping a collection of physical or abstract objects into a plurality of classes composed of similar objects. It is a process of classifying data into different classes or clusters, so objects in the same cluster have great similarities, and objects between different clusters have great dissimilarity.
  • A22 Predict the peak time period and idle time period of each type of service interface according to the cluster analysis result of the historical usage information of each type of service interface.
  • clustering analysis is performed on historical usage information of various service interfaces, and peak hours and idle periods of various service interfaces are obtained according to the result of cluster analysis, so as to be used according to various service interfaces. Analyze performance.
  • Further, by determining the dates of holidays and business activity periods in the future, the peak time periods and idle time periods of each type of service interface are determined according to the holiday dates, the business activity cycle dates, and the cluster analysis result.
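The description does not fix a particular clustering algorithm. As an illustrative sketch only, a one-dimensional two-cluster split (a simple 2-means on hourly request counts) is enough to separate peak hours from idle hours:

```python
def peak_and_idle_hours(hourly_counts, iterations=20):
    """Split hours into peak/idle via a 1-D 2-means on per-hour request counts.
    hourly_counts[h] is the number of requests observed in hour h."""
    lo, hi = min(hourly_counts), max(hourly_counts)
    for _ in range(iterations):
        mid = (lo + hi) / 2
        low = [c for c in hourly_counts if c <= mid] or [lo]
        high = [c for c in hourly_counts if c > mid] or [hi]
        # Update the two cluster centroids.
        lo, hi = sum(low) / len(low), sum(high) / len(high)
    mid = (lo + hi) / 2
    peak = [h for h, c in enumerate(hourly_counts) if c > mid]
    idle = [h for h, c in enumerate(hourly_counts) if c <= mid]
    return peak, idle
```

Hierarchical or iterative clustering as named in the description would serve the same purpose; this two-centroid variant is merely the smallest runnable example.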
  • the business activity cycle includes promotional activities, celebration activities and so on.
  • A simulation scenario is established according to the resource allocation ratio of the system. In the scenario, real users' service processing is simulated, including simulating multiple users performing the same operation at the same time (the operations are generally directed at the same type of service interface) or all users performing exactly the same operation.
  • the usage of various service interfaces is simulated in a simulation scenario according to the resource allocation ratio.
  • S104 Perform a stress test based on the simulated scenario, and collect a parameter value of a performance indicator of the server service interface.
  • a stress test is performed in a simulation scenario.
  • the stress test is to collect the parameter values of the server performance indicators in the actual application software and hardware environment and the user's use.
  • Performance indicators include network load, application system load, and database load. Network load includes bandwidth consumption; application system load includes CPU usage, CPU load, JVM Perm area memory consumption, and garbage collection (GC) frequency; database load includes the number of connections, memory usage, disk I/O, and CPU usage.
  • the server performance indicators also include timeout probability, error probability, TPS (Transaction Per Second) and response time.
  • S105 Quantify the performance index of the collected service interface into a score according to the resource allocation ratio and the preset performance indicator score table.
  • A performance indicator score table is preset; it records the mapping between each performance indicator, its parameter values (or parameter value intervals), and the corresponding scores. Using this mapping, the score corresponding to a given parameter value of a performance indicator, such as TPS (the number of transactions per second), can be looked up, thereby quantifying the performance indicator so that the user can quickly understand the performance status of the server's service interface.
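The patent does not reproduce the score table itself; the sketch below uses entirely hypothetical intervals and scores to show the interval-to-score lookup it describes:

```python
# Hypothetical score table: list of ((lower, upper), score) per indicator.
# Intervals are half-open [lower, upper); the bounds and scores are invented
# for illustration only -- the real table is configured by the user.
SCORE_TABLE = {
    "tps": [((0, 50), 60), ((50, 200), 80), ((200, float("inf")), 100)],
    "avg_response_ms": [((0, 100), 100), ((100, 500), 80),
                        ((500, float("inf")), 60)],
}

def score_for(indicator, value):
    """Look up the score for a measured indicator value in the preset table."""
    for (lo, hi), score in SCORE_TABLE[indicator]:
        if lo <= value < hi:
            return score
    raise ValueError(f"no interval covers {indicator}={value}")
```

A measured TPS of 120 would fall in the [50, 200) interval and quantize to a score of 80 under this example table.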
  • the specific implementation process of the system performance measurement method S105 is as follows:
  • B1 Set weights of various service interfaces according to the resource allocation ratio.
  • B2 Searching for a score corresponding to the service interface in a preset performance indicator score table according to the collected parameter value of the performance indicator of the service interface of the server.
  • B3 Quantify the performance index of the collected service interface into a score according to the weight of each service interface and the score corresponding to the service interface.
  • the service interface includes a real-time transaction type interface, a real-time query type interface, and a batch processing service function interface.
  • The resource allocation ratio reflects the usage of the various service interfaces, and the weights of the service interfaces are set according to it. Simply put, the higher the usage proportion of a service interface within the same time period, the greater its weight.
  • The score corresponding to a given parameter value of a performance indicator is fixed, but the usage of different types of service interfaces may differ. Therefore, the weights of the various service interfaces are set according to the resource allocation ratio, and the performance indicators of the collected service interfaces are quantified into scores based on those weights and the scores corresponding to each service interface.
  • A service interface of a given type usually has more than one performance indicator. The score corresponding to each performance indicator parameter of the service interface is looked up in the preset performance indicator score table, and from these scores the initial performance score of the service interface is determined. The weight of each service interface is then multiplied by its initial performance score to obtain the score of the performance indicators of that service interface.
  • For example, suppose the weight of the real-time query interface is A1 and its initial performance score is b1; the weight of the real-time transaction interface is A2 and its initial performance score is b2; and the weight of the batch service function interface is A3 and its initial performance score is b3. The score of the performance indicator of the server service interface is then A1*b1 + A2*b2 + A3*b3.
  • S106 Generate a performance evaluation report according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • the generated performance evaluation report includes a score of performance indicators of various service interfaces and a total performance score of the server service interface.
  • The performance evaluation report also includes the collection times of the performance indicator parameters used in the statistical analysis.
  • the performance indicator parameters are displayed using a statistical chart, so that the display of the performance indicator parameters is more intuitive.
  • The performance evaluation report is stored on the server in the form of text, pictures, or tables; that is, it may be stored as a WORD document, a PDF document, or an EXCEL document.
  • the performance evaluation report is simultaneously sent to the central server for storage while being saved on the server side.
  • the performance evaluation report can also be displayed in the form of a WEB webpage, and the webpage link is stored in the server to save the storage space of the server.
  • Optionally, before step S106, the method further includes:
  • comparing the collected parameter values of the performance indicators of the server service interface with preset warning values, and determining whether any parameter value of a performance indicator of the server's service interface exceeds its warning value.
  • For example, suppose resource analysis finds that the CPU usage is 15%, the JVM memory usage is 95%, the host memory usage is 12.5%, and the network bandwidth usage is 8%. The parameter value of each of these performance indicators is compared with its preset warning value to determine whether the warning value is exceeded.
  • the warning value is the percentage limit of the resource usage rate. If the warning value is exceeded, there is a risk of affecting the performance of the server, prompting the user to perform performance optimization.
  • Optionally, multiple warning levels are set, each corresponding to a different number of performance indicators whose parameter values exceed the warning value.
  • For example, warning level one means that the parameter values of 1 to 2 performance indicators exceed the warning value; warning level two means that the parameter values of 3 to 5 performance indicators exceed the warning value; and so on. According to the warning prompt and the warning level, it is judged whether performance optimization needs to be performed immediately.
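The warning-level rule above (1-2 exceedances for level one, 3-5 for level two) can be sketched as follows; the indicator names and thresholds are hypothetical:

```python
def warning_level(values, warning_thresholds):
    """Count performance indicators whose parameter value exceeds the preset
    warning value, and map the count to a warning level per the example:
    level 1 for 1-2 exceedances, level 2 for 3-5, level 3 beyond that."""
    exceeded = sum(
        1 for name, v in values.items()
        if v > warning_thresholds.get(name, float("inf"))
    )
    if exceeded == 0:
        return 0          # no warning
    if exceeded <= 2:
        return 1
    if exceeded <= 5:
        return 2
    return 3
```

With the sample measurements from the example (JVM memory 95% against a hypothetical 90% warning value, everything else well below threshold), only one indicator exceeds its warning value, so the result is warning level one.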
  • The embodiments of the present application make it easier for different users to understand the performance of the server: a performance evaluation report is generated from the quantified scores and the parameter values of the performance indicators of the server's service interface, which visually displays those parameter values and improves the accuracy of the performance evaluation. This not only helps the user quickly understand the performance of the server, but also helps the user identify the performance indicators that require optimization, improving the efficiency of server performance optimization.
  • the method for measuring system performance further includes:
  • S107: Optimize the performance of the server service interface according to the quantified scores of the performance indicators of the service interface. Specifically, in the embodiment of the present application, the performance indicators to be optimized are determined from the scores of the server performance indicators in the performance evaluation report.
  • S108 Collect parameter values of performance indicators of the server service interface after performance optimization.
  • For this step, refer to step S104.
  • a simulation scenario is established for the performance-optimized system, and a stress test is performed in the simulation scenario to collect parameter values of the performance indicators of the service-side service interface after performance optimization.
  • S109 Quantify, according to the resource allocation ratio and the preset performance indicator score table, a performance index of the server service interface that is optimized after the performance optimization is a score.
  • For this step, refer to step S105.
  • S110: Generate a performance optimization evaluation report according to the quantified result and the performance evaluation report produced before performance optimization.
  • Specifically, the generated performance optimization evaluation report includes the scores of the performance indicators of the service interface both before and after performance optimization. The service indicators affected by the optimization can further be highlighted so that users can quickly understand what was optimized.
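A before/after comparison of this kind reduces to a per-indicator diff; the sketch below (hypothetical indicator names) shows one way to build the data behind such a report:

```python
def optimization_report(before, after):
    """Compare per-indicator scores before and after optimization, flagging
    the indicators whose score changed so they can be highlighted."""
    assert set(before) == set(after), "reports must cover the same indicators"
    return {
        name: {
            "before": before[name],
            "after": after[name],
            "changed": before[name] != after[name],
        }
        for name in before
    }
```

Rendering the flagged entries in a distinct style (bold, color) in the WORD/PDF/EXCEL output would realize the highlighting the description mentions.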
  • FIG. 3 is a structural block diagram of a system performance measuring apparatus provided by an embodiment of the present application. For convenience of description, only the parts relevant to the embodiments of the present application are shown.
  • The system performance measuring apparatus includes: a log file obtaining unit 61, an allocation ratio determining unit 62, a simulation scenario establishing unit 63, a first parameter value collection unit 64, a first indicator quantization unit 65, and an evaluation report generating unit 66, wherein:
  • a log file obtaining unit 61 configured to acquire a log file of the server in the system
  • the distribution ratio determining unit 62 is configured to acquire usage information of the server service interface according to the log file, and determine a resource allocation ratio of the system according to the usage information.
  • the simulation scenario establishing unit 63 is configured to establish a simulation scenario of the current environment of the system according to the resource allocation ratio
  • the first parameter value collection unit 64 is configured to perform a stress test based on the simulation scenario, and collect a parameter value of a performance indicator of the server service interface;
  • the first indicator quantization unit 65 is configured to quantize the performance index of the collected service interface into a score according to the resource allocation ratio and the preset performance indicator score table;
  • the evaluation report generating unit 66 is configured to generate a performance evaluation report according to the quantized score and the parameter value of the performance indicator of the server service interface.
  • the allocation ratio determining unit 62 includes:
  • a history information obtaining module configured to acquire historical usage information of the service interface in a specified time period according to the log file
  • a statistical analysis module configured to statistically analyze usage of various service interfaces in the specified time period based on the historical usage information
  • an allocation ratio determining module configured to determine a resource allocation ratio of the system based on a statistical analysis result.
  • the first indicator quantization unit 65 includes:
  • a weight setting module configured to set weights of various service interfaces according to the resource allocation ratio
  • a score finding module configured to search for a score corresponding to the service interface in a preset performance indicator score table according to the collected parameter value of the performance indicator of the service interface of the server;
  • the indicator quantification module is configured to quantize the performance index of the collected service interface into a score according to the weight of each service interface and the score corresponding to the service interface.
  • the measuring device of the system performance further includes:
  • the warning judging unit is configured to compare the parameter value of the performance indicator of the server service interface with the preset warning value, and determine whether the parameter value of the performance indicator of the service interface of the server exceeds the warning value;
  • the optimization prompting unit is configured to prompt the user to perform performance optimization if the parameter value of the performance indicator of the server service interface exceeds the warning value.
  • the system performance measurement apparatus further includes:
  • the performance optimization unit 71 is configured to optimize performance of the server service interface according to the score quantified by the performance index of the service interface.
  • the second parameter value collection unit 72 is configured to collect parameter values of the performance indicators of the server service interface after performance optimization
  • the second indicator quantization unit 73 is configured to quantize the performance index of the server service interface that is optimized after the performance optimization according to the resource allocation ratio and the preset performance indicator score table into a score;
  • the optimization evaluation report generating unit 74 is configured to generate a performance optimization evaluation report according to the quantized result and the performance evaluation report before the performance optimization.
  • the embodiments of the present application can facilitate the understanding of the performance of the server by different users, visually display the parameter values of the performance indicators of the server, and improve the accuracy of the performance evaluation, which not only facilitates the user to quickly understand the performance of the server, but also facilitates the user to obtain performance optimization. Performance metrics to improve the efficiency of server performance optimization.
  • FIG. 5 is a schematic diagram of a server according to an embodiment of the present application.
  • The server 8 of this embodiment includes a processor 80, a memory 81, and computer readable instructions 82 stored in the memory 81 and executable on the processor 80, such as a system performance measurement program.
  • the processor 80 when executing the computer readable instructions 82, implements the functions of the various modules/units in the various apparatus embodiments described above, such as the functions of the modules 61-66 shown in FIG.
  • the computer readable instructions 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80, To complete this application.
  • the one or more modules/units may be a series of computer readable instruction instruction segments capable of performing a particular function for describing the execution of the computer readable instructions 82 in the server 8.
  • the server 8 can be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the server may include, but is not limited to, a processor 80, a memory 81. It will be understood by those skilled in the art that FIG. 8 is merely an example of the server 8, does not constitute a limitation of the server 8, may include more or less components than those illustrated, or combine some components, or different components, such as
  • the server may also include an input and output device, a network access device, a bus, and the like.


Abstract

The present application provides a system performance measurement method, apparatus, storage medium and server, including: obtaining a log file of a server side in a system; obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information; establishing a simulated scenario of the system's current environment according to the resource allocation ratio; performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces; quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table; and generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces. By quantizing the performance indicators, the present application enables users to evaluate system performance conveniently and improves the efficiency of server-side performance optimization.

Description

System performance measurement method, apparatus, storage medium and server
This application claims priority to Chinese patent application No. CN 201810121898.6, entitled "System performance measurement method, storage medium and server", filed with the Chinese Patent Office on February 7, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer application systems, and in particular, to a system performance measurement method, apparatus, storage medium and server.
Background
As enterprise business grows, the transaction volume of application systems keeps increasing, and the performance of an application system directly affects the stable and secure operation of transaction business. An application system usually consists of many functional modules; the performance of a single function, or of a few individual functions, cannot serve as a measure of the overall performance of the application system. The impact of each version upgrade on overall system performance is also difficult to measure.
At present, performance evaluation and analysis of application systems in the industry is still relatively fragmented: there is no quantitative indicator system and no standardized evaluation model. Existing performance evaluation methods lack quantitative calculation, so their accuracy cannot be guaranteed.
Technical Problem
The embodiments of the present application provide a system performance measurement method, apparatus, storage medium and server, to solve the problem in the prior art that existing performance evaluation methods lack quantitative calculation and their accuracy cannot be guaranteed.
Technical Solution
A first aspect of the embodiments of the present application provides a system performance measurement method, including:
obtaining a log file of a server side in a system;
obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
A second aspect of the embodiments of the present application provides a server, including a memory and a processor, where the memory stores computer readable instructions executable on the processor, and the processor implements the following steps when executing the computer readable instructions:
obtaining a log file of a server side in a system;
obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
A third aspect of the embodiments of the present application provides a computer readable storage medium storing computer readable instructions, where the computer readable instructions, when executed by a processor, implement the following steps:
obtaining a log file of a server side in a system;
obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
A fourth aspect of the embodiments of the present application provides a system performance measurement apparatus, including:
a log file obtaining unit, configured to obtain a log file of a server side in a system;
an allocation ratio determining unit, configured to obtain usage information of service interfaces of the server side from the log file, and determine a resource allocation ratio of the system according to the usage information;
a simulated scenario establishing unit, configured to establish a simulated scenario of the system's current environment according to the resource allocation ratio;
a first parameter value collecting unit, configured to perform a stress test based on the simulated scenario and collect parameter values of performance indicators of the server-side service interfaces;
a first indicator quantizing unit, configured to quantize the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
an evaluation report generating unit, configured to generate a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
Beneficial Effects
In the embodiments of the present application, a log file of the server side in a system is obtained; usage information of the server-side service interfaces is obtained from the log file, and a resource allocation ratio of the system is determined according to the usage information; a simulated scenario of the system's current environment is then established according to the resource allocation ratio; a stress test is performed based on the simulated scenario, and parameter values of the performance indicators of the server-side service interfaces are collected; the collected performance indicators of the service interfaces are then quantized into scores according to the resource allocation ratio and a preset performance indicator score table, which helps different users understand server-side performance; finally, a performance evaluation report is generated according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, visually displaying the parameter values of the server side's performance indicators and improving the accuracy of performance evaluation. This not only allows users to quickly understand server-side performance, but also helps them identify the performance indicators that need optimization, improving the efficiency of server-side performance optimization.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings needed for describing the embodiments or the prior art. Apparently, the drawings in the following description show merely some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an implementation of a system performance measurement method according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of a system performance measurement method according to another embodiment of the present application;
FIG. 3 is a structural block diagram of a system performance measurement apparatus according to an embodiment of the present application;
FIG. 4 is a structural block diagram of a system performance measurement apparatus according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a server according to an embodiment of the present application.
Embodiments of the Invention
All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
FIG. 1 shows the implementation flow of the system performance measurement method provided by an embodiment of the present application; the method flow includes steps S101 to S106. The specific implementation principle of each step is as follows:
S101: obtain a log file of a server side in a system.
Specifically, in a distributed system, the server side reads data such as user requests through interfaces. In the embodiments of the present application, the log file is a collection of files recording the operation events of the server side in the system.
S102: obtain usage information of the server-side service interfaces from the log file, and determine a resource allocation ratio of the system according to the usage information.
In the embodiments of the present application, the service interfaces include real-time transaction interfaces, such as the transaction page of a shopping website, and real-time query interfaces, such as a product search page. The service interfaces also include batch-processing service interfaces. Specifically, the usage information of the real-time transaction interfaces and real-time query interfaces includes request sending time, number of concurrent requests, average response time, TPS (Transactions Per Second), TPS fluctuation range, timeout probability and error probability. TPS denotes the number of transactions the system can process per second and is an important indicator of server-side performance. Concurrency means that multiple users issue requests to, or perform operations on, the system; these requests or operations may be the same or different, and the number of concurrent requests is the number of requests or operations issued at the same time. Response time is the time elapsed from when the client sends a request to when the client receives the response returned by the server, and consists of three parts: request sending time, network transmission time and server processing time. The usage information of the batch-processing service interfaces includes the number of records processed per minute.
As an embodiment of the present application, the server side includes more than one class of service interface, and the above S102 specifically includes:
A1: obtain, from the log file, historical usage information of the service interfaces within a specified time period. Specifically, the historical usage information of the service interfaces within the specified time period is obtained from the log file, where the specified time period is chosen by the user for the statistics; to obtain sufficient historical usage information, the specified time period is generally in units of weeks, i.e., at least one week.
A2: statistically analyze, based on the historical usage information, the usage of each class of service interface within the specified time period. In the embodiments of the present application, statistical analysis means analyzing and studying the historical usage information of each class of server-side service interface to recognize and reveal the usage of each class of service interface within the specified time period, so as to correctly interpret and predict the performance of each class of service interface.
A3: determine the resource allocation ratio of the system based on the statistical analysis result, where the resource allocation ratio is the distribution ratio of the usage of the service interfaces in the system.
In the embodiments of the present application, cluster analysis may be used to statistically analyze the historical usage information of the service interfaces within the specified time period. Cluster analysis is an ideal multivariate statistical technique, divided into hierarchical clustering and iterative clustering. In the embodiments of the present application, the purpose of clustering is to group the historical usage information according to certain rules; the clusters are not preset, but are determined by features of the historical usage information (for example, historical request time).
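To make the usage metrics above concrete, the following minimal Python sketch derives the average response time and TPS for one interface class from a list of (send, receive) timestamps. It is illustrative only and not part of the application; the record format and function name are assumptions.

```python
def usage_stats(requests):
    """requests: list of (send_ts, recv_ts) pairs, in seconds, for one
    interface class. Returns (average response time, TPS over the window)."""
    # Response time per request: receive time minus send time
    resp_times = [recv - send for send, recv in requests]
    avg_resp = sum(resp_times) / len(resp_times)
    # TPS: number of requests divided by the observed time window
    window = max(recv for _, recv in requests) - min(send for send, _ in requests)
    tps = len(requests) / window if window > 0 else float("inf")
    return avg_resp, tps

# Hypothetical sample: 4 requests spread over a 2-second window
reqs = [(0.0, 0.2), (0.5, 0.9), (1.0, 1.1), (1.5, 2.0)]
avg_resp, tps = usage_stats(reqs)
print(round(avg_resp, 3), tps)  # 0.3 2.0
```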
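Steps A1–A3 reduce to counting, per interface class, its share of the total requests in the chosen period. A minimal sketch follows; the log line layout and class names are hypothetical assumptions, not part of the application:

```python
from collections import Counter

def allocation_ratio(log_lines):
    """Resource allocation ratio: each interface class's share of total requests."""
    counts = Counter()
    for line in log_lines:
        # Hypothetical log layout: "<timestamp> <interface_class> <path>"
        _, interface_class, *_ = line.split()
        counts[interface_class] += 1
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

logs = [
    "2018-02-07T10:00:01 realtime_trade /pay",
    "2018-02-07T10:00:02 realtime_query /search",
    "2018-02-07T10:00:03 realtime_trade /pay",
    "2018-02-07T10:00:04 batch /settle",
]
print(allocation_ratio(logs))  # {'realtime_trade': 0.5, 'realtime_query': 0.25, 'batch': 0.25}
```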
Optionally, in the embodiments of the present application, historical usage information of each class of server-side service interface over a past period is obtained from the log file, and the peak periods and idle periods of the service interfaces are determined based on the historical usage information. Specifically, in the embodiments of the present application, the above step A2 includes:
A21: perform cluster analysis on the historical usage information of the same class of service interface. Specifically, cluster analysis is the process of grouping a collection of physical or abstract objects into multiple classes composed of similar objects; it classifies data into different classes or clusters, so that objects in the same cluster are highly similar while objects in different clusters are highly dissimilar.
A22: predict the peak periods and idle periods of each class of service interface according to the cluster analysis result of its historical usage information.
In the embodiments of the present application, cluster analysis is performed on the historical usage information of each class of service interface, and the peak periods and idle periods of each class of service interface are obtained from the clustering result, so that performance can be analyzed according to the usage of each class of service interface.
Further, the number of users of the server side is also affected by holidays and website activity days. Therefore, in the embodiments of the present application, the dates of holidays and of business activity periods within a future period are determined, and the peak periods and idle periods of each class of service interface are determined according to the holiday dates, the business activity dates and the cluster analysis result, where business activity periods include promotions, celebrations and the like.
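For illustration, the peak/idle split of A21–A22 can be approximated by bucketing historical request times by hour of day and separating above-average from below-average load. This is a deliberately simplified stand-in for the hierarchical or iterative clustering the text describes, and the traffic data is invented:

```python
from collections import Counter

def peak_and_idle_hours(request_hours):
    """Bucket historical requests by hour of day (0-23) and split hours
    into peak (above-average load) and idle (at or below average)."""
    counts = Counter(request_hours)
    avg = sum(counts.values()) / 24  # average requests per hour
    peak = sorted(h for h, n in counts.items() if n > avg)
    idle = sorted(h for h in range(24) if counts.get(h, 0) <= avg)
    return peak, idle

# Hypothetical history: heavy traffic around 10:00 and 20:00
history = [10] * 50 + [20] * 40 + [3] * 2 + [15] * 3
peak, idle = peak_and_idle_hours(history)
print(peak)  # [10, 20]
```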
S103: establish a simulated scenario of the system's current environment according to the resource allocation ratio.
Specifically, a simulated scenario is established according to the resource allocation ratio of the system. In this scenario, the business processing of real users is simulated, including simulating multiple users doing the same thing or performing the same operation at the same moment; such operations generally target the same type of business (that is, the same class of service interface), or all users perform exactly the same operation. The usage of each class of service interface is simulated in the scenario according to the resource allocation ratio.
S104: perform a stress test based on the simulated scenario, and collect parameter values of the performance indicators of the server-side service interfaces.
Specifically, the stress test is performed in the simulated scenario. The stress test collects the parameter values of the server-side performance indicators in the actual software and hardware environment and during actual use. The performance indicators include network load, application system load and database load: network load includes bandwidth consumption; application system load includes CPU usage, CPU load, JVM Perm area memory consumption and garbage collection (GC) frequency; database load includes the number of connections, memory usage, disk IO and CPU usage. The server-side performance indicators also include timeout probability, error probability, TPS (Transactions Per Second) and response time.
S105: quantize the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table.
In the embodiments of the present application, a performance indicator score table is preset. The preset performance indicator score table contains the mapping among the indicator name, the indicator parameter value (or parameter value interval) and the score. From the preset table, the score corresponding to an indicator with a given parameter value can be looked up, so that the performance indicator is quantized and users can quickly understand the performance status of the server-side service interfaces. For example, when TPS (transactions per second) ≤ 150, the corresponding score is 0; when TPS is within [150, 200], the corresponding score is 10; and when TPS > 300, the corresponding score is 30.
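The TPS example above amounts to a band lookup. In the sketch below, the score of 20 for the 200–300 band is an assumed interpolation, since the text only gives the 0, 10 and 30 bands:

```python
def tps_score(tps):
    """Score a TPS value against the preset score table from the example.
    Bands: <= 150 -> 0, (150, 200] -> 10, (200, 300] -> 20 (assumed), > 300 -> 30."""
    if tps <= 150:
        return 0
    if tps <= 200:
        return 10
    if tps <= 300:
        return 20  # assumption: this band is not specified in the text
    return 30

print([tps_score(v) for v in (100, 180, 250, 400)])  # [0, 10, 20, 30]
```

The same shape of table (value interval to score) would be preset for each of the other indicators, such as response time or error probability.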
As an embodiment of the present application, the specific implementation flow of S105 of the system performance measurement method provided by the embodiments of the present application is detailed as follows:
B1: set the weight of each class of service interface according to the resource allocation ratio.
B2: look up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface.
B3: quantize the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
In the embodiments of the present application, the service interfaces include real-time transaction interfaces, real-time query interfaces and batch-processing service interfaces. The resource allocation ratio reflects the usage of each class of service interface, and the weight of each class is set according to the resource allocation ratio: simply put, within the same time period, the higher the proportion a class of service interface occupies, the larger its weight. The score corresponding to a given indicator parameter value is fixed, but the usage of different classes of service interface may differ; therefore, the weights are set according to the resource allocation ratio, and the collected performance indicators of the service interfaces are quantized into scores according to the weights and the corresponding scores.
Further, a class of service interface has more than one performance indicator. The score corresponding to each performance indicator parameter of the class is looked up in the preset performance indicator score table; the scores of all indicator parameters of the class are summed to obtain the initial performance score of the class, and the initial performance score is multiplied by the weight of the class to obtain the performance indicator score of that class of service interface.
Exemplarily, set the weight of the real-time query interfaces to A1 with initial performance score b1, the weight of the real-time transaction interfaces to A2 with initial performance score b2, and the weight of the batch-processing service interfaces to A3 with initial performance score b3. The score of the performance indicators of the server-side service interfaces is then A1*b1 + A2*b2 + A3*b3.
S106: generate a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
Specifically, the generated performance evaluation report includes the performance indicator score of each class of service interface and the total performance score of the server-side service interfaces. The report also includes the time at which the indicator parameters used in the statistical analysis were collected. In the embodiments of the present application, the indicator parameters are displayed using statistical charts, making their display more intuitive. The performance evaluation report is stored on the server side in the form of text, pictures and tables, that is, as a WORD, PDF or EXCEL document. Further, while being saved on the server side, the report is also sent to a central server for storage. Optionally, the report can also be presented as a WEB page, with the page link stored on the server side to save storage space.
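Steps B1–B3 and the A1*b1 + A2*b2 + A3*b3 example can be sketched as follows; the weights and per-indicator scores are hypothetical values, with the weights assumed to come from a 50/30/20 resource allocation ratio:

```python
def initial_score(indicator_scores):
    """Initial performance score of one interface class: the sum of the
    scores looked up for each of its indicator parameters (step B2)."""
    return sum(indicator_scores)

def overall_score(weights, initial_scores):
    """Weighted overall score A1*b1 + A2*b2 + A3*b3 (step B3)."""
    return sum(a * b for a, b in zip(weights, initial_scores))

# Hypothetical weights from a 50/30/20 allocation ratio: query, trade, batch
weights = [0.5, 0.3, 0.2]
b = [initial_score([10, 20]),   # query class: two indicators scored 10 and 20
     initial_score([30]),       # trade class: one indicator scored 30
     initial_score([10, 10])]   # batch class: two indicators scored 10 each
print(overall_score(weights, b))  # 28.0
```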
Optionally, as an embodiment of the present application, before step S106 the method further includes:
C1: compare the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determine whether the parameter values exceed the warning values.
C2: if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value, prompt the user to perform performance optimization.
For example, analysis of the collected indicator parameter values shows that the CPU usage is 15% and the JVM memory usage is 95%, and analysis of the second resource shows that the host memory usage is 12.5% and the network bandwidth usage is 8%. These parameter values are compared with the preset warning values to determine whether they are exceeded. A warning value is a limit on the resource usage percentage; if it is exceeded, there is a risk of affecting server-side performance, and the user is prompted to perform performance optimization.
Optionally, multiple warning levels are set, each corresponding to a different number of indicators exceeding the warning values. For example, warning level 1 means the parameter values of 1 to 2 performance indicators exceed their warning values, warning level 2 means the parameter values of 3 to 5 indicators exceed theirs, and so on. Whether performance optimization needs to be carried out immediately is determined from the warning prompts that occur and their levels.
The embodiments of the present application help different users understand server-side performance: a performance evaluation report is generated according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, visually displaying the parameter values and improving the accuracy of performance evaluation. This not only allows users to quickly understand server-side performance, but also helps them identify the performance indicators that need optimization, improving the efficiency of server-side performance optimization.
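The threshold comparison in C1/C2 and the warning levels can be sketched as follows. The threshold percentages are hypothetical preset warning values (the text does not fix them); the measured values are the ones from the example above:

```python
WARNING_THRESHOLDS = {  # hypothetical preset warning values, in percent
    "cpu_usage": 80,
    "jvm_memory_usage": 90,
    "host_memory_usage": 85,
    "bandwidth_usage": 70,
}

def warning_level(measured):
    """Collect indicators over their warning value and map the count to a
    level: 1-2 over -> level 1, 3-5 over -> level 2 (per the example)."""
    over = [k for k, v in measured.items() if v > WARNING_THRESHOLDS[k]]
    if not over:
        return 0, over
    return (1 if len(over) <= 2 else 2), over

# Measured values from the example: only JVM memory (95%) exceeds its threshold
level, over = warning_level({"cpu_usage": 15, "jvm_memory_usage": 95,
                             "host_memory_usage": 12.5, "bandwidth_usage": 8})
print(level, over)  # 1 ['jvm_memory_usage']
```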
Further, based on the system performance measurement method provided in the embodiment of FIG. 1 above, another embodiment of the present application is proposed. In this embodiment, on the basis of steps S101-S106 shown in FIG. 1, as shown in FIG. 2, the system performance measurement method further includes:
S107: optimize the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized. Specifically, in the embodiments of the present application, the performance indicators that need optimization are determined from the server-side performance indicator scores in the performance evaluation report.
S108: collect parameter values of the performance indicators of the server-side service interfaces after performance optimization.
In the embodiments of the present application, this step refers to step S104: a simulated scenario is established for the performance-optimized system, a stress test is performed in the simulated scenario, and the parameter values of the performance indicators of the server-side service interfaces after performance optimization are collected.
S109: quantize the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table.
In the embodiments of the present application, this step may refer to step S105.
S1010: generate a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
In the embodiments of the present application, the performance optimization evaluation report generated from the quantized result and the pre-optimization performance evaluation report includes the performance indicator scores of the service interfaces both before and after performance optimization. Further, the service indicators that underwent performance optimization may be highlighted, so that the user can quickly see what was optimized.
Corresponding to the system performance measurement method described in the above embodiments, FIG. 3 shows a structural block diagram of the system performance measurement apparatus provided by an embodiment of the present application; for ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 3, the system performance measurement apparatus includes: a log file obtaining unit 61, an allocation ratio determining unit 62, a simulated scenario establishing unit 63, a first parameter value collecting unit 64, a first indicator quantizing unit 65, and an evaluation report generating unit 66, where:
the log file obtaining unit 61 is configured to obtain a log file of a server side in a system;
the allocation ratio determining unit 62 is configured to obtain usage information of service interfaces of the server side from the log file, and determine a resource allocation ratio of the system according to the usage information;
the simulated scenario establishing unit 63 is configured to establish a simulated scenario of the system's current environment according to the resource allocation ratio;
the first parameter value collecting unit 64 is configured to perform a stress test based on the simulated scenario and collect parameter values of performance indicators of the server-side service interfaces;
the first indicator quantizing unit 65 is configured to quantize the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
the evaluation report generating unit 66 is configured to generate a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
Optionally, the allocation ratio determining unit 62 includes:
a historical information obtaining module, configured to obtain, from the log file, historical usage information of the service interfaces within a specified time period;
a statistical analysis module, configured to statistically analyze, based on the historical usage information, the usage of each class of service interface within the specified time period;
an allocation ratio determining module, configured to determine the resource allocation ratio of the system based on the statistical analysis result.
Optionally, the first indicator quantizing unit 65 includes:
a weight setting module, configured to set the weight of each class of service interface according to the resource allocation ratio;
a score lookup module, configured to look up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface;
an indicator quantizing module, configured to quantize the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
Optionally, the system performance measurement apparatus further includes:
a warning determination unit, configured to compare the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determine whether the parameter values exceed the warning values;
an optimization prompting unit, configured to prompt the user to perform performance optimization if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value.
Optionally, as shown in FIG. 4, the system performance measurement apparatus further includes:
a performance optimization unit 71, configured to optimize the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized;
a second parameter value collecting unit 72, configured to collect parameter values of the performance indicators of the server-side service interfaces after performance optimization;
a second indicator quantizing unit 73, configured to quantize the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table;
an optimization evaluation report generating unit 74, configured to generate a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
The embodiments of the present application help different users understand server-side performance, visually display the parameter values of the server side's performance indicators, and improve the accuracy of performance evaluation; this not only allows users to quickly understand server-side performance, but also helps them identify the performance indicators that need optimization, improving the efficiency of server-side performance optimization.
FIG. 5 is a schematic diagram of the server provided by an embodiment of the present application. As shown in FIG. 5, the server 8 of this embodiment includes: a processor 80, a memory 81, and computer readable instructions 82 stored in the memory 81 and executable on the processor 80, such as a system performance measurement program. When executing the computer readable instructions 82, the processor 80 implements the steps in the above system performance measurement method embodiments, such as steps 101 to 106 shown in FIG. 1; alternatively, when executing the computer readable instructions 82, the processor 80 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of modules 61 to 66 shown in FIG. 3.
Exemplarily, the computer readable instructions 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing particular functions, and the instruction segments are used to describe the execution of the computer readable instructions 82 in the server 8.
The server 8 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The server may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art can understand that FIG. 5 is merely an example of the server 8 and does not constitute a limitation on the server 8, which may include more or fewer components than those illustrated, may combine some components, or may have different components; for example, the server may also include input and output devices, network access devices, a bus, and the like.

Claims (20)

  1. A system performance measurement method, comprising:
    obtaining a log file of a server side in a system;
    obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
    establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
    performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
    quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
    generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
  2. The method according to claim 1, wherein the step of obtaining usage information of the server-side service interfaces from the log file and determining a resource allocation ratio of the system according to the usage information comprises:
    obtaining, from the log file, historical usage information of the service interfaces within a specified time period;
    statistically analyzing, based on the historical usage information, the usage of each class of service interface within the specified time period;
    determining the resource allocation ratio of the system based on the statistical analysis result.
  3. The method according to claim 1, wherein the step of quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and the preset performance indicator score table comprises:
    setting the weight of each class of service interface according to the resource allocation ratio;
    looking up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface;
    quantizing the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
  4. The method according to claim 1, wherein before the step of generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, the method further comprises:
    comparing the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determining whether the parameter values exceed the warning values;
    if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value, prompting the user to perform performance optimization.
  5. The method according to any one of claims 1 to 4, wherein after the step of generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, the method comprises:
    optimizing the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized;
    collecting parameter values of the performance indicators of the server-side service interfaces after performance optimization;
    quantizing the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table;
    generating a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
  6. A system performance measurement apparatus, comprising:
    a log file obtaining unit, configured to obtain a log file of a server side in a system;
    an allocation ratio determining unit, configured to obtain usage information of service interfaces of the server side from the log file, and determine a resource allocation ratio of the system according to the usage information;
    a simulated scenario establishing unit, configured to establish a simulated scenario of the system's current environment according to the resource allocation ratio;
    a first parameter value collecting unit, configured to perform a stress test based on the simulated scenario and collect parameter values of performance indicators of the server-side service interfaces;
    a first indicator quantizing unit, configured to quantize the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
    an evaluation report generating unit, configured to generate a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
  7. The apparatus according to claim 6, wherein the allocation ratio determining unit comprises:
    a historical information obtaining module, configured to obtain, from the log file, historical usage information of the service interfaces within a specified time period;
    a statistical analysis module, configured to statistically analyze, based on the historical usage information, the usage of each class of service interface within the specified time period;
    an allocation ratio determining module, configured to determine the resource allocation ratio of the system based on the statistical analysis result.
  8. The apparatus according to claim 6, wherein the first indicator quantizing unit comprises:
    a weight setting module, configured to set the weight of each class of service interface according to the resource allocation ratio;
    a score lookup module, configured to look up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface;
    an indicator quantizing module, configured to quantize the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
  9. The apparatus according to claim 6, wherein the system performance measurement apparatus further comprises:
    a warning determination unit, configured to compare the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determine whether the parameter values exceed the warning values;
    an optimization prompting unit, configured to prompt the user to perform performance optimization if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value.
  10. The apparatus according to any one of claims 6 to 9, wherein the system performance measurement apparatus further comprises:
    a performance optimization unit, configured to optimize the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized;
    a second parameter value collecting unit, configured to collect parameter values of the performance indicators of the server-side service interfaces after performance optimization;
    a second indicator quantizing unit, configured to quantize the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table;
    an optimization evaluation report generating unit, configured to generate a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
  11. A computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the following steps:
    obtaining a log file of a server side in a system;
    obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
    establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
    performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
    quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
    generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
  12. The computer readable storage medium according to claim 11, wherein the step of obtaining usage information of the server-side service interfaces from the log file and determining a resource allocation ratio of the system according to the usage information comprises:
    obtaining, from the log file, historical usage information of the service interfaces within a specified time period;
    statistically analyzing, based on the historical usage information, the usage of each class of service interface within the specified time period;
    determining the resource allocation ratio of the system based on the statistical analysis result.
  13. The computer readable storage medium according to claim 11, wherein the step of quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and the preset performance indicator score table comprises:
    setting the weight of each class of service interface according to the resource allocation ratio;
    looking up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface;
    quantizing the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
  14. The computer readable storage medium according to claim 11, wherein before the step of generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, the steps further comprise:
    comparing the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determining whether the parameter values exceed the warning values;
    if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value, prompting the user to perform performance optimization.
  15. The computer readable storage medium according to any one of claims 11 to 14, wherein after the step of generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces, the steps comprise:
    optimizing the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized;
    collecting parameter values of the performance indicators of the server-side service interfaces after performance optimization;
    quantizing the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table;
    generating a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
  16. A server, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer readable instructions:
    obtaining a log file of a server side in a system;
    obtaining usage information of service interfaces of the server side from the log file, and determining a resource allocation ratio of the system according to the usage information;
    establishing a simulated scenario of the system's current environment according to the resource allocation ratio;
    performing a stress test based on the simulated scenario, and collecting parameter values of performance indicators of the server-side service interfaces;
    quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and a preset performance indicator score table;
    generating a performance evaluation report according to the quantized scores and the parameter values of the performance indicators of the server-side service interfaces.
  17. The server according to claim 16, wherein the step of obtaining usage information of the server-side service interfaces from the log file and determining a resource allocation ratio of the system according to the usage information comprises:
    obtaining, from the log file, historical usage information of the service interfaces within a specified time period;
    statistically analyzing, based on the historical usage information, the usage of each class of service interface within the specified time period;
    determining the resource allocation ratio of the system based on the statistical analysis result.
  18. The server according to claim 16, wherein the step of quantizing the collected performance indicators of the service interfaces into scores according to the resource allocation ratio and the preset performance indicator score table comprises:
    setting the weight of each class of service interface according to the resource allocation ratio;
    looking up, in the preset performance indicator score table, the score corresponding to the service interface according to the collected parameter values of the performance indicators of the server-side service interface;
    quantizing the collected performance indicators of the service interfaces into scores according to the weight of each class of service interface and the score corresponding to the service interface.
  19. The server according to claim 16, wherein the processor further implements the following steps when executing the computer readable instructions:
    comparing the collected parameter values of the performance indicators of the server-side service interfaces with preset warning values, and determining whether the parameter values exceed the warning values;
    if a parameter value of a performance indicator of the server-side service interfaces exceeds its warning value, prompting the user to perform performance optimization.
  20. The server according to any one of claims 16 to 19, wherein the processor further implements the following steps when executing the computer readable instructions:
    optimizing the performance of the server-side service interfaces according to the scores into which the performance indicators of the service interfaces are quantized;
    collecting parameter values of the performance indicators of the server-side service interfaces after performance optimization;
    quantizing the performance indicators of the server-side service interfaces collected after performance optimization into scores according to the resource allocation ratio and the preset performance indicator score table;
    generating a performance optimization evaluation report according to the quantized result and the performance evaluation report obtained before performance optimization.
PCT/CN2018/082833 2018-02-07 2018-04-12 System performance measurement method and apparatus, storage medium and server WO2019153487A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810121898.6A CN108446210B (zh) 2018-02-07 2018-02-07 System performance measurement method, storage medium and server
CN201810121898.6 2018-02-07

Publications (1)

Publication Number Publication Date
WO2019153487A1 true WO2019153487A1 (zh) 2019-08-15

Family

ID=63191696

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082833 WO2019153487A1 (zh) 2018-02-07 2018-04-12 System performance measurement method and apparatus, storage medium and server

Country Status (2)

Country Link
CN (1) CN108446210B (zh)
WO (1) WO2019153487A1 (zh)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408364A (zh) * 2018-08-28 2019-03-01 深圳壹账通智能科技有限公司 Performance analysis method and apparatus for a software product, terminal and computer storage medium
CN109446041B (zh) * 2018-09-25 2022-10-28 平安普惠企业管理有限公司 Server pressure early-warning method, system and terminal device
CN109358968B (zh) * 2018-10-08 2021-06-25 北京数码视讯软件技术发展有限公司 Server resource allocation method and apparatus
CN109298990B (zh) * 2018-10-17 2023-04-14 平安科技(深圳)有限公司 Log storage method and apparatus, computer device and storage medium
CN109712266B (zh) * 2018-11-21 2021-12-14 斑马网络技术有限公司 Battery power-consumption behavior evaluation method and apparatus, storage medium and electronic device
CN109729155A (zh) * 2018-12-13 2019-05-07 平安医疗健康管理股份有限公司 Service request allocation method and related apparatus
CN110008101A (zh) * 2019-04-04 2019-07-12 网易(杭州)网络有限公司 Client performance evaluation method and apparatus, storage medium and electronic device
CN110209577A (zh) * 2019-05-20 2019-09-06 深圳壹账通智能科技有限公司 Test method and apparatus
CN110377503A (zh) * 2019-06-19 2019-10-25 平安银行股份有限公司 Stress testing method and apparatus, computer device and storage medium
CN110727472A (zh) * 2019-09-10 2020-01-24 平安普惠企业管理有限公司 Application server performance optimization method and apparatus, storage medium and electronic device
CN110633194B (zh) * 2019-09-26 2023-03-28 中国民用航空总局第二研究所 Performance evaluation method for hardware resources in a specific environment
CN111488271B (zh) * 2020-03-10 2023-10-27 中移(杭州)信息技术有限公司 Tuning method and system for message middleware, electronic device and storage medium
CN111625436A (zh) * 2020-05-26 2020-09-04 泰康保险集团股份有限公司 Insurance service capacity management method and apparatus, electronic device and storage medium
CN112559271B (zh) * 2020-12-24 2023-10-20 北京百度网讯科技有限公司 Interface performance monitoring method, apparatus, device and storage medium for distributed applications
CN112905431A (zh) * 2021-03-05 2021-06-04 上海中通吉网络技术有限公司 Automatic locating method, apparatus and device for system performance problems
CN113282471B (zh) * 2021-05-17 2022-09-27 多点(深圳)数字科技有限公司 Device performance testing method, apparatus and terminal device
CN115858177B (zh) * 2023-02-08 2023-10-24 成都数联云算科技有限公司 Rendering machine resource allocation method, apparatus, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002044901A2 (en) * 2000-11-29 2002-06-06 Netuitive Inc. Computer performance forecasting system
CN103778050A (zh) * 2013-12-30 2014-05-07 国网山东省电力公司 Database server high-availability performance detection system
CN105404581A (zh) * 2015-12-25 2016-03-16 北京奇虎科技有限公司 Database evaluation method and apparatus
CN106021079A (zh) * 2016-05-06 2016-10-12 华南理工大学 Web application performance testing method based on a user frequent-access sequence model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10031829B2 (en) * 2009-09-30 2018-07-24 International Business Machines Corporation Method and system for it resources performance analysis
CN103544103A (zh) * 2013-09-02 2014-01-29 烟台中科网络技术研究所 Simulated concurrency method and system for software performance testing
CN103577328B (zh) * 2013-11-20 2016-08-17 北京奇虎科技有限公司 Application performance analysis method and apparatus


Also Published As

Publication number Publication date
CN108446210B (zh) 2021-04-30
CN108446210A (zh) 2018-08-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905530

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18905530

Country of ref document: EP

Kind code of ref document: A1