CN115145797A - Application performance testing method, device, equipment and storage medium - Google Patents

Application performance testing method, device, equipment and storage medium

Info

Publication number
CN115145797A
CN115145797A (application CN202210843207.XA)
Authority
CN
China
Prior art keywords
application
tested
time
equipment
running
Prior art date
Legal status
Pending
Application number
CN202210843207.XA
Other languages
Chinese (zh)
Inventor
张莉婷
孟阳
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210843207.XA
Publication of CN115145797A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a method, a device, equipment and a storage medium for testing application performance. The method comprises the following steps: determining a preset number of target processes running on the device, based on running-state monitoring information of the device where the application to be tested is located; and determining the running occupancy of each target process on the device to obtain performance test data for the application to be tested. With this technical scheme, performance test data for the application to be tested are obtained more comprehensively, the limitation of testing only the application's own process is avoided, and the accuracy of the performance test is improved by analyzing the test data from different aspects.

Description

Application performance testing method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a method, a device, equipment and a storage medium for testing application performance.
Background
In order to improve the running performance of various client applications, it is generally necessary to test their running performance on a device in advance, so as to determine whether the performance indices required by the user can be met.
Currently, when testing an application on a device, the Central Processing Unit (CPU) occupancy of each process in the device is usually monitored by directly executing the adb shell top command while the application is running. The CPU occupancy of the process hosting the application is then taken as the performance test data for analyzing the application's running performance.
However, executing the adb shell top command on the device itself occupies additional CPU resources. That is, compared with the normal operation of the application, the performance test adds an extra process that consumes a certain amount of CPU, so the CPU occupancy measured during the test deviates from the application's actual CPU occupancy in normal operation. Testing application performance from the CPU occupancy of the application's own process alone therefore significantly reduces the accuracy of the test.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for testing application performance, so that the performance test data of the application to be tested can be more comprehensively acquired, and the accuracy and the effectiveness of the application performance test can be improved.
In a first aspect, an embodiment of the present application provides a method for testing application performance, where the method includes:
determining a preset number of target processes which are running in the equipment based on running state monitoring information of the equipment where the application to be tested is located;
and determining the operation occupancy rate of each target process in the equipment to obtain the performance test data of the application to be tested.
In a second aspect, an embodiment of the present application provides an apparatus for testing application performance, where the apparatus includes:
the target process determining module is used for determining a preset number of target processes which are running in the equipment based on running state monitoring information of the equipment where the application to be tested is located;
and the performance testing module is used for determining the operation occupancy rate of each target process in the equipment to obtain the performance testing data of the application to be tested.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor and a memory, the memory being configured to store a computer program, the processor being configured to invoke and run the computer program stored in the memory to perform the method of application performance testing provided in the first aspect of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program, where the computer program enables a computer to execute the method for testing application performance as provided in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the method for application performance testing as provided in the first aspect of the present application.
According to the application performance testing method, device, equipment and storage medium, while the device where the application to be tested is located is running, a preset number of target processes running on that device are determined based on its running-state monitoring information, and the running occupancy of each target process is then determined. In this way, the performance test data of the application to be tested are obtained more comprehensively, the limitation of testing only the application's own process is avoided, and the accuracy of the performance test is improved by analyzing the test data from different aspects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart illustrating a method for testing application performance according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating operation state monitoring information obtained through an adb shell top command according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a variation of an FPS parameter of an application under test in a device according to an embodiment of the present application;
fig. 4 and fig. 5 are schematic diagrams of changes in GPU frequency and GPU proportion in a device where an application to be tested is located according to an embodiment of the present application, respectively;
FIG. 6 is a flow chart illustrating another method for application performance testing according to an embodiment of the present application;
FIG. 7 is a diagram illustrating an embodiment of determining the user-mode time and kernel-mode time of a target process through the /proc/[pid]/stat file;
FIG. 8 is a schematic block diagram of an apparatus for performance testing according to an embodiment of the present disclosure;
fig. 9 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Considering that an additional process is added to an adb shell top command executed during performance testing and occupies a certain amount of additional CPU resources, a certain deviation exists between the CPU occupancy rate obtained during the application performance testing process and the actual CPU occupancy rate during normal operation of the application, and the accuracy of the application performance testing is affected. Therefore, the application designs a new application performance test scheme. The method comprises the steps of determining a plurality of running target processes in the equipment through running state monitoring information of the equipment where the application to be tested is located, and then determining the running occupancy rate of each target process in the equipment, so that performance test data of the application to be tested are obtained more comprehensively, limitation of application performance test is avoided, and the accuracy of application performance test is improved through comprehensively analyzing the performance test data of the application to be tested from different aspects.
Fig. 1 is a flowchart illustrating a method for testing application performance according to an embodiment of the present application. The method can be executed by the apparatus for application performance testing provided by the present disclosure, where that apparatus can be implemented in any software and/or hardware manner. For example, the apparatus may be applied to any electronic device, including but not limited to tablet computers, mobile phones (e.g., folding-screen mobile phones, large-screen mobile phones, etc.), wearable devices, vehicle-mounted devices, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers, Ultra-Mobile Personal Computers (UMPC), netbooks, Personal Digital Assistants (PDA), smart televisions, smart screens, high-definition televisions, 4K televisions, smart speakers, smart projectors, and the like; the present application does not limit the specific type of electronic device.
Specifically, as shown in fig. 1, the method may include the following steps:
s110, determining a preset number of target processes which are running in the equipment based on the running state monitoring information of the equipment where the application to be tested is located.
The application to be tested is any client application, developed by a developer, whose actual running performance needs to be tested, so that potential problems can be optimized based on the test results before the application is deployed, ensuring its stability in actual operation. For example, the application to be tested in the present application may be a newly developed Virtual Reality (VR) game suitable for various Android devices.
In the present application, the performance test of the application to be tested mainly tests its actual operation on a certain device. Therefore, the application to be tested is pre-installed on the device, so that a user can launch it there and execute various application operations. The device may be any terminal device on which the application to be tested can run. A communication connection is then established between the device and a Personal Computer (PC) used for the test, so that the PC can observe the actual operation of the application on the device in real time. That is, the application performance testing scheme in the present application mainly runs on the PC, which executes each performance testing step for the application to be tested; this avoids interfering with the actual operation of the device and thereby ensures the accuracy of the test as much as possible.
As an optional implementation scheme in the present application, when performing a performance test on an application to be tested, a connected device with the application to be tested installed is determined first. Then, when the application to be tested is required to be started on the device and corresponding application operation is executed, the application to be tested is in an actual running state on the device. Furthermore, in the actual operation process of the application to be tested on the device, the resource occupation situation of each process existing in the device can be dynamically monitored in real time in a certain mode, and the operation state monitoring information of the device can be obtained.
The running state monitoring information of the device may include, but is not limited to, the total number of processes configured in the device, the number of processes running, and a user identifier, a process priority, a CPU occupancy, and the like of a process owner when each process actually runs.
Then, it is considered that the application to be tested is executed by using a certain process on the device, and the execution conditions of other processes may also affect the actual execution condition of the process in which the application to be tested is located. Therefore, the running state monitoring information of the equipment where the application to be tested is located is analyzed, and the actual running condition of each process in the equipment can be obtained. Then, according to the actual operation condition of each process, a preset number of running target processes can be screened out from all the processes in the device, so that whether the operation performance of the application to be tested is abnormal or not can be comprehensively analyzed by using the operation conditions of a plurality of target processes in the following.
For example, as shown in fig. 2, the top command in terminal command-line mode (that is, the adb shell top command) may be used in the present application to obtain the preset number of target processes running on the device where the application to be tested is located. The process marked by the box in fig. 2 is the process additionally created when the adb shell top command is executed on the device, and it may affect the resource occupancy of the other processes. At this time, the target processes whose CPU occupancy ([%CPU] in fig. 2) is non-zero and ranks in the top 100 can be filtered out of the adb shell top output; that is, the preset number in the present application is 100.
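A minimal PC-side sketch of this screening step could look like the following. It is an illustration, not the patent's exact implementation: the column layout of adb shell top output varies across Android versions, so the parser keys off the header row instead of hard-coding positions, and the `-b -n 1` invocation is an assumption.

```python
import subprocess

def parse_top_output(text, limit=100):
    """Parse `top` output and return up to `limit` (pid, name, cpu_percent)
    tuples with non-zero CPU occupancy, sorted by occupancy descending.
    Assumes a header row containing a %CPU column (shown as [%CPU] in fig. 2)."""
    lines = text.splitlines()
    header_idx = next(i for i, l in enumerate(lines) if "%CPU" in l)
    cols = lines[header_idx].replace("[%CPU]", "%CPU").split()
    cpu_col = cols.index("%CPU")
    procs = []
    for line in lines[header_idx + 1:]:
        fields = line.split()
        if len(fields) <= cpu_col:
            continue
        try:
            pid = int(fields[0])
            cpu = float(fields[cpu_col])
        except ValueError:
            continue  # skip summary rows and malformed lines
        if cpu > 0:
            procs.append((pid, fields[-1], cpu))
    procs.sort(key=lambda p: p[2], reverse=True)
    return procs[:limit]

def sample_target_processes(limit=100):
    """One sampling step: run top once on the connected device (assumes a
    device is reachable via adb) and return the candidate target processes."""
    out = subprocess.run(["adb", "shell", "top", "-b", "-n", "1"],
                         capture_output=True, text=True).stdout
    return parse_top_output(out, limit)
```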
And S120, determining the operation occupancy rate of each target process in the equipment to obtain the performance test data of the application to be tested.
After the preset number of target processes running in the equipment where the application to be tested is located is determined, the running occupancy rate of each target process in the equipment can be determined according to the resource occupation condition of each target process existing in the equipment. The running occupancy may be an occupancy of the CPU actual running time of the target process in the current running period.
At this time, it is considered that the running performance of the application to be tested may be interfered with by the operation of other processes. Therefore, the running occupancies of the target processes can be combined as the performance test data of the application to be tested, so that, by comparing the running conditions of different processes, it can be comprehensively analyzed whether a performance abnormality of the process hosting the application is caused by the application itself or by abnormal operation of other processes, thereby testing the performance of the application to be tested more accurately.
It should be noted that, since the same application may behave differently on different devices, its running performance may be affected by the configuration of the device itself. Therefore, to ensure the comprehensiveness of the performance test, the running performance of the application to be tested is tested on devices of different versions, so as to obtain its behavior on each of them. Furthermore, based on the performance test results on several different device versions, potential problems of the application to be tested are optimized, so that it can be adapted to devices of more versions and its running performance on any device is improved.
As an optional implementation scheme in the present application, in order to further ensure accuracy of the application performance test, a performance index threshold is preset for each target process according to performance test data of the application to be tested, that is, the operation occupancy rate of each target process, so as to determine whether each target process in the device where the application to be tested is located has a performance abnormality, and further analyze whether the performance of the process where the application to be tested is located is affected.
Optionally, in order to ensure the accuracy of the performance index threshold, the application to be tested is respectively tested and trained on at least two versions of equipment to obtain test training data of the application to be tested on each version of equipment; and determining a performance index threshold of the application to be tested based on the test training data of the application to be tested on each version of equipment, so as to determine a performance test result of the application to be tested according to the performance test data and the performance index threshold of the application to be tested.
That is, devices of at least two versions are first set up before the actual test. The application performance testing steps are then executed once on each version of the device, to obtain the running occupancy of each target process on that version; the occupancies so obtained serve as the test training data of the application to be tested on that device version. Thus, for the same target process, a running occupancy exists on each device version, and for each target process these occupancies can be averaged. The resulting average represents a running reference index of that target process on an arbitrary device. Accordingly, an occupancy threshold can be set for each target process from its average occupancy across device versions, and these thresholds together form the performance index threshold of the application to be tested. In the subsequent actual performance test, the running occupancy of each target process in the performance test data can be compared with its occupancy threshold in the performance index threshold, to judge the running condition of each target process on a given device while the application to be tested actually runs.
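The threshold construction described above can be sketched as follows. The data layout (a dict of per-version runs keyed by process name) is an assumption for illustration; the patent does not fix a concrete representation.

```python
def build_threshold_table(training_runs):
    """Average the running occupancy of each target process over test runs
    collected on devices of different versions, producing a per-process
    occupancy threshold (the performance index threshold). `training_runs`
    maps a device-version label to {process_name: occupancy}; a process
    contributes only on the versions where it was observed."""
    sums, counts = {}, {}
    for run in training_runs.values():
        for name, occupancy in run.items():
            sums[name] = sums.get(name, 0.0) + occupancy
            counts[name] = counts.get(name, 0) + 1
    return {name: sums[name] / counts[name] for name in sums}

def check_against_thresholds(test_data, thresholds):
    """Compare measured occupancies with the thresholds; return the processes
    that exceed their reference value (candidate performance anomalies)."""
    return {name: occ for name, occ in test_data.items()
            if name in thresholds and occ > thresholds[name]}
```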
Then, the running condition of each target process on the device is used to comprehensively analyze whether a performance abnormality of the process hosting the application to be tested is caused by the application itself or by other processes, thereby testing the performance of the application to be tested more accurately and obtaining its performance test result. In addition, the performance index threshold determined by test training on multi-version devices provides good guidance for product standard formulation and version performance evaluation when the application is actually deployed later.
In addition, to ensure the comprehensiveness of the performance test, besides using the running occupancy (i.e., CPU occupancy) of each target process on the device where the application to be tested is located as performance test data, the present application also analyzes the Frames Per Second (FPS) parameter of that device and its graphics processing parameters (e.g., the frequency and utilization ratio of the Graphics Processing Unit (GPU) in the device).
For example, for the FPS parameter of the device where the application to be tested is located, when the device is a VR device, the logcat command may be used directly to obtain the log information the application outputs about FPS. By continuously acquiring and analyzing this log information, the maximum, minimum, mean, variance, etc. of the FPS parameter during the test are calculated. As shown in fig. 3, the variation of the FPS parameter for the application to be tested can be shown graphically, and whether its performance is abnormal can then be analyzed. With the FPS parameter monitored continuously, the data analysis covers not only the FPS mean, which evaluates the overall frame-rate level of the application, but also the FPS variance, which evaluates the frame-rate stability over the whole test.
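The FPS statistics step could be sketched as below. The log-line format (`FPS: <value>`) is a placeholder assumption; the real tag and format depend on how the application under test logs its frame rate via logcat.

```python
import re

def fps_stats(log_lines):
    """Extract FPS samples from logcat-style lines and summarize them.
    Returns (max, min, mean, variance), or None if no samples were found.
    Population variance is used, reflecting frame-rate stability over the test."""
    samples = []
    for line in log_lines:
        m = re.search(r"FPS:\s*([\d.]+)", line)
        if m:
            samples.append(float(m.group(1)))
    if not samples:
        return None
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return max(samples), min(samples), mean, var
```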
For the graphics processing parameters of the device where the application to be tested is located, the specific text in which the device records the GPU frequency and utilization ratio can be read cyclically during the test. The maximum, minimum, mean, variance, etc. of the GPU frequency and ratio during the test are then calculated by parsing this text. As shown in fig. 4 and fig. 5, the variation of the GPU frequency and ratio for the application to be tested can be shown graphically, and whether the application has a performance abnormality can then be analyzed.
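One cycle of that text parsing could be sketched as follows. The patent does not give the file path or line format, so the format assumed here (`freq: <MHz> busy: <percent>%`) is purely illustrative and device-specific in practice.

```python
import re

def parse_gpu_text(text):
    """Parse one snapshot of the device text recording GPU frequency and
    busy ratio; returns (freq_mhz, busy_percent). The field names are a
    hypothetical format, not a real device's sysfs layout."""
    freq = float(re.search(r"freq:\s*([\d.]+)", text).group(1))
    busy = float(re.search(r"busy:\s*([\d.]+)%", text).group(1))
    return freq, busy
```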
In summary, the running performance of the application can be evaluated as a whole through the FPS values during the test, the running occupancy (i.e., CPU occupancy) of each target process on the device, and the GPU frequency and ratio of the device, so that the application's performance level is evaluated effectively. Moreover, when a problem occurs during the application's operation, effective log information and monitoring information can be provided for developers to analyze and locate it.
According to the technical scheme provided by the embodiment of the application, in the running process of the device where the application to be tested is located, the running state monitoring information of the device where the application to be tested is located is used for determining the preset number of target processes running in the device, and then the running occupancy rate of each target process in the device is determined, so that the performance test data of the application to be tested can be obtained more comprehensively, the limitation of the application performance test is avoided, and the accuracy of the application performance test is improved by comprehensively analyzing the performance test data of the application to be tested from different aspects.
As an optional implementation scheme in the present application, when the adb shell top command is executed to obtain the running-state monitoring information of the device, a certain deviation may exist between the CPU occupancy obtained during the performance test and the actual CPU occupancy when the application runs normally. Therefore, to ensure the accuracy of the running occupancy of each target process on the device and avoid this deviation, the present application may further obtain the specific running data of the target process identified by pid from the /proc/[pid]/stat file; this data includes not only the data of the target process as a main process, but also the data of each of its child processes. The running occupancy of each target process on the device is then calculated more accurately. The specific calculation is explained in detail below.
Fig. 6 is a flowchart illustrating another method for testing application performance according to an embodiment of the present application. Referring to fig. 6, the method may specifically include the following steps:
s610, periodically acquiring running state monitoring information of the equipment where the application to be tested is located at the current moment according to a preset interval.
Considering that the CPU occupancy of each target process obtained with the adb shell top command deviates somewhat from the actual CPU occupancy in normal operation, the present application can compare the CPU running time of the same process at different moments and thus analyze the actual CPU running time of the process within the period between those moments, so as to calculate the running occupancy of each target process on the device accurately.
In the present application, a preset interval is configured in advance, and during the test, the running-state monitoring information of the device where the application to be tested is located is acquired once after every preset interval. That is, according to the preset interval, the running-state monitoring information of the device at different moments can be obtained.
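The periodic acquisition step could be sketched as a simple sampling loop; the sampling function, interval, and sample count are parameters supplied by the test harness, not values fixed by the patent.

```python
import time

def sample_periodically(sample_fn, interval_s, n_samples):
    """Collect running-state snapshots at a fixed preset interval.
    Returns a list of (timestamp, snapshot) pairs, one per sampling moment;
    `sample_fn` would be something like the top-output sampler above."""
    samples = []
    for _ in range(n_samples):
        samples.append((time.monotonic(), sample_fn()))
        time.sleep(interval_s)
    return samples
```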
And S620, determining a preset number of processes to be selected, which are running at the moment, of the equipment based on the running state monitoring information at each moment.
After the running-state monitoring information of the device at the current moment is obtained, the adb shell top output is parsed to determine the preset number of processes running on the device at that moment. Thus, at each moment at which the device's running-state monitoring information is periodically acquired according to the preset interval, a preset number of processes running at that moment are determined and used as the candidate processes in the present application.
S630, according to the intersection of the processes to be selected at the starting time and the ending time of each preset interval, determining the target process at the preset interval.
Since the processes running on the device at different moments may differ, the running occupancy of a process can only be analyzed for processes that run continuously through a preset interval. Therefore, the starting moment and ending moment of each preset interval are determined. A preset number of candidate processes exist at each of these two moments, and the intersection of the candidate sets at the starting and ending moments of each preset interval yields the candidate processes present at both moments, which serve as the target processes for that preset interval.
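The intersection step can be sketched directly with set operations; representing each sample as a {pid: name} dict is an illustrative choice.

```python
def target_processes_for_interval(start_candidates, end_candidates):
    """Intersect the candidate process sets sampled at the start and end of a
    preset interval. Only processes present at both moments are treated as
    having run continuously through the interval, so only they get an
    occupancy computed. Both arguments are {pid: name} dicts."""
    common_pids = start_candidates.keys() & end_candidates.keys()
    return {pid: end_candidates[pid] for pid in sorted(common_pids)}
```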
It can be seen that, in the same manner as described above, in each preset interval, the corresponding target process running in the preset interval is determined.
S640, for each preset interval, determining the total running time of each target process of that interval at the starting moment and at the ending moment of the interval.
As an optional implementation in the present application, for each preset interval, the specific running state of each target process at the starting moment and the ending moment of the interval may be analyzed; that is, the total running time of each target process at those two moments may be determined, so that the actual running time of each target process within the interval can subsequently be derived.
For example, as shown in fig. 7, for either the starting moment or the ending moment of each preset interval, the present application may read the /proc/[pid]/stat file (e.g., via cat /proc/[pid]/stat) to obtain the user-mode time and kernel-mode time of each target process at that moment, where pid in /proc/[pid]/stat denotes the identifier of the current target process.
Furthermore, as shown in fig. 7, the /proc/[pid]/stat file contains the full sequence of running statistics of a target process from system start-up to the current moment. In this sequence, the 14th value, utime, is the total time the target process has run in user mode; the 15th value, stime, is the total time it has run in kernel mode; the 16th value, cutime, is the total user-mode running time of its child processes; and the 17th value, cstime, is the total kernel-mode running time of its child processes. Therefore, at either the starting moment or the ending moment of each preset interval, the user-mode time of a target process is utime + cutime and its kernel-mode time is stime + cstime.
Then, by summing the user-mode time and the kernel-mode time of each target process at the starting moment and the ending moment of each preset interval, the total running time at either moment is obtained as processCPUTime = utime + stime + cutime + cstime.
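The field arithmetic above can be sketched as follows (the offsets follow the standard Linux /proc/[pid]/stat layout; the function name is ours, and the times are in clock ticks rather than seconds):

```python
def process_cpu_time(stat_line: str) -> int:
    """Sum utime (field 14), stime (15), cutime (16) and cstime (17)
    from one /proc/[pid]/stat line, giving processCPUTime in clock
    ticks for the process and its reaped children."""
    # Field 2 (the command name) is wrapped in parentheses and may
    # contain spaces, so split once after the closing parenthesis.
    after_name = stat_line.rsplit(")", 1)[1].split()
    # after_name[0] is field 3 (state), so fields 14-17 sit at
    # offsets 11-14 of this list.
    utime, stime, cutime, cstime = (int(x) for x in after_name[11:15])
    return utime + stime + cutime + cstime
```

Ticks can be converted to seconds by dividing by the clock-tick rate (`os.sysconf('SC_CLK_TCK')`, typically 100 Hz on Android/Linux), but the conversion cancels out if the preset interval is expressed in the same units.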
S650, calculating the running occupancy of each target process in the device as the difference between its total running times at the starting moment and the ending moment of the preset interval, divided by the preset interval, so as to obtain the performance test data of the application to be tested.
Optionally, for each preset interval, the actual running time of each target process within the interval may be obtained by calculating the difference between its total running times at the starting moment and the ending moment of the interval. The ratio of this difference to the preset interval then gives the running occupancy of the target process in the device.
For example, the total running time of a target process at the starting moment of a preset interval may be denoted processCPUTime_start, and its total running time at the ending moment processCPUTime_end. The running occupancy of the target process over the preset interval can then be expressed as (processCPUTime_end - processCPUTime_start) / preset interval.
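The formula above reduces to a one-line function (a minimal sketch; both times and the interval must be in the same units, e.g. clock ticks):

```python
def run_occupancy(cpu_time_start: int, cpu_time_end: int, interval: int) -> float:
    """(processCPUTime_end - processCPUTime_start) / preset interval:
    the fraction of the interval this target process spent running."""
    return (cpu_time_end - cpu_time_start) / interval
```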
Following the same calculation steps, the running occupancy of each target process in each preset interval can be calculated. Then, by continuously analyzing how the running occupancy of each target process changes over a number of preset intervals, it can be judged whether the application to be tested is running abnormally.
According to the technical scheme provided by the embodiments of the present application, while the device hosting the application to be tested is running, the preset number of target processes running in the device are determined from the running state monitoring information of that device, and the running occupancy of each target process in the device is then determined. Performance test data of the application to be tested can thus be obtained more comprehensively, the limitations of conventional application performance testing are avoided, and the accuracy of the test is improved by comprehensively analyzing the performance test data from different aspects.
Fig. 8 is a schematic block diagram of an apparatus for applying performance testing according to an embodiment of the present application. As shown in fig. 8, the apparatus 800 may include:
a target process determining module 810, configured to determine a preset number of target processes running in a device to be tested based on running state monitoring information of the device;
a performance testing module 820, configured to determine an operation occupancy rate of each target process in the device, so as to obtain performance testing data of the application to be tested.
Further, the target process determining module 810 may be specifically configured to:
acquiring running state monitoring information of equipment where the application to be tested is located at the current moment regularly according to preset intervals;
determining a preset number of processes to be selected, which are running at each moment, of the equipment based on the running state monitoring information at each moment;
and determining the target process in the preset interval according to the intersection of the processes to be selected at the starting time and the ending time of each preset interval.
Further, the performance testing module 820 may be specifically configured to:
determining the total running time of each target process in the preset interval at the starting time and the ending time of the preset interval aiming at each preset interval;
and calculating the operation occupancy rate of each target process in the equipment according to the difference value between the total operation time of each target process at the starting time and the ending time of the preset interval and the preset interval.
Further, the performance testing module 820 may be specifically configured to:
and aiming at any one of the starting time and the ending time of each preset interval, acquiring the user mode time and the kernel mode time of each target process at the time in the preset interval, and taking the sum of the user mode time and the kernel mode time of the target process at the time as the total running time of the target process at the time.
Further, the preset number of target processes running in the device is determined by a top command in a terminal command line mode.
Further, the performance test data of the application to be tested further includes a transmission frame number per second in the device where the application to be tested is located and a graphic processing parameter of the device where the application to be tested is located.
Further, the apparatus 800 for testing application performance may further include:
the test training module is used for respectively carrying out test training on the applications to be tested on at least two versions of equipment to obtain test training data of the applications to be tested on each version of equipment;
and the performance index determining module is used for determining a performance index threshold of the application to be tested based on the test training data of the application to be tested on each version of equipment, so as to determine a performance test result of the application to be tested according to the performance test data of the application to be tested and the performance index threshold.
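The embodiment does not spell out how the performance index threshold is derived from the per-version test training data; one plausible sketch (every name and the slack factor here are our assumptions, not a rule stated in the source) would take the worst per-version mean occupancy and widen it by a margin:

```python
def performance_threshold(training_data: dict, slack: float = 1.2) -> float:
    """Hypothetical threshold rule: training_data maps a device
    version to the occupancy samples observed during test training;
    the threshold is the largest per-version mean, widened by slack."""
    per_version_means = [sum(v) / len(v) for v in training_data.values()]
    return max(per_version_means) * slack
```

A measured occupancy above this threshold would then be reported as a failed performance test result for the application to be tested.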
In the embodiment of the application, in the running process of the equipment where the to-be-tested application is located, based on the running state monitoring information of the equipment where the to-be-tested application is located, the preset number of target processes running in the equipment are determined, and then the running occupancy rate of each target process in the equipment is determined, so that the performance test data of the to-be-tested application can be obtained more comprehensively, the limitation of the application performance test is avoided, and the accuracy of the application performance test is improved by comprehensively analyzing the performance test data of the to-be-tested application from different aspects.
It is to be understood that apparatus embodiments and method embodiments may correspond to one another and that similar descriptions may refer to method embodiments. To avoid repetition, the description is omitted here. Specifically, the apparatus 800 shown in fig. 8 may perform any method embodiment provided in the present application, and the foregoing and other operations and/or functions of each module in the apparatus 800 are respectively for implementing corresponding processes in each method of the embodiment of the present application, and are not described herein again for brevity.
The apparatus 800 of the embodiments of the present application is described above in connection with the figures from the perspective of a functional block. It should be understood that the functional modules may be implemented by hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the present application may be implemented by integrated logic circuits of hardware in a processor and/or instructions in the form of software, and the steps of the method disclosed in conjunction with the embodiments in the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, and the like, as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps in the above method embodiments in combination with hardware thereof.
Fig. 9 is a schematic block diagram of an electronic device 900 provided in an embodiment of the present application.
As shown in fig. 9, the electronic device 900 may include:
a memory 910 and a processor 920, the memory 910 being configured to store a computer program and to transfer the program code to the processor 920. In other words, the processor 920 may call and run a computer program from the memory 910 to implement the methods in the embodiments of the present application.
For example, the processor 920 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 920 may include, but is not limited to:
general purpose processors, digital Signal Processors (DSPs), application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 910 includes, but is not limited to:
Volatile memory and/or non-volatile memory. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 910 and executed by the processor 920 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing certain functions, the instruction segments describing the execution of the computer program in the electronic device.
As shown in fig. 9, the electronic device may further include:
a transceiver 930, the transceiver 930 being connectable to the processor 920 or the memory 910.
The processor 920 may control the transceiver 930 to communicate with other devices, and in particular, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 930 may include a transmitter and a receiver. The transceiver 930 may further include one or more antennas.
It should be understood that the various components in the electronic device are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, the present application also provides a computer program product containing instructions, which when executed by a computer, cause the computer to execute the method of the above method embodiment.
When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Video Disk (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the module is merely a logical division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for application performance testing, comprising:
determining a preset number of target processes which are running in the equipment based on running state monitoring information of the equipment where the application to be tested is located;
and determining the operation occupancy rate of each target process in the equipment to obtain the performance test data of the application to be tested.
2. The method of claim 1, wherein the determining a preset number of target processes running in the device based on the running state monitoring information of the device to which the application to be tested is located comprises:
acquiring running state monitoring information of equipment where the application to be tested is located at the current moment regularly according to preset intervals;
determining a preset number of processes to be selected, which are running at each moment, of the equipment based on the running state monitoring information at each moment;
and determining the target process in the preset interval according to the intersection of the processes to be selected at the starting time and the ending time of each preset interval.
3. The method of claim 2, wherein the determining the operating occupancy of each of the target processes within the device comprises:
determining the total running time of each target process in the preset interval at the starting time and the ending time of the preset interval aiming at each preset interval;
and calculating the operation occupancy rate of each target process in the equipment according to the difference value between the total operation time of each target process at the starting time and the ending time of the preset interval and the preset interval.
4. The method according to claim 3, wherein the determining, for each preset interval, a total running time of each target process at a start time and an end time of the preset interval for the preset interval comprises:
and aiming at any one of the starting time and the ending time of each preset interval, acquiring the user mode time and the kernel mode time of each target process at the time in the preset interval, and taking the sum of the user mode time and the kernel mode time of the target process at the time as the total running time of the target process at the time.
5. The method of claim 1, wherein the preset number of target processes running in the device is determined by a top command in a terminal command line mode.
6. The method of claim 1, wherein the performance test data of the application under test further comprises a number of transmission frames per second within the device at which the application under test is located and a graphics processing parameter of the device at which the application under test is located.
7. The method of claim 1, further comprising:
respectively carrying out test training on the to-be-tested application on at least two versions of equipment to obtain test training data of the to-be-tested application on each version of equipment;
and determining a performance index threshold of the application to be tested based on the test training data of the application to be tested on each version of equipment, so as to determine a performance test result of the application to be tested according to the performance test data of the application to be tested and the performance index threshold.
8. An apparatus for application performance testing, comprising:
the target process determining module is used for determining a preset number of target processes which are running in the equipment based on running state monitoring information of the equipment where the application to be tested is located;
and the performance testing module is used for determining the operation occupancy rate of each target process in the equipment to obtain the performance testing data of the application to be tested.
9. An electronic device, comprising:
a processor and a memory, the memory for storing a computer program, the processor for calling and executing the computer program stored in the memory to perform the method of application performance testing of any of claims 1-7.
10. A computer-readable storage medium for storing a computer program, the computer program causing a computer to perform the method of application performance testing according to any one of claims 1 to 7.
CN202210843207.XA 2022-07-18 2022-07-18 Application performance testing method, device, equipment and storage medium Pending CN115145797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210843207.XA CN115145797A (en) 2022-07-18 2022-07-18 Application performance testing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210843207.XA CN115145797A (en) 2022-07-18 2022-07-18 Application performance testing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115145797A true CN115145797A (en) 2022-10-04

Family

ID=83411528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210843207.XA Pending CN115145797A (en) 2022-07-18 2022-07-18 Application performance testing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115145797A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117520129A (en) * 2023-11-21 2024-02-06 北京东青互联科技有限公司 Data center equipment monitoring method, device, equipment and medium
CN117520129B (en) * 2023-11-21 2024-05-10 北京东青互联科技有限公司 Data center equipment monitoring method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination