CN116955118B - Performance analysis method, system, computing device and storage medium


Info

Publication number
CN116955118B
CN116955118B (application number CN202311211583.8A)
Authority
CN
China
Prior art keywords
debug
debugging
breakpoint
code
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311211583.8A
Other languages
Chinese (zh)
Other versions
CN116955118A (en)
Inventor
卢桢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd filed Critical Uniontech Software Technology Co Ltd
Priority to CN202311211583.8A priority Critical patent/CN116955118B/en
Publication of CN116955118A publication Critical patent/CN116955118A/en
Application granted granted Critical
Publication of CN116955118B publication Critical patent/CN116955118B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/30 - Monitoring
    • G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 - Recording or statistical evaluation of computer activity for performance assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/362 - Software debugging
    • G06F 11/3644 - Software debugging by instrumenting at runtime

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a performance analysis method, a performance analysis system, a computing device and a storage medium, relating to the technical field of application development. The method comprises: running the code to be debugged and pausing when the first breakpoint is hit, taking that breakpoint as the starting breakpoint; from the starting breakpoint, automatically executing single-step operations so as to send a debug command request to the debug server for each line of code, until the next breakpoint is hit and taken as the ending breakpoint, with the lines of code between the starting and ending breakpoints forming one measurement unit; acquiring the running time of each line of code returned by the debug server based on the debug adaptation protocol, thereby deriving the initial-state performance data of the measurement unit; and repeating the above steps to obtain initial-state performance data for a plurality of measurement units, so that performance analysis can be performed on the measurement units and on each line of code within them based on the initial-state performance data. The invention enables automatic fine-grained performance analysis of the code to be debugged and improves usability.

Description

Performance analysis method, system, computing device and storage medium
Technical Field
The present invention relates to the field of application development technologies, and in particular, to a performance analysis method, a performance analysis system, a computing device, and a storage medium.
Background
In the prior art, numerous performance analysis tools such as perf and valgrind have emerged. To make it easier for developers to analyze application performance during development, many IDE (Integrated Development Environment) applications integrate performance analysis tools; for example, Visual Studio 2013 integrates performance and diagnostic tools. However, these tools analyze performance at the application level, and such coarse granularity cannot meet development needs.
To address this, Microsoft introduced the PerfTips tool in Visual Studio 2015, which helps users perform performance analysis at the level of individual code lines and supports use during debugging. In this way the running efficiency of the code can be checked at any time, avoiding serious performance problems late in application development. However, PerfTips has limitations in how it operates: checking the running time of each line of code requires manual debugging, and the running time is displayed to the right of the code only after each manually executed Step operation. Requiring the developer to single-step manually makes the tool insufficiently easy to use; moreover, it mainly analyzes individual lines of code and lacks global data analysis over a period of time. A further problem is that the same piece of code often performs differently in different run phases, and the time consumed may differ across execution scenarios, yet PerfTips cannot make a longitudinal performance comparison over time.
In the prior art, a breakpoint is set at some position in the Visual Studio editor; the program runs and is suspended at that position, and the time taken to run from the previous position to the current position is displayed to the right of the code. Compared with the traditional approach of adding timing statistics into the code, this feature clearly simplifies many operations and lets developers analyze performance while debugging, which is convenient compared with most tools that can only analyze after the fact. In addition, it refines the granularity of analysis from the function level to the code-line level, so it is more widely applicable. Nevertheless, this solution has the following drawbacks: during debugging, the PerfTips value is displayed to the right of the code only after a Step operation is executed manually, and requiring single-step manual execution by developers makes the tool insufficiently easy to use; and a piece of code may need to create resources or perform initialization when first executed, steps that are omitted in subsequent executions, so the same code often shows performance differences across execution phases, for which PerfTips provides no longitudinal analysis.
Therefore, a performance analysis method is needed to solve the problems in the above technical solutions.
Disclosure of Invention
Accordingly, the present invention provides a performance analysis method and system to solve or at least alleviate the above-mentioned problems.
According to a first aspect of the present invention, there is provided a performance analysis method performed at a debug client adapted to communicate with a debug server based on a debug adaptation protocol, the debug client comprising an automatic debugging module, the method comprising: running, by the automatic debugging module, the code to be debugged, pausing when the first breakpoint is hit, and taking the first breakpoint as the starting breakpoint; from the starting breakpoint, automatically executing single-step operations through the automatic debugging module so as to send a debug command request to the debug server for each line of code, until the next breakpoint is hit, taking the next breakpoint as the ending breakpoint, and taking the lines of code between the starting breakpoint and the ending breakpoint as one measurement unit; acquiring the running time of each line of code returned by the debug server based on the debug adaptation protocol; deriving the initial-state performance data of the measurement unit from the running time of each line of code; and repeating the above steps to obtain initial-state performance data for a plurality of measurement units, so as to perform performance analysis on the measurement units and on each line of code within them based on the initial-state performance data.
Optionally, in the performance analysis method according to the present invention, before the automatic debugging module runs the code to be debugged, the method further comprises: acquiring all breakpoint data in the code to be debugged and storing it as a breakpoint data set; determining the number of breakpoints contained in the breakpoint data set; and, if the number of breakpoints is odd, deleting the last breakpoint in the code to be debugged.
Optionally, in the performance analysis method according to the present invention, automatically executing single-step operations through the automatic debugging module so as to send a debug command request to the debug server for each line of code until the next breakpoint is hit comprises: automatically executing single-step operations through the automatic debugging module and, each time a line of code is executed, sending a debug command request for the current line of code to the debug server; recording the file path and code line number corresponding to the next line of code and matching them against the breakpoint data in the breakpoint data set; and, if the file path and code line number corresponding to the next line of code successfully match one piece of breakpoint data in the breakpoint data set, determining that the next breakpoint is hit.
Optionally, in the performance analysis method according to the present invention, sending a debug command request for the current line of code to the debug server comprises: executing a pause operation and a continue operation on the current line of code through the automatic debugging module so as to send the debug command request to the debug server.
Optionally, the performance analysis method according to the present invention further comprises: taking the file path and code line number corresponding to the starting breakpoint of each measurement unit as the key of that measurement unit and the initial-state performance data of each measurement unit as its mapped value, and creating an initial Map container; during the running of the code to be debugged, obtaining running-state performance data for each measurement unit according to the above performance analysis method and updating the mapped value of each measurement unit in the initial Map container to obtain a new Map container; and comparing and analyzing, according to the new Map container and the initial Map container, the performance difference of each measurement unit between the running state and the initial state.
Optionally, in the performance analysis method according to the present invention, comparing and analyzing the performance difference of each measurement unit between the running state and the initial state according to the new Map container and the initial Map container comprises: generating and displaying an initial-state performance baseline for each measurement unit based on its mapped value in the initial Map container; generating and displaying a running-state performance floating line for each measurement unit based on its mapped value in the new Map container; and comparing and analyzing the performance difference of the measurement unit between the running state and the initial state according to the running-state performance floating line and the initial-state performance baseline.
Optionally, in the performance analysis method according to the present invention, the code to be debugged corresponds to one or more code files.
Optionally, in the performance analysis method according to the present invention, the debug server is adapted to: in response to the debug command request for each line of code, invoke the corresponding debug command, and record the start call time and end call time of the debug command to obtain the running time of each line of code.
Optionally, in the performance analysis method according to the present invention, the debug server comprises a back-end performance analysis module, and the debug server is further adapted to: when a debug command request sent by the debug client for a line of code is received, record the current first system time, through the back-end performance analysis module, as the start call time of the debug command; and when the debug command finishes executing, record the current second system time, through the back-end performance analysis module, as the end call time of the debug command.
Optionally, in the performance analysis method according to the present invention, the debug server is further adapted to: append the start call time and the end call time to the tail of the debug adaptation protocol message through the back-end performance analysis module, and send them to the debug client.
Optionally, in the performance analysis method according to the present invention, the debug adaptation protocol is the DAP (Debug Adapter Protocol) debug adaptation protocol, and the debug client and the debug server are adapted to communicate based on the debug adaptation protocol through protocol agents.
According to a second aspect of the present invention, there is provided a performance analysis method performed at a debug server adapted to communicate with a debug client based on a debug adaptation protocol, the method comprising: in response to a debug command request sent by the debug client for each line of code, invoking the corresponding debug command and recording the start call time and end call time of the debug command to obtain the running time of each line of code; and sending the running time of each line of code to the debug client based on the debug adaptation protocol; wherein the debug client is adapted to perform the method according to the first aspect of the present invention.
Optionally, in the performance analysis method according to the present invention, the debug server comprises a back-end performance analysis module, and recording the start call time and end call time of the debug command comprises: when a debug command request sent by the debug client for a line of code in a measurement unit is received, recording the current first system time, through the back-end performance analysis module, as the start call time of the debug command; and when the debug command finishes executing, recording the current second system time, through the back-end performance analysis module, as the end call time of the debug command.
Optionally, in the performance analysis method according to the present invention, sending the running time of each line of code to the debug client based on the debug adaptation protocol comprises: appending the start call time and the end call time to the tail of the debug adaptation protocol message through the back-end performance analysis module, and sending them to the debug client.
Optionally, in the performance analysis method according to the present invention, the debug server is adapted to access a language debug plug-in corresponding to each debug language.
According to a third aspect of the present invention, there is provided a performance analysis system comprising: a debug client comprising an automatic debugging module and adapted to perform the method according to the first aspect of the present invention; and a debug server adapted to communicate with the debug client based on a debug adaptation protocol and to perform the method according to the second aspect of the present invention.
According to a fourth aspect of the present invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, the program instructions being adapted to be executed by the at least one processor and comprising instructions for performing the performance analysis method described above.
According to a fifth aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform a performance analysis method as described above.
According to the technical solution of the present invention, the automatic debugging module of the debug client runs the code to be debugged, pauses when the first breakpoint is hit and takes it as the starting breakpoint; from the starting breakpoint, it automatically executes single-step operations so as to send a debug command request to the debug server for each line of code, until the next breakpoint is hit and taken as the ending breakpoint, with the lines of code between the starting and ending breakpoints forming one measurement unit. The running time of each line of code returned by the debug server based on the debug adaptation protocol is acquired, and from it the initial-state performance data of the measurement unit is derived; in this way, initial-state performance data for a plurality of measurement units can be obtained, so that performance analysis can be performed on the measurement units and on each line of code within them. Thus, in the technical solution of the present invention, single-step operations are executed automatically at the debug client by the automatic debugging module, a debug command request is sent to the debug server for each line of code, and the running time of each line returned by the debug server based on the debug adaptation protocol is obtained. By having the automatic debugging module execute the single-step operations automatically, usability is improved while performance data is collected for every line of code for fine-grained performance analysis, with the granularity of analysis refined to the code-line level, enabling finer performance analysis.
Furthermore, by dynamically determining the starting and ending breakpoints of each measurement unit, the invention can support performance analysis of the code to be debugged across files and automatically collect and analyze performance data for each line of code, making it easier to find the code with performance problems.
In addition, the invention stores the initial-state performance data of each measurement unit in a Map container and updates it with running-state performance data, so that the performance difference of each measurement unit in the code to be debugged between the running state and the initial state can be compared and analyzed. In this way, a longitudinal comparative analysis of the performance of the same measurement unit in different run phases is achieved.
Moreover, by drawing and displaying the initial-state performance baseline and the running-state performance floating line of each measurement unit, the performance difference of the same measurement unit between the running state and the initial state can be presented intuitively.
The foregoing is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the invention are set forth below.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 shows a schematic diagram of a performance analysis system 100 according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention;
FIG. 3 is a flow chart of a method 300 of performance analysis according to one embodiment of the invention;
FIG. 4 illustrates an interface schematic of an IDE (Integrated Development Environment) development application according to one embodiment of the invention;
FIG. 5 shows a performance comparison schematic diagram according to one embodiment of the invention;
FIG. 6 shows a flow diagram of a second method 600 of performance analysis according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a schematic diagram of a performance analysis system 100 according to one embodiment of the invention. According to the performance analysis system 100 of the present invention, it is possible to automatically perform fine-grained performance analysis on code to be debugged.
As shown in fig. 1, the performance analysis system 100 includes a debug client 110 and a debug server 120. The debug client 110 may be communicatively coupled to the debug server 120. In particular, the debug client 110 may communicate with the debug server 120 based on a debug adaptation protocol.
In some embodiments, debug client 110 may be integrated in an IDE development application.
In some embodiments, the debug adaptation protocol may be a DAP debug adaptation protocol in particular.
In some embodiments, the debug client 110 and the debug server 120 may communicate via protocol agents (protocol proxies) based on the debug adaptation protocol (the DAP debug adaptation protocol). Specifically, a protocol agent is encapsulated in each of the debug client 110 and the debug server 120, and the full DAP debug adaptation protocol can be implemented on top of these agents, so that the protocol agent of the debug client 110 can communicate with the protocol agent of the debug server 120 based on the DAP debug adaptation protocol.
It should be noted that the protocol agents encapsulated in the debug client 110 and the debug server 120 implement the communication functions between the two: for example, the debug server 120 can listen on TCP and the debug client 110 can establish a TCP connection to it. The protocol agents also encapsulate the DAP debug adaptation protocol, handling its serialization and deserialization. In addition, the protocol agents provide registration of callback functions, so that various events and requests can be handled in callbacks.
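To make the protocol-agent mechanism concrete, the following TypeScript sketch illustrates the three responsibilities just described: TCP listening and connection, serialization and deserialization of DAP-style messages (the DAP wire format frames a JSON body with a Content-Length header), and callback registration. All class and method names here (ProtocolAgent, on, send, feed) are illustrative assumptions, not the patent's actual implementation.

```typescript
import * as net from "net";

type DapMessage = { seq: number; type: "request" | "response" | "event"; [k: string]: unknown };

// Hypothetical protocol agent: wraps a TCP socket and the DAP wire format
// (a Content-Length header followed by a JSON body).
class ProtocolAgent {
  private handlers = new Map<string, (msg: DapMessage) => void>();
  private buffer = "";

  constructor(private socket: net.Socket) {
    socket.on("data", (chunk) => this.feed(chunk.toString("utf8")));
  }

  // Register a callback for a given command or event name.
  on(name: string, handler: (msg: DapMessage) => void): void {
    this.handlers.set(name, handler);
  }

  // Serialize a message onto the wire.
  send(msg: DapMessage): void {
    const body = JSON.stringify(msg);
    this.socket.write(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
  }

  // Deserialize incoming bytes into messages and dispatch them to callbacks.
  private feed(data: string): void {
    this.buffer += data;
    for (;;) {
      const m = this.buffer.match(/Content-Length: (\d+)\r\n\r\n/);
      if (!m) return;
      const start = m.index! + m[0].length;
      const len = Number(m[1]);
      if (this.buffer.length < start + len) return; // wait for the full body
      const msg = JSON.parse(this.buffer.slice(start, start + len)) as DapMessage;
      this.buffer = this.buffer.slice(start + len);
      const key = (msg as any).command ?? (msg as any).event;
      this.handlers.get(key)?.(msg);
    }
  }
}

// Server side listens on TCP; client side connects to it.
net.createServer((sock) => new ProtocolAgent(sock)).listen(4711);
const client = new ProtocolAgent(net.connect(4711, "127.0.0.1"));
client.on("stopped", (e) => console.log("stopped event received", e));
```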
In some embodiments, debug server 120 includes a back-end protocol agent 121, a back-end performance analysis module 122, and a debug command module 123. Among other things, debug command module 123 may include a plurality of debug commands, including, for example, launch, stepIn, stepOut, continue. The backend performance analysis module 122 may be disposed between the backend protocol agent 121 and the debug command module 123 of the debug server 120, and the backend performance analysis module 122 is coupled to the backend protocol agent 121 and the debug command module 123, respectively.
In some embodiments, the debug client 110 includes a front-end protocol agent 111, an automatic debugging module 112, a front-end performance analysis module 113 and a comparison analysis module 114. The automatic debugging module 112 and the front-end performance analysis module 113 are each coupled to the front-end protocol agent 111 in order to establish, via the front-end protocol agent 111, a communication connection (based on the debug adaptation protocol) with the back-end protocol agent 121 of the debug server 120. In this way, the debug client 110 can run the code to be debugged through the automatic debugging module 112, pause when the first breakpoint is hit and take it as the starting breakpoint, and then, starting from that breakpoint, automatically execute single-step (Step) operations through the automatic debugging module 112 so as to send a debug command request to the debug server 120 for each line of code, until the next breakpoint is hit and taken as the ending breakpoint, with the lines of code between the starting and ending breakpoints forming one measurement unit. The debug client 110 can then acquire, through the front-end performance analysis module 113, the running time of each line of code returned (through the back-end protocol agent 121) by the debug server 120 based on the debug adaptation protocol.
It should be noted that, based on the DAP debug adaptation protocol, common debug commands, for example launch, stepIn, stepOut and continue, can be abstracted, making the approach applicable to debug tools for multiple debug languages and enabling unified upper-layer invocation of debug commands.
Different debug languages correspond to different debug tools; in other words, each debug language has its own debug tool. For example, the C++ language corresponds to the GDB debug tool and the Java language to the JDB debug tool. To improve the extensibility of the debug server 120, the debug tool for each language can be accessed as a plug-in. That is, the debug server 120 can access a language debug plug-in (Language debugger) corresponding to each debug language. In this way, supporting an additional debug language only requires accessing the corresponding language debug plug-in, without modifying the code of the debug server 120.
As shown in FIG. 1, in some embodiments of the invention, debug server 120 may have access to one or more language debug plugins, including, for example, a C++ language debug plugin, a Java language debug plugin, a Python language debug plugin, and the like.
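The plug-in mechanism can be sketched as a registry keyed by debug language; the interface and function names below are illustrative assumptions rather than the patent's actual API:

```typescript
// Hypothetical plug-in interface: each debug language supplies one implementation
// that maps the abstracted DAP commands onto its native debugger (GDB, JDB, ...).
interface LanguageDebugger {
  language: string;                      // e.g. "cpp", "java", "python"
  launch(program: string): Promise<void>;
  stepIn(): Promise<void>;
  stepOut(): Promise<void>;
  continueExec(): Promise<void>;         // corresponds to the DAP "continue" command
}

const plugins = new Map<string, LanguageDebugger>();

// Supporting a new debug language means registering one more plug-in;
// the debug server's own code stays unmodified.
function registerPlugin(p: LanguageDebugger): void {
  plugins.set(p.language, p);
}

function debuggerFor(language: string): LanguageDebugger {
  const p = plugins.get(language);
  if (!p) throw new Error(`no debug plug-in registered for ${language}`);
  return p;
}
```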
In an embodiment of the present invention, the debug client 110 may be configured to perform the performance analysis method one 300 of the present invention, and the debug server 120 may be configured to perform the performance analysis method two 600 of the present invention. The first and second performance analysis methods 300 and 600 of the present invention will be described in detail below.
In one embodiment of the present invention, the debug client 110 and the debug server 120 may be implemented as the computing device 200 as described below, respectively, such that the performance analysis method one 300 and the performance analysis method two 600 of the present invention may be executed in the computing device 200.
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration, computing device 200 includes at least one processing unit 202 and a system memory 204. According to one aspect, the processing unit 202 may be implemented as a processor, depending on the configuration and type of computing device. The system memory 204 includes, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read only memory), flash memory, or any combination of such memories. According to one aspect, an operating system 205 is included in system memory 204.
According to one aspect, operating system 205 is suitable for controlling the operation of computing device 200, for example. Further, examples are practiced in connection with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in fig. 2 by those components within the dashed line. According to one aspect, computing device 200 has additional features or functionality. For example, according to one aspect, computing device 200 includes additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in fig. 2 by removable storage device 209 and non-removable storage device 210.
As set forth hereinabove, according to one aspect, program modules 203 are stored in system memory 204. According to one aspect, program modules 203 may include one or more applications; the invention is not limited to the type of application. For example, applications may include email and contacts applications, word processing applications, spreadsheet applications, database applications, slide show applications, drawing or computer-aided design applications, web browser applications, and so on. In an embodiment according to the present invention, the program modules 203 include a plurality of program instructions for executing the performance analysis method one 300 and/or the performance analysis method two 600 of the present invention.
According to one aspect, the examples may be practiced in a circuit comprising discrete electronic components, a packaged or integrated electronic chip containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic components or a microprocessor. For example, examples may be practiced via a system on a chip (SOC) in which each or many of the components shown in fig. 2 may be integrated on a single integrated circuit. According to one aspect, such SOC devices may include one or more processing units, graphics units, communication units, system virtualization units, and various application functions, all of which are integrated (or "burned") onto a chip substrate as a single integrated circuit. When operating via an SOC, the functionality described herein may be operated via dedicated logic integrated with other components of computing device 200 on a single integrated circuit (chip). Embodiments of the invention may also be practiced using other techniques capable of performing logical operations (e.g., AND, OR, AND NOT), including but NOT limited to mechanical, optical, fluidic, AND quantum techniques. In addition, embodiments of the invention may be practiced within a general purpose computer or in any other circuit or system.
According to one aspect, the computing device 200 may also have one or more input devices 212, such as a keyboard, mouse, pen, voice input device, touch input device, and the like. Output device(s) 214 such as a display, speakers, printer, etc. may also be included. The foregoing devices are examples and other devices may also be used. Computing device 200 may include one or more communication connections 216 that allow communication with other computing devices 218. Examples of suitable communication connections 216 include, but are not limited to: RF transmitter, receiver and/or transceiver circuitry; universal Serial Bus (USB), parallel and/or serial ports.
The term computer readable media as used herein includes computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information (e.g., computer readable instructions, data structures, or program modules). System memory 204, removable storage 209, and non-removable storage 210 are all examples of computer storage media (i.e., memory storage). Computer storage media may include Random Access Memory (RAM), read Only Memory (ROM), electrically erasable read only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture that can be used to store information and that can be accessed by computing device 200. According to one aspect, any such computer storage media may be part of computing device 200. Computer storage media does not include a carrier wave or other propagated data signal.
According to one aspect, communication media is embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal (e.g., carrier wave or other transport mechanism) and includes any information delivery media. According to one aspect, the term "modulated data signal" describes a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio Frequency (RF), infrared, and other wireless media.
In an embodiment according to the invention, the computing device 200 is configured to perform the performance analysis method one 300 and/or the performance analysis method two 600 according to the invention. The computing device 200 includes one or more processors and one or more readable storage media storing program instructions that, when executed by the one or more processors, cause the computing device 200 to perform the performance analysis method one 300 and/or the performance analysis method two 600 in an embodiment of the invention, so as to automatically perform fine-grained performance analysis of the code to be debugged.
FIG. 3 illustrates a flow diagram of a performance analysis method 300 according to one embodiment of the invention. The performance analysis method 300 is suitable for execution in a debug client 110 (e.g. the computing device 200 described above). The debugging client 110 can automatically perform fine-grained performance analysis on the code to be debugged by executing the performance analysis method 300 of the invention.
As described above, the performance analysis system 100 according to the present invention includes the debug client 110 and the debug server 120. Debug client 110 may be communicatively coupled to debug server 120. In particular, the debug client 110 may communicate with the debug server based on a debug adaptation protocol (which may be a DAP in particular). In some embodiments, debug client 110 includes an auto-debug module 112.
In some embodiments, debug client 110 and debug server 120 may communicate via a protocol proxy based on a debug adaptation protocol.
In some embodiments, debug client 110 may be integrated in an IDE development application. FIG. 4 illustrates an interface schematic of an IDE (Integrated development Environment) development application according to one embodiment of the invention.
As shown in FIG. 3, a performance analysis method 300 begins at step 310.
In step 310, the automatic debug module 112 runs the code to be debugged until the first breakpoint is hit, and pauses with the first breakpoint as the starting breakpoint.
Subsequently, debug client 110 may automatically send a debug command request to debug server 120 separately for each line of code.
Specifically, in step 320, starting from the starting breakpoint that was hit, single-step (Step) operations are automatically performed by the automatic debugging module 112 so as to send a debug command request to the debug server 120 for each line of code, until the next breakpoint is hit, which is taken as the ending breakpoint. The automatic debugging module 112 then takes the lines of code between the starting breakpoint and the ending breakpoint as one measurement unit.
In some embodiments, one or more measurement units may be included in the code to be debugged.
It should be noted that, before the code to be debugged is run by the automatic debugging module 112, a plurality of breakpoints are inserted into the code to be debugged, and the lines of code between every two adjacent breakpoints serve as one measurement unit, so that automatic debugging and performance analysis are performed for each measurement unit. In an embodiment, the number of code lines in a measurement unit does not exceed a threshold.
It will be appreciated that each measurement unit corresponds to a start breakpoint and an end breakpoint, respectively. For each measurement unit, the measurement unit includes a plurality of lines of code between the corresponding start breakpoint and end breakpoint.
As shown in fig. 4, lines 5 and 9 carry the starting breakpoint and the ending breakpoint, respectively, of one measurement unit: the starting breakpoint corresponds to the line 5 code and the ending breakpoint to the line 9 code. The lines of code between line 5 and line 9 form one measurement unit.
In one embodiment, debug client 110 includes a front-end protocol proxy 111 and debug server 120 includes a back-end protocol proxy 121, and debug client 110 may establish a communication connection (based on a debug adaptation protocol) with back-end protocol proxy 121 of debug server 120 through front-end protocol proxy 111. Based on this, in step 320, debug client 110 may send a debug command request for each line of code to backend protocol agent 121 of debug server 120 via front-end protocol agent 111 (based on the debug adaptation protocol).
In step 330, the runtime of each line of code returned by debug server 120 based on the debug adaptation protocol is obtained.
In one embodiment, the debug client 110 includes a front-end performance analysis module 113, through which step 330 may be performed.
Here, it should be understood that, for a debug command request sent by the debug client 110 to the debug server 120 for each line of code, the debug server 120 will return the runtime of the corresponding line of code to the debug client 110 based on the debug adaptation protocol. That is, for a debug command request that is sent by the debug client 110 to the debug server 120 for each line of code, the debug client 110 may obtain the runtime of the corresponding line of code returned by the debug server 120 based on the debug adaptation protocol, respectively.
In some embodiments, when receiving a debug command request sent by the debug client 110 for each line of code, the debug server 120 may call a corresponding debug command in a callback function in response to the debug command request for each line of code, and record a start call time and an end call time of the debug command, so that the runtime of each line of code may be obtained.
In one particular embodiment, debug server 120 includes a back-end performance analysis module 122. The debug server 120 is further adapted to: when receiving a debug command request sent by the debug client 110 for each line of code, the current first system time is recorded by the backend performance analysis module 122 as a start call time of the debug command. When the debug command is executed, the current second system time is recorded by the backend performance analysis module 122 as the end call time of the debug command.
In addition, in one embodiment, the debug server 120 may also append the start call time and the end call time of the debug command to the tail of the debug adaptation protocol through the back-end performance analysis module 122 and send to the debug client 110. Since the running time of a line of code corresponding to the debug command can be obtained according to the start call time and the end call time of the debug command, the running time of each line of code can be sent to the debug client 110 based on the debug adaptation protocol.
Further, the debug client 110 may determine, through the front-end performance analysis module 113, whether a start call time and an end call time are present at the tail of the debug adaptation protocol message; if so, it may read and save them from the message tail, thereby obtaining the running time of each line of code.
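As an illustration of this check, the front-end might detect and strip trailing timestamps as follows. This is a minimal sketch under the assumption that the server appends the two times as a plain "start,end" pair after the message body; the exact tail format is not specified by the patent, and the names here are invented for illustration:

```typescript
// Hypothetical: the server appends "\nstartCallTime,endCallTime" (epoch ms)
// after the DAP JSON body; the client detects and strips it.
function extractRuntime(raw: string): { body: string; runtimeMs?: number } {
  const m = raw.match(/\n(\d+),(\d+)$/);     // timestamps present at the message tail?
  if (!m) return { body: raw };              // no timing information attached
  const startCallTime = Number(m[1]);
  const endCallTime = Number(m[2]);
  return {
    body: raw.slice(0, m.index),             // original protocol body, unchanged
    runtimeMs: endCallTime - startCallTime,  // running time of this line of code
  };
}
```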
In step 340, initial state performance data (including the run time of each line of code) of the measurement unit is derived from the run time of each line of code (between the start breakpoint and the end breakpoint).
Further, the automatic debugging module 112 continues to run the code to be debugged, and the steps 310 to 340 are repeatedly executed. Finally, in step 350, initial state performance data for the plurality of measurement units may be obtained. In this way, performance analysis can be performed for each measurement unit and each row of codes in the measurement unit based on the initial state performance data of each measurement unit. Here, it can be understood that from the running time of each line code in the measurement unit, the total running time of all the line codes in the measurement unit can be determined.
Specifically, debug client 110 may automatically analyze the measurement unit and the operating efficiency of each line of code in the measurement unit based on the initial state performance data of the measurement unit.
In this way, for each measurement unit in the code to be debugged, automatic performance analysis of the measurement unit and each line of code in the measurement unit can be achieved.
According to the technical scheme of the invention, a single Step operation (Step operation) is automatically executed by the automatic debugging module 112 at the debugging client 110, so as to respectively send a debugging command request to the debugging server 120 for each line of codes, and obtain the running time of each line of codes returned by the debugging server 120 based on a debugging adaptation protocol. In this way, by way of the auto-debug module 112 automatically performing a single step operation, ease of use is improved upon enabling collection of performance data for each line of code for fine-grained performance analysis, where the granularity of the performance analysis is refined to the code level, thereby enabling finer performance analysis.
In some embodiments, before the automatic debugging module 112 runs the code to be debugged, all breakpoint data in the code may be acquired and stored as a breakpoint data set. Here, a piece of breakpoint data may include the file path and code line number corresponding to a breakpoint (the line of code on which the breakpoint sits). The number of breakpoints contained in the breakpoint data set is then determined, and whether that number is odd is checked. If the number of breakpoints is odd, the last breakpoint in the code to be debugged is deleted. This avoids an unpaired breakpoint, which would leave a starting breakpoint with no subsequent ending breakpoint to hit.
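A minimal sketch of this preprocessing step follows; the BreakpointData shape and the listBreakpoints helper are illustrative assumptions (the latter is stubbed with sample data):

```typescript
interface BreakpointData {
  filePath: string;  // file containing the breakpoint
  line: number;      // code line number the breakpoint sits on
}

// Hypothetical helper standing in for "gather all breakpoints set in the editor";
// stubbed here with sample data for illustration.
function listBreakpoints(): BreakpointData[] {
  return [
    { filePath: "/src/main.cpp", line: 5 },
    { filePath: "/src/main.cpp", line: 9 },
    { filePath: "/src/util.cpp", line: 12 },
  ];
}

function buildBreakpointDataSet(): BreakpointData[] {
  const set = listBreakpoints();
  // Breakpoints are consumed in start/end pairs; an odd count would leave the
  // last starting breakpoint without an ending breakpoint, so drop the last one.
  if (set.length % 2 === 1) set.pop();
  return set;
}
```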
In some embodiments, it cannot be determined in advance whether code inside or outside a function will execute, or in what order. For this reason, the starting breakpoint and the ending breakpoint can be determined dynamically during debugging.
Specifically, in step 320, the step operation is automatically performed by the automatic debug module 112 to send a debug command request to the debug server 120 for each line of code, until the ending breakpoint corresponding to the measurement unit is hit, which may be implemented according to the following method.
Starting from the starting breakpoint, single-step operations are automatically performed by the automatic debugging module 112, and each time the next line of code is reached, a debug command request for the current line of code is sent to the debug server 120. In this way, a debug command request is sent to the debug server 120 for each line of code.
Also, each time the next line of code is reached, the file path and code line number corresponding to that line can be recorded and matched against the breakpoint data in the breakpoint data set, to judge whether the next line of code corresponds to the next breakpoint.
If the file path and code line number corresponding to the next line of code match none of the breakpoint data in the breakpoint data set, the automatic debugging module 112 continues to automatically execute single-step operations: on reaching each new line of code in the measurement unit, it sends a debug command request for the new current line to the debug server 120, records the file path and code line number of the new next line, and matches them against the breakpoint data set, until a match with one piece of breakpoint data succeeds.
If the file path and code line number corresponding to the next line of code successfully match one piece of breakpoint data in the breakpoint data set, the next line of code corresponds to the next breakpoint; that is, the next breakpoint is hit, and that breakpoint can be taken as the ending breakpoint. Here, the successfully matched breakpoint data may be stored as a separate set, the next breakpoint having been hit according to it. At this point, the process of sending a debug command request to the debug server 120 for each line of code is complete.
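Putting these steps together, the automatic stepping loop might look like the sketch below, which reuses the BreakpointData shape from the earlier sketch; step and requestRuntime are hypothetical stand-ins for the DAP round-trips performed by the automatic debugging module:

```typescript
interface Location { filePath: string; line: number; }

// Hypothetical session helpers standing in for DAP round-trips.
declare function step(): Promise<Location>;                      // run one line, return the next location
declare function requestRuntime(loc: Location): Promise<number>; // pause + continue; returns line runtime in ms

function isBreakpoint(loc: Location, set: BreakpointData[]): boolean {
  return set.some((b) => b.filePath === loc.filePath && b.line === loc.line);
}

// Measure one unit: from the (already hit) starting breakpoint to the
// dynamically detected ending breakpoint.
async function measureUnit(start: Location, set: BreakpointData[]) {
  const lineTimes: { loc: Location; ms: number }[] = [];
  let current = start;
  for (;;) {
    // Send a debug command request for the current line and record its runtime.
    lineTimes.push({ loc: current, ms: await requestRuntime(current) });
    const next = await step();           // single-step to the next line of code
    if (isBreakpoint(next, set)) break;  // file path + line number matched: ending breakpoint hit
    current = next;                      // no match: keep stepping
  }
  return lineTimes;                      // per-line data for this measurement unit
}
```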
In embodiments of the invention, the code to be debugged may correspond to one or more code files. By dynamically determining the starting and ending breakpoints during debugging, performance analysis of the code to be debugged can be supported across files; that is, the code to be debugged may span multiple code files. Moreover, performance data collection and analysis for each line of code are carried out automatically, making it easier to find the code with performance problems.
In one embodiment, sending the debug command request for the current line of code to debug server 120 in step 320 may be accomplished in the following manner.
Sending a debug command request for the current line of code to the debug server 120 can be accomplished by performing a pause operation and a continue operation on the current line of code through the automatic debugging module 112. Thus, by automatically performing a pause operation and a continue operation for each line of code, the running time of the corresponding line returned by the debug server 120 based on the debug adaptation protocol can be obtained.
As shown in fig. 4, assume line 5 in fig. 4 carries a breakpoint; the automatic debugging module 112 runs to line 5 and takes it as the starting breakpoint. A pause operation can be performed at the current line of code corresponding to this breakpoint (i.e., the line 5 code), followed by a continue operation to step to the next line of code (the line 6 code), and so on. Each time a pause operation is executed, a debug command request can be sent to the debug server 120 for the corresponding current line of code, and the running time of that line returned by the debug server 120 based on the debug adaptation protocol can be obtained.
In some embodiments, debug client 110 also includes contrast analysis module 114. After obtaining the initial state performance data of the measurement unit in step 340, the following steps may also be performed to implement longitudinal comparison analysis for the performance of the same measurement unit at different operation phases.
The debug client 110 may take the file path and code line number corresponding to the starting breakpoint of each measurement unit as the key (Key) of that measurement unit, and the initial-state performance data of each measurement unit as its mapped value (Value), and on this basis create an initial Map container. Specifically, a Map container is created, and for each measurement unit the file path and code line number of its starting breakpoint (the key) are stored in association with its initial-state performance data (the value), yielding the initial Map container.
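A sketch of this container, with the key format and data shape as illustrative assumptions:

```typescript
type UnitPerformance = { lineTimesMs: number[]; totalMs: number };

// Key: "filePath:line" of the unit's starting breakpoint; value: its performance data.
function unitKey(filePath: string, line: number): string {
  return `${filePath}:${line}`;
}

function recordUnit(map: Map<string, UnitPerformance>,
                    filePath: string, line: number, lineTimesMs: number[]): void {
  const totalMs = lineTimesMs.reduce((a, b) => a + b, 0);
  map.set(unitKey(filePath, line), { lineTimesMs, totalMs });
}

// The first run fills the initial Map container; later runs fill a copy whose
// mapped values are updated, yielding the "new Map container".
const initialMap = new Map<string, UnitPerformance>();
recordUnit(initialMap, "/src/main.cpp", 5, [2, 7, 1, 3]);
const newMap = new Map(initialMap);
recordUnit(newMap, "/src/main.cpp", 5, [2, 4, 1, 2]);
```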
During the running of the code to be debugged (i.e., in different run phases), the running-state performance data of each measurement unit (the counterpart of its initial-state performance data) can be obtained according to the performance analysis method one 300; the mapped value of each measurement unit in the initial Map container is then updated based on this running-state performance data, producing a new Map container.
To clarify the distinction between initial-state and running-state performance data: when the performance analysis method 300 of the present invention is executed for the first time on the code to be debugged, the initial-state performance data of each measurement unit is generated. During the running of the code to be debugged, the performance analysis method 300 can then be executed one or more further times, and each execution generates running-state performance data for each measurement unit.
Furthermore, the performance difference between the running state and the initial state of each measurement unit in the code to be debugged can be compared and analyzed by the comparison and analysis module 114 according to the new Map container and the initial Map container.
In this way, a longitudinal comparison of the performance of the same measuring unit at different operating phases is achieved.
FIG. 5 shows a performance comparison schematic diagram according to one embodiment of the invention, in which an initial-state performance baseline and a running-state performance floating line are shown.
As shown in fig. 5, in one embodiment, when comparing and analyzing the performance difference of each measurement unit between the running state and the initial state according to the new Map container and the initial Map container, the comparison analysis module 114 may generate and display an initial-state performance baseline for each measurement unit based on its mapped value (initial-state performance data) in the initial Map container. The initial-state performance baseline may be displayed on the interface of the debug client 110 (the IDE development application interface).
Likewise, a running-state performance floating line is generated and displayed for each measurement unit based on its mapped value (running-state performance data) in the new Map container; it, too, may be displayed on the interface of the debug client 110 (the IDE development application interface).
It should be noted that, during the running of the code to be debugged, performance analysis may be carried out several times according to the performance analysis method one 300; each analysis yields running-state performance data for every measurement unit and thus a new Map container, and a running-state performance floating line is drawn for each measurement unit from each new Map container obtained.
Thus, from the running-state performance floating line and the initial-state performance baseline of each measurement unit, the performance difference of that unit between the running state and the initial state can be compared and analyzed intuitively.
As shown in fig. 5, the initial-state performance baseline and the running-state performance floating line of a measurement unit intuitively present the performance difference of that same unit between the running state and the initial state.
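The comparison behind the chart reduces to a per-unit delta between the two containers; a sketch reusing the hypothetical UnitPerformance shape from above:

```typescript
// Compare each measurement unit's running-state total against its
// initial-state baseline; a positive delta means the unit got slower.
function compareRuns(initial: Map<string, UnitPerformance>,
                     latest: Map<string, UnitPerformance>) {
  const report: { unit: string; baselineMs: number; currentMs: number; deltaMs: number }[] = [];
  for (const [unit, base] of initial) {
    const cur = latest.get(unit);
    if (!cur) continue;             // unit not executed in this run phase
    report.push({
      unit,
      baselineMs: base.totalMs,     // initial-state performance baseline
      currentMs: cur.totalMs,       // running-state performance floating value
      deltaMs: cur.totalMs - base.totalMs,
    });
  }
  return report;
}

console.table(compareRuns(initialMap, newMap));
```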
FIG. 6 shows a flow diagram of a performance analysis method two 600 according to one embodiment of the invention. The performance analysis method two 600 is suitable for execution in the debug server 120 (e.g., the computing device 200 described above). By executing the performance analysis method two 600 of the invention, the debug server 120 supports automatic fine-grained performance analysis of the code to be debugged.
As described above, in an embodiment of the present invention, the debug server 120 may communicate with the debug client 110 based on a debug adaptation protocol, and in particular, the debug server 120 and the debug client 110 may communicate with each other based on a debug adaptation protocol through a protocol proxy. Wherein the debug client 110 is configured to perform the performance analysis method one 300 described above.
As shown in FIG. 6, a performance analysis method 600 begins at step 610.
In step 610, on receiving a debug command request sent by the debug client 110 for a line of code, the debug server 120 may, in response, call the corresponding debug command in a callback function and record the start call time and end call time of the debug command, thereby obtaining the running time of each line of code.
In step 620, the runtime of each line of code is sent to the debug client 110 based on the debug adaptation protocol, such that the debug client 110 can obtain initial state performance data (including the runtime of each line of code) of the measurement unit from the runtime of each line of code to perform performance analysis on the measurement unit and each line of code in the measurement unit based on the initial state performance data. Here, it can be understood that from the run time of each line code, the total run time of all line codes in the measurement unit can be determined.
Specifically, the debug client 110 may automatically analyze the operating efficiency of the measurement unit, and of each line of code within it, based on the measurement unit's initial state performance data.
In some embodiments, the debug server 120 includes a back-end performance analysis module 122, which obtains the call time of each debug command through instrumentation points.
When the debug server 120 receives a debug command request sent by the debug client 110 for a line of code, the back-end performance analysis module 122 records the current first system time as the start call time of the debug command; when execution of the debug command completes, it records the current second system time as the end call time. In this way, the start call time and the end call time of the debug command are recorded.
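A minimal sketch of this instrumentation, where executeDebugCommand is a placeholder for whatever the concrete debugger backend actually invokes:

```typescript
// Record the first system time before the call (start call time) and the
// second system time after it completes (end call time); their difference
// is the running time of the corresponding line of code.
async function timedDebugCommand(
  executeDebugCommand: () => Promise<void>, // placeholder for the real backend call
): Promise<{ startCallTime: number; endCallTime: number; runtimeMs: number }> {
  const startCallTime = Date.now(); // first system time
  await executeDebugCommand();
  const endCallTime = Date.now();   // second system time
  return { startCallTime, endCallTime, runtimeMs: endCallTime - startCallTime };
}
```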
In addition, the original debug adaptation protocol (the DAP debug adaptation protocol) carries no time parameter. To avoid changing the data structure of the original protocol, in one embodiment the debug server 120 appends, through the back-end performance analysis module 122, the start call time and the end call time of the debug command to the tail of the debug adaptation protocol message before sending it to the debug client 110. Since the running time of the line of code corresponding to a debug command follows from its start call time and end call time, the running time of each line of code can thus be sent to the debug client 110 based on the debug adaptation protocol.
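One plausible reading of "appended to the tail" is extra fields added after the standard DAP response fields, which a client that does not understand them simply ignores; a sketch under that assumption (the DapResponse shape is simplified, and the timing field names are hypothetical):

```typescript
// Simplified standard DAP response fields (see the Debug Adapter Protocol).
interface DapResponse {
  seq: number;
  type: 'response';
  request_seq: number;
  success: boolean;
  command: string;
  body?: unknown;
}

// Timing fields appended after the standard ones; the original protocol
// structure is left intact.
type TimedDapResponse = DapResponse & {
  startCallTime?: number;
  endCallTime?: number;
};

function appendTiming(
  response: DapResponse,
  startCallTime: number,
  endCallTime: number,
): TimedDapResponse {
  return { ...response, startCallTime, endCallTime };
}
```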
Here, because the running time of each line of code is measured at the debug server 120 itself, the time consumed by client-server communication is excluded from the measured running time, which improves the accuracy of the performance analysis.
Since a debug command generally executes asynchronously, its completion can only be determined from the output of the process; the debug server 120 therefore returns the execution result to the debug client 110 asynchronously by sending a signal. The execution result is encapsulated by the back-end protocol agent 121 and then sent to the front-end protocol agent 111 of the debug client 110.
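A sketch of this asynchronous hand-off; the event name and the transport function below are placeholders, not the disclosed implementation:

```typescript
import { EventEmitter } from 'node:events';

// Completion is detected from the debuggee process output; the back-end
// protocol agent then encapsulates the raw result as a DAP-style event
// packet before it is sent to the front-end protocol agent.
function forwardResultAsync(
  processOutput: EventEmitter,                   // emits 'data' when the command completes
  sendToFrontEndAgent: (packet: string) => void, // placeholder for the real transport
): void {
  processOutput.once('data', (output: string) => {
    const packet = JSON.stringify({
      type: 'event',
      event: 'commandDone', // hypothetical event name
      body: { output },
    });
    sendToFrontEndAgent(packet);
  });
}
```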
In some embodiments, the debug server 120 can access a language debug plug-in corresponding to each debug language, so that supporting an additional debug language only requires accessing the corresponding language debug plug-in, without modifying the code of the debug server 120.
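A sketch of such a plug-in seam; the interface name and its methods are assumptions:

```typescript
// Each supported debug language registers a plug-in behind a common
// interface, so adding a language never requires changing the server core.
interface LanguageDebugPlugin {
  language: string;                       // e.g. 'python', 'cpp'
  launch(program: string): Promise<void>; // start that language's debugger
  step(): Promise<void>;                  // single-step one line of code
}

const plugins = new Map<string, LanguageDebugPlugin>();

function registerPlugin(plugin: LanguageDebugPlugin): void {
  plugins.set(plugin.language, plugin);
}

function pluginFor(language: string): LanguageDebugPlugin {
  const plugin = plugins.get(language);
  if (plugin === undefined) {
    throw new Error(`no debug plug-in registered for ${language}`);
  }
  return plugin;
}
```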
According to the performance analysis method of the invention, the automatic debugging module of the debug client runs the code to be debugged and pauses when the first breakpoint is hit, taking it as the initial breakpoint. Starting from the initial breakpoint, the automatic debugging module automatically performs single-step operations, sending a debug command request to the debug server for each line of code, until the next breakpoint is hit and taken as the ending breakpoint; the lines of code between the initial breakpoint and the ending breakpoint form a measurement unit. The running time of each line of code returned by the debug server based on the debug adaptation protocol is obtained, and the initial state performance data of the measurement unit is derived from these per-line running times; repeating this process yields initial state performance data for a plurality of measurement units, on which performance analysis of the measurement units and of each line of code within them is based. Because the automatic debugging module performs the single-step operations automatically, performance data is collected for every line of code without manual stepping, the granularity of the analysis is refined to the level of individual lines of code, and usability is improved while finer performance analysis is achieved.
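Read as client-side pseudocode, that flow might look as follows; the SteppingSession interface and all of its members are placeholders for the automatic debugging module's internals, not the disclosed implementation:

```typescript
interface SteppingSession {
  continueToBreakpoint(): Promise<{ filePath: string; line: number }>;
  stepLine(): Promise<{ filePath: string; line: number; runtimeMs: number }>;
  isBreakpoint(loc: { filePath: string; line: number }): boolean;
}

// Run to the initial breakpoint, then single-step line by line, issuing one
// timed debug command request per line, until the ending breakpoint is hit.
async function measureOneUnit(
  session: SteppingSession,
): Promise<{ key: string; lineRuntimesMs: Map<number, number> }> {
  const start = await session.continueToBreakpoint(); // initial breakpoint
  const lineRuntimesMs = new Map<number, number>();
  let loc = await session.stepLine();
  while (!session.isBreakpoint(loc)) { // stop once the ending breakpoint is hit
    lineRuntimesMs.set(loc.line, loc.runtimeMs);
    loc = await session.stepLine();
  }
  return { key: `${start.filePath}:${start.line}`, lineRuntimesMs };
}
```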
Furthermore, by dynamically determining the initial breakpoint and the ending breakpoint of each measurement unit, the invention supports performance analysis of the code to be debugged and automates the collection and analysis of performance data for each line of code, making it easier to locate code with performance problems.
In addition, the invention stores the initial state performance data of each measurement unit in a Map container and updates it with the running state performance data, so that the performance difference between the running state and the initial state of each measurement unit in the code to be debugged can be compared and analyzed. In this way, a longitudinal comparison of the performance of the same measurement unit across different running phases is achieved.
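A sketch of this container bookkeeping, reusing the hypothetical PerfData shape from the earlier sketch; the key format follows claim 1, and the function names are illustrative:

```typescript
// Initial container: key = "filePath:lineNumber" of the unit's initial
// breakpoint, mapping value = the unit's initial state performance data.
function createInitialContainer(
  units: { filePath: string; line: number; data: PerfData }[],
): Map<string, PerfData> {
  const container = new Map<string, PerfData>();
  for (const u of units) {
    container.set(`${u.filePath}:${u.line}`, u.data);
  }
  return container;
}

// New container: same keys, with mapping values replaced by the running
// state performance data collected during a later analysis pass.
function updatedContainer(
  initial: Map<string, PerfData>,
  runningState: Map<string, PerfData>,
): Map<string, PerfData> {
  const next = new Map(initial);
  for (const [key, data] of runningState) {
    next.set(key, data);
  }
  return next;
}
```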
In addition, by drawing and displaying the initial state performance reference line and the running state performance floating line of each measurement unit, the performance difference between the running state and the initial state of the same measurement unit is presented intuitively.
The various techniques described herein may be implemented in connection with hardware or software, or with a combination of both. Thus, the methods and apparatus of the invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code executing on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to perform the performance analysis method of the invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media include readable storage media and communication media. Readable storage media store information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of readable media.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the examples of the invention, and the structure required to construct such a system is apparent from the description above. Moreover, the invention is not directed to any particular debug language; it should be appreciated that the teachings of the invention as described herein may be implemented in a variety of debug languages, and the above descriptions of specific languages are provided to disclose preferred embodiments of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
Those skilled in the art will appreciate that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or may be adaptively changed and located in one or more devices different from the devices in the examples. The modules, units or components of the embodiments may be combined into one module, unit or component, and furthermore may be divided into a plurality of sub-modules, sub-units or sub-components.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that may be implemented by a processor of a computer system or by other means of performing the function. Thus, a processor having the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of means for carrying out the functions performed by those elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must have a given order, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments are conceivable within the scope of the invention thus described. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.

Claims (17)

1. A performance analysis method performed at a debug client, wherein the debug client is adapted to communicate with a debug server based on a debug adaptation protocol, the debug client comprising an automatic debug module, the method comprising:
the automatic debugging module runs the code to be debugged and pauses when a first breakpoint is hit, taking the first breakpoint as an initial breakpoint;
starting from the initial breakpoint, automatically executing a single step operation through the automatic debugging module to send a debugging command request to a debugging server for each line of code, until the next breakpoint is hit, taking the next breakpoint as an ending breakpoint, and taking the plurality of lines of code between the initial breakpoint and the ending breakpoint as a measuring unit;
acquiring the running time of each line of code returned by the debugging server based on a debugging adaptation protocol;
obtaining initial state performance data of the measuring unit according to the running time of each line of code;
repeating the above steps to obtain initial state performance data of a plurality of measuring units, so as to perform performance analysis on the measuring units and on each line of code in the measuring units based on the initial state performance data, wherein the initial state performance data are the data obtained the first time the steps are executed;
in the running process of the code to be debugged, obtaining running state performance data of each measuring unit by executing the above steps;
taking the file path and code line number corresponding to the initial breakpoint of each measuring unit as the key of that measuring unit, taking the initial state performance data of each measuring unit as the mapping value of that measuring unit, and creating an initial Map container;
updating the mapping value of each measuring unit in the initial Map container according to the running state performance data of each measuring unit to obtain a new Map container;
and comparing and analyzing the performance difference of each measuring unit in the running state and the initial state according to the new Map container and the initial Map container.
2. The method of claim 1, wherein, before the automatic debugging module runs the code to be debugged, the method further comprises:
acquiring all breakpoint data in the code to be debugged, and storing the breakpoint data as a breakpoint data set;
determining the number of breakpoints contained in the breakpoint data set;
and if the number of the breakpoints is odd, deleting the last breakpoint in the code to be debugged.
3. The method of claim 2, wherein automatically executing a single step operation through the automatic debugging module to send a debugging command request to the debugging server for each line of code, until the next breakpoint is hit, comprises:
automatically executing the single step operation through the automatic debugging module, and sending a debugging command request for the current line of code to the debugging server each time a line of code is executed;
recording the file path and code line number corresponding to the next line of code, and matching them against the breakpoint data in the breakpoint data set;
and if the file path and code line number corresponding to the next line of code successfully match one piece of breakpoint data in the breakpoint data set, determining that the next breakpoint is hit.
4. The method of claim 3, wherein sending a debugging command request for the current line of code to the debugging server comprises:
executing a pause operation and a continue operation on the current line of code through the automatic debugging module, so as to send a debugging command request to the debugging server.
5. The method of any one of claims 1-4, wherein comparing and analyzing the performance difference between the running state and the initial state of each measuring unit according to the new Map container and the initial Map container comprises:
drawing, generating and displaying an initial state performance reference line for each measuring unit based on the mapping value of that measuring unit in the initial Map container;
drawing, generating and displaying a running state performance floating line for each measuring unit based on the mapping value of that measuring unit in the new Map container;
and comparing and analyzing the performance difference between the running state and the initial state of the measuring unit according to the running state performance floating line and the initial state performance reference line.
6. The method of claim 3, wherein the code to be debugged corresponds to one or more code files.
7. The method of any one of claims 1-4, wherein the debugging server is adapted to:
in response to the debugging command request for each line of code, call the corresponding debugging command, and record the start call time and the end call time of the debugging command to obtain the running time of each line of code.
8. The method of claim 7, wherein the debugging server includes a back-end performance analysis module, and the debugging server is adapted to:
when receiving a debugging command request sent by the debugging client for each line of code, record the current first system time as the start call time of the debugging command through the back-end performance analysis module;
and when execution of the debugging command completes, record the current second system time as the end call time of the debugging command through the back-end performance analysis module.
9. The method of claim 8, wherein the debugging server is further adapted to:
append the start call time and the end call time to the tail of the debugging adaptation protocol message through the back-end performance analysis module, and send them to the debugging client.
10. The method of any of claims 1-4, wherein the debug adaptation protocol is a DAP debug adaptation protocol;
The debugging client and the debugging server are suitable for communicating based on the debugging adaptation protocol through a protocol agent.
11. A performance analysis method performed at a debug server, wherein the debug server is adapted to communicate with a debug client based on a debug adaptation protocol, the method comprising:
in response to a debugging command request sent by the debugging client for each line of code, calling the corresponding debugging command, and recording the start call time and the end call time of the debugging command to obtain the running time of each line of code;
sending the running time of each line of code to the debugging client based on the debugging adaptation protocol;
wherein the debugging client is adapted to perform the method of any of claims 1-10.
12. The method of claim 11, wherein the debugging server includes a back-end performance analysis module, and recording the start call time and the end call time of the debugging command comprises:
when receiving a debugging command request sent by the debugging client for each line of code in a measuring unit, recording the current first system time as the start call time of the debugging command through the back-end performance analysis module;
and when execution of the debugging command completes, recording the current second system time as the end call time of the debugging command through the back-end performance analysis module.
13. The method of claim 11 or 12, wherein sending the running time of each line of code to the debugging client based on the debugging adaptation protocol comprises:
appending the start call time and the end call time to the tail of the debugging adaptation protocol message through the back-end performance analysis module, and sending them to the debugging client.
14. The method according to claim 11 or 12, wherein the debug server is adapted to access a language debug plug-in corresponding to each debug language.
15. A performance analysis system, comprising:
a debug client comprising an automatic debugging module, the debug client being adapted to perform the method of any one of claims 1-10;
a debug server adapted to communicate with said debug client based on a debug adaptation protocol and adapted to perform the method according to any of claims 11-14.
16. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are adapted to be executed by the at least one processor and comprise instructions for performing the method of any of claims 1-14.
17. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-14.
CN202311211583.8A 2023-09-19 2023-09-19 Performance analysis method, system, computing device and storage medium Active CN116955118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311211583.8A CN116955118B (en) 2023-09-19 2023-09-19 Performance analysis method, system, computing device and storage medium


Publications (2)

Publication Number Publication Date
CN116955118A CN116955118A (en) 2023-10-27
CN116955118B (en) 2023-12-29

Family

ID=88454935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311211583.8A Active CN116955118B (en) 2023-09-19 2023-09-19 Performance analysis method, system, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN116955118B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110647467A (en) * 2019-09-23 2020-01-03 上海创景信息科技有限公司 Target code coverage rate testing method, system and medium based on single step exception
CN114625660A (en) * 2022-03-22 2022-06-14 阿里巴巴(中国)有限公司 Debugging method and device
CN115470138A (en) * 2022-09-22 2022-12-13 南京大学 Debugger defect detection method based on different debugging levels cross validation
CN115934566A (en) * 2022-12-29 2023-04-07 上海艺赛旗软件股份有限公司 Debugging information display method and device, electronic equipment and storage medium
CN116662161A (en) * 2023-05-08 2023-08-29 南京南瑞继保电气有限公司 Function debugging method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10095603B2 (en) * 2017-01-09 2018-10-09 International Business Machines Corporation Pre-fetching disassembly code for remote software debugging



Similar Documents

Publication Publication Date Title
US9111033B2 (en) Compiling source code for debugging with user preferred snapshot locations
US10445216B2 (en) Debugging program code at instruction level through emulation
US20130132933A1 (en) Automated compliance testing during application development
CN105247493A (en) Identifying impacted tests from statically collected data
US20120246624A1 (en) Debugger-set identifying breakpoints after coroutine yield points
CN109471851B (en) Data processing method, device, server and storage medium
CN108319575B (en) Page component checking method, device, server and storage medium
CN112732576B (en) Automatic testing method, device and equipment based on user interface
CN101354675A (en) Method for detecting embedded software dynamic memory
CN110597704B (en) Pressure test method, device, server and medium for application program
WO2014200803A1 (en) Using a static analysis for configuring a follow-on dynamic analysis for the evaluation of program code
CN112230904A (en) Code generation method and device based on interface document, storage medium and server
CN110888731A (en) Route data acquisition method, device, equipment and storage medium
CN109753437B (en) Test program generation method and device, storage medium and electronic equipment
CN111143434A (en) Intelligent data checking method, device, equipment and storage medium
CN116955118B (en) Performance analysis method, system, computing device and storage medium
US11119892B2 (en) Method, device and computer-readable storage medium for guiding symbolic execution
CN116795712A (en) Reverse debugging method, computing device and storage medium
CN116383021A (en) Software package performance testing method, system, computing device and readable storage medium
CN116450398A (en) Exception backtracking method, device, equipment and medium
CN112506871B (en) Automated task management and log management method, system, electronic device and medium
CN115269285A (en) Test method and device, equipment and computer readable storage medium
CN108334313A (en) Continuous integrating method, apparatus and code management system for large-scale SOC research and development
CN113760696A (en) Program problem positioning method and device, electronic equipment and storage medium
Kranzlmüller et al. An integrated record&replay mechanism for nondeterministic message passing programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant