CN113268243A - Memory prediction method and device, storage medium and electronic equipment - Google Patents

Memory prediction method and device, storage medium and electronic equipment

Info

Publication number
CN113268243A
Authority
CN
China
Prior art keywords
memory
linked list
source code
code file
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110511788.2A
Other languages
Chinese (zh)
Other versions
CN113268243B (en)
Inventor
陈沫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110511788.2A priority Critical patent/CN113268243B/en
Publication of CN113268243A publication Critical patent/CN113268243A/en
Application granted granted Critical
Publication of CN113268243B publication Critical patent/CN113268243B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/443Optimisation
    • G06F8/4434Reducing the memory space required by the program code
    • G06F8/4435Detection or removal of dead or redundant code

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The disclosure relates to a memory prediction method and apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring a source code file; converting the source code file into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file; and predicting the source code file's use of memory based on the target data structure to obtain a prediction result. The disclosure addresses the technical problems that memory detection schemes in the prior art are not suited to design scenarios in the early development stage and can hardly assist in improving the design of a memory application and release architecture based on memory leak problems.

Description

Memory prediction method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of memory prediction, and in particular, to a memory prediction method and apparatus, a storage medium, and an electronic device.
Background
To address the technical problem in the prior art that the source of a memory leak cannot be located simply and directly, one common solution is to apply intensified testing, in daily software engineering activity, to scenarios where memory leak problems are likely; another is to instrument the function code directly, gaining control over memory application and release at the cost of a small loss of runtime efficiency.
When software applies for memory, the program flow is diverted so that, before and after the actual memory application, certain user-predefined actions are executed, including but not limited to recording the memory application. Similarly, when software releases memory, user-defined actions, including but not limited to recording the memory release, are executed before and after the actual release. As shown in fig. 1 below, the prior art solves the memory tracking problem and achieves memory recording at the cost of efficiency.
However, the dynamic memory tracking method shown in fig. 1 requires the code to be compiled and then records memory application and release during actual operation, which is of little help to the early design of a memory application and release architecture. That is, the memory detection schemes in the prior art are better suited to debugging scenarios than to design scenarios in the early development stage, and can hardly improve the architecture design based on memory leak problems.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the disclosure provide a memory prediction method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problems that memory detection schemes in the prior art are not suited to design scenarios in the early development stage and can hardly assist in improving the design of a memory application and release architecture based on memory leak problems.
According to an aspect of the embodiments of the present disclosure, there is provided a memory prediction method, including: acquiring a source code file; converting the source code file into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file; and predicting the source code file's use of memory based on the target data structure to obtain a prediction result.
According to another aspect of the embodiments of the present disclosure, there is also provided a memory prediction apparatus, including: an acquisition module, configured to acquire a source code file; a conversion module, configured to convert the source code file into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file; and a prediction module, configured to predict the source code file's use of memory based on the target data structure to obtain a prediction result.
According to another aspect of the embodiments of the present disclosure, a non-volatile storage medium is further provided, where the non-volatile storage medium includes a stored program, and when the program runs, the device where the non-volatile storage medium is located is controlled to execute any one of the above memory prediction methods.
According to another aspect of the embodiments of the present disclosure, there is also provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform any one of the memory prediction methods.
In the embodiments of the disclosure, a source code file is acquired; the source code file is converted into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file; and the source code file's use of memory is predicted based on the target data structure to obtain a prediction result. This makes it possible to predict a source code file's memory usage during the early design of a memory application and release architecture, achieving the technical effect of assisting in improving that architecture design based on memory leak problems, and thereby solving the technical problems that memory detection schemes in the prior art are not suited to design scenarios in the early development stage and can hardly assist in improving the design of a memory application and release architecture based on memory leak problems.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
FIG. 1 is a flow chart of a memory detection method according to the prior art;
FIG. 2 is a flow chart of a memory prediction method according to an embodiment of the present disclosure;
FIG. 3(a) is a schematic diagram of an alternative generation of an inverted linked list based on an abstract syntax tree according to an embodiment of the present disclosure;
FIG. 3(b) is a schematic diagram of alternative screening lists and selection lists according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of alternative marked linked lists according to an embodiment of the present disclosure;
FIG. 5(a) is a schematic diagram of an alternative traversal of pending linked lists according to an embodiment of the present disclosure;
FIG. 5(b) is a flowchart of alternative processing based on the traversal result of the pending linked lists according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a conversion of an abstract syntax tree into black boxes, according to an embodiment of the present disclosure;
FIG. 7 is a flow chart of obtaining a memory application and release curve based on black box simulation according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a memory prediction apparatus according to an embodiment of the disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, in order to facilitate understanding of the embodiments of the present disclosure, some terms or nouns referred to in the present disclosure will be explained below:
Memory leak: during the normal operation of a resident process, the memory it occupies grows continuously until the allocatable memory of the operating system is exhausted.
Abstract syntax tree: a data structure with logic into which a source code file is converted; each leaf node of the data structure expresses the logical order and execution data of a code block of the source code.
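As a concrete illustration of this term, the conversion can be sketched in Python with the standard `ast` module; the example source text and the `malloc`/`free` call names are hypothetical placeholders for the memory-related code blocks the disclosure refers to:

```python
import ast

source = "buf = malloc(1024)\nprocess(buf)\nfree(buf)\n"

# Parse the source into an abstract syntax tree; every node keeps the
# logical order (line/column offsets) and execution data (names,
# arguments) of its code block.
tree = ast.parse(source)

# Leaf-level call names carry the information relevant to memory
# application (e.g. malloc) and release (e.g. free).
calls = [node.func.id for node in ast.walk(tree)
         if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)]
print(calls)  # ['malloc', 'process', 'free']
```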
In accordance with an embodiment of the present disclosure, there is provided an embodiment of a memory prediction method. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps may be performed in an order different from that shown or described here.
The technical solution of this method embodiment can be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. The mobile terminal may include one or more processors (which may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for communication, an input/output device, and a display device. Those skilled in the art will understand that the foregoing structural description is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than described above, or have a different configuration.
The memory may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the memory prediction method in the embodiments of the present disclosure, and the processor executes various functional applications and data processing by running the computer program stored in the memory, so as to implement the memory prediction method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner. The technical scheme of the embodiment of the method can be applied to various communication systems, such as: a Global System for Mobile communications (GSM) System, a Code Division Multiple Access (CDMA) System, a Wideband Code Division Multiple Access (WCDMA) System, a General Packet Radio Service (GPRS), a Long Term Evolution (Long Term Evolution, LTE) System, a Frequency Division Duplex (FDD) System, a Time Division Duplex (TDD), a Universal Mobile Telecommunications System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication System, or a 5G System. Optionally, Device-to-Device (D2D for short) communication may be performed between multiple mobile terminals. Alternatively, the 5G system or the 5G network is also referred to as a New Radio (NR) system or an NR network.
The display device may be, for example, a touch screen type Liquid Crystal Display (LCD) and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable non-volatile storage media.
Fig. 2 is a flowchart of a memory prediction method according to an embodiment of the disclosure, as shown in fig. 2, the method includes the following steps:
step S102, acquiring a source code file;
step S104, converting the source code file into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file;
and step S106, predicting the use condition of the source code file to the memory based on the target data structure to obtain a prediction result.
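The three steps above can be sketched end to end as follows; this is a minimal illustration assuming Python's standard `ast` module as the parser and treating `malloc`/`free` calls as the memory-relevant code blocks (both assumptions are hypothetical, not the disclosure's actual implementation):

```python
import ast

def predict_memory(source: str) -> dict:
    """Sketch of steps S102-S106: take the acquired source text,
    convert it into an abstract syntax tree, and predict memory use
    by counting memory-applying and memory-releasing calls."""
    tree = ast.parse(source)                              # step S104
    applies = releases = 0
    for node in ast.walk(tree):                           # step S106 (simplified)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id == "malloc":
                applies += 1
            elif node.func.id == "free":
                releases += 1
    return {"applications": applies, "releases": releases}

print(predict_memory("p = malloc(8)\nfree(p)"))
# {'applications': 1, 'releases': 1}
```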
In the embodiments of the disclosure, a source code file is acquired; the source code file is converted into a target data structure, wherein each leaf node in the target data structure represents the logical order and execution data of a corresponding code block in the source code file; and the source code file's use of memory is predicted based on the target data structure to obtain a prediction result. This makes it possible to predict a source code file's memory usage during the early design of a memory application and release architecture, achieving the technical effect of assisting in improving that architecture design based on memory leak problems, and thereby solving the technical problems that memory detection schemes in the prior art are not suited to design scenarios in the early development stage and can hardly assist in improving the design of a memory application and release architecture based on memory leak problems.
In an optional embodiment, the source code file's use of memory includes: application for and/or release of memory by part of the source code in the source code file, and the target data structure is an abstract syntax tree.
Optionally, the source code file, that is, a function code file of the operating system, is analyzed directly: static syntax analysis converts the code into an abstract syntax tree, the nodes of the tree related to memory application and release are obtained, and an input/output model is established in which the execution parameters of the input nodes are exposed as adjustable and the output is the predicted memory footprint. This achieves the purpose of memory prediction and helps developers discover suspected memory leak problems at the early design stage, thereby helping to improve the architecture design.
Optionally, each leaf node in the target data structure characterizes the logical order and execution data of a corresponding code block (e.g., a static code block) in the source code file.
In an optional embodiment, predicting the source code file's use of memory based on the target data structure to obtain the prediction result includes:
step S202, traversing the target data structure to obtain the root node of each leaf node;
step S204, generating an inverted linked list according to each leaf node and the root node of each leaf node;
step S206, predicting the source code file's use of memory based on the inverted linked list to obtain the memory prediction result.
As shown in fig. 3(a), the source code file is converted into an abstract syntax tree, the tree is traversed to obtain the parent node of each leaf node, and the chain from each leaf node back to its root is hung in reverse to generate an inverted linked list, which ensures that nodes can be traced back quickly during subsequent assembly and query. This scheme helps developers verify and expose hidden-danger code files, so that they can understand the whole software more scientifically from a local vantage point.
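One way to realize such an inverted linked list is to record each node's parent during a single traversal, so that any leaf can be traced back to the root without re-walking the tree. The following Python sketch, built on the standard `ast` module, is an illustrative assumption rather than the patented implementation:

```python
import ast

tree = ast.parse("x = malloc(n)")

# Traverse the tree once, recording each child's parent so that any
# leaf can be traced back to the root afterwards.
parent = {}
for node in ast.walk(tree):
    for child in ast.iter_child_nodes(node):
        parent[child] = node

def inverted_chain(leaf):
    """Return the leaf-to-root chain (the 'inverted linked list')."""
    chain = [leaf]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

# Trace the `malloc` name node back to the module root.
leaf = next(n for n in ast.walk(tree)
            if isinstance(n, ast.Name) and n.id == "malloc")
print([type(n).__name__ for n in inverted_chain(leaf)])
# ['Name', 'Call', 'Assign', 'Module']
```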
In an optional embodiment, before predicting the source code file's use of memory based on the inverted linked list to obtain the memory prediction result, the method further includes:
step S302, obtaining screening lists and selection lists, wherein the screening lists include a memory application screening list and a memory release screening list, and the selection lists include a memory application selection list and a memory release selection list;
step S304, screening out first leaf nodes from all leaf nodes by using the memory application screening list, and storing the first leaf nodes into the memory application selection list; and
step S306, screening out second leaf nodes from all leaf nodes by using the memory release screening list, and storing the second leaf nodes into the memory release selection list.
In the embodiment of the present application, as shown in fig. 3(b), two screening lists are used: a memory application screening list, which screens out the first leaf nodes related to memory application (i.e., memory growth), and a memory release screening list, which screens out the second leaf nodes related to memory release (i.e., memory decrease). Leaf nodes meeting the screening requirements are marked as first leaf nodes and second leaf nodes respectively. It should be noted that marking the operations related to memory application and/or release may be referred to as a coloring operation.
As also shown in fig. 3(b), the memory application screening list contains only first leaf nodes where memory growth occurs, such as malloc; the memory release screening list contains only second leaf nodes where memory release occurs, such as free.
In this embodiment, the leaf nodes matching the respective screening lists are stored in the corresponding selection lists: the first leaf nodes are stored in the memory application selection list (in fig. 3(b), the solid black frame represents the set of first leaf nodes colored as memory growth), and the second leaf nodes are stored in the memory release selection list (in fig. 3(b), the dashed black frame represents the set of second leaf nodes colored as memory release).
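The screening step can be sketched as follows; the filter entries (`malloc`, `calloc`, `free`) and the sample source are hypothetical stand-ins for real screening-list contents:

```python
import ast

# Hypothetical screening lists: API names that grow or shrink memory.
APPLY_FILTER = {"malloc", "calloc"}      # memory application screening list
RELEASE_FILTER = {"free"}                # memory release screening list

tree = ast.parse("p = malloc(64)\nq = calloc(8, 8)\nfree(p)")

apply_selected, release_selected = [], []   # the two selection lists
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id in APPLY_FILTER:        # first leaf nodes
            apply_selected.append(node.func.id)
        elif node.func.id in RELEASE_FILTER:    # second leaf nodes
            release_selected.append(node.func.id)

print(apply_selected, release_selected)  # ['malloc', 'calloc'] ['free']
```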
In an optional embodiment, predicting the source code file's use of memory based on the inverted linked list to obtain the memory prediction result includes:
step S402, marking the inverted linked lists according to the types of the leaf nodes stored in the selection lists to obtain marked linked lists, wherein the marked linked lists include a memory application linked list, a memory release linked list, and a pending linked list;
step S404, recursively traversing all pending linked lists to determine whether each pending linked list has a common intersection with the memory application linked list and/or the memory release linked list, to obtain a traversal result;
step S406, predicting the source code file's use of memory based on the traversal result to obtain the memory prediction result.
In this embodiment of the present application, the inverted linked lists are marked according to the type of leaf node stored in the selection lists, that is, whether a first leaf node or a second leaf node is stored, to obtain the marked linked lists, which include: a memory application linked list, a memory release linked list, and a pending linked list.
Optionally, the marking process marks whether each inverted linked list is related to memory growth and/or release: the memory application linked list is positively correlated with memory growth, the memory release linked list is positively correlated with memory decrease, and the pending linked list has not yet been determined to be related to either.
As an alternative embodiment, the inverted linked lists obtained as shown in fig. 3(a) are traversed and marked using the selection lists of fig. 3(b) in the foregoing embodiment, yielding the three types of marked linked lists shown in fig. 4: two types related to memory growth/release, namely the memory application linked list and the memory release linked list, and the pending linked list, whose relation to the other two is not yet determined.
It should be clear that, since a pending linked list may be related to both the memory application linked list and the memory release linked list, the pending linked lists need subsequent reprocessing: for example, all pending linked lists are traversed recursively to determine whether each has a common intersection with the memory application linked list and/or the memory release linked list, obtaining a traversal result; the source code file's use of memory is then predicted based on the traversal result to obtain the memory prediction result.
In an optional embodiment, at least the following method is used to determine whether a pending linked list has a common intersection with the memory application linked list and/or the memory release linked list, to obtain the traversal result:
step S502, if the pending linked list has a common intersection with the memory application linked list, classifying it into the memory application linked list; if the pending linked list has a common intersection with the memory release linked list, classifying it into the memory release linked list;
step S504, if the pending linked list has no common intersection with either the memory application linked list or the memory release linked list, determining it to be an irrelevant linked list, wherein the irrelevant linked list indicates that the source code corresponding to the pending linked list is unrelated to the application and/or release of memory;
step S506, if the pending linked list has a common intersection with both the memory application linked list and the memory release linked list, determining it to be a related linked list, wherein the related linked list indicates that the source code corresponding to the pending linked list is related to both the application and the release of memory.
As an alternative embodiment, as shown in fig. 5(a), the set of pending linked lists is scanned one by one. If a pending linked list has an intersection node with exactly one of the memory growth linked list and the memory release linked list (for example, in fig. 5(a), c->a intersects only the memory growth linked list, x->d->a intersects only the memory release linked list, and y->e->c intersects only the memory growth linked list), it is classified into the linked-list set of the corresponding type, as shown by the solid-line and dashed-line frames in fig. 5(b). If it has no common intersection with either linked list, it is classified as an irrelevant linked list, as shown by the dotted-line frame in fig. 5(b), indicating that it is unrelated to memory growth or decrease. If it has a common intersection with both the memory growth linked list and the memory release linked list, it is marked as a related linked list, as shown by the two-dot chain-line frame in fig. 5(b), indicating that it contains both memory growth and memory release.
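The classification rule can be sketched with chains modeled as tuples of node names; the node sets and chains below are purely illustrative and do not reproduce the exact data of the figures:

```python
# Hypothetical node sets and pending chains (not the figure data).
apply_nodes = {"a"}       # nodes of the memory application linked list
release_nodes = {"r"}     # nodes of the memory release linked list

pending = [("c", "a"), ("x", "r"), ("z", "w"), ("m", "a", "r")]

def classify(chain):
    """Classify a pending linked list by its common intersection with
    the memory application / memory release linked lists."""
    hits_apply = bool(set(chain) & apply_nodes)
    hits_release = bool(set(chain) & release_nodes)
    if hits_apply and hits_release:
        return "related"        # intersects both: growth and release
    if hits_apply:
        return "application"    # drawn into the memory application set
    if hits_release:
        return "release"        # drawn into the memory release set
    return "irrelevant"         # no common intersection with either

print([classify(c) for c in pending])
# ['application', 'release', 'irrelevant', 'related']
```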
In an optional embodiment, predicting the source code file's use of memory based on the traversal result to obtain the memory prediction result includes: if the traversal result indicates that the pending linked list is classified into the memory application linked list, predicting the memory application of the corresponding source code based on the memory application linked list; if the traversal result indicates that the pending linked list is classified into the memory release linked list, predicting the memory release of the corresponding source code based on the memory release linked list; and if the traversal result indicates that the pending linked list is an irrelevant linked list, no prediction of the corresponding source code's use of memory is needed.
In an optional embodiment, predicting the source code file's use of memory based on the traversal result to obtain the memory prediction result further includes:
step S602, if the pending linked list is determined to be a related linked list, generating a new target data structure corresponding to the related linked list based on the head node of the related linked list;
step S604, traversing all leaf nodes in the new target data structure, and determining the correlation between each leaf node and the use of memory;
step S606, determining the memory prediction result corresponding to the related linked list based on the correlation.
In an optional embodiment, determining the correlation between each leaf node and the use of memory includes: if a traversed leaf node is contained in the memory application linked list, determining that the leaf node is positively correlated with memory application; and if a traversed leaf node is contained in the memory release linked list, determining that the leaf node is positively correlated with memory release.
As another alternative, as shown in fig. 5(b), the related linked lists are sorted by length, and the nodes of each related linked list are exposed one by one down to the input leaf nodes of the syntax tree; that is, the actual memory growth or decrease can be predicted from the leaf nodes that accept input. From the set of related linked lists, one linked list is selected and its head node is scanned to generate a corresponding new abstract syntax tree. All leaf nodes in the new target data structure are then traversed and each is colored: if a leaf node is contained in the memory growth linked list, it is related to memory growth, as shown by node 6a in fig. 6; if it is contained in the memory release linked list, it is related to memory release, as shown by node 6b in fig. 6. The memory prediction result corresponding to the related linked list is determined based on these correlations.
In an optional embodiment, the method further includes:
step S702, if it is determined that a leaf node in the new target data structure cannot be traced back to a source within the code, determining a target leaf node in the new target data structure, where the target leaf node is an input leaf node of the new target data structure;
step S704, calculating, according to the target leaf node and the new target data structure, a first call count of the plurality of leaf nodes positively correlated with the application of the memory and a second call count of the plurality of leaf nodes positively correlated with the release of the memory;
step S706, determining the memory prediction result based on the first call count and the second call count.
As another alternative, if a leaf node cannot be traced back to a source in the code, it is exposed to accept input, as shown by node 6c in FIG. 6.
Optionally, in this embodiment of the present application, the abstract syntax tree is equivalent to an executable sequence: input is fed to the places in the abstract syntax tree that can accept input, and the marked growth/release places are monitored and used as output. Black-box prediction can then be completed by directly feeding input to the executable sequence and simulating its execution. That is, in this embodiment of the present application, by treating the abstract syntax tree as a black box, the numbers of calls of node 6a and node 6b can be calculated from the input of node 6c and the other data and logic carried in the syntax tree structure, so as to predict the increase and decrease of the memory.
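As a toy illustration of this black-box idea, the executable sequence can be modeled as a function of the input at node 6c that counts how often the colored nodes 6a and 6b are reached. The loop body and the branch condition below are invented stand-ins for the data and logic carried in the syntax tree, not the patent's actual structure.

```python
def run_black_box(n_inputs: int) -> tuple[int, int]:
    """Simulate the executable sequence for a given input at node 6c and
    return (growth_calls, release_calls) for nodes 6a and 6b."""
    growth_calls = release_calls = 0
    for i in range(n_inputs):      # node 6c: the input leaf
        growth_calls += 1          # node 6a: memory-growth site reached
        if i % 2 == 1:             # invented branch logic from the tree
            release_calls += 1     # node 6b: memory-release site reached
    return growth_calls, release_calls

growth, release = run_black_box(10)
print(growth - release)   # → 5, predicted net memory growth in allocation units
```

The difference between the two call counts is the predicted increase or decrease of the memory for that input, without ever executing the real program.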
In an optional embodiment, the method further includes:
step S802, if the correlation is determined to be uncorrelated, acquiring an input signal of each of a plurality of related linked lists;
step S804, performing permutation and combination on a plurality of different input signals to obtain a permutation and combination result;
step S806, simulating to obtain a memory application and release curve based on the permutation and combination result;
step S808, locating, according to the memory application and release curve, a code position in the source code file corresponding to the use condition of the memory.
In the embodiment of the present application, as shown in fig. 7, the abstract syntax tree is equivalent to an executable sequence: input information is fed to the places in the abstract syntax tree that can receive input, and the marked growth/release places are monitored as output information. The executable sequence can thus be directly fed with input and executed in simulation, completing the black-box prediction and obtaining the permutation and combination result.
In the embodiment of the present application, if the correlation is determined to be uncorrelated, an input signal of each of the multiple related linked lists is obtained, and a permutation and combination result is obtained by permuting and combining multiple different input signals. A memory application and release curve can then be obtained through simulation; the curve shows the application and release of the memory, so that the relevant inputs with a large influence can be found in advance from the curve and located directly to a specific code position.
As an alternative embodiment, each linked list in the related linked lists shown in fig. 5(b) corresponds to one black box, and the combination of all the black boxes forms the whole prediction system. After the basic input signals are established, the memory increase and decrease pattern of the whole system can be predicted from permutations and combinations of different signals.
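A minimal sketch of this combination step, assuming two toy black boxes with invented alloc/release behavior (the real boxes would be derived from the related linked lists as described above):

```python
import itertools

def box_a(x: int):          # black box for one related linked list
    return x, x // 2        # (growth_calls, release_calls)

def box_b(x: int):          # black box for another related linked list
    return 2 * x, x

def net_memory(inputs) -> int:
    """Net memory growth of the whole system for one input combination."""
    total = 0
    for box, x in zip((box_a, box_b), inputs):
        growth, release = box(x)
        total += growth - release
    return total

# Permute and combine candidate input signals; the resulting values per
# combination trace out the system's memory application/release behavior.
signals = [1, 2, 3]
curve = {combo: net_memory(combo)
         for combo in itertools.product(signals, repeat=2)}
worst = max(curve, key=curve.get)
print(worst, curve[worst])   # combination with the largest predicted growth
```

Scanning the resulting table for the combinations with the largest net growth is the "find the relevant inputs with a large influence in advance" step described above.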
With this scheme, a memory increase and decrease model can be obtained simply through prediction by a static syntactic analysis method (which cannot completely replace the real running condition of the software). After input signals are given automatically, the increase and decrease of the memory of the whole system can be simulated. This provides useful assistance or guidance for related changes of the software architecture, reduces the risk of memory leaks at the source, and greatly reduces the risk of difficult later debugging and troubleshooting and difficult iteration caused by incomplete early design consideration.
According to an embodiment of the present disclosure, an apparatus embodiment for implementing the memory prediction method is further provided, and fig. 8 is a schematic structural diagram of a memory prediction apparatus according to an embodiment of the present disclosure, and as shown in fig. 8, the memory prediction apparatus includes: an acquisition module 80, a conversion module 82, and a prediction module 84, wherein:
an obtaining module 80, configured to obtain a source code file; a conversion module 82, configured to convert the source code file into a target data structure, where each leaf node in the target data structure is used to represent a logic sequence and execution data of a corresponding code block in the source code file; and the prediction module 84 is configured to predict the use condition of the source code file for the memory based on the target data structure, so as to obtain a prediction result.
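A skeletal sketch of the three modules follows. The module names match the text; the internals, including the identifier sets and the net-growth heuristic, are illustrative assumptions rather than the patent's implementation.

```python
import ast

class ObtainingModule:
    """Obtains a source code file (module 80)."""
    def acquire(self, path: str) -> str:
        with open(path, encoding="utf-8") as f:
            return f.read()

class ConversionModule:
    """Converts the source code into the target data structure (module 82)."""
    def convert(self, source: str) -> ast.AST:
        return ast.parse(source)   # here: an abstract syntax tree

class PredictionModule:
    """Predicts memory usage from the target data structure (module 84)."""
    ALLOC, RELEASE = {"malloc"}, {"free"}   # assumed identifier sets
    def predict(self, tree: ast.AST) -> int:
        growth = release = 0
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                growth += node.func.id in self.ALLOC
                release += node.func.id in self.RELEASE
        return growth - release    # crude net-growth prediction

tree = ConversionModule().convert("p = malloc(8)\nfree(p)")
print(PredictionModule().predict(tree))   # → 0 (balanced apply/release)
```

A nonzero result would flag source code whose applications and releases do not balance, which is the condition the prediction module is meant to surface.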
It should be noted that the above modules may be implemented by software or hardware; for the latter, for example, the modules may all be located in the same processor, or the modules may be located in different processors in any combination.
It should be noted here that the above-mentioned obtaining module 80, conversion module 82 and prediction module 84 correspond to steps S102 to S106 in the method embodiment; the modules and the corresponding steps implement the same examples and application scenarios, but are not limited to the disclosure of the above method embodiment. It should be noted that the modules described above may be implemented in a computer terminal as part of an apparatus.
It should be noted that, for alternative or preferred embodiments of the present embodiment, reference may be made to the related description in the method embodiment, and details are not described herein again.
The memory prediction device may further include a processor and a memory, where the obtaining module 80, the converting module 82, the predicting module 84, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory; one or more kernels may be provided. The memory may include a volatile memory in a computer readable medium, such as a Random Access Memory (RAM), and/or a nonvolatile memory, such as a Read Only Memory (ROM) or a flash memory (flash RAM); the memory includes at least one memory chip.
According to the embodiment of the application, an embodiment of a nonvolatile storage medium is also provided. Optionally, in this embodiment, the nonvolatile storage medium includes a stored program, and when the program runs, the device in which the nonvolatile storage medium is located is controlled to execute any one of the memory prediction methods.
Optionally, in this embodiment, the nonvolatile storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals, and the nonvolatile storage medium includes a stored program.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: according to an aspect of the embodiments of the present disclosure, there is provided a memory prediction method, including: acquiring a source code file; converting the source code file into a target data structure, wherein each leaf node in the target data structure is used for representing the logic sequence and the execution data of the corresponding code block in the source code file; and predicting the use condition of the source code file to the memory based on the target data structure to obtain a prediction result.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: the use condition of the memory by the source code file comprises application to and/or release of the memory by a part of the source code in the source code file, wherein the target data structure is an abstract syntax tree.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: traversing the target data structure to obtain the root node of each leaf node; generating an inverted linked list according to each leaf node and the root node of each leaf node; and predicting the use condition of the source code file on the memory based on the inverted linked list to obtain the memory prediction result.
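The inverted-linked-list step can be sketched as follows, under two simplifying assumptions: Python `ast` `Name` nodes are treated as the leaves, and each inverted list is modeled as the chain of node types from a leaf back up to the root.

```python
import ast

def inverted_lists(source: str):
    """For each leaf (here: each `Name` node), return the inverted list
    running from the leaf up to the root of the syntax tree."""
    tree = ast.parse(source)
    # Attach parent links so every leaf can be traced back to the root.
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            child._parent = parent
    chains = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            chain, cur = [node.id], node
            while getattr(cur, "_parent", None) is not None:
                cur = cur._parent
                chain.append(type(cur).__name__)   # leaf -> ... -> root
            chains.append(chain)
    return chains

print(inverted_lists("free(p)"))
# → [['free', 'Call', 'Expr', 'Module'], ['p', 'Call', 'Expr', 'Module']]
```

Storing the chain leaf-first is what makes the list "inverted": matching a leaf against the screening lists only requires inspecting the head of its chain.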
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: obtaining a screening list and a selection list, wherein the screening list comprises: a memory application screening list and a memory release screening list, and the selection list comprises: a memory application selection list and a memory release selection list; screening out a first leaf node from all leaf nodes by using the memory application screening list, and storing the first leaf node into the memory application selection list; and screening out a second leaf node from all the leaf nodes by using the memory release screening list, and storing the second leaf node into the memory release selection list.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: marking the inverted linked list according to the type of the leaf nodes stored in the selection lists to obtain a marked linked list, wherein the marked linked list comprises: a memory application linked list, a memory release linked list and a pending linked list; recursively traversing all the pending linked lists to determine whether each pending linked list has a common intersection with the memory application linked list and/or the memory release linked list, to obtain a traversal result; and predicting the use condition of the source code file on the memory based on the traversal result to obtain the memory prediction result.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if it is determined that the pending linked list has a common intersection with the memory application linked list, classifying the pending linked list into the memory application linked list, and if it is determined that the pending linked list has a common intersection with the memory release linked list, classifying the pending linked list into the memory release linked list; if the pending linked list has no common intersection with either the memory application linked list or the memory release linked list, determining the pending linked list as an irrelevant linked list, wherein the irrelevant linked list is used for representing that the source code file corresponding to the pending linked list is irrelevant to the application and/or release of the memory; and if the pending linked list has a common intersection with both the memory application linked list and the memory release linked list, determining the pending linked list as a related linked list, wherein the related linked list is used for representing that the source code file corresponding to the pending linked list is related to both the application and the release of the memory.
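The four-way intersection test above can be sketched set-wise; the node identifiers below are invented for illustration.

```python
def classify(pending, application_list, release_list) -> str:
    """Classify a pending linked list by its common intersection with the
    memory application and memory release linked lists."""
    hits_app = bool(set(pending) & set(application_list))
    hits_rel = bool(set(pending) & set(release_list))
    if hits_app and hits_rel:
        return "related"       # tied to both application and release
    if hits_app:
        return "application"   # classified into the application linked list
    if hits_rel:
        return "release"       # classified into the release linked list
    return "irrelevant"        # no effect on memory usage

application_list, release_list = {"malloc"}, {"free"}
print(classify(["malloc", "free", "Call"], application_list, release_list))  # related
print(classify(["strlen", "Call"], application_list, release_list))          # irrelevant
```

Only the "related" case needs the further per-leaf analysis of steps S602 to S606; "irrelevant" lists are dropped from prediction entirely.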
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if the traversal result indicates that the pending linked list is classified into the memory application linked list, predicting the application of the source code file corresponding to the pending linked list to the memory based on the memory application linked list; if the traversal result indicates that the pending linked list is classified into the memory release linked list, predicting the release of the source code file corresponding to the pending linked list to the memory based on the memory release linked list; and if the traversal result indicates that the pending linked list is an irrelevant linked list, the use condition of the memory by the source code file corresponding to the pending linked list does not need to be predicted.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if the pending linked list is determined to be a related linked list, generating a new target data structure corresponding to the related linked list based on the head node of the related linked list; traversing all leaf nodes in the new target data structure, and determining the correlation between each leaf node and the use condition of the memory; and determining the memory prediction result corresponding to the related linked list based on the correlation.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if a traversed leaf node is contained in the memory application linked list, determining that the leaf node is positively correlated with the memory application; and if a traversed leaf node is contained in the memory release linked list, determining that the leaf node is positively correlated with the memory release.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if it is determined that a leaf node in the new target data structure cannot be traced back to a source in the code, determining a target leaf node in the new target data structure, wherein the target leaf node is an input leaf node of the new target data structure; calculating, according to the target leaf node and the new target data structure, a first call count of the plurality of leaf nodes positively correlated with the application of the memory and a second call count of the plurality of leaf nodes positively correlated with the release of the memory; and determining the memory prediction result based on the first call count and the second call count.
Optionally, the apparatus in which the non-volatile storage medium is located is controlled to perform the following functions when the program is executed: if the correlation is determined to be uncorrelated, acquiring an input signal of each of a plurality of related linked lists; permuting and combining a plurality of different input signals to obtain a permutation and combination result; simulating, based on the permutation and combination result, a memory application and release curve; and locating, according to the memory application and release curve, a code position in the source code file corresponding to the use condition of the memory.
According to the embodiment of the application, an embodiment of a processor is also provided. Optionally, in this embodiment, the processor is configured to execute a program, wherein the program, when running, executes any one of the memory prediction methods.
An embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform any one of the memory prediction methods.
The present application also provides a computer program product adapted to execute a program that, when run on a data processing device, performs the steps of any one of the above memory prediction methods.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present disclosure, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable non-volatile storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a non-volatile storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned nonvolatile storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present disclosure, and it should be noted that modifications and embellishments could be made by those skilled in the art without departing from the principle of the present disclosure, and these should also be considered as the protection scope of the present disclosure.

Claims (14)

1. A method for memory prediction, comprising:
acquiring a source code file;
converting the source code file into a target data structure, wherein each leaf node in the target data structure is used for representing the logic sequence and execution data of the corresponding code block in the source code file;
and predicting the use condition of the source code file to the memory based on the target data structure to obtain a prediction result.
2. The method of claim 1, wherein the usage of the memory by the source code file comprises: application to and/or release of the memory by a part of the source code in the source code file, wherein the target data structure is an abstract syntax tree.
3. The method of claim 2, wherein predicting the memory usage of the source code file based on the target data structure comprises:
traversing the target data structure to obtain root nodes of all the leaf nodes;
generating an inverted linked list according to each leaf node and the root node of each leaf node;
and predicting the use condition of the source code file on the memory based on the inverted linked list to obtain the memory prediction result.
4. The method of claim 3, wherein before predicting the memory usage of the source code file based on the inverted linked list to obtain the memory prediction result, the method further comprises:
obtaining a screening list and a selection list, wherein the screening list comprises: a memory application screening list and a memory release screening list, and the selection list comprises: a memory application selection list and a memory release selection list;
screening out a first leaf node from all leaf nodes by using the memory application screening list, and storing the first leaf node into the memory application selection list; and
screening out a second leaf node from all leaf nodes by using the memory release screening list, and storing the second leaf node into the memory release selection list.
5. The method of claim 4, wherein the predicting the usage of the memory by the source code file based on the inverted linked list, and obtaining the memory prediction result comprises:
marking the inverted linked list according to the type of the leaf nodes stored in the selection lists to obtain a marked linked list, wherein the marked linked list comprises: a memory application linked list, a memory release linked list and a pending linked list;
recursively traversing all the pending linked lists to determine whether each pending linked list has a common intersection with the memory application linked list and/or the memory release linked list, and obtaining a traversal result;
and predicting the use condition of the source code file on the memory based on the traversal result to obtain the memory prediction result.
6. The method of claim 5, wherein the traversal result is obtained by determining whether the pending linked list has a common intersection with the memory application linked list and/or the memory release linked list at least as follows:
if it is determined that the pending linked list has a common intersection with the memory application linked list, classifying the pending linked list into the memory application linked list, and if it is determined that the pending linked list has a common intersection with the memory release linked list, classifying the pending linked list into the memory release linked list;
if it is determined that the pending linked list has no common intersection with either the memory application linked list or the memory release linked list, determining the pending linked list as an irrelevant linked list, wherein the irrelevant linked list is used for representing that the source code file corresponding to the pending linked list is irrelevant to the application and/or release of the memory;
and if it is determined that the pending linked list has a common intersection with both the memory application linked list and the memory release linked list, determining the pending linked list as a related linked list, wherein the related linked list is used for representing that the source code file corresponding to the pending linked list is related to both the application and the release of the memory.
7. The method of claim 6, wherein predicting the memory usage of the source code file based on the traversal result comprises:
if the traversal result indicates that the pending linked list is included in the memory application linked list, predicting the application of a source code file corresponding to the pending linked list to the memory based on the memory application linked list;
if the traversal result indicates that the pending linked list is classified into the memory release linked list, predicting the release of the source code file corresponding to the pending linked list to the memory based on the memory release linked list;
if the traversal result indicates that the pending linked list is the irrelevant linked list, the use condition of the memory by the source code file corresponding to the pending linked list does not need to be predicted.
8. The method of claim 6, wherein predicting the memory usage of the source code file based on the traversal result further comprises:
if the pending linked list is determined to be the related linked list, generating a new target data structure corresponding to the related linked list based on a head node of the related linked list;
traversing all leaf nodes in the new target data structure, and determining the correlation between each leaf node and the use condition of the memory;
and determining a memory prediction result corresponding to the correlation linked list based on the correlation relationship.
9. The method of claim 8, wherein determining the dependency of each of the leaf nodes on the usage of the memory comprises:
if the leaf node is traversed to be contained in the memory application linked list, determining that the leaf node is positively correlated with the memory application;
and if the leaf node is traversed to be contained in the memory release linked list, determining that the leaf node is positively correlated with the memory release.
10. The method of claim 8, further comprising:
determining a target leaf node in the new target data structure if it is determined that a leaf node in the new target data structure cannot be traced back to a source in the code, wherein the target leaf node is an input leaf node of the new target data structure;
calculating, according to the target leaf node and the new target data structure, a first call count of a plurality of leaf nodes positively correlated with the application of the memory and a second call count of a plurality of leaf nodes positively correlated with the release of the memory;
and determining the memory prediction result based on the first call count and the second call count.
11. The method of claim 8, further comprising:
if the correlation is determined to be uncorrelated, acquiring an input signal of each of the plurality of correlation linked lists;
carrying out permutation and combination on a plurality of different input signals to obtain a permutation and combination result;
simulating to obtain a memory application and release curve based on the permutation and combination result;
and positioning a code position corresponding to the using condition of the memory in the source code file according to the memory application and release curve.
12. A memory prediction apparatus, comprising:
the acquisition module is used for acquiring a source code file;
a conversion module, configured to convert the source code file into a target data structure, where each leaf node in the target data structure is used to characterize a logic sequence and execution data of a corresponding code block in the source code file;
and the prediction module is used for predicting the use condition of the source code file to the memory based on the target data structure to obtain a prediction result.
13. A non-volatile storage medium, comprising a stored program, wherein when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the memory prediction method according to any one of claims 1 to 11.
14. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the memory prediction method of any one of claims 1 to 11.
CN202110511788.2A 2021-05-11 2021-05-11 Memory prediction method and device, storage medium and electronic equipment Active CN113268243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110511788.2A CN113268243B (en) 2021-05-11 2021-05-11 Memory prediction method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110511788.2A CN113268243B (en) 2021-05-11 2021-05-11 Memory prediction method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113268243A true CN113268243A (en) 2021-08-17
CN113268243B CN113268243B (en) 2024-02-23

Family

ID=77230396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110511788.2A Active CN113268243B (en) 2021-05-11 2021-05-11 Memory prediction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113268243B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778440A (en) * 2021-08-18 2021-12-10 上海瑞家信息技术有限公司 Data processing method and device, electronic equipment and storage medium
WO2023051270A1 (en) * 2021-09-30 2023-04-06 中兴通讯股份有限公司 Memory occupation amount pre-estimation method and apparatus, and storage medium
CN116450361A (en) * 2023-05-23 2023-07-18 南京芯驰半导体科技有限公司 Memory prediction method, device and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140068351A1 (en) * 2012-08-28 2014-03-06 Nec Laboratories America, Inc. Blackbox Memory Monitoring with a Calling Context Memory Map and Semantic Extraction
US20150100940A1 (en) * 2013-10-04 2015-04-09 Avaya Inc. System and method for prioritizing and remediating defect risk in source code
US20150363113A1 (en) * 2014-06-13 2015-12-17 Pivotal Software, Inc. Precisely tracking memory usage in multi-process computing environment
CN108153666A (en) * 2016-12-06 2018-06-12 北京奇虎科技有限公司 A kind of method and apparatus of resource reclaim loophole in static detection Android code
CN109117633A (en) * 2018-08-13 2019-01-01 百度在线网络技术(北京)有限公司 Static source code scan method, device, computer equipment and storage medium
CN109783361A (en) * 2018-12-14 2019-05-21 平安壹钱包电子商务有限公司 The method and apparatus for determining code quality
US20190155641A1 (en) * 2017-10-26 2019-05-23 Huawei Technologies Co.,Ltd. Method and apparatus for collecting information, and method and apparatus for releasing memory
US20190213355A1 (en) * 2018-01-08 2019-07-11 Codevalue D.T. Ltd. Time Travel Source Code Debugger Incorporating Redaction Of Sensitive Information
CN110187967A (en) * 2019-05-15 2019-08-30 南瑞集团有限公司 A kind of memory prediction method and device suitable for dependency analysis tool
CN110472411A (en) * 2019-08-20 2019-11-19 杭州和利时自动化有限公司 A kind of memory Overflow handling method, apparatus, equipment and readable storage medium storing program for executing
CN111966491A (en) * 2020-08-04 2020-11-20 Oppo广东移动通信有限公司 Method for counting occupied memory and terminal equipment
CN112667240A (en) * 2020-12-23 2021-04-16 平安普惠企业管理有限公司 Program code conversion method and related device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARKUS WENINGER et al.: "Analyzing data structure growth over time to facilitate memory leak detection", Retrieved from the Internet <URL:https://dl.acm.org/doi/pdf/10.1145/3297663.3310297> *
GAN Hongxing; JIN Dahai; GONG Yunzhan: "Static analysis method for memory leaks based on source code", Journal of Inner Mongolia University (Natural Science Edition), no. 05 *
LAI Jianxin: "To understand static code analysis, this article is all you need", Retrieved from the Internet <URL:https://baijiahao.***.com/s?id=1664266188149110032&wfr=spider&for=pc> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778440A (en) * 2021-08-18 2021-12-10 上海瑞家信息技术有限公司 Data processing method and device, electronic equipment and storage medium
WO2023051270A1 (en) * 2021-09-30 2023-04-06 中兴通讯股份有限公司 Memory occupation amount pre-estimation method and apparatus, and storage medium
CN116450361A (en) * 2023-05-23 2023-07-18 南京芯驰半导体科技有限公司 Memory prediction method, device and storage medium
CN116450361B (en) * 2023-05-23 2023-09-29 南京芯驰半导体科技有限公司 Memory prediction method, device and storage medium

Also Published As

Publication number Publication date
CN113268243B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN113268243B (en) Memory prediction method and device, storage medium and electronic equipment
CN107133174A (en) Device and method for automatically generating test case code
CN107632827B (en) Method and device for generating installation package of application
CN104899016A (en) Method and device for obtaining call stack relationships
US11816479B2 (en) System and method for implementing a code audit tool
CN113590454A (en) Test method, test device, computer equipment and storage medium
CN113158189A (en) Method, device, equipment and medium for generating malicious software analysis report
Lettner et al. Automated analysis of two-layered feature models with feature attributes
US20190087160A1 (en) System and method for creating domain specific language
WO2020245504A1 (en) Method and system for integration testing
CN110598419A (en) Block chain client vulnerability mining method, device, equipment and storage medium
CN113778897A (en) Automatic test method, device, equipment and storage medium of interface
CN112463519A (en) Flutter-based method, device, and storage medium for non-buried-point (instrumentation-free) statistics of page usage behavior data
CN112965711A (en) Job test method and apparatus, electronic device, and storage medium
Romero et al. Integration of DevOps practices on a noise monitor system with CircleCI and Terraform
CN112115041A (en) Dynamic event-tracking instrumentation method and device for an application program, storage medium, and computer device
CN111459774A (en) Method, device and equipment for acquiring flow of application program and storage medium
CN108563578A (en) SDK compatibility detection method, device, equipment, and readable storage medium
CN111124378B (en) Code generation method and device
CN114357057A (en) Log analysis method and device, electronic equipment and computer readable storage medium
CN114385155A (en) vue project visualization tool generation method, device, equipment and storage medium
US20120204159A1 (en) Methods and System for Managing Assets in Programming Code Translation
CN109726550A (en) Abnormal operation behavioral value method, apparatus and computer readable storage medium
CN112231186B (en) Performance data processing method and device, electronic equipment and medium
CN115437621A (en) Process editing method and device based on robotic process automation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant