GB2586981A - Code optimisation system and method - Google Patents

Code optimisation system and method

Info

Publication number
GB2586981A
Authority
GB
United Kingdom
Prior art keywords
code
program code
machine learning
operable
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB1913024.4A
Other versions
GB201913024D0 (en)
Inventor
David Gallop Russell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc filed Critical Sony Interactive Entertainment Inc
Priority to GB1913024.4A priority Critical patent/GB2586981A/en
Publication of GB201913024D0 publication Critical patent/GB201913024D0/en
Publication of GB2586981A publication Critical patent/GB2586981A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/443Optimisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

A system for modifying program code, the system comprising a code input unit operable to acquire program code, a code analysis unit operable to identify at least a first portion of program code for modification, and a code modification unit operable to convert the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model.

Description

CODE OPTIMISATION SYSTEM AND METHOD
This disclosure relates to code optimisation systems and methods.
The problem of generating optimised code for performing a particular function, or set of functions, is of interest for a number of reasons. However, while numerous techniques exist for optimising code, these may be laborious to implement and/or of limited effectiveness.
The use of more optimised code may be desirable in that functionality may be maintained while making use of fewer processing resources; for example, this may enable improved performance for an application using the same hardware, if it is well-optimised.
Similarly, the reduced processing requirements may enable a lower amount of power to be used in performing the processing; this may be of particular interest to those dealing with portable devices.
Traditionally, the optimisation process has been performed manually by a programmer. Such a process includes identifying slow or otherwise inefficient portions of code, and replacing them with improved code. This can be beneficial in that complex optimisations may be implemented that lead to significant improvements; however it may be a slow task, and it may not always result in improved code as such an approach is usually intuition-based rather than following a well-defined process.
More recently, optimisers have been developed that are operable to automate the optimisation process. These function by applying one or more predefined modifications (which may be referred to as 'hand-written' translations) to input code, the modifications being associated with improvements in code efficiency. The optimisers may be implemented as a part of a compiler, for example.
These optimisers may make use of machine learning elements in identifying which of the predefined modifications to apply to input code. For example, phase ordering exploration makes use of machine learning to identify a desirable sequence of hand-written translations to apply to the input code. Machine learning methods have also been proposed that can be used to identify where to apply the modifications, rather than only performing the selection of a predetermined modification. These earlier arrangements may be considered examples of 'auto-tuning' arrangements, in which the optimisations are provided by a human but applied automatically.
While such arrangements may provide improved optimisation and/or compiling processes, limitations are still present in that these methods effectively only automate the actions of a manual optimisation process, and generally only the more routine elements of that process. It is therefore considered that such processes may not result in suitably optimised program code.
It is therefore desirable that an improved optimisation process be provided for program code.
It is in the context of the above problems that the present disclosure arises.
This disclosure is defined by claim 1.
Further respective aspects and features of the disclosure are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates a code inputting and compiling arrangement;
Figure 2 schematically illustrates a supervised learning method;
Figure 3 schematically illustrates a reinforcement learning method;
Figure 4 schematically illustrates a code optimisation system; and
Figure 5 schematically illustrates a code optimisation method.
Figure 1 schematically illustrates a code inputting and compiling arrangement in which a user is able to input source code directly; of course, in some embodiments the source code that is input may be pre-existing code that has been generated or input at an earlier time and stored for input.
The input terminal 100 represents a user input device, such as a personal computer, which enables a user to develop source code. Of course, the form of this terminal 100 is not of particular importance, so long as it is suitable to enable such inputs. In some embodiments, the source code may be provided using C++; although it should be considered that any suitable programming language may be used.
The compiler 110 comprises three elements: the intermediate representation generation unit 120, the optimiser 130, and the assembler 140. The first two elements (that is, the intermediate representation generation unit 120 and the optimiser 130) are considered to be optional in embodiments of the present disclosure. For example, the user input may be provided in such a manner that there is no need to generate an intermediate representation. Similarly, one or both of these elements may be provided separately to the compiler 110.
The intermediate representation generation unit 120 is operable to generate an intermediate representation of the input code. An intermediate representation is an alternative way of representing the source code, often selected so as to be more suitable for future processing, such as optimisation. An example of an intermediate representation that may be used is that of the LLVM Intermediate Representation, although any suitable intermediate representation may be used.
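As a concrete illustration, the short Python sketch below uses the llvmlite library (one of several ways to emit LLVM IR programmatically; this disclosure does not specify any particular tool) to build the intermediate representation of a trivial squaring function. The module and function names are invented for the example.

```python
from llvmlite import ir

# Build a module containing: int square(int x) { return x * x; }
module = ir.Module(name="example")
i32 = ir.IntType(32)
fn = ir.Function(module, ir.FunctionType(i32, [i32]), name="square")
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
x = fn.args[0]
builder.ret(builder.mul(x, x))

# Printing the module yields the textual LLVM IR that an optimiser
# such as the one described here would operate on.
print(module)
```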
The optimiser 130 is operable to generate an optimised version of the intermediate representation; the functioning of this optimisation process is described in more detail below. Functionally, the purpose of the optimiser 130 is to generate an improved version of the intermediate representation such that it has a smaller size or improved performance, for example.
The assembler 140 is operable to generate object code that is representative of the source code, using the output of the optimiser 130 (that is, the optimised version of the intermediate representation). Object code is code that is able to be executed by a processor; it may also be referred to as machine code. The assembly itself may be considered to be a translation process, in which the intermediate representation is translated into a specific format; an example of this is any language within the x86 assembly family of languages, although of course this should not be considered to be limiting.
In embodiments of the present disclosure, the optimisation process (that is, a process performed by the optimiser 130 of Figure 1) operates as a sequence-to-sequence translator. That is, a first sequence is identified in the intermediate representation that can be optimised, and a second (optimised) sequence is identified that replaces the first sequence in the optimised intermediate representation. These sequences may be of any length, and are not required to be of fixed length.
A number of methods are considered for identifying which sequences may be modified, and how they are to be modified, in order to generate a more optimised representation. A selection of these is discussed below, in the context of a machine learning implementation.
In this disclosure, references to a machine learning implementation should be read so as to include any suitable machine learning algorithm, artificial neural network, or any other artificial intelligence based implementation. In some embodiments, natural language processing systems may be considered to provide a suitable basis for implementing one or more features of the present disclosure.
Machine learning implementations of the optimisation process may be considered to be particularly suitable in that an efficient and automated process may be designed, and this process may be particularly effective at determining how to modify sequences to generate more optimised code. In some cases, the model may be operable to generate new transforms that would not be considered by a manual optimisation or auto-tuning process.
In such embodiments, the machine learning algorithm is trained so as to be able to identify and perform translations of the intermediate representation in order to generate an optimised form of the intermediate representation. Of course, alternative translations within the compiling process could also be performed; for example, a translation from source code to the optimised intermediate representation, or from the source code to optimised object code directly.
As noted above, such embodiments differ from existing embodiments that utilise machine learning in that no hand-written transformations need to be provided to the machine learning model for application to the initial intermediate representation. Instead, training of the machine learning model is performed so as to enable the model to determine appropriate translations and an order in which to apply them to the input code.
In some embodiments, the machine learning model is trained using a supervised learning based model. This training method can include providing corresponding sets of program code to the model for analysis; the first set comprises a first representation of the code, while a second set comprises an optimised representation of the code. For example, the corresponding sets may comprise optimised and non-optimised pairs of intermediate representations of source code. These representations may be marked-up (for example, to identify sequences that have been modified, or to identify modifiable elements within a sequence) to facilitate the training of the machine learning model where appropriate.
Given these contrasting representations of the same source code, the machine learning model is configured to be able to identify the differences between the representations, and to identify how to modify the original representation to reach the optimised representation. That is, the machine learning model is operable to identify transforms (or translations) to describe one or more modifications applied to one or more sequences in the intermediate representation.
Rather than identifying only the optimisations that are applied, the machine learning model should also be operable to identify the order in which those optimisations are applied; this may have an impact on the effectiveness or efficiency of the optimisation.
In addition to identifying the optimisations that are applied in the training data, the machine learning model may be adapted to identify new transforms that offer suitable optimisations.
For example, given an initial intermediate representation (x) and an optimised intermediate representation (y) that has had multiple optimisations (A, B) applied, the model may determine a new optimisation (C = AB) that replaces these, as sketched below.
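A minimal Python sketch of this idea, treating transforms as composable functions over IR token sequences; the two transforms shown are toy inventions for illustration, not optimisations taken from this disclosure.

```python
def transform_a(seq):
    # Toy transform A: remove adjacent duplicate instructions.
    out = []
    for tok in seq:
        if not out or out[-1] != tok:
            out.append(tok)
    return out

def transform_b(seq):
    # Toy transform B: drop explicit no-ops.
    return [tok for tok in seq if tok != "nop"]

def compose(*transforms):
    """Fuse a pipeline of transforms into a single new transform C = AB."""
    def fused(seq):
        for transform in transforms:
            seq = transform(seq)
        return seq
    return fused

transform_c = compose(transform_a, transform_b)
print(transform_c(["add", "add", "nop", "mul"]))  # ['add', 'mul']
```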
Similarly, a new optimisation could be developed that generalises optimisations (or groups of optimisations). For instance, in some cases applying AB to x results in y being more optimised, while in other cases applying BA to x (that is, the same optimisations but in the reverse order) may provide a more optimised y. While the machine learning model may be configured so as to identify when to use AB and when to use BA, it is considered that a new transform may be generated that provides a constant level of optimisation regardless of which would be the better order to apply A and B, and/or provides a better optimisation than either of these.
Optimisation targets (that is, sequences in the initial input which may be modified as a part of the optimisation process) may be determined in a number of ways. Of course, modifications of these methods (or combinations) should also be considered, such as using one method to identify optimisation targets and then updating the method to identify further optimisation targets in response to the optimisation process itself.
A first example of such a method is that of using one or more existing compiler analysis models which are operable to identify sequences for which transforms are defined. This would provide information about which sequences are suitable to operate on in view of existing transforms. Of course, in traditional methods the transforms that may be applied are also provided to the model; however, in embodiments of the present disclosure the transforms are identified using the machine learning model that is developed. For instance, an analysis of the input may be performed so as to identify optimisation targets, and the generated information may be provided in association with the input.
A second example is that of using a machine learning model or the like to replace existing compiler analysis models. For example, a range of existing analysis models may be used to train a machine learning model so as to provide a number of possible optimisations that can be implemented, in addition to the sequences to which the optimisations should be applied.
A third example is that of training a model by providing the inputs (such as un-optimised intermediate representations) and outputs (such as optimised intermediate representations) and allowing the model to determine the differences. This may comprise identifying an initial sequence, a transform, and a final sequence, for example, although the model may of course define the optimisation in any suitable fashion.
Figure 2 schematically illustrates an example of a general supervised learning method. At a step 200, training data is input. This training data may comprise corresponding pairs of unmodified and optimised intermediate representations of program code, for example.
Additional, or alternative, training data may comprise information about possible transforms that may be performed so as to optimise code, and/or information about sequences that may be operated on as a part of the optimisation process.
At a step 210, the training data is analysed by the machine learning algorithm so as to determine the differences between the representations; that is, one or more aspects relating to the optimisation are identified.
At a step 220, the transforms that were used to reach the optimised training data from the unmodified training data are identified. In some embodiments, this may comprise simply locating where known transforms have been applied, and the context in which they are used. In other embodiments, the transforms themselves are also identified. Of course, in some cases both of these identifications may be utilised.
At a step 230, the machine learning model is updated in view of the information that has been generated relating to the optimisations that are performed in the training data. This updating improves the ability of the model to identify and apply transforms as part of the optimisation process when applied to new portions of code or intermediate representations of code.
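By way of illustration only, the following Python sketch condenses steps 200-230 into a single teacher-forced training step for a small LSTM encoder-decoder. PyTorch, the vocabulary size, the layer dimensions and the token ids are all assumptions made for the example; this disclosure does not prescribe any of them.

```python
import torch
import torch.nn as nn

PAD, SEP = 0, 1                     # reserved ids: padding and start-of-output
VOCAB, EMBED, HIDDEN = 64, 32, 64   # toy sizes

class Seq2SeqOptimiser(nn.Module):
    """LSTM encoder-decoder mapping un-optimised IR tokens to optimised ones."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED, padding_idx=PAD)
        self.encoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.decoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))      # summarise the input sequence
        dec_out, _ = self.decoder(self.embed(tgt_in), state)
        return self.out(dec_out)                      # per-step token logits

model = Seq2SeqOptimiser()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

# Steps 200-210: a corresponding (un-optimised, optimised) pair of token ids.
src = torch.tensor([[5, 9, 9, 7, 3]])   # un-optimised intermediate representation
tgt = torch.tensor([[5, 9, 7, 3]])      # optimised intermediate representation

# Steps 220-230: learn the difference via teacher forcing, then update the model.
tgt_in = torch.cat([torch.full((1, 1), SEP, dtype=torch.long), tgt[:, :-1]], dim=1)
loss = loss_fn(model(src, tgt_in).transpose(1, 2), tgt)
loss.backward()
optimiser.step()
```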
Rather than relying on supervised learning methods, the machine learning model may be trained using reinforcement learning techniques. In such methods, there is no requirement to provide both the input and output (optimised) program code as training data; instead, a model is created so as to iteratively generate an effective optimisation method.
Such a model is provided with an un-optimised (or at least only partially-optimised) intermediate representation of source code as an input. The model should then be operable to implement one or more optimisations to the input so as to generate the output. The effectiveness of the optimisation should then be measured, and used as an input to guide future optimisations. For example, if a particular optimisation is seen to give a negative effectiveness (such as lowering the efficiency of the output code relative to the input), it may be omitted in similar cases in future. Of course, such a model may be trained on a number of portions of input code as appropriate.
The effectiveness of the optimisation may be measured in a number of ways. For example, the length of the code may be considered (such that shorter code is considered more efficient), or the performance of the code during execution. The latter of these may be evaluated by executing the code once it has been generated, and measuring some combination of benchmarking information (such as frames per second or processor load) and/or the time taken to fully execute a code portion.
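One possible way to obtain such a measurement is sketched below; the compiler invocation, the output path, the number of runs and the equal weighting of size and speed are all illustrative choices rather than anything specified in this disclosure.

```python
import os
import subprocess
import time

def measure_effectiveness(source_path, runs=5):
    """Return a score for a candidate program: smaller and faster is better."""
    binary = "/tmp/candidate"
    subprocess.run(["clang", source_path, "-o", binary], check=True)
    size = os.path.getsize(binary)           # code length criterion

    start = time.perf_counter()
    for _ in range(runs):                    # execution performance criterion
        subprocess.run([binary], check=True)
    elapsed = (time.perf_counter() - start) / runs

    # Negative cost, so that a reinforcement learner can maximise it.
    return -(0.5 * size + 0.5 * elapsed)
```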
Figure 3 schematically illustrates an example of a general reinforcement learning method.
At a step 300, data is input. This data may include one or more of an intermediate representation of source code (or the source code itself), an initial machine learning model, and/or information identifying one or more optimisations that may be performed.
At a step 310, the intermediate representation is modified. This may include the application of one or more optimisations, and/or one or more other changes to the intermediate representation with the goal of optimising the content. That is, the modifications to the intermediate representation may be known optimisations, or changes for which it is not known whether or not they will make the code more efficient.
At a step 320, the modified intermediate representation is evaluated, with the results of the evaluation being fed into the data input at step 300 for any future iterations of the reinforcement learning method. This evaluation may take any suitable form; as noted above, code length or one or more parameters associated with the execution of the code (such as processor load) may be considered as a suitable evaluation parameter.
The reinforcement aspect of such a network may take the form of evaluating the modifications, such that modifications which improve the efficiency of the code (as evaluated in step 320) are considered useful and as such are designated for use in later optimisations. For example, a positive weighting may be applied to the specific modification, while negative weightings are applied to modifications that are seen to reduce the efficiency of the code (or they may be discarded by the network altogether).
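The weighting behaviour described above can be sketched as a simple bandit-style policy over named transforms. The transform names, learning rate and discard threshold below are invented for illustration, and a real system would derive the reward from the evaluation of step 320 rather than the random stand-in used here.

```python
import random

class TransformPolicy:
    """Transforms that improve the code gain weight; harmful ones lose
    weight and are eventually discarded altogether."""

    def __init__(self, transforms, lr=0.1, discard_below=-1.0):
        self.weights = {name: 0.0 for name in transforms}
        self.lr = lr
        self.discard_below = discard_below

    def choose(self, epsilon=0.2):
        if random.random() < epsilon:                     # occasionally explore
            return random.choice(list(self.weights))
        return max(self.weights, key=self.weights.get)    # otherwise exploit

    def update(self, name, reward):
        if name not in self.weights:
            return
        self.weights[name] += self.lr * reward
        if self.weights[name] < self.discard_below:       # drop harmful transforms
            del self.weights[name]

policy = TransformPolicy(["fuse_loops", "dedupe", "inline_small_fns"])
for _ in range(100):
    chosen = policy.choose()
    reward = random.uniform(-1, 1)   # stand-in for the step 320 evaluation
    policy.update(chosen, reward)
```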
A recurrent neural network (RNN) is an example of a specific implementation of the optimisation process that may be used in embodiments of the present disclosure; this example is of course purely illustrative, and should not be regarded as being limiting.
An RNN is an example of an artificial neural network that makes use of sequential information; that is, RNNs may be used to identify a suitable feature (such as an optimisation, in the present case) in dependence upon earlier and/or later features (such as other optimisations that have been applied, or other sequences within the code that have been identified). Long short-term memory (LSTM) models are a specific implementation of an RNN that may be particularly appropriate in some embodiments. RNNs may be trained in any suitable manner; one example is that of a backpropagation algorithm.
RNNs may be of particular interest in some embodiments due to the ability to use inputs that are not of fixed size; this provides much greater flexibility when interpreting the input code representation and generating output code, and as a result can enable an improved optimisation process to be performed. The time-dependence of the RNN may also be advantageous when identifying optimisation targets within the input, as the context in which sequences appear may indicate which portions may be operated upon as a part of the optimisation process.
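The variable-length property can be seen directly in how RNN libraries batch sequences; the PyTorch sketch below (library choice and dimensions assumed purely for illustration) feeds three sequences of different lengths through one LSTM.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_sequence

# Three token-embedding sequences of different lengths (7, 3 and 5 steps).
seqs = [torch.randn(7, 16), torch.randn(3, 16), torch.randn(5, 16)]
lengths = torch.tensor([s.size(0) for s in seqs])

# Pad to a rectangle, then pack so the LSTM only processes real steps.
padded = pad_sequence(seqs, batch_first=True)                  # (3, 7, 16)
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
_, (h_n, _) = lstm(packed)
print(h_n.shape)   # (1, 3, 32): one summary state per variable-length input
```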
Unsupervised learning methods may also be of use in training a model in some embodiments of the present disclosure. These methods are used to identify patterns in unlabelled data; in the present case, this would mean that transforms are identified from pre-existing code representations without the requirement of performing a pre-processing step to identify transforms and/or their locations within the representations.
One example of an unsupervised learning method is that of cluster analysis. This is a method that is operable to identify common features amongst data sets, and to group them (the groups being referred to as clusters). For example, transforms that are similar/the same may be identified as being a first cluster, while a second set of transforms that are each similar to one another may be identified as a second cluster. Having identified these clusters, general definitions of the transforms may be generated that are able to be applied to newly-input code so as to be able to perform an optimisation process.
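A toy sketch of this clustering step using scikit-learn is given below; the textual transform descriptions, the bag-of-words featurisation and the cluster count are all invented for the example.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical descriptions of transforms observed in existing code.
observed_transforms = [
    "remove duplicate load",
    "remove duplicate store",
    "remove duplicate load store",
    "fuse loop add",
    "fuse loop mul",
    "fuse loop add mul",
]

features = CountVectorizer().fit_transform(observed_transforms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for description, label in zip(observed_transforms, labels):
    print(label, description)   # deduplication and loop-fusion transforms separate
```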
Of course, this is not the only example of an unsupervised learning method that may be implemented; for example, a generative adversarial network may also be configured to implement an unsupervised learning method.
It is therefore clear that any suitable supervised, unsupervised, and/or reinforcement learning methods may be suitable for use in embodiments of the present arrangement. While examples of each are provided above, the skilled person would be expected to be able to select an appropriate implementation for given inputs and desired outputs within embodiments of the present disclosure.
Optimisations have been discussed above in general terms, rather than referring to specific examples of transforms that may be applied so as to generate more optimised representations of input code. For clarity, however, a number of examples of such transforms are provided below.
In some embodiments, the representation of the input code may be compressed by the optimisation process. For example, repeated statements may be omitted where appropriate, if they do not cause any new functions to be performed. Similarly, sequences which appear in the same program multiple times may be defined as a new function that can be referred to using a shorter function name; this may again reduce the size of the code, and may improve the efficiency. A sketch of this second case follows.
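A before/after sketch of this transform, written in Python purely for readability; the functions are invented.

```python
# Before: the same three-step sequence appears twice in the program.
def process_unfactored(a, b):
    x = a * a
    x = x + 1
    x = x % 97
    y = b * b
    y = y + 1
    y = y % 97
    return x + y

# After: the repeated sequence is defined once as a new, shorter-named function.
def _f(v):
    return (v * v + 1) % 97

def process_factored(a, b):
    return _f(a) + _f(b)
```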
In some embodiments, the representation of the code may be modified so as to substitute one or more functions for new functions. This may not shorten the representation of the code, in some cases, but may provide a more efficient operation in that specific use case (or, of course, greater efficiency generally). In some examples, a group of functions may be replaced by a known composite function.
In some cases, the model may be trained so as to identify one or more functions that can be added to the representation of the code so as to improve its efficiency. An example of this is to pre-calculate one or more variables before they are required to be used, such that they can be obtained without significant processing when required; a sketch of this is shown below. Of course, one or more functions within the representation may need to be amended so as to utilise this new function.
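A before/after sketch of this pre-calculation transform, again in Python for readability, with invented functions.

```python
import math

# Before: the loop-invariant norm is recomputed for every element.
def normalise_unoptimised(values):
    return [v / math.sqrt(sum(x * x for x in values)) for v in values]

# After: the norm is pre-calculated once, and the loop body is amended to use it.
def normalise_optimised(values):
    norm = math.sqrt(sum(x * x for x in values))
    return [v / norm for v in values]
```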
These are merely examples of optimisations that may be identified and implemented, of course, and it is considered that any suitable modifications to the representation of the code that increase processing efficiency, reduce the processing time (for example, by increasing parallelism of the processing), or reduce the code length/storage size may be used in the optimisation process.
Increased parallelism is a significant benefit to the optimisation process that may be obtained from embodiments; traditional optimisation methods often comprise a large number of serial decision/condition statements which can result in highly serialised processing. By taking an alternative approach, in which the optimisation is driven by machine learning methods rather than applying hand-written transforms, the process may be distributed more effectively and thereby be performed more quickly.
In some embodiments, the optimisation model may be trained upon a range of different programs so as to provide a generalised optimisation process that may be more widely applicable. This may also provide an improved optimisation method; for example, optimisations that are more commonly used for specific programs/program classes may be identified as being applicable in other areas.
Alternatively, the optimisation model may be tuned to be more effective upon particular program code or particular classes of programs. For example, the model may be trained only upon program code from a specific program/class. While this may result in a less widely-applicable optimisation process, the process may be more effective at optimising code similar to that upon which it was trained.
Figure 4 schematically illustrates a system for modifying program code. The system comprises a code input unit 400, a code analysis unit 410, a code modification unit 420, and an assembler 430.
The code input unit 400 is operable to acquire program code, for example as a direct input by a user or from a storage device. In some embodiments, the code input unit 400 is operable to generate an intermediate representation of the acquired program code, although of course the program code may be acquired in such a format initially, or the modifications may be made to the source code.
The code analysis unit 410 is operable to identify at least a first portion of program code for modification. The code analysis unit 410 may be operable to identify one or more sequences of the program code that are suitable for modification; for example, one or more segments of the program code that may be suitable for modification so as to improve the efficiency of the code.
Of course, references to 'program code' here may include source code or an intermediate representation of that code.
The code analysis unit 410 may further be operable to identify one or more suitable modifications to the identified program code for use by the code modification unit 420; the code analysis unit 410 may additionally be operable to identify an order in which the modifications should be applied.
The code modification unit 420 is operable to convert the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model. As noted above, this machine learning model may be trained to identify appropriate transforms (including the modification to be applied and/or the order of applying modifications) from training data, rather than simply applying predefined optimisations to input code.
The machine learning model may be trained using a supervised learning, unsupervised learning, or reinforcement learning method, and in some embodiments the machine learning model comprises a recurrent neural network. The machine learning model is trained so as to improve one or more of the efficiency, execution speed, and storage size of the first portion of program code; for example, a single metric may be considered, or multiple metrics that are each assigned an optimisation weighting in dependence upon the importance of that metric in evaluating the optimisation of the program code.
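Such a weighted evaluation might look like the following sketch; the metric names and weights are illustrative assumptions only.

```python
def optimisation_score(metrics, weights):
    """Weighted sum of per-metric improvement ratios; higher is better."""
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical improvement ratios of optimised code relative to the input.
score = optimisation_score(
    metrics={"efficiency": 1.12, "execution_speed": 1.30, "storage_size": 0.95},
    weights={"efficiency": 0.5, "execution_speed": 0.3, "storage_size": 0.2},
)
print(score)
```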
The machine learning model is trained to identify one or more conversions to be applied to program code based upon existing examples of un-optimised and optimised code, for example; this is discussed above with reference to Figures 2 and 3. In some embodiments, the machine learning model is trained using program code corresponding to a single program type; for example, a single category of application (such as games), or a subset of a category (such as a specific game franchise). This may lead to an increased optimisation ability for the model in respect of those categories, as the identified optimisations may be more tailored to those applications.
The machine learning model operates on variable-length sequences of the program code, rather than only fixed-length inputs; this increases the flexibility of the model relative to those arrangements which only apply predefined modifications as a part of the optimisation process.
The assembler 430 is operable to convert the modified code into object code; this feature is of course optional in embodiments of the present disclosure, as the optimised intermediate representation may be the preferred format for additional review/processing, storage, transmission, or the like.
The arrangement of Figure 4 is an example of a processor (for example, a CPU located in a PC or at a server) that is operable to modify program code, and in particular is operable to: acquire program code; identify at least a first portion of program code for modification; and convert the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model.
Figure 5 schematically illustrates a method for modifying program code.
A step 500 comprises acquiring program code.
A step 510 comprises identifying at least a first portion of program code for modification; as is made clear above, this may of course comprise an analysis of the intermediate representation of the code rather than source code itself.
A step 520 comprises converting the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model. An optional step 530 comprises assembling the code; that is, converting the modified code into object code.
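Steps 500-530 can be summarised in a skeletal pipeline such as the sketch below; every component name and method here is a placeholder standing in for the units of Figure 4, not an API defined by this disclosure.

```python
class CodeOptimisationPipeline:
    """Placeholder pipeline mirroring steps 500-530 of Figure 5."""

    def __init__(self, analysis_model, modification_model, assembler=None):
        self.analysis_model = analysis_model          # code analysis unit 410
        self.modification_model = modification_model  # code modification unit 420
        self.assembler = assembler                    # optional assembler 430

    def run(self, program_code):
        # Step 500: program code is acquired (here, passed in directly).
        # Step 510: identify portions of the code suitable for modification.
        for portion in self.analysis_model.identify(program_code):
            # Step 520: convert each portion using the machine learning model.
            replacement = self.modification_model.convert(portion)
            program_code = program_code.replace(portion, replacement)
        # Step 530 (optional): assemble the modified code into object code.
        if self.assembler is not None:
            return self.assembler.assemble(program_code)
        return program_code
```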
The techniques described above may be implemented in hardware, software or combinations of the two. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.

Claims (15)

  1. A system for modifying program code, the system comprising: a code input unit operable to acquire program code; a code analysis unit operable to identify at least a first portion of program code for modification; and a code modification unit operable to convert the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model.
  2. A system according to claim 1, wherein the machine learning model comprises a recurrent neural network.
  3. A system according to claim 1, wherein the machine learning model is trained so as to improve one or more of the efficiency, execution speed, and storage size of the first portion of program code.
  4. A system according to claim 1, wherein the machine learning model is trained using a supervised learning, unsupervised learning, or reinforcement learning method.
  5. A system according to claim 4, wherein the machine learning model is trained to identify one or more conversions to be applied to program code based upon existing examples of un-optimised and optimised code.
  6. A system according to claim 1, wherein the machine learning model operates on variable-length sequences of the program code.
  7. A system according to claim 1, wherein the machine learning model is trained using program code corresponding to a single program type.
  8. A system according to claim 1, wherein the code input unit is operable to generate an intermediate representation of the acquired program code.
  9. A system according to claim 1, wherein the code analysis unit is operable to identify one or more sequences of the program code that are suitable for modification.
  10. A system according to claim 1, wherein the code analysis unit is operable to identify one or more suitable modifications to the identified program code for use by the code modification unit.
  11. A system according to claim 10, wherein the code analysis unit is operable to identify an order in which the modifications should be applied.
  12. A system according to claim 1, comprising an assembler operable to convert the modified code into object code.
  13. A method for modifying program code, the method comprising: acquiring program code; identifying at least a first portion of program code for modification; and converting the at least first portion of program code into a second portion of program code, wherein the conversion is determined using a machine learning model.
  14. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 13.
  15. A non-transitory machine-readable storage medium which stores computer software according to claim 14.
GB1913024.4A 2019-09-10 2019-09-10 Code optimisation system and method Pending GB2586981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1913024.4A GB2586981A (en) 2019-09-10 2019-09-10 Code optimisation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1913024.4A GB2586981A (en) 2019-09-10 2019-09-10 Code optimisation system and method

Publications (2)

Publication Number Publication Date
GB201913024D0 GB201913024D0 (en) 2019-10-23
GB2586981A true GB2586981A (en) 2021-03-17

Family

ID=68241031

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1913024.4A Pending GB2586981A (en) 2019-09-10 2019-09-10 Code optimisation system and method

Country Status (1)

Country Link
GB (1) GB2586981A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2589900B (en) * 2019-12-12 2022-06-01 Sony Interactive Entertainment Inc Apparatus and method for source code optimisation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CUMMINS CHRIS ET AL: "End-to-End Deep Learning of Optimization Heuristics", 2017 26TH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT), IEEE, 9 September 2017 (2017-09-09), pages 219 - 232, XP033243793, DOI: 10.1109/PACT.2017.24 *
GENE SHER ET AL: "Preliminary results for neuroevolutionary optimization phase order generation for static compilation", OPTIMIZATIONS FOR DSP AND EMBEDDED SYSTEMS, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 15 February 2014 (2014-02-15), pages 33 - 40, XP058045392, ISBN: 978-1-4503-2595-0, DOI: 10.1145/2568326.2568328 *
WANG ZHENG ET AL: "Machine Learning in Compiler Optimization", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 106, no. 11, 1 November 2018 (2018-11-01), pages 1879 - 1901, XP011703346, ISSN: 0018-9219, [retrieved on 20181025], DOI: 10.1109/JPROC.2018.2817118 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2589900B (en) * 2019-12-12 2022-06-01 Sony Interactive Entertainment Inc Apparatus and method for source code optimisation
US11748072B2 (en) 2019-12-12 2023-09-05 Sony Interactive Entertainment Inc. Apparatus and method for source code optimisation

Also Published As

Publication number Publication date
GB201913024D0 (en) 2019-10-23
