CN113986239A - Distributed compiling method, device, equipment and readable storage medium

Distributed compiling method, device, equipment and readable storage medium

Info

Publication number
CN113986239A
CN113986239A (Application CN202111233730.2A)
Authority
CN
China
Prior art keywords
task
compiling
task scheduling
distributed
running
Prior art date
Legal status
Pending
Application number
CN202111233730.2A
Other languages
Chinese (zh)
Inventor
付华楷
宋保科
胡琼霞
方亦腾
Current Assignee
Nanjing Third Generation Communication Technology Co ltd
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Nanjing Third Generation Communication Technology Co ltd
Fiberhome Telecommunication Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Third Generation Communication Technology Co ltd, Fiberhome Telecommunication Technologies Co Ltd filed Critical Nanjing Third Generation Communication Technology Co ltd
Priority to CN202111233730.2A
Publication of CN113986239A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/42 Syntactic analysis
    • G06F 8/427 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The application discloses a distributed compiling method, a distributed compiling device, equipment and a readable storage medium. A compiling instruction is executed, and a concurrent subtask sequence is generated according to the compiling instruction and a source code packet in a preset shared directory; the task scheduling server in the preset shared directory is run according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed to it by the task scheduling server from the subtask sequence; and after each task scheduling client is determined to have finished compiling its subtasks, the task state is updated and the packing work is completed. In this way, the overall compilation task is divided into parallel subtasks of small granularity, and no real-time scheduling is needed during the distributed compilation process, so the overall parallelism and scheduling of the distributed compilation task can be effectively improved, the compilation time is shortened, and the compilation efficiency is improved.

Description

Distributed compiling method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a distributed compiling method, apparatus, device, and readable storage medium.
Background
At present, as functionality grows, the code base of large software platform middleware has reached millions or even tens of millions of lines. If multiple CPU architectures need to be supported, the source-code compilation time is further multiplied by the number of CPU architectures. When the middleware of a large software platform needs to be released externally in binary form, a complete release takes several hours or even longer because of the huge code volume and the compilation burden of supporting multiple CPU architectures, which severely restricts the release efficiency of the middleware.
Although distributed compilation mechanisms are already used to improve the release efficiency of large-software-platform middleware, a traditional distributed compilation mechanism needs to synchronize compilation files and configuration among multiple virtual machine clusters, which reduces compilation efficiency. Moreover, when there are many compilation dependencies among the software modules of the middleware and software iteration and version updates are frequent, a traditional distributed compilation system cannot flexibly analyze and handle the dependencies among the modules, which reduces the parallelism of the overall distributed compilation task; and scheduling and processing those dependencies in real time during distributed compilation adds extra overhead, so the compilation efficiency of the overall task remains low.
Disclosure of Invention
The application provides a distributed compiling method, a distributed compiling device, equipment and a readable storage medium, which are used for solving the problem that the compiling efficiency of the distributed compiling is too low.
To achieve the above object, an embodiment of the present invention provides a distributed compiling method, where the method includes the following steps:
executing a compiling instruction, and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory;
running the task scheduling server in the preset shared directory according to the subtask sequence so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and after each task scheduling client is determined to execute and compile the subtasks, updating the task state and finishing the packing work.
In order to achieve the above object, an embodiment of the present invention further provides a distributed compiling apparatus, where the distributed compiling apparatus includes:
the execution module is used for executing the compiling instruction and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory;
a running and distributing module, configured to run the task scheduling server in the preset shared directory according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and a determining and updating module, configured to update the task state after determining that each task scheduling client has finished compiling its subtasks, and to complete the packing work.
To achieve the above object, an embodiment of the present invention further provides a computer device, which includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein when the computer program is executed by the processor, the steps of the distributed compiling method as described above are implemented.
In order to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program, when executed by a processor, implements the steps of the distributed compiling method.
The application discloses a distributed compiling method, a distributed compiling device, equipment and a readable storage medium. A compiling instruction is executed, and a concurrent subtask sequence is generated according to the compiling instruction and a source code packet in a preset shared directory; the task scheduling server in the preset shared directory is run according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed to it by the task scheduling server from the subtask sequence; and after each task scheduling client is determined to have finished compiling its subtasks, the task state is updated and the packing work is completed. In this way, the overall compilation task is divided into parallel subtasks of small granularity, and no real-time scheduling is needed during the distributed compilation process, so the overall parallelism and scheduling of the distributed compilation task can be effectively improved, the compilation time is shortened, and the compilation efficiency is improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of a distributed compiling method according to an embodiment of the present application;
fig. 2 is a schematic scene diagram of a terminal provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of the sub-steps of step S101 of the distributed compiling method of FIG. 1;
fig. 4 is a schematic flow chart of parsing and slicing according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of the sub-steps of step S102 of the distributed compiling method of FIG. 1;
fig. 6 is a schematic block diagram of a distributed compiling apparatus according to an embodiment of the present application;
fig. 7 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The embodiment of the application provides a distributed compiling method, a distributed compiling device, computer equipment and a computer readable storage medium. The distributed compiling method can be applied to computer equipment, and the computer equipment can be electronic equipment such as a notebook computer and a desktop computer.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of a distributed compiling method according to an embodiment of the present application.
As shown in fig. 1, the present embodiment provides a distributed compiling method, which includes the following steps:
step S101: and executing the compiling instruction, and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory.
Illustratively, a compiling instruction input by a user is received and executed. For example, after a user login is detected, compiling instructions input by the user are received in real time, and when a compiling instruction is received it is executed. A subtask sequence with a high degree of concurrency is then generated from the compiling instruction and the source code packet in the preset shared directory, where a high-concurrency subtask sequence is a subtask sequence whose degree of concurrency is higher than a preset degree of concurrency. For example, the source code packet in the preset shared directory is processed according to the compiling instruction to generate a subtask sequence whose concurrency exceeds the preset level.
As shown in fig. 2, a plurality of virtual machines are set up in the terminal in advance, and a plurality of containers run in each virtual machine, so that the virtual machines together form a container cluster. For example, N virtual machines are provided and M containers run in each virtual machine, generating a container cluster of N x M containers, where each container is a Docker container. A shared directory is also set up in the terminal in advance, and all containers in the container cluster share this preset shared directory through a virtual machine/container directory sharing technology. The virtual machines may reside in one terminal or in several terminals, and the number of terminals is not limited. For example, when there is one terminal, that terminal serves as the host, the virtual machines are set up in the host, and the shared directory is set in the host; when there are multiple terminals, one of them is chosen as the host, the virtual machines are set up in the host and/or the other terminals, and the shared directory is set in the host.
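As a concrete illustration of such an N x M container cluster sharing one directory, the sketch below starts M Docker containers on one virtual machine with the host's shared directory bind-mounted into each of them. This is a minimal sketch under stated assumptions: the image name build-worker, the mount path /srv/build_share and the container naming scheme are illustrative choices, not part of the patent.

```python
import subprocess

SHARED_DIR = "/srv/build_share"   # assumed host path of the preset shared directory
IMAGE = "build-worker:latest"     # assumed container image with the tool chains installed
M = 4                             # containers per virtual machine (example value)

def start_containers(vm_index: int) -> None:
    """Start M containers on one VM, each bind-mounting the shared directory."""
    for c in range(M):
        name = f"vm{vm_index}-worker{c}"
        subprocess.run(
            [
                "docker", "run", "-d",
                "--name", name,
                "-v", f"{SHARED_DIR}:{SHARED_DIR}",  # every container sees the same directory
                IMAGE,
            ],
            check=True,
        )

if __name__ == "__main__":
    start_containers(vm_index=0)
```

Running such a script on each of the N virtual machines would yield the N x M container cluster of fig. 2, with every container able to read the source code packet and write its compilation outputs into the same shared directory.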
The source code packet contains the pre-task management module. The advantage of this is that, when the makefile is changed or adjusted, the code directory of the source code and the code of the pre-task management module can be modified synchronously, and because the pre-task management module is contained in the source code packet there is no need to consider releasing and managing it separately. Otherwise, if the source code packet did not contain the pre-task management module, the information in the source code packet would have to be read at compile time to determine whether the pre-task management module needs to be updated, and if so, compilation could only start after the pre-task management module had been downloaded from elsewhere.
In an embodiment, specifically referring to fig. 3, step S101 includes: substep S1011 to substep S1013.
Substep S1011, running the pre-task management module through the compiling instruction;
Exemplarily, the preset shared directory includes a source code packet, and the source code packet includes a pre-task management module, a plurality of functional modules, and a total compilation task. When the compiling instruction is detected, the pre-task management module is run according to the compiling instruction.
Substep S1012, analyzing each functional module according to the running pre-task management module, and obtaining a dependency relationship between the functional modules;
exemplarily, each functional module is analyzed by the running pre-task management module to obtain the dependency relationship between the functional modules. For example, the code logic relationship in each functional module is obtained through the pre-task management module, and the dependency relationship between each functional module is obtained through the code logic relationship.
Specifically, the analyzing, by the pre-task management module according to the running, each functional module to obtain a dependency relationship between the functional modules includes: analyzing the link parameters of each functional module through the running preposed task management module to obtain a code compiling dependency graph; and analyzing the code compiling dependency graph to obtain the dependency relationship of each functional module in the code compiling dependency graph.
Exemplarily, the running pre-task management module analyzes the link parameters of each functional module to obtain a code compiling dependency graph. For example, each functional module has a code directory and the code directories are hierarchical; the pre-task management module recursively scans the makefile of each directory level, layer by layer from the top of the code directory tree down to the leaf nodes, determines the dependency relationships with other directory levels from the link options in each makefile, and finally generates a code compiling dependency graph managed according to the tree hierarchy. A makefile defines a series of rules that specify which files need to be compiled first, which files need to be compiled later, and which files need to be recompiled.
After the code compiling dependency graph of the functional modules is obtained, the dependency relationship of each functional module is read from the graph. For example, as shown in fig. 4, the code directories of the functional modules include MOD1, MOD2, MOD3 and MOD4, and the code compiling dependency graph shows that MOD3 depends on MOD1 and MOD4 depends on MOD1.
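A minimal sketch of this recursive makefile scan is given below, under the assumption that a dependency on another module appears in a makefile as a -l<MOD> link option; the directory layout, the option format and the function name are illustrative assumptions, not a format required by the patent.

```python
import os
import re
from collections import defaultdict

LINK_OPT = re.compile(r"-l(\w+)")  # assumed form of a link option, e.g. "-lMOD1"

def build_dependency_graph(code_root: str) -> dict:
    """Recursively scan makefiles under code_root and record which other
    module directories each module links against."""
    graph = defaultdict(set)
    for dirpath, _dirnames, filenames in os.walk(code_root):  # top of the tree down to the leaves
        makefiles = [f for f in filenames if f.lower() == "makefile"]
        if not makefiles:
            continue
        rel = os.path.relpath(dirpath, code_root)
        if rel == ".":
            continue  # a top-level makefile belongs to no single module
        module = rel.split(os.sep)[0]
        with open(os.path.join(dirpath, makefiles[0]), encoding="utf-8") as fh:
            for dep in LINK_OPT.findall(fh.read()):
                if dep != module:
                    graph[module].add(dep)  # e.g. MOD3 -> {"MOD1"}
    return dict(graph)

# For the directory layout of Fig. 4 this would yield
# {"MOD3": {"MOD1"}, "MOD4": {"MOD1"}} when MOD3 and MOD4 link against MOD1.
```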
And a substep S1013 of analyzing and fragmenting the total compilation task according to the pre-task management module and the dependency relationship to generate a concurrent subtask sequence.
Exemplarily, the total compilation task is analyzed and fragmented through the running pre-task management module and the dependency relationships; that is, the total compilation task is split into high-concurrency subtask sequences. For example, as shown in fig. 4, the total compilation task is analyzed through the pre-task management module and the dependency relationships to obtain the dependent compiling link tasks and the independent compiling link tasks in the total compilation task, and the dependent compiling link tasks are fragmented into compiling tasks and link tasks, thereby generating the high-concurrency subtask sequence, which includes the independent compiling link tasks, the compiling tasks and the link tasks.
Specifically, the parsing and fragmenting of the total compilation task according to the pre-task management module and the dependency relationship to generate a high-concurrency subtask sequence includes: analyzing the total compilation task through the running pre-task management module and the dependency relationship to obtain each piece of compiling link task information in the total compilation task, wherein the compiling link task information comprises a dependent compiling link task and an independent compiling link task; slicing and refining the dependent compiling link task to obtain a compiling task and a link task; and generating a high-concurrency subtask sequence based on the independent compiling link task, the compiling task and the link task.
Exemplarily, the total compilation task is analyzed through the running pre-task management module and the dependency relationships to obtain the compiling link task information of the total compilation task, and the analysis proceeds in multiple rounds. For example, code compilation is divided into two sub-processes: compilation and linking. The first round is the translation of the compile command. As shown in fig. 4, the code subdirectories are MOD1, MOD2, MOD3 and MOD4; the user requires compilation for two tool chains, x86 and ppc, and the compilation order of the directories under the different tool chains is determined. At this point there is only one task: compile-link MOD1 for x86 → compile-link MOD2 for x86 → compile-link MOD3 for x86 (depends on MOD1) → compile-link MOD1 for ppc → compile-link MOD2 for ppc → compile-link MOD4 for ppc (depends on MOD1).
The second round is decomposition by tool chain; there are now two concurrent tasks, namely compile-link MOD1 for x86 → compile-link MOD2 for x86 → compile-link MOD3 for x86 (depends on MOD1), and compile-link MOD1 for ppc → compile-link MOD2 for ppc → compile-link MOD4 for ppc (depends on MOD1).
The third round is division by directory dependency. The tasks without dependencies are placed in task pool one, which now holds four concurrent tasks: compile-link MOD1 for x86, compile-link MOD2 for x86, compile-link MOD1 for ppc and compile-link MOD2 for ppc. The tasks with dependencies are placed in task pool two, which depends on the tasks in task pool one and holds two concurrent tasks: compile-link MOD3 for x86 (depends on MOD1) and compile-link MOD4 for ppc (depends on MOD1). At this point the maximum concurrency over task pool one and task pool two is 4.
The fourth round separates the compile and link sub-processes of the tasks in task pool two: the compile tasks are moved into task pool one, and only 2 link tasks remain in task pool two, namely link MOD3 for x86 and link MOD4 for ppc. The maximum concurrency, i.e. the number of tasks in task pool one, is now 6: compile-link MOD1 for x86, compile-link MOD2 for x86, compile-link MOD1 for ppc, compile-link MOD2 for ppc, compile MOD3 for x86 and compile MOD4 for ppc. The subtasks in task pool one and task pool two together form the high-concurrency subtask sequence, which includes compile-link MOD1 for x86, compile-link MOD2 for x86, compile-link MOD1 for ppc, compile-link MOD2 for ppc, compile MOD3 for x86 and compile MOD4 for ppc from task pool one, and link MOD3 for x86 and link MOD4 for ppc from task pool two.
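The four rounds above can be condensed into a small planning routine. The sketch below is a minimal illustration under the assumption that every dependent module's work can be split into a compile step (which needs no finished dependencies) and a link step (which does); the data structures, names and exact task strings are illustrative, not taken from the patent.

```python
def plan_task_pools(targets, deps):
    """targets: list of (tool chain, module) pairs to build.
    deps: maps a module to the set of modules it depends on.
    Returns (pool_one, pool_two): independent compile-link tasks and split-off
    compile steps go to pool one; the remaining link steps go to pool two."""
    pool_one, pool_two = [], []
    for arch, mod in targets:
        if not deps.get(mod):
            # no directory dependency: the whole compile-link task is independent (round 3)
            pool_one.append(f"compile-link {mod} for {arch}")
        else:
            # dependent task: split it so only the link step waits for its dependencies (round 4)
            pool_one.append(f"compile {mod} for {arch}")
            pool_two.append(f"link {mod} for {arch} (after {', '.join(sorted(deps[mod]))})")
    return pool_one, pool_two

targets = [("x86", "MOD1"), ("x86", "MOD2"), ("x86", "MOD3"),
           ("ppc", "MOD1"), ("ppc", "MOD2"), ("ppc", "MOD4")]
deps = {"MOD3": {"MOD1"}, "MOD4": {"MOD1"}}

pool_one, pool_two = plan_task_pools(targets, deps)
# pool_one -> 6 tasks: the four independent compile-link tasks plus compile MOD3 for x86
#             and compile MOD4 for ppc
# pool_two -> 2 tasks: link MOD3 for x86 and link MOD4 for ppc, each gated on MOD1
```

The concurrent subtask sequence of this example is then simply task pool one followed by task pool two, matching the 6 + 2 split described above.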
Step S102: and running the task scheduling server in the preset shared directory according to the subtask sequence so that the task scheduling client in each container compiles the task scheduling server to distribute the subtasks from the subtask sequence.
Exemplarily, when the subtask sequence is obtained, the task scheduling server in the preset shared directory is run through the subtask sequence, and the subtask in the subtask sequence is distributed to the task scheduling client in each container through the task scheduling server, so that each task scheduling client compiles the subtask.
In an embodiment, specifically referring to fig. 5, step S102 includes: substep S1021 to substep S1023.
Substep S1021, inputting the subtask sequence into the task scheduling server to run the task scheduling server;
Exemplarily, the obtained subtask sequence is input to the task scheduling server to run the task scheduling server. After the task scheduling server is running, it receives the request information sent by each task scheduling client, obtains the task scheduling client name, the task scheduling client address and the like carried in the request information, and writes the obtained task scheduling client names, addresses and the like into a preset assignable task table.
Substep S1022, reading a preset assignable task table through the running task scheduling server, and obtaining a plurality of task scheduling clients in the preset assignable task table;
exemplarily, by operating the task scheduling server, the task scheduling client reads a preset allocable task table, and obtains a plurality of task scheduling clients recorded in the preset allocable task table, for example, obtains names, addresses, and the like of the task scheduling clients recorded in the preset allocable task table.
And a substep S1023 of distributing the subtasks in the subtask sequence to each task scheduling client, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence.
Exemplarily, when the plurality of task scheduling clients in the preset assignable task table have been obtained, the subtasks in the subtask sequence are distributed to each task scheduling client. When a task scheduling client receives a subtask distributed by the task scheduling server, it compiles the subtask, and the compilation is performed in the preset shared directory, i.e. the sub-compilation file produced by compiling the subtask is generated in the preset shared directory. After the task scheduling client finishes compiling the subtask, it sends request information to the task scheduling server to obtain a new subtask.
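A server-side counterpart can be sketched as follows, under the assumption that the assignable task table is an in-memory dictionary keyed by client address and the subtask sequence is a simple queue; neither data structure, nor the class and method names, is prescribed by the patent.

```python
from collections import deque

class TaskSchedulingServer:
    """Minimal model of the task scheduling server: it registers clients in the
    assignable task table and hands out subtasks from the subtask sequence."""

    def __init__(self, subtask_sequence):
        self.pending = deque(subtask_sequence)  # concurrent subtask sequence from step S101
        self.assignable = {}                    # preset assignable task table: address -> client name
        self.outstanding = set()                # subtasks handed out but not yet reported
        self.completed = []

    def on_request(self, client_name, client_addr):
        # substeps S1021/S1022: record the requesting client and read the table
        self.assignable[client_addr] = client_name
        if not self.pending:
            return None
        # substep S1023: distribute the next subtask to this client
        task = self.pending.popleft()
        self.outstanding.add(task)
        return task

    def on_report(self, subtask):
        # step S103: mark the subtask complete; packing may start once nothing
        # is pending or outstanding any more
        self.outstanding.discard(subtask)
        self.completed.append(subtask)
        return not self.pending and not self.outstanding

server = TaskSchedulingServer(["compile-link MOD1 for x86", "compile MOD3 for x86"])
first_task = server.on_request("vm0-worker0", "10.0.1.5")  # -> "compile-link MOD1 for x86"
```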
Specifically, before the running task scheduling server reads the preset assignable task table, the method further includes: receiving the request information sent by each task scheduling client, and acquiring the task scheduling client information carried in the request information; and adding the task scheduling client information into the preset assignable task table.
Exemplarily, the request information sent by each task scheduling client is received, the task scheduling client information carried in the request information is obtained, and the task scheduling client information is added to the preset assignable task table. If a task scheduling client does not receive a subtask within a preset time after sending request information to the task scheduling server, it sends the request information again. In this embodiment, when a terminal is started, the containers in the terminal start with it; when a container starts, it runs the task scheduling client, which sends a task request message REQ to the task scheduling server and then waits for the task scheduling server to reply with an ACK message. The task scheduling server receives the REQ message of the task scheduling client, schedules a task from its local subtask sequence and issues it to the task scheduling client along with the ACK message for execution. The task scheduling client receives the ACK message of the task scheduling server, executes the subtask instruction, sends a REP message to the task scheduling server when the instruction has finished, and then waits for the task scheduling server to reply with a CLOS message. The task scheduling server receives the REP message of the task scheduling client, updates the subtask state to complete, and then replies with a CLOS message to the task scheduling client. The task scheduling client receives the CLOS message of the task scheduling server, closes the current task, and starts the next task request. Sending the request information further includes obtaining the address information of the task scheduling server when the task scheduling server is running, and sending the request information to the task scheduling server through this address information.
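A minimal sketch of the client side of this REQ/ACK/REP/CLOS exchange is given below, assuming a plain TCP connection and newline-delimited text messages; the framing, the server address, the port and the way the subtask instruction is executed are illustrative assumptions, since the patent only specifies the four message types and their order.

```python
import socket
import subprocess

SERVER_ADDR = ("10.0.0.1", 9000)  # assumed address of the task scheduling server
SHARED_DIR = "/srv/build_share"   # assumed preset shared directory

def client_loop():
    """Repeatedly request a subtask, run it inside the shared directory, report back."""
    while True:
        with socket.create_connection(SERVER_ADDR, timeout=60) as sock:
            fh = sock.makefile("rw", encoding="utf-8", newline="\n")
            fh.write("REQ\n")
            fh.flush()                             # ask the server for a subtask
            ack = fh.readline().strip()            # e.g. "ACK compile MOD3 for x86"
            if not ack.startswith("ACK"):
                continue                           # no task granted; ask again on the next round
            command = ack[len("ACK"):].strip()
            if not command:
                continue
            subprocess.run(command, shell=True, check=False,
                           cwd=SHARED_DIR)         # run the subtask inside the shared directory
            fh.write("REP\n")
            fh.flush()                             # report completion of the subtask
            fh.readline()                          # wait for CLOS, then close this task

if __name__ == "__main__":
    client_loop()
```

In this sketch each loop iteration is one full REQ → ACK → REP → CLOS cycle, matching the message sequence described above; the retry timer for the case where no subtask arrives within the preset time is omitted for brevity.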
Step S103: and after each task scheduling client is determined to execute and compile the subtasks, updating the task state and finishing the packing work.
Exemplarily, after it is determined that the task scheduling clients have finished compiling the subtasks, the task state is updated. When all the subtasks distributed by the task scheduling server have been finished, the task scheduling server updates the task state, i.e. stops running. The sub-compilation files of the subtasks are then obtained from the preset shared directory and packed to generate the compiled file, i.e. the packing work is completed.
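As an illustration of this final packing step, the sketch below collects the sub-compilation outputs from the shared directory into a single archive; the directory layout, the per-module output folder name and the archive name are assumptions made only for the example.

```python
import tarfile
from pathlib import Path

SHARED_DIR = Path("/srv/build_share")    # assumed preset shared directory
OUTPUT = SHARED_DIR / "release.tar.gz"   # assumed name of the packed result

def pack_outputs() -> None:
    """Bundle every sub-compilation output directory into one release archive."""
    with tarfile.open(OUTPUT, "w:gz") as tar:
        for out_dir in sorted(SHARED_DIR.glob("*/output")):  # assumed per-module output folders
            tar.add(out_dir, arcname=str(out_dir.relative_to(SHARED_DIR)))

if __name__ == "__main__":
    pack_outputs()
```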
In the embodiment of the application, a compiling instruction is executed and a high-concurrency subtask sequence is generated according to the compiling instruction and the source code packet in the preset shared directory; the task scheduling server in the preset shared directory is run according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence; and after each task scheduling client is determined to have finished compiling its subtasks, the task state is updated and the packing work is completed. In this way, the overall compilation task is divided into parallel subtasks of small granularity, and no real-time scheduling is needed during the distributed compilation process, so the overall parallelism and scheduling of the distributed compilation task can be effectively improved, the compilation time is shortened, and the compilation efficiency is improved.
Referring to fig. 6, fig. 6 is a schematic block diagram of a distributed compiling apparatus according to an embodiment of the present application.
As shown in fig. 6, the distributed compiling apparatus 300 includes: an execution module 301, a running and distributing module 302, and a determining and updating module 303.
The execution module 301 is configured to execute a compiling instruction, and generate a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory;
the running and distributing module 302 is configured to run the task scheduling server in the preset shared directory according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and the determining and updating module 303 is configured to update the task state after determining that each task scheduling client has finished compiling its subtasks, and to complete the packing work.
Wherein, the execution module 301 is further specifically configured to:
running the pre-task management module through the compiling instruction;
analyzing each functional module according to the running pre-task management module to obtain the dependency relationships among the functional modules;
and analyzing and fragmenting the total compilation task according to the pre-task management module and the dependency relationships to generate a concurrent subtask sequence.
Wherein, the execution module 301 is further specifically configured to:
analyzing the link parameters of each functional module through the running pre-task management module to obtain a code compiling dependency graph;
and analyzing the code compiling dependency graph to obtain the dependency relationship of each functional module in the code compiling dependency graph.
Wherein, the execution module 301 is further specifically configured to:
analyzing the total compiling task through the running pre-task management module and the dependency relationship to obtain each piece of compiling link task information in the total compiling task, wherein the compiling link task information comprises a dependent compiling link task and an independent compiling link task;
slicing and refining the dependent compiling link task to obtain a compiling task and a link task;
and generating a concurrent subtask sequence based on the independent compiling link task, the compiling task and the link task.
Wherein, the running and distributing module 302 is further specifically configured to:
inputting the subtask sequence into a task scheduling server to run the task scheduling server;
reading a preset assignable task table through the running task scheduling server to obtain a plurality of task scheduling clients in the preset assignable task table;
distributing the subtasks in the subtask sequence to each task scheduling client, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence.
Wherein, the distributed compiling apparatus is further specifically configured to:
receiving request information sent by each task scheduling client, and acquiring task scheduling client information carried in the request information;
and adding the task scheduling client information into a preset assignable task table.
Wherein, the distributed compiling apparatus is further specifically configured to:
presetting a plurality of virtual machines, operating a plurality of containers for each virtual machine, and generating a container cluster, wherein each container comprises a task scheduling client;
and presetting a sharing directory so that each container in the container cluster shares the preset sharing directory.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and the modules and units described above may refer to the corresponding processes in the foregoing distributed compiling method embodiment, and are not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram illustrating a structure of a computer device according to an embodiment of the present disclosure. The computer device may be a terminal.
As shown in fig. 7, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause a processor to perform any of the distributed compilation methods.
The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.
The internal memory provides an environment for the execution of a computer program on a non-volatile storage medium, which when executed by a processor, causes the processor to perform any of the distributed compilation methods.
The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It should be understood that the Processor may be a Central Processing Unit (CPU), and the Processor may be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
executing a compiling instruction, and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory;
running the task scheduling server in the preset shared directory according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and after each task scheduling client is determined to execute and compile the subtasks, updating the task state and finishing the packing work.
In one embodiment, the source code packet comprises a pre-task management module, a plurality of functional modules and a total compilation task; when implementing the generation of the concurrent subtask sequence according to the compiling instruction and the source code packet, the processor is configured to implement:
running the pre-task management module through the compiling instruction;
analyzing each functional module according to the running pre-task management module to obtain the dependency relationships among the functional modules;
and analyzing and fragmenting the total compilation task according to the pre-task management module and the dependency relationships to generate a concurrent subtask sequence.
In one embodiment, when implementing the analysis of each functional module according to the running pre-task management module to obtain the dependency relationships among the functional modules, the processor is configured to implement:
analyzing the link parameters of each functional module through the running pre-task management module to obtain a code compiling dependency graph;
and analyzing the code compiling dependency graph to obtain the dependency relationship of each functional module in the code compiling dependency graph.
In an embodiment, when the processor parses and fragments the total compilation task according to the pre-task management module and the dependency relationship to generate a concurrent subtask sequence, the processor is configured to:
analyzing the total compiling task through the running pre-task management module and the dependency relationship to obtain each piece of compiling link task information in the total compiling task, wherein the compiling link task information comprises a dependent compiling link task and an independent compiling link task;
slicing and refining the dependent compiling link task to obtain a compiling task and a link task;
and generating a concurrent subtask sequence based on the independent compiling link task, the compiling task and the link task.
In one embodiment, when implementing the step of causing the task scheduling client in each container to compile the subtasks distributed by the task scheduling server from the subtask sequence, the processor is configured to implement:
inputting the subtask sequence into a task scheduling server to run the task scheduling server;
reading a preset assignable task table through the running task scheduling server to obtain a plurality of task scheduling clients in the preset assignable task table;
distributing the subtasks in the subtask sequence to each task scheduling client, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence.
In one embodiment, before reading the preset assignable task table through the running task scheduling server, the processor is configured to implement:
receiving request information sent by each task scheduling client, and acquiring task scheduling client information carried in the request information;
and adding the task scheduling client information into a preset assignable task table.
In one embodiment, before executing the compiling instruction, the processor is configured to implement:
presetting a plurality of virtual machines, operating a plurality of containers for each virtual machine, and generating a container cluster, wherein each container comprises a task scheduling client;
and presetting a sharing directory so that each container in the container cluster shares the preset sharing directory.
Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program includes program instructions, and a method implemented when the program instructions are executed may refer to various embodiments of the distributed compiling method of the present application.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments. While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A distributed compilation method, the method comprising:
executing a compiling instruction, and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory, wherein the source code packet comprises a pre-task management module;
running the task scheduling server in the preset shared directory according to the subtask sequence so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and after each task scheduling client is determined to execute and compile the subtasks, updating the task state and finishing the packing work.
2. The distributed compilation method of claim 1 wherein the source code package includes a pre-task management module, a plurality of functional modules, and a general compilation task; the generating of the concurrent subtask sequence according to the compiling command and the source code packet includes:
running the pre-task management module through the compiling instruction;
analyzing each functional module according to the running pre-task management module to obtain the dependency relationship among the functional modules;
and analyzing and fragmenting the total compiling task according to the pre-task management module and the dependency relationship to generate a concurrent subtask sequence.
3. The distributed compiling method according to claim 2, wherein the analyzing each of the functional modules according to the running pre-task management module to obtain the dependency relationship between each of the functional modules comprises:
analyzing the link parameters of each functional module through the running pre-task management module to obtain a code compiling dependency graph;
and analyzing the code compiling dependency graph to obtain the dependency relationship of each functional module in the code compiling dependency graph.
4. The distributed compilation method of claim 2, wherein the parsing and fragmenting the overall compilation task according to the pre-task management module and the dependencies to generate concurrent subtask sequences comprises:
analyzing the total compiling task through the running pre-task management module and the dependency relationship to obtain each piece of compiling link task information in the total compiling task, wherein the compiling link task information comprises a dependent compiling link task and an independent compiling link task;
slicing and refining the dependent compiling link task to obtain a compiling task and a link task;
and generating a concurrent subtask sequence based on the independent compiling link task, the compiling task and the link task.
5. The distributed compilation method of claim 1, wherein the causing the task scheduling clients in the respective containers to compile the subtasks distributed by the task scheduling server from the sequence of subtasks comprises:
inputting the subtask sequence into a task scheduling server to run the task scheduling server;
reading a preset assignable task table through the running task scheduling server to obtain a plurality of task scheduling clients in the preset assignable task table;
distributing the subtasks in the subtask sequence to each task scheduling client, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence.
6. The distributed compilation method of claim 5, wherein before the task scheduling server that is run reads a preset assignable task table, the method further comprises:
receiving request information sent by each task scheduling client, and acquiring task scheduling client information carried in the request information;
and adding the task scheduling client information into a preset assignable task table.
7. The distributed compilation method of claim 1 wherein prior to executing the compilation instructions, further comprising:
presetting a plurality of virtual machines, operating a plurality of containers for each virtual machine, and generating a container cluster, wherein each container comprises a task scheduling client;
and presetting a sharing directory so that each container in the container cluster shares the preset sharing directory.
8. A distributed compilation apparatus, comprising:
the execution module is used for executing the compiling instruction and generating a concurrent subtask sequence according to the compiling instruction and a source code packet in a preset shared directory;
a running and distributing module, configured to run the task scheduling server in the preset shared directory according to the subtask sequence, so that the task scheduling client in each container compiles the subtasks distributed by the task scheduling server from the subtask sequence;
and a determining and updating module, configured to update the task state after determining that each task scheduling client has finished compiling its subtasks, and to complete the packing work.
9. A computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the distributed compilation method as recited in any one of claims 1 to 7.
10. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the distributed compilation method of any of claims 1 to 7.
CN202111233730.2A 2021-10-22 2021-10-22 Distributed compiling method, device, equipment and readable storage medium Pending CN113986239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111233730.2A CN113986239A (en) 2021-10-22 2021-10-22 Distributed compiling method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111233730.2A CN113986239A (en) 2021-10-22 2021-10-22 Distributed compiling method, device, equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113986239A 2022-01-28

Family

ID=79740472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111233730.2A Pending CN113986239A (en) 2021-10-22 2021-10-22 Distributed compiling method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113986239A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020124012A1 (en) * 2001-01-25 2002-09-05 Clifford Liem Compiler for multiple processor and distributed memory architectures
WO2010109751A1 (en) * 2009-03-25 2010-09-30 日本電気株式会社 Compiling system, compiling method, and storage medium containing compiling program
KR20110116772A (en) * 2010-04-20 2011-10-26 네무스텍(주) System for source code version management in distributed build system and method for transmitting referential file
US8464237B1 (en) * 2008-02-27 2013-06-11 Google Inc. Method and apparatus for optimizing compilation of a computer program
KR20130073374A (en) * 2011-12-23 2013-07-03 삼성전자주식회사 System, apparatus and method for distributed compilation of applications
CN106095522A (en) * 2016-06-03 2016-11-09 北京奇虎科技有限公司 A kind of method realizing distributed compilation and distributed compilation system
US20160378443A1 (en) * 2015-06-26 2016-12-29 Mingqiu Sun Techniques for distributed operation of secure controllers
WO2017000601A1 (en) * 2015-06-29 2017-01-05 中兴通讯股份有限公司 Software compiling method and apparatus
CN109542446A (en) * 2017-08-14 2019-03-29 中兴通讯股份有限公司 A kind of compiling system, method and compiler
CN109799991A (en) * 2017-11-16 2019-05-24 中标软件有限公司 Compilation of source code method and system based on MapReduce frame distributed computing environment
CN110489126A (en) * 2019-08-08 2019-11-22 腾讯科技(深圳)有限公司 Execution method and apparatus, storage medium and the electronic device of compiler task
CN110968320A (en) * 2018-09-30 2020-04-07 上海登临科技有限公司 Joint compiling method and compiling system for heterogeneous hardware architecture
CN111026397A (en) * 2019-10-22 2020-04-17 烽火通信科技股份有限公司 Rpm packet distributed compiling method and device
CN111596923A (en) * 2020-05-21 2020-08-28 广东三维家信息科技有限公司 Haxe static link library construction method and device and electronic equipment
CN112114816A (en) * 2020-09-25 2020-12-22 统信软件技术有限公司 Distributed compiling system and method
CN112394942A (en) * 2020-11-24 2021-02-23 季明 Distributed software development compiling method and software development platform based on cloud computing
CN112965720A (en) * 2021-02-19 2021-06-15 上海微盟企业发展有限公司 Component compiling method, device, equipment and computer readable storage medium
CN113254022A (en) * 2021-05-14 2021-08-13 北京车和家信息技术有限公司 Distributed compilation system and method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination