CN112799851A - Data processing method and related device in multi-party security computing - Google Patents


Info

Publication number
CN112799851A
CN112799851A (application CN202110339661.7A)
Authority
CN
China
Prior art keywords
computing
instances
parallel
compute
parallel computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110339661.7A
Other languages
Chinese (zh)
Inventor
李浒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huakong Tsingjiao Information Technology Beijing Co Ltd
Original Assignee
Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huakong Tsingjiao Information Technology Beijing Co Ltd filed Critical Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority to CN202110339661.7A priority Critical patent/CN112799851A/en
Publication of CN112799851A publication Critical patent/CN112799851A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6209: Protecting access to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself

Abstract

The present disclosure provides a data processing method and related apparatus for multi-party secure computing. The method comprises the following steps: starting N parallel computing instances in the multi-party secure computation to read corresponding data, where the number N of started parallel computing instances is greater than the number M of arithmetic core resources, and M and N are positive integers; selecting, from the N parallel computing instances, M parallel computing instances that have read their corresponding data, and allocating them respectively to corresponding arithmetic core resources for execution; and, whenever an arithmetic core resource finishes executing its allocated computing instance, selecting a computing instance from the not-yet-executed parallel computing instances and allocating it to that arithmetic core resource for execution. The disclosed embodiments improve the utilization of computing resources in multi-party secure computing and improve operating efficiency.

Description

Data processing method and related device in multi-party security computing
Technical Field
The present disclosure relates to the field of multi-party secure computing, and in particular, to a data processing method and related apparatus in multi-party secure computing.
Background
Secure multi-party computation (MPC) is a technique for securely computing an agreed-upon function without a trusted third party. It is the cryptographic basis on which many applications, such as electronic elections, threshold signatures, and electronic auctions, are built.
Multi-party secure computation involves multiple parties, such as data source nodes, computing nodes, a computation request node, and a result receiving node, so data must be transmitted across a network, in most cases over a wide area network, which makes transmission time-consuming. Meanwhile, the ciphertext computation tasks of multi-party secure computation also consume computing resources such as CPU time, and each computing instance must be guaranteed sufficient resources during computation. Computing resources must, on the one hand, satisfy the demands of the ciphertext computation tasks and, on the other hand, coordinate with data transmission; these two demands are hard to balance well, which reduces resource utilization.
Disclosure of Invention
One purpose of the present disclosure is to improve the utilization of computing resources in multi-party secure computing and to improve operating efficiency.
According to an aspect of the present disclosure, there is provided a data processing method in multi-party secure computing, including:
starting N parallel computing instances in the multi-party secure computation to read corresponding data, where the number N of started parallel computing instances is greater than the number M of arithmetic core resources, and M and N are positive integers;
selecting, from the N parallel computing instances, M parallel computing instances that have read their corresponding data, and allocating them respectively to corresponding arithmetic core resources for execution;
whenever an arithmetic core resource finishes executing its allocated computing instance, selecting a computing instance from the not-yet-executed parallel computing instances and allocating it to that arithmetic core resource for execution.
Optionally, while the M parallel computing instances execute, those of the N parallel computing instances that have not yet read their corresponding data continue reading.
Optionally, the selected M parallel computing instances are the M parallel computing instances that finished reading their corresponding data first.
Optionally, the multi-party secure computation is performed by a plurality of computing nodes, the N parallel computing instances are distributed across the plurality of computing nodes, and the M parallel computing instances are selected, from among the parallel computing instances that have read their corresponding data, according to the principle of balancing the computing nodes on which the selected instances reside.
Optionally, a parallel computing instance sends an execution request after reading its corresponding data, the execution request carries the identifier of the computing node on which the instance resides, and computing node balancing is performed based on that identifier.
Optionally, whenever an arithmetic core resource finishes executing its allocated computing instance, selecting a computing instance from the not-yet-executed parallel computing instances and allocating it to that arithmetic core resource for execution comprises:
whenever an arithmetic core resource finishes executing its allocated computing instance, delaying for a preset waiting period;
and, after the waiting period has elapsed, selecting a computing instance from the not-yet-executed parallel computing instances, based on the principle of computing node balancing, and allocating it to the arithmetic core resource for execution.
Optionally, the preset waiting period is fixed.
Optionally, the preset waiting period is set based on the distribution of executing parallel computing instances across the computing nodes at the time the arithmetic core resource finishes executing its allocated computing instance.
Optionally, the multi-party secure computation is performed by a plurality of computing nodes, the N parallel computing instances are distributed across the plurality of computing nodes, and the M parallel computing instances are selected at random from among the parallel computing instances that have read their corresponding data.
Optionally, the corresponding data is ciphertext data from a data source node.
According to an aspect of the present disclosure, there is provided a scheduler including:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method as described above.
According to an aspect of the present disclosure, there is provided a computing platform comprising:
a scheduler as described above;
the N parallel computing instances and the M arithmetic core resources are respectively distributed on the plurality of computing nodes.
According to an aspect of the present disclosure, there is provided a computer readable program medium storing computer readable instructions which, when executed by a processor, cause a computer to perform a method according to the above.
In the disclosed embodiments, the N parallel computing instances in the multi-party secure computation are started to read their corresponding data before they are allocated to arithmetic core resources to run. In the prior art, by contrast, a computing instance reads its corresponding data only after it has been allocated to an arithmetic core resource, so the arithmetic core resource sits idle while the data is transmitted over the network. The disclosed embodiments avoid this drawback and improve the utilization of arithmetic core resources and the efficiency of resource allocation. With a large number of parallel computing instances or tasks, only the first round of instances must wait for network transmission or reading; subsequent transmission or reading proceeds in parallel with the execution of earlier computing instances on the arithmetic core resources. The transmission or reading time of the corresponding data is thus largely hidden, the operating efficiency of the whole multi-party secure computation is improved, and the computation overhead and reading overhead of the arithmetic core resources are well balanced.
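To make the benefit concrete, the following sketch compares total makespans under the two schemes. The function names, the uniform per-instance read and execution times, and the assumption that a read never outlasts an execution round are illustrative, not taken from the patent.

```python
import math

def naive_makespan(n, m, t_read, t_exec):
    """Prior art: an instance reads its data only after a core is
    assigned, so every round of M instances pays read + execute."""
    rounds = math.ceil(n / m)
    return rounds * (t_read + t_exec)

def pipelined_makespan(n, m, t_read, t_exec):
    """Disclosed method: all N reads start up front; only the first
    round waits for its read, later reads hide behind execution
    (assumes t_read <= t_exec, so reads always finish in time)."""
    rounds = math.ceil(n / m)
    return t_read + rounds * t_exec

# 8 instances, 2 cores, 3 s per read, 5 s per execution:
print(naive_makespan(8, 2, 3, 5))      # 4 rounds * 8 s = 32
print(pipelined_makespan(8, 2, 3, 5))  # 3 s + 4 rounds * 5 s = 23
```

With these illustrative numbers, hiding the reads behind execution saves 9 of the 32 seconds; the saving grows with the number of rounds.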
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates an architectural diagram of a multi-party secure computing system, according to one embodiment of the present disclosure;
FIG. 2 illustrates a block diagram of a computing platform, according to one embodiment of the present disclosure;
FIG. 3 shows a block diagram of a scheduler according to one embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of a compute node, according to one embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a method of data processing in multi-party security computing, according to one embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
FIG. 1 illustrates an architectural diagram of a multi-party secure computing system, according to one embodiment of the present disclosure. The system includes a control node 110, a computing platform 120, an algorithm storage node 130, a plurality of data source nodes 140, a computation request node 150, and a result receiving node 160. Computing platform 120 includes a scheduler 121 and a plurality of computing nodes 122.
The data source node 140 is a node that owns the data needed for the multi-party secure computation; it holds the required plaintext data. When the computation is performed, the data source node encrypts its plaintext data with its key into ciphertext data, which is sent to a compute node 122 in the computing platform 120.
A compute node 122 is a node that performs the actual multi-party secure computation. For security reasons, multiple compute nodes 122 are typically required to cooperate on the computation, with each compute node 122 processing a portion of the data.
The scheduler 121 is a node that allocates processes to the respective computation nodes 122. The disclosed embodiments are implemented primarily in the scheduler 121.
The control node 110 controls the orderly transfer of information and data in the multi-party secure computation; the other nodes in the system operate under its control.
The computation requesting node 150 initiates the multi-party secure computation request. The request may include, for each step of the ciphertext operation, the identifier of the data source node 140 that owns the required ciphertext data, and it indicates the computation to be performed. The computation requesting node 150 sends the request to the control node 110, which directs the computing platform 120 to perform the specified computation based on the data source node 140 identifiers. During computation, the scheduler 121 decomposes the computation into processes, hands the processes to arithmetic core resources inside the compute nodes 122 for execution, and notifies each process of the identifier of its corresponding data source node 140. Before a process runs on an arithmetic core resource, it uses that identifier to acquire, from the corresponding data source node 140, the ciphertext data produced by encrypting the plaintext data with the data source node's key, and then performs the corresponding operation to obtain a ciphertext result.
The result receiving node 160 is a node that receives the result of the ciphertext operation from the computing platform 120 and converts it back into plaintext.
The computation requesting node 150, the result receiving node 160, and each data source node 140 may be respectively embodied in various forms of desktop computers, laptop computers, mobile terminals, car terminals, PDAs, dedicated terminals, and the like. The control node 110, the scheduler 121, and the computing nodes 122 may be implemented in various forms such as a single server, a part (e.g., a virtual machine) of a single server, a cluster including a plurality of servers, and a cluster including a part (e.g., a virtual machine) of each of a plurality of servers, or may be implemented as a cloud. It is understood that the compute request node 150 and the result receiving node 160 may be the same node.
According to one embodiment of the present disclosure, a data processing method in multi-party secure computing is provided. The method is executed by the scheduler 121 of FIG. 1. As shown in FIG. 5, the method includes:
step 510, starting N parallel computing instances in the multi-party secure computation to read corresponding data, where the number N of started parallel computing instances is greater than the number M of arithmetic core resources, and M and N are positive integers;
step 520, selecting, from the N parallel computing instances, M parallel computing instances that have read their corresponding data, and allocating them respectively to corresponding arithmetic core resources for execution;
step 530, whenever an arithmetic core resource finishes executing its allocated computing instance, selecting a computing instance from the not-yet-executed parallel computing instances and allocating it to that arithmetic core resource for execution.
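The three steps can be sketched as a small single-machine simulation. All names are illustrative rather than from the patent: `read_done` maps each instance to the time its data finishes loading (all reads start at t=0, as in step 510), and the M cores take the earliest-ready instances (steps 520 and 530).

```python
import heapq

def simulate(read_done, exec_time, m):
    """Simulate steps 510-530 with M cores.

    read_done: dict instance -> time its data finishes loading.
    exec_time: uniform execution time per instance (an assumption).
    Returns dict instance -> time it starts on a core.
    """
    order = sorted(read_done, key=read_done.get)
    cores = []   # min-heap of times at which each busy core frees up
    start = {}
    # step 520: the first M data-ready instances start immediately
    for inst in order[:m]:
        start[inst] = read_done[inst]
        heapq.heappush(cores, start[inst] + exec_time)
    # step 530: each later instance starts when a core frees up and
    # its data is ready, whichever is later
    for inst in order[m:]:
        free_at = heapq.heappop(cores)
        start[inst] = max(free_at, read_done[inst])
        heapq.heappush(cores, start[inst] + exec_time)
    return start

# 4 instances, 2 cores, 10 s executions; reads finish at t=1..4
print(simulate({"a": 1, "b": 2, "c": 3, "d": 4}, 10, 2))
```

Instances "c" and "d" have long since finished reading by the time a core frees up, so their network reads cost no core time at all, which is the effect the embodiment aims for.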
The above steps are described in detail below.
In step 510, a compute instance 210 is a process into which the computation is decomposed. These processes are produced by the scheduler 121 and stored on the compute nodes 122, as shown in FIG. 2. Each compute node 122 is responsible for managing several processes, i.e., compute instances 210, and all the compute instances 210 together make up the overall computation. A stored compute instance 210 is not executed until an arithmetic core resource 220 is allocated to it and it is loaded onto that arithmetic core resource 220.
An arithmetic core resource 220 is a hardware unit of a processing unit (e.g., a CPU) that can independently execute a decomposed process or compute instance. Generally, each process runs independently on a core of the processing unit, so a processing unit can execute as many processes or compute instances in parallel as it has cores; in this case, an arithmetic core resource is a core. As shown in FIG. 2, each compute node 122 may have one or more arithmetic core resources 220, e.g., cores. In the disclosed embodiments, the arithmetic core resources 220 of all the compute nodes 122 are assumed to total M, and all of them are scheduled by the scheduler 121.
In one embodiment, step 510 may comprise: decomposing the multi-party secure computation into processes arranged in multiple hierarchy levels, and launching N processes of the same level as the N parallel computing instances 210. The computation request sent by the computation requesting node 150 indicates the required computation, and the scheduler 121 may decompose it into processes at multiple hierarchy levels, where a process at an upper level may depend on the result of a process at a lower level and must therefore wait until the lower-level process completes. Processes at the same level can be executed in parallel, so N processes at the same level may be launched as the N parallel computing instances 210. The N processes may be all the processes in the level or only some of them; that is, step 510 may initially launch part of the processes in the level as parallel computing instances, with the remaining processes in the level launched successively afterwards. As shown in FIG. 2, the N launched parallel computing instances 210 are distributed across multiple compute nodes 122.
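The decomposition into hierarchy levels can be sketched as a topological leveling, assuming a hypothetical `deps` mapping from each process to the processes whose results it depends on; processes assigned the same level are the candidates for parallel launch.

```python
from collections import defaultdict

def parallel_levels(deps):
    """Group processes into hierarchy levels: a process sits one level
    above the deepest process it depends on, and processes at the same
    level can run as parallel compute instances.  `deps` maps process
    name -> list of prerequisite process names (names are illustrative)."""
    memo = {}
    def depth(task):
        if task not in memo:
            memo[task] = 1 + max((depth(d) for d in deps.get(task, ())),
                                 default=-1)
        return memo[task]
    tasks = set(deps) | {d for ds in deps.values() for d in ds}
    tiers = defaultdict(list)
    for t in tasks:
        tiers[depth(t)].append(t)
    return {lvl: sorted(ts) for lvl, ts in tiers.items()}

# e.g. a sum that depends on two independent products:
print(parallel_levels({"sum": ["prod1", "prod2"]}))
```

Here "prod1" and "prod2" land in level 0 and can be launched together, while "sum" waits in level 1 until they complete, mirroring the upper-level/lower-level dependency described above.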
After the N parallel computing instances 210 of the multi-party secure computation are started, they can begin reading the corresponding data they need. That data may be ciphertext data read from the corresponding data source node 140 according to the identifier of the data source node 140 in the multi-party secure computation request; the data source node 140 produces the ciphertext by encrypting its plaintext data with its key.
Since the number N of started parallel computing instances is greater than the number M of arithmetic core resources, the N parallel computing instances cannot all run simultaneously. Therefore, the M parallel computing instances that have finished reading their required data from the corresponding data source nodes 140 can run first on the M arithmetic core resources, while the remaining parallel computing instances continue reading their data as the first M run. Part of the data reading thus proceeds concurrently with computation, improving the utilization of the arithmetic core resources.
Then, in step 520, M parallel computing instances that have read their corresponding data are selected from the N parallel computing instances and allocated respectively to corresponding arithmetic core resources for execution.
In one embodiment, the selected M parallel computing instances 210 are the M instances that finished reading their corresponding data first. In this case, each parallel computing instance 210 may send an execution request to the scheduler 121 after reading its corresponding data from the data source node 140, i.e., request the scheduler to allocate an arithmetic core resource 220 for execution. The scheduler 121 may take the M parallel computing instances 210 whose execution requests arrived first as the M instances to allocate to the M arithmetic core resources 220. The advantage of this embodiment is that the M arithmetic core resources 220 begin running parallel computing instances 210 as early as possible, and as many of the remaining instances as possible can use that running period to read their data, improving the cooperation between data reading and instance execution.
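The first-come selection can be sketched as follows, with `request_times` a hypothetical map from instance id to the arrival time of its execution request at the scheduler:

```python
def first_m_ready(request_times, m):
    """Select the M instances whose execution requests arrived first.
    request_times: dict instance id -> arrival time of its execution
    request (names and shapes are illustrative)."""
    by_arrival = sorted(request_times, key=request_times.get)
    return by_arrival[:m]

print(first_m_ready({"a": 5.0, "b": 1.2, "c": 3.1, "d": 9.4}, 2))  # ['b', 'c']
```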
In another embodiment, the selected M parallel computing instances 210 need not be the M instances that finished reading their corresponding data first. By the time the M instances are selected, more than M, or even all N, parallel computing instances 210 may have read their data. The advantage is that the M instances can be chosen from a larger pool, leaving room for selection criteria, such as balancing the compute nodes 122 on which the instances reside. Although the selection happens later than in the previous embodiment, which affects overall speed, selecting according to such principles can let the M arithmetic core resources 220 execute the M instances more smoothly and efficiently. Under the node-balancing principle, the M parallel computing instances are chosen, from among the more-than-M instances that have read their data, so that the compute nodes on which they reside are balanced. This makes the distribution of the M instances across the compute nodes 122 more even and reduces the probability of transferring a compute instance 210 from one compute node 122 to an arithmetic core resource 220 on another (such transfers also cost the compute nodes time and computational effort), thereby reducing the overall time for the M arithmetic core resources 220 to cooperatively execute the M parallel computing instances 210.
As described above, in one embodiment, a parallel computing instance 210 sends an execution request to the scheduler 121 after reading its corresponding data. The execution request may carry the identifier of the compute node 122 on which the instance resides, so that node balancing can be performed based on that identifier. In a particular embodiment, upon receiving execution requests from M parallel computing instances 210, the scheduler 121 may determine whether the compute nodes 122 on which they reside cover all the compute nodes 122 of the computing platform 120, or a predetermined number of them. If so, those M instances are selected and allocated to the M arithmetic core resources 220. If not, the scheduler waits for further execution requests until the covered compute nodes 122 reach all, or the predetermined number of, the computing platform's compute nodes 122; the M instances can then be chosen from among the more-than-M instances that have read their data so that the nodes on which they reside achieve the required coverage.
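One hedged way to realize the node-balancing selection is a round-robin pick across nodes, using the node identifier carried in each execution request. The data shapes and names below are assumptions for illustration, not the patent's interface:

```python
from collections import defaultdict

def pick_balanced(requests, m):
    """requests: list of (instance_id, node_id) pairs in arrival order,
    taken from execution requests that carry the node identifier.
    Picks M instances round-robin across nodes so the selection spreads
    over as many compute nodes as possible."""
    by_node = defaultdict(list)
    for inst, node in requests:
        by_node[node].append(inst)
    queues = [by_node[n] for n in sorted(by_node)]  # deterministic order
    picked = []
    while len(picked) < m and any(queues):
        for q in queues:
            if q and len(picked) < m:
                picked.append(q.pop(0))
    return picked

# two ready instances on n1, one each on n2 and n3; pick 3 balanced:
print(pick_balanced([("a", "n1"), ("b", "n1"), ("c", "n2"), ("d", "n3")], 3))
```

The round-robin pass takes one instance per node before returning to any node, so the three selected instances cover all three nodes instead of starting with both instances on n1.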
When the M parallel computing instances 210 are allocated to arithmetic core resources 220 for execution, each may preferentially be allocated to an arithmetic core resource 220 on the compute node 122 where it resides, which reduces the transmission overhead of moving the compute instance 210 to another compute node 122.
In addition, in one embodiment, the M parallel computing instances are selected at random from among the parallel computing instances that have read their corresponding data. This embodiment is simple, easy to implement, and effective; because of the randomness, it achieves good allocation efficiency in most cases.
In step 530, whenever an arithmetic core resource 220 finishes executing its allocated compute instance 210, a compute instance 210 is selected from the not-yet-executed parallel compute instances 210 and allocated to that arithmetic core resource 220 for execution.
By the time the allocated compute instances 210 finish executing on the arithmetic core resources 220, the remaining parallel compute instances 210 have already read their corresponding data, so they need not wait for data; this improves execution parallelism and efficiency. At this point a compute instance 210 must be selected from the not-yet-executed instances and allocated to the freed arithmetic core resource 220. The selection may again follow the principle of balancing the compute nodes 122 on which the instances reside. In one embodiment, an unexecuted parallel compute instance 210 may be selected from the compute node 122 with the fewest parallel compute instances 210 currently running on its arithmetic core resources 220. For example, suppose there are 4 compute nodes 122, nodes 1-4, on which 3, 2, 5, and 4 parallel compute instances respectively are executing on arithmetic core resources 220. To balance the compute nodes 122, an unexecuted compute instance 210 on compute node 2 should be selected and allocated to the idle arithmetic core resource 220. The node-balancing principle evens out the computational burden across the compute nodes 122 and thereby improves the cooperative efficiency of running the parallel computing instances.
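The least-loaded-node rule in the example above can be sketched in one line; `running_per_node` is an assumed map from each compute node to the number of instances currently executing on its arithmetic core resources:

```python
def least_loaded_node(running_per_node):
    """Return the compute node with the fewest parallel compute
    instances currently executing on its arithmetic core resources;
    the next instance to dispatch is taken from that node's
    unexecuted pool."""
    return min(running_per_node, key=running_per_node.get)

# the example from the text: nodes 1-4 run 3, 2, 5 and 4 instances
print(least_loaded_node({"node1": 3, "node2": 2, "node3": 5, "node4": 4}))
```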
Additionally, in one embodiment, in this step, a preset wait period is delayed once an allocated compute instance 210 has been executed by a compute core resource 220. Then, from the parallel computing instances 210 that are not executed after delaying the waiting time, the computing instances 210 are selected to be allocated to the computing core resources 220 for execution based on the principle of the balance of the located computing nodes. The advantage of delaying the preset wait time is that after delaying the preset wait time, the balance of the distributed computing nodes 122 is considered according to the computing node balance principle, which is often more objective. For example, a computing core resource 220 on the computing node 3 has just executed a computing instance 210, and a new computing instance 210 needs to be found to be allocated to the idle computing core resource 220 by considering the distribution of the parallel computing instances on the computing nodes 1-4. At this time, on compute node 1, there are 3 parallel compute instances executing on compute core resources 220; on compute node 2, there are 2 parallel compute instances executing on compute core resources 220; on compute node 3, 5 parallel compute instances are executing on compute core resource 220; on compute node 4, there are 4 parallel compute instances executing on compute core resources 220. It seems more appropriate to find a further instance of computation on compute node 2 instead, according to the compute node 122 balancing principle. However, perhaps a few seconds later, on compute node 3, 4 more compute core resources 220 run the corresponding parallel compute instances. Only 1 compute instance remains on compute node 3 to run, in which case it is more appropriate to instead find one compute instance on compute node 3 as a replacement. Therefore, delaying the wait period is advantageous for more objectively balancing the load.
In one embodiment, the preset waiting period is fixed. In another embodiment, the preset waiting period may be set based on the distribution of the parallel computing instances being executed at each computing node at the moment the allocated computing instance is executed by the computing core resource. Specifically, if this distribution differs greatly between the computing nodes, the preset waiting period may be set shorter, because the confidence that the computing instance selected at that moment is the right choice is already high. For example, if the numbers of computing instances being executed on computing nodes 1-4 are 1, 6, 4 and 9 respectively, the lowest count (1, on computing node 1) differs markedly from the second-lowest count (4), so selecting an unexecuted computing instance on computing node 1 is objectively sound, and the waiting period may be set very short. If, on the other hand, the numbers of computing instances being executed on computing nodes 1-4 are close to one another, for example 2 and 3, the waiting period may be set longer, in order to further observe how the differences between them change.
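One possible way to derive the waiting period from the spread of the distribution is sketched below. The patent does not specify a formula; the rule `base / (1 + spread)`, the floor value, and all names here are assumptions chosen only to illustrate "large spread, short wait; small spread, longer wait":

```python
def adaptive_wait(per_node_running, base=1.0, floor=0.05):
    """Set the preset waiting period from the spread of the distribution of
    executing parallel computing instances across the computing nodes.
    A large spread means the lightest-loaded node is already a
    high-confidence choice, so the wait can be short; a small spread means
    the ranking may soon change, so wait longer to observe."""
    spread = max(per_node_running) - min(per_node_running)
    return max(floor, base / (1 + spread))

# The 1, 6, 4, 9 distribution from the text: large spread, short wait.
short = adaptive_wait([1, 6, 4, 9])
# Near-uniform distribution (e.g. alternating 2 and 3): longer wait.
longer = adaptive_wait([2, 3, 2, 3])
assert short < longer
```

Any monotone-decreasing function of the spread would serve the same purpose; the reciprocal form above is merely a compact choice.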
As shown in fig. 3, the structure of the scheduler 121 may include, but is not limited to: at least one processing unit 310, at least one storage unit 320, and a bus 330 that couples various system components including the storage unit 320 and the processing unit 310.
The storage unit 320 stores program code, which can be executed by the processing unit 310, so that the processing unit 310 performs the steps of the exemplary method embodiments described above in this specification. For example, the processing unit 310 may perform the various steps shown in fig. 5.
The storage unit 320 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 3201 and/or a cache memory unit 3202, and may further include a read-only memory unit (ROM) 3203. The storage unit 320 may also include a program/utility 3204 having a set (at least one) of program modules 3205, such program modules 3205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 330 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor bus or local bus using any of a variety of bus architectures.
The scheduler 121 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the scheduler 121, and/or with any device (e.g., a router, a modem, etc.) that enables the scheduler 121 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 350. Moreover, the scheduler 121 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 360. As shown in FIG. 3, the network adapter 360 communicates with the other modules of the scheduler 121 via the bus 330. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the scheduler 121, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
As shown in FIG. 4, the structure of the computing node 122 may include, but is not limited to: at least one processing unit 410, at least one storage unit 420, and a bus 430 that couples various system components including the storage unit 420 and the processing unit 410. The processing unit 410 includes a plurality of computing core resources 220, such as processor cores.
The storage unit 420 stores program code, including programs/utilities including but not limited to: an operating system, one or more application programs, other program modules, and program data, among which are the parallel computing instances 210 to which embodiments of the present disclosure relate.
The bus 430 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor bus or local bus using any of a variety of bus architectures.
The computing node 122 may also communicate with one or more external devices 710 (e.g., pointing devices, bluetooth devices, etc.). Such communication may occur via input/output (I/O) interfaces 450. Also, the computing node 122 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 460. As shown in FIG. 4, network adapter 460 communicates with the other modules of compute node 122 via bus 430.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer program medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A data processing method in multi-party secure computing, comprising:
starting N parallel computing instances in the multi-party secure computing to read corresponding data, wherein the number N of the started parallel computing instances is greater than the number M of computing core resources, and M and N are positive integers;
selecting, from the N parallel computing instances, M parallel computing instances that have read the corresponding data, and allocating them respectively to corresponding computing core resources for execution;
once an allocated computing instance has been executed by a computing core resource, selecting a computing instance from the unexecuted parallel computing instances and allocating it to the computing core resource for execution.
2. The method of claim 1, wherein, while the M parallel computing instances are executed, those of the N parallel computing instances that have not yet read the corresponding data continue reading.
3. The method of claim 1, wherein the selected M parallel computing instances are the M parallel computing instances that finish reading the corresponding data first.
4. The method according to claim 1, wherein the multi-party secure computation is performed by a plurality of computing nodes, the N parallel computing instances are distributed over the plurality of computing nodes, and the M parallel computing instances are selected, from among the parallel computing instances that have read the corresponding data, according to a principle of balancing the computing nodes on which the selected parallel computing instances are located.
5. The method of claim 4, wherein each parallel computing instance sends an execution request after reading the corresponding data, the execution request includes an identifier of the computing node on which the instance is located, and the computing node balancing is performed based on the computing node identifiers.
6. The method of claim 1, wherein selecting a computing instance from the unexecuted parallel computing instances and allocating it to the computing core resource for execution once an allocated computing instance has been executed by the computing core resource comprises:
once the allocated computing instance has been executed by the computing core resource, delaying a preset waiting period; and
after the waiting period has elapsed, selecting a computing instance from the unexecuted parallel computing instances, based on the principle of balancing the computing nodes, to be allocated to the computing core resource for execution.
7. The method according to claim 6, wherein the preset waiting period is set based on the distribution, at each computing node, of the parallel computing instances being executed when the allocated computing instance is executed by the computing core resource.
8. A scheduler, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-7.
9. A computing platform, comprising:
the scheduler of claim 8;
a plurality of computing nodes, wherein the N parallel computing instances and the M computing core resources are distributed on the plurality of computing nodes.
10. A computer-readable program medium storing computer-readable instructions which, when executed by a processor, cause a computer to perform the method of any one of claims 1-7.
CN202110339661.7A 2021-03-30 2021-03-30 Data processing method and related device in multi-party security computing Pending CN112799851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110339661.7A CN112799851A (en) 2021-03-30 2021-03-30 Data processing method and related device in multi-party security computing


Publications (1)

Publication Number Publication Date
CN112799851A true CN112799851A (en) 2021-05-14

Family

ID=75815996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110339661.7A Pending CN112799851A (en) 2021-03-30 2021-03-30 Data processing method and related device in multi-party security computing

Country Status (1)

Country Link
CN (1) CN112799851A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420338A (en) * 2021-05-21 2021-09-21 华控清交信息科技(北京)有限公司 Data processing method and device and data processing device
CN113992439A (en) * 2021-12-27 2022-01-28 支付宝(杭州)信息技术有限公司 Distributed multi-party secure computing system, method and node

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130090907A1 (en) * 2010-06-29 2013-04-11 Serguei Maliassov Method and System For Parallel Simulation Models
CN104239133A (en) * 2014-09-26 2014-12-24 北京国双科技有限公司 Log processing method, device and server
US20170109204A1 (en) * 2015-10-15 2017-04-20 International Business Machines Corporation Cpu resource management in computer cluster
US10020979B1 (en) * 2014-03-25 2018-07-10 A10 Networks, Inc. Allocating resources in multi-core computing environments
CN110795241A (en) * 2019-10-18 2020-02-14 北京并行科技股份有限公司 Job scheduling management method, scheduling center and system
CN110795385A (en) * 2019-10-29 2020-02-14 天津飞腾信息技术有限公司 Trusted core and computing core resource allocation method and device of system on chip
US20210051008A1 (en) * 2018-08-14 2021-02-18 Advanced New Technologies Co., Ltd. Multi-party security computing method and apparatus, and electronic device


Similar Documents

Publication Publication Date Title
CN108537543B (en) Parallel processing method, device, equipment and storage medium for blockchain data
US10262390B1 (en) Managing access to a resource pool of graphics processing units under fine grain control
US10394477B2 (en) Method and system for memory allocation in a disaggregated memory architecture
Li et al. Feedback dynamic algorithms for preemptable job scheduling in cloud systems
US10109030B1 (en) Queue-based GPU virtualization and management system
US11150951B2 (en) Releasable resource based preemptive scheduling
US20120005345A1 (en) Optimized resource management for map/reduce computing
US10423442B2 (en) Processing jobs using task dependencies
CN111406250A (en) Provisioning using prefetched data in a serverless computing environment
US11201836B2 (en) Method and device for managing stateful application on server
US20140237151A1 (en) Determining a virtual interrupt source number from a physical interrupt source number
CN110187958B (en) Task processing method, device, system, equipment and storage medium
US9471387B2 (en) Scheduling in job execution
US20190294466A1 (en) Network-accessible volume creation and leasing
CN112799851A (en) Data processing method and related device in multi-party security computing
US20220405114A1 (en) Method, device and computer program product for resource scheduling
CN113821308A (en) System on chip, virtual machine task processing method and device, and storage medium
KR101640231B1 (en) Cloud Driving Method for supporting auto-scaled Hadoop Distributed Parallel Processing System
US9253056B2 (en) System to enhance performance, throughput and reliability of an existing cloud offering
US10956228B2 (en) Task management using a virtual node
Gouasmi et al. Cost-efficient distributed MapReduce job scheduling across cloud federation
CN112860421A (en) Method, apparatus and computer program product for job processing
CN109343958B (en) Computing resource allocation method and device, electronic equipment and storage medium
US9176910B2 (en) Sending a next request to a resource before a completion interrupt for a previous request
CN112860422A (en) Method, apparatus and computer program product for job processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination