CN110347637A - ASIC array, data processing board, and block mining method and apparatus - Google Patents

ASIC array, data processing board, and block mining method and apparatus

Info

Publication number
CN110347637A
CN110347637A
Authority
CN
China
Prior art keywords
block
asic
array
random number
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811503862.0A
Other languages
Chinese (zh)
Inventor
张楠赓
徐英韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canaan Creative Co Ltd
Original Assignee
Canaan Creative Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canaan Creative Co Ltd filed Critical Canaan Creative Co Ltd
Publication of CN110347637A
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/04: Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042: Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423: Input/output
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/16: Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163: Interprocessor communication
    • G06F15/173: Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356: Indirect interconnection networks
    • G06F15/17368: Indirect interconnection networks non hierarchical topologies
    • G06F15/17381: Two dimensional, e.g. mesh, torus
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807: System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/7825: Globally asynchronous, locally synchronous, e.g. network on chip
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7828: Architectures of general purpose stored program computers comprising a single central processing unit without memory
    • G06F15/7832: Architectures of general purpose stored program computers comprising a single central processing unit without memory on one IC chip (single chip microprocessors)
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7867: Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/25: Pc structure of the system
    • G05B2219/25257: Microcontroller

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multi Processors (AREA)

Abstract

The invention relates to an ASIC array for block mining, comprising a plurality of ASIC chips disposed on a PCB, each ASIC chip comprising a plurality of dies, each die comprising a network-on-chip, wherein the network-on-chip comprises a plurality of compute nodes and each compute node comprises a memory for storing, in distributed fashion, a subset of the random data set used in block mining. By allocating memory to the compute nodes, the ASIC array can store the random data set required for block mining through the hierarchy of compute node, network-on-chip, ASIC chip and ASIC array; that is, by assigning subsets to the memory of each compute node, the demand that the random data set places on large-capacity memory during the hash computation of block mining is effectively reduced.

Description

ASIC array, data processing board, and block mining method and apparatus
Technical field
The present invention relates to the field of blockchain technology, and more particularly to a method and apparatus for block mining, and to an ASIC array and data processing board employing the block mining method.
Background art
Ethereum is a blockchain platform of a new generation that allows anyone to build and use decentralized applications running on blockchain technology. The core of Ethereum is a peer-to-peer (P2P) network; the Ethereum blockchain database is maintained and updated by the many nodes connected to the network.
Fig. 1 is a schematic diagram of prior-art Ethereum block mining. As shown in Fig. 1, transaction fees are collected by nodes, and the miners 12 are the nodes in the Ethereum network 10 that collect, propagate, confirm and execute transactions. Miners package transactions into blocks and compete with one another so that their own block is appended to the blockchain next; this process is called mining. The pool server 11 subcontracts the mining work on a new block to the miners in the mining pool; the parameters given to each miner include the block header hash, the nonce range the miner is expected to search, and the difficulty.
In a blockchain network, miners successfully "mine" a block by solving a computationally hard mathematical task, which is referred to as "proof of work". A computational problem is a good candidate for proof of work if solving it algorithmically requires orders of magnitude more resources than verifying a solution. To prevent the centralization that specialized hardware (such as application-specific integrated circuits) has already caused in other blockchain networks, Ethereum chose a problem that consumes a large amount of memory. If the problem requires both memory and CPU, the ideal hardware is an ordinary computer. This gives Ethereum's proof of work an ASIC-resistant character; compared with blockchains whose mining is controlled by specialized hardware, Ethereum can achieve a more decentralized and secure distribution.
Solving the mathematical problem is uncorrelated with the CPU performance of the mining device and positively correlated with its memory capacity and memory bandwidth. This means that mining devices deployed at large scale by sharing memory cannot achieve linear or super-linear growth in mining efficiency.
The Chinese patent application "An FPGA parallel array module and calculation method thereof", publication number CN106843080A, discloses an FPGA parallel array module comprising an external communication layer, a task slicing layer and a computation layer arranged in sequence and connected to a host computer; the external communication layer, task slicing layer and computation layer are each provided with a power module and a heat dissipation module. The external communication layer communicates with the host computer and is provided with an ARM master control module for invoking custom software functions; the ARM master control module is provided with an interface module for realizing the Linux software flow, the external communication layer is connected to the host computer through the interface module, and a cracking module is arranged between the ARM master control module and the interface module. The cracking module encapsulates and organizes the FPGA array and schedules FPGA resources to perform key computation and find the correct key. The task slicing layer slices tasks and schedules them in a balanced way and is provided with a plurality of FPGA secondary master control modules for complex computation; the computation layer is provided with a plurality of ASIC modules for simple computation, and each FPGA secondary master control module is connected to a plurality of ASIC modules.
However, block mining in current Ethereum demands large-capacity memory. How to reduce the demand on mining device memory while keeping the cost of the mining device down is a key focus of current industry research.
Summary of the invention
To address the need for large-capacity memory in Ethereum block mining, the invention discloses an ASIC array, a data processing board using the ASIC array, and a block mining device using the data processing board, together with a corresponding block mining method.
Specifically, the invention discloses an ASIC array for block mining, comprising: a plurality of ASIC chips disposed on a PCB, each ASIC chip comprising a plurality of dies, each die comprising a network-on-chip, a communication module and a function module, wherein the network-on-chip comprises a plurality of compute nodes and each compute node comprises a memory for storing, in distributed fashion, a subset of the random data set used in block mining.
Further, the compute nodes are distributed in the form of an M × N array, wherein M and N are positive integers, M ≥ 2 and N ≥ 2; each ASIC chip comprises 2 dies, 4 × 4 ASIC chips are arranged on one side of the PCB, and 4 × 4 ASIC chips are arranged on the other side of the PCB.
Preferably, the compute nodes are distributed in the form of a 6 × 12 array.
The invention also discloses a block mining method that performs block mining with the above ASIC array, comprising: step 1, obtaining the random data set used for mining the current block; step 2, dividing the random data set into a plurality of subsets and distributing them for storage among the compute nodes of the ASIC array; step 3, arbitrarily choosing a random number from the subset held by some compute node and performing a first round of address computation to obtain a target address, obtaining the corresponding random number in the subset held by the compute node corresponding to the target address as the input to the next round of address computation, and taking the random number obtained after a preset number of rounds of address computation as the target random number; step 4, hashing the target random number to obtain a target value, and if the target value is less than or equal to the difficulty threshold, taking the target random number as the block nonce of the current block and the target value as the block hash of the current block, otherwise discarding the target random number and re-executing step 3; step 5, writing the block nonce and the block hash into the current block and broadcasting the current block to the blockchain network; step 6, when a verifier receives the broadcast current block, verifying the legitimacy of the current block and chaining the verified legal current block into the blockchain.
Further, the current block comprises a current block header and a current block body, wherein the current block header comprises the block hash of the previous block, the nonce of the current block, the block hash of the current block, and the difficulty threshold.
Preferably, each round of address computation is performed by the mixer of the compute node where the random number resides.
The invention also discloses a data processing board for block mining, comprising the above ASIC array and further comprising: a controller unit connected to the ASIC array, for monitoring the processing of the ASIC array, feeding input data to the ASIC array, and outputting the results obtained by the ASIC array; a temperature management unit comprising a cooler and a temperature sensor, wherein the temperature sensor detects the temperature of the data processing board and reports it to the controller unit and/or the cooler, or controls the working state of the power supply, and the cooler cools the data processing board; a boot unit connected to the controller unit, for booting the controller unit; and a debug unit connected to the controller unit, for debugging the controller unit and providing tuning parameters.
Further, the controller unit is a field-programmable gate array.
The invention also discloses a device for block mining, comprising at least one data processing board as described above, and further comprising: a network communication module, for connecting to a network to receive and send data; and a task distribution module, for distributing the data obtained from the network to the data processing board and sending the results obtained by the data processing board to the network through the network communication module.
By allocating memory to the compute nodes, the ASIC array of the invention can store the random data set required for block mining through the hierarchy of compute node, network-on-chip, ASIC chip and ASIC array; that is, by assigning subsets to the memory of each compute node, the demand that the random data set places on large-capacity memory during the hash computation of block mining is effectively reduced.
Description of the drawings
Fig. 1 is a schematic diagram of prior-art Ethereum block mining.
Fig. 2 is a schematic diagram of the prior-art Ethereum block structure.
Fig. 3 is a flowchart of the block mining method of the invention.
Fig. 4 is a schematic structural diagram of the ASIC array of the first embodiment of the invention.
Fig. 5 is a schematic diagram of the two-dimensional torus topology of the network-on-chip of the first embodiment of the invention.
Fig. 6 is a schematic diagram of the on-chip channel directions between the compute nodes of the first embodiment of the invention.
Fig. 7 is a schematic diagram of the inter-chip channels between the dies of the first embodiment of the invention.
Fig. 8 is a schematic diagram of the inter-chip channels between the ASIC chips of the first embodiment of the invention.
Fig. 9 is a schematic structural diagram of the data processing board based on the ASIC array of the second embodiment of the invention.
Fig. 10 is a schematic structural diagram of the block mining device of the third embodiment of the invention.
The reference numerals are as follows:
10: Ethereum network 11: mining pool server
12: miner 100: ASIC array
110: ASIC chip 120: die
130: network-on-chip 140: node
140-1: compute node 140-2: function node
141: memory 142: mixer
143: node communication module 144: on-chip channel
150: reflector node 160: inter-chip channel
161: link pair 162: link
170: PCB 180: SerDes channel
200: data processing board 210: controller unit
220: power supply 230: temperature sensor
240: fan 300: task distribution module
400: network communication module 500: block mining device
RX: receiver TX: transmitter
E, S, W, N: port positions
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings. In the drawings, identical reference numerals generally denote identical or functionally similar elements. In addition, the leftmost digit of a reference numeral indicates the number of the figure in which that reference numeral first appears.
This specification discloses one or more embodiments that incorporate features of the present invention. The disclosed embodiments are given only by way of example; the scope of protection of the present invention is not limited to the disclosed embodiments and is defined by the appended claims. References in the specification to "one embodiment", "an embodiment", "an example embodiment" and the like indicate that the described embodiment may include a particular feature, structure or characteristic, but not every embodiment must include that particular feature, structure or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure or characteristic is described in connection with an embodiment, it is within the knowledge of those skilled in the art to incorporate such a feature, structure or characteristic into other embodiments, whether or not explicitly described.
Certain words are used in the specification and the following claims to refer to particular components or parts. As those of ordinary skill in the art will appreciate, practitioners or manufacturers may use different nouns or terms to refer to the same component or part. This specification and the following claims do not distinguish components or parts by differences in name, but by differences in function. The terms "comprise" and "include" used throughout the specification and the following claims are open terms and should therefore be construed as "including but not limited to". In addition, the word "connect" herein covers any direct or indirect electrical connection; an indirect electrical connection includes connection through other devices. It should be noted that in the description of the present invention, terms such as "transverse", "longitudinal", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only to facilitate and simplify the description of the present invention and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be understood as limiting the present invention.
A blockchain is a computing and storage architecture, specifically a decentralized distributed ledger system in which nodes that do not trust one another jointly maintain a single ledger through a unified consensus mechanism. Ethereum is a blockchain that can be reprogrammed to realize arbitrarily complex computing functions. Fig. 2 is a schematic diagram of the prior-art Ethereum block structure. As shown in Fig. 2, an Ethereum block consists of a block header and a block body; the block header contains the hash of the previous block, the hash of this block, the nonce of this block, the difficulty threshold for mining this block and so on, and the block body contains the signed transactions issued by external accounts during the creation of this block. Ethereum's consensus mechanism is PoW (proof of work) using the Ethash algorithm, a modified version of the Dagger-Hashimoto algorithm. It uses a large amount of pseudo-random data; the collection of this pseudo-random data is called the "dataset" or DAG. A characteristic of Ethash is that mining efficiency is positively correlated with memory capacity and memory bandwidth. The process is as follows: for each block, a seed is first computed that depends only on the information of the current block; a 32 MB pseudo-random cache is then generated from the seed; and from the cache a roughly 1 GB random data set, the DAG (directed acyclic graph), is generated. The DAG is a complete search space, and mining consists of randomly selecting elements from the DAG (analogous to searching for a suitable nonce in other blockchain mining) and hashing them. It is specified that an element at any position of the DAG can be computed quickly from the cache, so hash verification needs only the cache. The cache and the DAG are regenerated periodically, once every 1000 blocks, and the size of the DAG is specified to grow linearly over time.
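The seed, cache and DAG derivation described above can be sketched in a few lines of Python. This is a deliberately simplified illustration rather than the consensus algorithm: hashlib's SHA-3 is used as a stand-in for Ethereum's Keccak (the padding differs), the sizes are toy-scale, and the mixing in dag_item is an assumed placeholder.

import hashlib

# Stand-ins for Ethereum's keccak256/keccak512. hashlib's SHA-3 uses different
# padding from Ethereum's Keccak, so this sketch is illustrative only and is
# not consensus-compatible.
def keccak256(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def keccak512(data: bytes) -> bytes:
    return hashlib.sha3_512(data).digest()

def make_seed(block_number: int, epoch_length: int = 1000) -> bytes:
    # The seed depends only on the block height: rehash once per epoch.
    seed = b"\x00" * 32
    for _ in range(block_number // epoch_length):
        seed = keccak256(seed)
    return seed

def make_cache(seed: bytes, n_items: int) -> list:
    # The pseudo-random cache as a simple hash chain from the seed (simplified).
    cache = [keccak512(seed)]
    for i in range(1, n_items):
        cache.append(keccak512(cache[i - 1]))
    return cache

def dag_item(cache: list, index: int) -> bytes:
    # Any DAG element can be recomputed from the small cache, which is what
    # allows light verification. The mixing below is an assumed placeholder.
    mix = keccak512(cache[index % len(cache)] + index.to_bytes(4, "little"))
    for _ in range(4):
        parent = int.from_bytes(mix[:4], "little") % len(cache)
        mix = keccak512(mix + cache[parent])
    return mix

# Toy sizes: the real cache is about 32 MB and the DAG about 1 GB and growing.
cache = make_cache(make_seed(block_number=2500), n_items=1024)
dag = [dag_item(cache, i) for i in range(4096)]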
Since Ethereum's proof of work is ASIC-resistant, in Ethereum block mining an ASIC has no speed or efficiency advantage over an ordinary computer that relies on memory and CPU. Fig. 3 is a flowchart of the block mining method of the invention. As shown in Fig. 3, the invention discloses a block mining method with which blocks in Ethereum can be mined with an ASIC array at higher speed and efficiency. Specifically, the block mining method of the invention comprises:
Step S1: a miner in Ethereum obtains the block body of the current block and the random data set DAG used for mining the current block. The DAG is obtained by technical means known from the Ethash algorithm, for example computing a seed with a hash such as keccak256, generating the cache from the seed with a hash such as keccak512, and generating the DAG from the cache with a hash such as keccak512; the present invention is not limited in this respect.
Step S2: the random data set DAG is divided into multiple random data subsets, and the subsets are distributed one-to-one into the memories of the compute nodes, so that the memory of each compute node stores one pseudo-random-number subset.
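A minimal sketch of the distribution in step S2, assuming a simple interleaved placement in which DAG item i goes to node i mod N at local offset i // N; the patent does not prescribe a particular mapping, so home_node and the node count here are illustrative.

# Hypothetical placement of DAG items onto the per-node SRAM subsets (step S2).
# The patent does not prescribe a particular layout; an interleaved split is
# assumed here so that every compute node holds an equal share.
N_NODES = 4608  # 32 chips x 2 dies x 72 nodes in the first embodiment

def home_node(dag_index: int):
    # Return (node_id, local_offset) for a DAG item under interleaved placement.
    return dag_index % N_NODES, dag_index // N_NODES

def distribute(dag: list) -> list:
    # Split the DAG into one subset per compute node.
    subsets = [[] for _ in range(N_NODES)]
    for i, item in enumerate(dag):
        node, _ = home_node(i)
        subsets[node].append(item)
    return subsets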
Step S3: a random number in some subset is chosen as the input of the first round of address computation. The result of the address computation is a target address, and the random number corresponding to this target address is the output of the current round of address computation, i.e. the result of this round; the output random number may lie in the same subset as the input random number or in another subset. The output random number is then used as the input of the next round of address computation, and the next round is performed. After R rounds of address computation, the result is taken as the target random number, where R is a preset number of rounds. The target random number is then hashed to obtain a target value.
The address computation may be executed by the compute node where the subset is stored, or by a designated computation processor, for example a hash processing node on the ASIC chip, a hash processing chip on the ASIC array, or an independent hash processor outside the ASIC array; the present invention is not limited in this respect. In an embodiment of the present invention, in addition to storing a pseudo-random-number subset, each compute node can also perform hash processing on the subset it stores, i.e. each round of address computation is carried out independently within some compute node, and the current round of address computation is performed by the mixer of the compute node where the random number resides.
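A minimal sketch of the R-round address walk in step S3. The concrete address function is not fixed by the patent, so a hash-then-modulo mapping is assumed here; on the real hardware each lookup would be a hop through the network-on-chip or an inter-chip channel to the node that holds the target address.

import hashlib

def keccak512(data: bytes) -> bytes:
    # Stand-in hash, as in the earlier sketch.
    return hashlib.sha3_512(data).digest()

def address_round(value: bytes, dag_size: int) -> int:
    # One round of "address arithmetic". The mixer's arithmetic is not fixed by
    # the patent; a hash-then-modulo mapping is assumed for illustration.
    return int.from_bytes(keccak512(value)[:8], "little") % dag_size

def walk(dag: list, start_index: int, rounds: int) -> bytes:
    # Step S3: follow R rounds of address arithmetic through the distributed
    # subsets and return the final target random number.
    value = dag[start_index]
    for _ in range(rounds):
        target_addr = address_round(value, len(dag))
        # On the real hardware this lookup is a hop over the network-on-chip or
        # an inter-chip channel to the node holding target_addr; here the DAG
        # is simply a flat Python list.
        value = dag[target_addr]
    return value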
Step S4: the obtained target value is compared with the difficulty threshold for mining the current block. If the target value is less than or equal to the difficulty threshold, the obtained target random number is taken as the block nonce of the current block and the obtained target value as the block hash of the current block. If the target value is greater than the difficulty threshold, the target random number corresponding to this target value is discarded and step S3 is executed again.
Step S5: the block nonce and the block hash obtained in step S4 are written into the block header of the current block to obtain the complete current block, and the complete current block is broadcast to the Ethereum network.
Step S6: when other users (verifiers) in Ethereum receive the broadcast current block, they verify the legitimacy of the current block according to the block hash of the previous block, the nonce of the current block, the block hash of the current block and the difficulty threshold contained in the block header of the current block, and chain the verified legal current block into the blockchain.
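A minimal sketch of the accept and verify logic in steps S4 and S6, assuming the target value is the hash of the target random number read as a big-endian integer and the block header is a simple dictionary; both are illustrative choices rather than details fixed by the patent.

import hashlib

def keccak256(data: bytes) -> bytes:
    # Stand-in hash, as in the earlier sketches.
    return hashlib.sha3_256(data).digest()

def target_value(target_random_number: bytes) -> int:
    # Step S4: hash the target random number and read it as a big-endian integer.
    return int.from_bytes(keccak256(target_random_number), "big")

def accept(target_random_number: bytes, difficulty_threshold: int) -> bool:
    # A candidate succeeds when its target value does not exceed the threshold.
    return target_value(target_random_number) <= difficulty_threshold

def verify_block(header: dict, difficulty_threshold: int) -> bool:
    # Step S6 (verifier side): recompute the target value from the broadcast
    # block nonce and check it against the claimed block hash and the difficulty.
    recomputed = target_value(header["block_nonce"])
    return recomputed == header["block_hash"] and recomputed <= difficulty_threshold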
In accordance with the above block mining method, the invention also discloses an ASIC array (ASIC Array) in which the memory (SRAM) is distributed over multiple ASIC chips connected to one another by high-speed links. Each ASIC chip in turn contains multiple dies, each die contains a network-on-chip (NoC, Network on Chip), the network-on-chip has multiple compute nodes, and each compute node contains a mixer, a node communication module, a memory and so on. In this way, during the hash computation of the Ethash algorithm, distributing the subsets over the memories of the compute nodes effectively reduces the demand the DAG places on large memory in the hash computation.
The compute nodes can be distributed in a variety of forms, for example in the form of an M × N array (M ≥ 2, N ≥ 2, M and N positive integers), interspersed in the form of a cellular network, or distributed in an arbitrary form; the present invention is not limited in this respect.
Fig. 4 is a schematic structural diagram of the ASIC array of the first embodiment of the invention. As shown in Fig. 4, in the first embodiment the ASIC array 100 comprises a PCB 170 and ASIC chips 110, the ASIC chips 110 being mounted on the top and bottom faces of the PCB 170. On the top face of the PCB 170 there are 16 ASIC chips 110 uniformly distributed in a 4 × 4 array, and on the bottom face the same number of ASIC chips 110 is arranged as a mirror image. Each ASIC chip 110 comprises 2 dies 120 arranged side by side; each die 120 comprises one network-on-chip 130; the network-on-chip 130 comprises 72 compute nodes (Node) 140 and 6 reflector nodes (Reflector) 150, the 72 compute nodes 140 being distributed in a 6 × 12 array. A compute node 140-1 comprises a memory (SRAM) 141, a mixer (Mixer) 142 and a node communication module 143, and each compute node 140-1 has r MB of memory. The entire ASIC array 100 thus has 32 (2 × 4 × 4) ASIC chips 110, 64 (2 × 32) dies 120 and 4608 (64 × 72) compute nodes; each die 120 has 72 × r MB of memory and the entire ASIC array has about 4608 × r MB of memory. The value of r is chosen according to the size of the DAG: for example, when the DAG is about 4 GB, r = 1 can be chosen, i.e. each compute node has 1 MB of memory and the entire ASIC array has a memory capacity of about 4.5 GB, so that the 4 GB DAG is buffered in distributed fashion in the compute nodes of the ASIC array.
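The capacity arithmetic of the first embodiment, written out as a short script with the numbers taken from the paragraph above.

# Capacity arithmetic for the first embodiment (all numbers from the text above).
CHIPS_PER_SIDE = 4 * 4        # 4 x 4 grid on each face of the PCB
SIDES = 2
DIES_PER_CHIP = 2
NODES_PER_DIE = 72
SRAM_PER_NODE_MB = 1          # r = 1 MB, chosen for a DAG of about 4 GB

chips = CHIPS_PER_SIDE * SIDES                   # 32 ASIC chips
dies = chips * DIES_PER_CHIP                     # 64 dies
nodes = dies * NODES_PER_DIE                     # 4608 compute nodes
die_sram_mb = NODES_PER_DIE * SRAM_PER_NODE_MB   # 72 MB per die
array_sram_mb = nodes * SRAM_PER_NODE_MB         # 4608 MB, about 4.5 GB per array

print(chips, dies, nodes, die_sram_mb, array_sram_mb)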
Fig. 5 is a schematic diagram of the two-dimensional torus topology of the network-on-chip of the first embodiment of the invention. As shown in Fig. 5, the 72 compute nodes 140-1 are uniformly distributed in a 6 × 12 array; X is defined as the row direction and Y as the column direction, the network-on-chip 130 having 6 columns and 12 rows in total, and at the bottom of each column there is additionally 1 reflector node 150. The compute nodes 140-1, as well as the reflector nodes 150 and the compute nodes 140-1, are interconnected by the on-chip channels 144 formed by the differential lines of the network-on-chip, so that the network-on-chip has a two-dimensional torus topology in which each compute node 140-1 has 4 directly connected compute nodes 140-1 as nearest neighbours. A reflector node 150 is connected only within its own column of the network-on-chip 130, has only two nearest neighbours, and has no connection to other columns. Fig. 6 is a schematic diagram of the on-chip channel directions between the compute nodes of the first embodiment of the invention. As shown in Fig. 6, the port positions of the on-chip channels 144 between a compute node 140-1 and its nearest neighbours in the two-dimensional torus topology are denoted by the directions E, W, S and N; a reflector node 150 has only the directions N and S.
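A small sketch of the nearest-neighbour relation on the 6 × 12 torus of Fig. 5; the (column, row) coordinate convention and the association of E/W with columns and N/S with rows are assumptions made for illustration.

# Nearest neighbours of a compute node in the 6 x 12 network-on-chip of Fig. 5.
# The (column x, row y) coordinate convention is assumed for illustration.
COLS, ROWS = 6, 12

def neighbours(x: int, y: int) -> dict:
    # Each compute node has exactly four nearest neighbours on the torus,
    # reached through its E, W, S and N on-chip channel ports.
    return {
        "E": ((x + 1) % COLS, y),
        "W": ((x - 1) % COLS, y),
        "S": (x, (y + 1) % ROWS),
        "N": (x, (y - 1) % ROWS),
    }

print(neighbours(0, 0))  # the modulo wrap-around gives the ring in both axes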
Fig. 7 is a schematic diagram of the inter-chip channels between the dies of the first embodiment of the invention. As shown in Fig. 7, an inter-chip channel 160 between dies 120 comprises two link pairs (pair) 161, the two link pairs 161 corresponding to the two opposite directions. A transmitter TX and a receiver RX are provided at each end of a link pair 161, and the direction of a link pair 161 is determined by the positions of its transmitter TX and receiver RX. Each link pair 161 has two links 162, so that each inter-chip channel 160 comprises 4 links 162. Fig. 8 is a schematic diagram of the inter-chip channels between the ASIC chips of the first embodiment of the invention. As shown in Fig. 8, the dies 120 are also interconnected with one another through inter-chip channels 160; since each ASIC chip 110 contains 2 dies 120, any two ASIC chips 110 are connected by 4 inter-chip channels 160.
It should be emphasized that the 2 dies 120 within an ASIC chip 110 are interconnected by 2 inter-chip channels 160, one of which can serve as a spare connection channel between the 2 dies 120.
In the first embodiment of the present invention, the dies 120 are connected in a full-mesh topology, i.e. each die 120 is directly connected to every other die 120 through inter-chip channels 160. A die 120 further comprises a function module for generating computation tasks and inserting operation data, and a SerDes channel 180 for inserting computation data and outputting computation results. Referring again to Fig. 5, in the first embodiment of the present invention, of the 72 compute nodes 140-1 of the network-on-chip, 64 compute nodes are connected to SerDes channels and the remaining 8 compute nodes are connected to function modules. In Fig. 5, the dark nodes represent the function nodes 140-2 connected to function modules, and the light nodes represent the compute nodes 140-1 connected to SerDes channels.
Fig. 9 is a schematic structural diagram of a data processing board based on the ASIC array according to the second embodiment of the invention. As shown in Fig. 9, the second embodiment provides a data processing board based on the ASIC array, comprising the ASIC array 100, a controller unit (Controller) 210, a power supply (Power) 220 and a temperature management unit (thermal management), the temperature management unit comprising temperature sensors (Sensors) 230 and a cooler 240. The ASIC array 100 receives the data transmitted by the controller unit 210, performs hash computation on it and, once a result meeting the requirements is obtained, passes it back to the controller unit 210. The controller unit 210 is connected to the ASIC array 100 and to the temperature management unit; it monitors the hash computation of the ASIC array, forwards external data to the ASIC array after receiving it, and outputs the results obtained by the ASIC array. The power supply 220 converts the DC input voltage into the operating voltage and supplies it to the ASIC array 100, the controller unit 210 and the other modules of the data processing board. The temperature management unit sends the board temperature obtained through the temperature sensors 230 to the controller unit 210 and can also receive instructions from the controller unit 210; according to these instructions it directs the fan controller of the cooler 240 to start or stop the fan or to adjust the fan speed. It can also control the working state of the power supply without involving the system software, for example cutting the power when the temperature sensors 230 detect that the temperature of the data processing board 200 is too high. In the second embodiment of the present invention, the controller unit 210 may be a Zynq ZU3CG field-programmable gate array (FPGA) with 4 GB of RAM running the Linux operating system.
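A minimal control-loop sketch of the thermal management described above; the temperature thresholds and the fan and power-supply interfaces are placeholders for illustration and are not specified by the patent.

# Illustrative thermal-management loop for the data processing board. The
# thresholds and the Fan/Power interfaces are placeholders, not values or
# interfaces specified by the patent.
OVER_TEMP_CUTOFF_C = 95   # emergency cut-off handled without system software
FAN_FULL_SPEED_C = 80
FAN_IDLE_SPEED_C = 50

class Fan:
    def set_speed(self, fraction: float) -> None:
        print(f"fan speed -> {fraction:.2f}")

class Power:
    def cut(self) -> None:
        print("power cut")

def thermal_step(board_temp_c: float, fan: Fan, power: Power) -> None:
    # One pass of the loop: cut power, drive the fan harder, or idle.
    if board_temp_c >= OVER_TEMP_CUTOFF_C:
        power.cut()
    elif board_temp_c >= FAN_FULL_SPEED_C:
        fan.set_speed(1.0)
    elif board_temp_c <= FAN_IDLE_SPEED_C:
        fan.set_speed(0.2)
    else:
        fan.set_speed((board_temp_c - FAN_IDLE_SPEED_C)
                      / (FAN_FULL_SPEED_C - FAN_IDLE_SPEED_C))

thermal_step(72.0, Fan(), Power())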
The second embodiment of the present invention further includes a boot unit, a debug unit and status lamps. The boot unit is used to boot the controller unit and is, for example, a microSD memory card or another storage medium with a boot function; the present invention is not limited in this respect. The debug unit can access the serial console and JTAG of the controller unit and is used to debug the controller unit and provide tuning parameters; it is, for example, a USB full-speed device, and the present invention is not limited in this respect. The status lamps indicate the working state of the data processing board through changes of colour and/or brightness and are, for example, LEDs.
The ASIC array disclosed by the invention and the data processing board using the ASIC array are intended for hash computation with a hash function such as SHA3 and satisfy the large memory demand of Ethereum's proof-of-work (PoW) algorithm. Fig. 10 is a schematic structural diagram of the block mining device of the third embodiment of the invention. As shown in Fig. 10, the third embodiment discloses a block mining device 500 for Ethereum block mining, comprising a network communication module 400, a task distribution module 300 and the data processing board 200 described above. The network communication module 400 handles data communication with Ethereum and is, for example, a device such as a high-speed NIC; the task distribution module 300 distributes the Ethereum block mining data obtained through the network communication module 400 to the data processing boards, and the results obtained by the data processing boards are sent to Ethereum through the network communication module. The block mining device 500 comprises one or more data processing boards 200, which process the data distributed by the task distribution module 300 in order to perform block mining. The block mining device 500 of the third embodiment further comprises a power supply module, a heat dissipation module, a chassis and so on; these are common techniques in block mining applications and are therefore not described again here.
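A sketch of how the task distribution module might split work across the data processing boards and return results over the network; the job fields, the nonce-range split and the board/network interfaces are assumptions for illustration rather than details fixed by the patent.

from dataclasses import dataclass

# Illustrative flow of the task distribution module: jobs arrive over the
# network, are farmed out across the data processing boards, and any result
# found is pushed back out. The Job fields and the board/network interfaces
# are assumptions for illustration, not details fixed by the patent.
@dataclass
class Job:
    header_hash: bytes
    nonce_start: int
    nonce_count: int
    difficulty_threshold: int

def dispatch(job: Job, boards: list) -> None:
    # Split the nonce range evenly across the available data processing boards.
    share = job.nonce_count // len(boards)
    for i, board in enumerate(boards):
        board.submit(Job(job.header_hash,
                         job.nonce_start + i * share,
                         share,
                         job.difficulty_threshold))

def collect(boards: list, network) -> None:
    # Forward any solution found by a board back to the pool over the network.
    for board in boards:
        result = board.poll()
        if result is not None:
            network.send(result)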
Although the present invention has been disclosed above by way of embodiments, it is not intended to limit the invention. Any person skilled in the art may make equivalent modifications or changes without departing from the spirit and scope of the invention, and such modifications and changes shall fall within the scope of protection defined by the appended claims.

Claims (13)

1. An ASIC array for block mining, characterized in that the ASIC array comprises: a plurality of ASIC chips disposed on a PCB, each ASIC chip comprising a plurality of dies, each die comprising a network-on-chip, wherein the network-on-chip comprises a plurality of compute nodes, each compute node comprising a memory for storing, in distributed fashion, a subset of the random data set used in block mining.
2. The ASIC array according to claim 1, characterized in that the compute nodes are distributed in the form of an M × N array, wherein M and N are positive integers, M ≥ 2 and N ≥ 2.
3. The ASIC array according to claim 2, characterized in that the compute nodes are distributed in the form of a 6 × 12 array.
4. The ASIC array according to claim 1, characterized in that each ASIC chip comprises 2 dies.
5. The ASIC array according to claim 1, characterized in that 4 × 4 ASIC chips are arranged on one side of the PCB and 4 × 4 ASIC chips are arranged on the other side of the PCB.
6. A block mining method that performs block mining with the ASIC array according to any one of claims 1 to 5, characterized by comprising:
Step 1, obtaining the random data set used for mining the current block;
Step 2, dividing the random data set into a plurality of subsets and distributing them for storage among the compute nodes of the ASIC array;
Step 3, arbitrarily choosing a random number from the subset held by a compute node and performing a first round of address computation to obtain a target address; obtaining the corresponding random number from the subset held by the compute node corresponding to the target address as the input to the next round of address computation; and taking the random number obtained after a preset number of rounds of address computation as the target random number;
Step 4, hashing the target random number to obtain a target value; if the target value is less than or equal to a difficulty threshold, taking the target random number as the block nonce of the current block and the target value as the block hash of the current block; otherwise discarding the target random number and re-executing Step 3;
Step 5, writing the block nonce and the block hash into the current block, and broadcasting the current block to the blockchain network.
7. The block mining method according to claim 6, characterized by further comprising:
Step 6, when a verifier receives the broadcast current block, verifying the legitimacy of the current block, and chaining the verified legal current block into the blockchain.
8. The block mining method according to claim 6, characterized in that the current block comprises a current block header and a current block body, wherein the current block header comprises the block hash of the previous block, the nonce of the current block, the block hash of the current block, and the difficulty threshold.
9. The block mining method according to claim 6, characterized in that in Step 3, each round of address computation is performed by the mixer of the compute node where the random number resides.
10. A data processing board for block mining, comprising the ASIC array according to any one of claims 1 to 5, characterized by further comprising: a controller unit connected to the ASIC array, for monitoring the processing of the ASIC array, feeding input data to the ASIC array, and outputting the results obtained by the ASIC array.
11. The data processing board according to claim 10, characterized by further comprising:
a temperature management unit, comprising a cooler and a temperature sensor, wherein the temperature sensor detects the temperature of the data processing board and reports it to the controller unit and/or the cooler, or controls the working state of the power supply, and the cooler cools the data processing board;
a boot unit, connected to the controller unit, for booting the controller unit;
a debug unit, connected to the controller unit, for debugging the controller unit and providing tuning parameters.
12. The data processing board according to claim 10, characterized in that the controller unit is a field-programmable gate array.
13. A block mining device for block mining, comprising at least one data processing board according to any one of claims 10 to 12, characterized by further comprising:
a network communication module, for connecting to a network to receive and send data;
a task distribution module, for distributing the data obtained from the network to the data processing board and sending the results obtained by the data processing board to the network through the network communication module.
CN201811503862.0A 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus Pending CN110347637A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810306317 2018-04-08
CN2018103063176 2018-04-08

Publications (1)

Publication Number Publication Date
CN110347637A true CN110347637A (en) 2019-10-18

Family

ID=68174022

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201811503862.0A Pending CN110347637A (en) 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus
CN201811504033.4A Active CN110347071B (en) 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus
CN201811504035.3A Pending CN110347638A (en) 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201811504033.4A Active CN110347071B (en) 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus
CN201811504035.3A Pending CN110347638A (en) 2018-04-08 2018-12-10 ASIC array, data processing board, and block mining method and apparatus

Country Status (1)

Country Link
CN (3) CN110347637A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587018A (en) * 2022-11-22 2023-01-10 中科声龙科技发展(北京)有限公司 Calculation force service data set storage method, calculation force calculation device and calculation force service equipment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11665776B2 (en) * 2019-12-27 2023-05-30 Arteris, Inc. System and method for synthesis of a network-on-chip for deadlock-free transformation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100401946B1 (en) * 2001-08-10 2003-10-17 박종원 A method of address calculation and routing and a conflict-free memory system using the method
US9864584B2 (en) * 2014-11-14 2018-01-09 Cavium, Inc. Code generator for programmable network devices
CN106407008B (en) * 2016-08-31 2019-12-03 北京比特大陆科技有限公司 Dig mine method for processing business, device and system
US20180082290A1 (en) * 2016-09-16 2018-03-22 Kountable, Inc. Systems and Methods that Utilize Blockchain Digital Certificates for Data Transactions
CN107329926A (en) * 2017-07-10 2017-11-07 常州天能博智能***科技有限公司 A kind of computing board and its troubleshooting methodology

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587018A (en) * 2022-11-22 2023-01-10 中科声龙科技发展(北京)有限公司 Calculation force service data set storage method, calculation force calculation device and calculation force service equipment
CN115587018B (en) * 2022-11-22 2023-03-10 中科声龙科技发展(北京)有限公司 Calculation force service data set storage method, calculation force calculation device and calculation force service equipment

Also Published As

Publication number Publication date
CN110347071A (en) 2019-10-18
CN110347071B (en) 2022-07-26
CN110347638A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN104025063B (en) Method and apparatus for sharing network interface controller
CN103491188B (en) Virtual desktop and GPU is utilized thoroughly to pass the method realizing multiple users share graphics workstation
Durand et al. Euroserver: Energy efficient node for european micro-servers
EP3742485B1 (en) Layered super-reticle computing: architectures and methods
CN104077138A (en) Multiple core processor system for integrating network router, and integrated method and implement method thereof
KR101357843B1 (en) Data center switch
CN110347637A (en) ASIC array, data processing board, and block mining method and apparatus
CN107667373A (en) Safe trusted execution environment data storage
CN106301859A (en) A kind of manage the method for network interface card, Apparatus and system
CN108777612A (en) A kind of optimization method and circuit of proof of work operation chip core calculating unit
Balamurugan et al. Roadmap for machine learning based network-on-chip (M/L NoC) technology and its analysis for researchers
CN106649702A (en) File storage method and apparatus of cloud storage system, and cloud storage system
Engel et al. Common read-out receiver card for ALICE Run2
US11983260B2 (en) Partitioned platform security mechanism
CN107450632A (en) Unit control system and method
CN107766146A (en) Method and corresponding equipment for resource reconfiguration
CN103500240B (en) The method that silicon through hole is carried out Dynamic Programming wiring
CN105957559A (en) Test system and testing device
CN105577752A (en) Management system used for fusion framework server
CN109739560A (en) A kind of GPU card cluster configuration control system and method
CN108965478A (en) Distribution type data collection method and system based on block chain technology
DE102022110979A1 (en) MODULAR THERMAL TEST CARRIER
CN106843080B (en) A kind of FPGA parallel array module and its calculation method
CN102542088B (en) Coverage ratio driven random authentication method
CN102882799B (en) The controllable clustered deploy(ment) configuration System and method for of flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination