US20230273729A1 - Core group memory processing with group b-float encoding - Google Patents
- Publication number
- US20230273729A1 (U.S. patent application Ser. No. 18/109,788)
- Authority
- US
- United States
- Prior art keywords
- memory
- regions
- compute
- core
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0207—Addressing or allocation; Relocation with multidimensional access, e.g. row/column, matrix
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/0284—Multiple user address space allocation, e.g. using different base addresses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/40—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using contact-making devices, e.g. electromagnetic relay
- G06F7/44—Multiplying; Dividing
- G06F7/446—Multiplying; Dividing by partial product forming (with electric multiplication table)
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
- G06F7/5443—Sum of products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/54—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using elements simulating biological cells, e.g. neuron
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1006—Data managing, e.g. manipulating data before writing or reading out, data bus switches or control circuits therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0813—Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1008—Correctness of operation, e.g. memory ordering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1028—Power efficiency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/45—Caching of specific data in cache memory
- G06F2212/454—Vector or matrix data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/6026—Prefetching based on access pattern detection, e.g. stride based prefetch
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1015—Read-write modes for single port memories, i.e. having either a random port or a serial port
- G11C7/1039—Read-write modes for single port memories, i.e. having either a random port or a serial port using pipelining techniques, i.e. using latches between functional memory parts, e.g. row/column decoders, I/O buffers, sense amplifiers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1015—Read-write modes for single port memories, i.e. having either a random port or a serial port
- G11C7/1042—Read-write modes for single port memories, i.e. having either a random port or a serial port using interleaving techniques, i.e. read-write of one part of the memory while preparing another part
Definitions
- FIG. 36 illustrates an exemplary branching dataflow utilizing a partial feature-map buffer, in accordance with aspects of the present technology.
- FIG. 37 shows a memory processing unit (MPU), in accordance with aspects of the present technology.
- MPU memory processing unit
- FIG. 45 shows a method of fitting arrays into a 2-dimension memory, in accordance with aspects of the present technology.
- FIG. 63 illustrates another logical view of a feature map and weights encoded in Group B-float from an input side of a compute core, in accordance with aspects of the present technology.
- the given computation function can be segmented, and the computation function can be configured to be performed on one or more of the plurality of processing units 135 - 150 .
- the processing regions 135 - 150 can each include one or more memory processing units, memory processing unit cores, or the like.
- the memory processing units and/or cores can implement computation functions in arrays of memory cells without changing the basic memory array structure.
- the one or more centralized or distributed control circuitry 160 can configure the one or more computation functions of the one or more of the plurality of processing regions 135 - 150 .
- the computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and/or the like.
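As a rough illustration of the computation functions listed above, the following Python sketch gives software equivalents of a vector product, matrix-dot-product, max pooling, and scaling. The function names are illustrative; the patent describes these operations being performed in arrays of memory cells, not in software.

```python
def vector_product(a, b):
    """Multiply-accumulate of two equal-length vectors."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

def matrix_dot_product(m, v):
    """Matrix-vector product: one vector_product per matrix row."""
    return [vector_product(row, v) for row in m]

def max_pool(xs, window):
    """Non-overlapping 1-D max pooling."""
    return [max(xs[i:i + window]) for i in range(0, len(xs), window)]

def scale(xs, factor):
    """Uniform scaling of a vector."""
    return [x * factor for x in xs]
```

A convolution can be expressed as a sequence of such vector products over sliding windows, which is why these primitives cover the listed functions.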
- the plurality of processing regions 212 - 216 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 202 - 210 to one or more cores 220 - 232 within adjacent ones of the plurality of processing regions 212 - 216 .
- the plurality of processing regions 212 - 216 can also be configurable for core-to-memory dataflow from one or more cores 220 - 232 within ones of the plurality of processing regions 212 - 216 to adjacent ones of the plurality of regions of the first memory 202 - 210 .
- the memory processing unit 300 can further include an inter-layer-communication (ILC) unit 340 .
- the ILC unit 340 can be global or distributed across the plurality of processing regions 312 - 316 .
- the ILC unit 340 can include a plurality of ILC modules, wherein each ILC module can be coupled to a respective one of the processing regions 312 - 316 .
- Each ILC module can also be coupled to the respective regions of the first memory 302 - 310 adjacent the corresponding respective processing regions 312 - 316 .
- the inter-layer-communication unit 340 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data.
- the inter-layer communication unit 340 can map the computation functions of compute cores and dataflow between processing regions 312 - 316 and first memory 302 - 310 on an adjacency basis so that dataflow of shared data can be synchronized therebetween.
- the memory processing unit 300 can further include one or more input/output stages 348 , 350 .
- the one or more input/output stages 348 , 350 can be coupled to one or more respective regions of the first memory 302 - 310 .
- the one or more input/output stages 348 , 350 can include one or more input ports, one or more output ports, and/or one or more input/output ports.
- the one or more input/output stages 348 , 350 can be configured to stream data into or out of the memory processing unit 300 .
- one or more of the input/output (I/O) stages can be configured to stream data into a first one of the plurality of regions of the first memory 302 - 310 .
- one or more input/output (I/O) stages can be configured to stream data out of a last one of the plurality of regions of the first memory 302 - 310 .
- the memory processing unit 400 can include a first memory and a plurality of processing regions 410 - 414 .
- the first memory can include a plurality of memory regions 402 - 408 .
- the plurality of processing regions 410 - 414 can be interleaved between the plurality of memory regions 402 - 408 of the first memory.
- the plurality of memory regions 402 - 408 and the plurality of processing regions 410 - 414 can have respective predetermined sizes.
- One or more of the plurality of memory regions 402 - 408 can include a plurality of memory blocks 416 - 432 .
- providing more, but smaller, flat memory blocks by organizing each of the plurality of memory regions 402 - 408 into respective sets of a plurality of memory blocks 416 - 432 can provide increased memory bandwidth for increased performance.
- the smaller flat memory blocks can also provide the potential for better chip layout as compared to larger flat memory organizations.
- the increased number of the smaller flat memory blocks can make adjacency mapping for dataflow more challenging.
- a neural network layer, a part of a neural network layer, or a plurality of fused neural network layers can be mapped to a single cluster of compute cores or a core group as a mapping unit.
- a cluster of compute cores is a set of cores of a given processing region that are configured to work together to compute a mapping unit.
- the nodes of a first layer 610 of a neural network can be mapped as a mapping unit to a first set of compute cores
- the nodes of a second layer 620 can be mapped to a second set of compute cores
- the nodes of a third layer 630 can be mapped to a third set of compute cores, as illustrated in FIG. 6 .
- a mapping unit 710 can be computed by a compute core cluster 720 as illustrated in FIG. 7 .
- more compute cores than are needed to compute a mapping unit can be configured in a compute cluster to improve computing performance.
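The layer-to-cluster mapping described above can be sketched in software. The helper below is a hypothetical illustration of splitting a mapping unit's output channels across the compute cores of a cluster; the function name and the contiguous channel-slicing policy are assumptions for illustration, not the patent's mapping algorithm.

```python
def map_layer_to_cluster(num_output_channels, cluster_cores):
    """Return, per core, the half-open slice of output channels it computes.

    Channels are divided as evenly as possible; the first few cores
    absorb any remainder, one extra channel each.
    """
    base, extra = divmod(num_output_channels, cluster_cores)
    assignment, start = [], 0
    for core in range(cluster_cores):
        count = base + (1 if core < extra else 0)
        assignment.append((start, start + count))
        start += count
    return assignment
```

For example, mapping 10 output channels onto a 3-core cluster yields slices (0, 4), (4, 7), and (7, 10), so every core has a complete, non-overlapping share of the mapping unit.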
- the writeback unit 1015 can be configured to write data to an N+1 th portion of the first memory for the multiply-and-accumulate (MAC) array unit 1010 .
- the writeback unit 1015 can also be configured to synchronize data movement of the N th portion of the first memory with the inter-layer-communication (ILC) unit.
- the writeback unit 1015 can be configured to perform a fuse operation, send data to an adjacent region of the first memory or adjacent compute core in the respective processing region, and to increment an inter-layer-communication (ILC) counter.
- indirect synchronization can be implemented by the compute cores sending appropriate signals to the buffer to provide visible synchronization.
- the buffers between the compute cores can act as a simple memory used for writing and reading data.
- the producer core can be configured to ensure that the consumer core is ready for data, and the consumer core can be configured to ensure that there is enough data in the memory so that it can perform a computation operation.
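The producer/consumer contract described above can be modeled with a simple bounded buffer: the producer checks that the buffer has free space before writing, and the consumer checks that enough data is present before computing. The sketch below uses a plain Python deque as the shared buffer and is only a behavioral model of the synchronization, not the hardware buffer design.

```python
from collections import deque

class SharedBuffer:
    """Behavioral model of a buffer shared by a producer and consumer core."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = deque()

    def producer_ready(self):
        """Producer-side check: is the consumer able to accept more data?"""
        return len(self.data) < self.capacity

    def consumer_ready(self, needed):
        """Consumer-side check: is enough data present to compute?"""
        return len(self.data) >= needed

    def write(self, item):
        assert self.producer_ready()
        self.data.append(item)

    def read(self):
        return self.data.popleft()
```

In this model the buffer itself stays a simple memory, as the text notes; the readiness checks carry the synchronization.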
- data flow between compute cores 3715 - 3725 of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the first memory 3705 can be configured utilizing direct synchronization between the compute cores and the first memory.
- data flow between the second memory (not shown) and the compute cores 3715 - 3755 of the one or more of the plurality of processing regions can be configured utilizing direct synchronization between the compute cores 3715 - 3755 and the second memory.
- Data flow between compute cores 3715 - 3725 within respective ones of the one or more of the plurality of processing regions can also be configured utilizing direct synchronization between adjacent compute cores within the respective processing region.
- the difference between the initial count (i o ) and the minimum count (i n ) represents the amount of data that must be produced (written to the corresponding shared buffer) by one or more producer compute cores before one or more consumer compute cores may start to consume data from the corresponding shared buffer. If there are multiple producer compute cores writing to the same shared buffer, the inter-layer-communication (ILC) unit 3760 - 3765 may require multiple increment synchronization commands for the compute cores before incrementing the current unit count (i c ). Furthermore, the inter-layer-communication (ILC) unit 3760 - 3765 may need to know from the corresponding compute core when a new data set, such as a new feature map, is received to reset the counter values.
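The counter behavior described above can be sketched as follows. The class models one ILC entry: the consumer is released once the current count i_c has advanced past i_o by at least the threshold i_o − i_n, a multi-producer buffer requires one increment command per producer before i_c advances, and a new data set resets the counters. The class and method names are illustrative, and the exact release condition is our reading of the text, not a circuit specification.

```python
class IlcEntry:
    """Behavioral model of one inter-layer-communication (ILC) counter entry."""

    def __init__(self, initial_count, minimum_count, num_producers=1):
        self.i_o = initial_count    # initial count at reset
        self.i_n = minimum_count    # minimum count; i_o - i_n is the threshold
        self.i_c = initial_count    # current unit count
        self.num_producers = num_producers
        self.pending = 0            # increment commands received this step

    def increment(self):
        """One increment command; i_c advances once every producer has reported."""
        self.pending += 1
        if self.pending == self.num_producers:
            self.pending = 0
            self.i_c += 1

    def consumer_may_start(self):
        """Enough data produced once i_c has advanced by the i_o - i_n threshold."""
        return (self.i_c - self.i_o) >= (self.i_o - self.i_n)

    def reset(self):
        """New data set (e.g. a new feature map): restore the counter values."""
        self.i_c = self.i_o
        self.pending = 0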
- the memory macro appears as a large 2-dimensional memory array.
- the memory macro can be characterized by a height and a width.
- the width of the memory macro can be configured to provide a very wide word fetch.
- the width of the memory macro can be many words per read wide, which can be determined by a needed read bandwidth access for weight arrays. In an exemplary implementation, the access bandwidth of a memory macro can be up to 1024 bits.
- the height of the memory macro can be a 1-dimensional addressable space.
- the height of the memory macro can be determined by the total size of the memory macro divided by the width of the memory macro.
- the memory macro can be logically split into a plurality of physical channels 4410 . Each physical channel can be considered a “weight prefetch” wide 4420 .
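The macro geometry described above reduces to simple arithmetic: the width in words is the fetch width divided by the word size, the height is the total size divided by the width, and the channel count is the width divided by the weight-prefetch width. The 1024-bit access bandwidth comes from the text; the other figures in the example (a 64 KiB macro, 16-bit words, a 4-word prefetch) are illustrative assumptions.

```python
def macro_geometry(total_bytes, width_bits, word_bits, prefetch_words):
    """Derive a memory macro's logical shape from its size and fetch width."""
    width_words = width_bits // word_bits        # words per fetched row
    height = (total_bytes * 8) // width_bits     # 1-D addressable rows
    channels = width_words // prefetch_words     # "weight prefetch"-wide channels
    return width_words, height, channels

# Example: a 64 KiB macro with 1024-bit rows, 16-bit words, and a
# 4-word weight prefetch is 64 words wide, 512 rows tall, and splits
# into 16 physical channels.
```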
Abstract
A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of memory regions. The plurality of memory regions can be organized in a plurality of memory blocks. The plurality of memory regions can be configured to store integer, B-float, and/or Group B-float encoded data. The plurality of processing regions can be interleaved between the plurality of memory regions of the first memory. The plurality of processing regions can be organized in a plurality of core groups including a plurality of compute cores. The core groups in the processing regions can be coupled to a plurality of adjacent memory blocks in the adjacent memory regions. The second memory can be coupled to the plurality of processing regions.
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 63/310,031 filed Feb. 14, 2022, which is incorporated herein by reference in its entirety.
- Computing systems have made significant contributions toward the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Applications such as artificial intelligence, machine learning, big data analytics and the like perform computations on large amounts of data. In conventional computing systems, data is transferred from memory to one or more processing units, the processing units perform calculations on the data, and the results are then transferred back to memory. The transfer of large amounts of data from memory to the processing unit and back to memory takes time and consumes power. Accordingly, there is a continuing need for improved computing systems that reduce processing latency, data latency and/or power consumption.
- The present technology may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the present technology directed toward memory processing architectures.
- In one embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions. The first memory can include a plurality of memory regions, wherein the plurality of memory regions can be configured in corresponding pluralities of memory blocks. The memory blocks can be configured to store Brain Floating Point (B-float) encoded data and/or Group B-float encoded data. The plurality of processing regions can be interleaved between the plurality of regions of the first memory, wherein the processing regions include a plurality of core groups, and wherein the core groups include one or more compute cores.
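For orientation, B-float (bfloat16) encoding keeps a float32's sign, 8-bit exponent, and top 7 mantissa bits, so an encode can be sketched as a 16-bit truncation. The group variant below shares one exponent across a group of values with per-value signed mantissas; its 7-bit mantissa scaling and max-exponent policy are illustrative assumptions, not the patent's bit-level Group B-float layout.

```python
import math
import struct

def float32_to_bfloat16_bits(x):
    """Truncate a float32 to its top 16 bits: sign, 8-bit exponent, 7-bit mantissa."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16):
    """Re-expand the 16 stored bits back to an approximate float32."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

def group_bfloat_encode(values):
    """Share one exponent (the group's largest) across per-value signed mantissas."""
    shared = max((math.frexp(v)[1] for v in values if v != 0.0), default=0)
    mantissas = [round(v / 2.0 ** shared * 128) for v in values]
    return shared, mantissas

def group_bfloat_decode(shared, mantissas):
    """Rebuild approximate values from the shared exponent and mantissas."""
    return [m / 128.0 * 2.0 ** shared for m in mantissas]
```

Sharing the exponent across a group trades per-value dynamic range for storage: a group of N values needs one exponent plus N short mantissas instead of N full B-float words.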
- In another embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions interleaved between a plurality of regions of the first memory. The plurality of memory regions can be configured in corresponding pluralities of memory blocks and the plurality of processing regions can be configured in corresponding pluralities of core groups. The plurality of core groups of respective ones of the plurality of processing regions can be coupled between adjacent ones of the plurality of memory regions of the first memory. The memory blocks are configured to store Group B-float encoded feature map pixels.
- In another embodiment, a memory processing method can include configuring a first memory to store Group B-float encoded data, wherein the first memory includes a plurality of regions. Data flow between compute cores of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the first memory can be configured. Data flow between a second memory and the compute cores of the one or more of the plurality of processing regions can also be configured. Data flow can also be configured between compute cores within respective ones of the one or more of the plurality of processing regions. One or more sets of compute cores of one or more of the plurality of processing regions can be configured to perform respective compute functions of a neural network model. Weights for the neural network model can be loaded into the second memory, and activation data for the neural network model can be loaded into one or more of the plurality of regions of the first memory. Data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data can be synchronized based on the neural network model.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
- Embodiments of the present technology are illustrated by way of example and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
-
FIG. 1 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 2 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 3 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 4 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 5 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 6 illustrates an exemplary mapping of a neural network to compute cores, in accordance with aspects of the present technology. -
FIG. 7 illustrates an exemplary compute core mapping, in accordance with aspects of the present technology. -
FIGS. 8A-8B show an exemplary computation of multiple output feature map pixels, in accordance with aspects of the present technology. -
FIG. 9 shows configuration of dataflows in a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 10 shows a near memory (M) compute core, in accordance with aspects of the present technology. -
FIG. 11 shows an arithmetic (A) compute core, in accordance with aspects of the present technology. -
FIG. 12 shows an input (I) core, in accordance with aspects of the present technology. -
FIG. 13 shows an output (O) core, in accordance with aspects of the present technology. -
FIGS. 14-17 illustrate a whole channel compute core configuration, in accordance with aspects of the present technology. -
FIGS. 18-21 show a second memory region polymorphic compute core configuration, in accordance with aspects of the present technology. -
FIGS. 22-25 show a first memory region polymorphic compute core configuration, in accordance with aspects of the present technology. -
FIGS. 26-29 show a compound compute core configuration, in accordance with aspects of the present technology. -
FIG. 30 shows a first memory region sharing feature of the memory processing unit (MPU), in accordance with aspects of the present technology. -
FIGS. 31A and 31B illustrate an exemplary buffer utilization by a consumer and a producer, in accordance with aspects of the present technology. -
FIGS. 32A-32D illustrate an exemplary shared partial buffer for a 3×3 kernel size, in accordance with aspects of the present technology. -
FIGS. 33A and 33B illustrate an exemplary shared partial buffer for a 3×3 kernel size with a 2×2 stride, in accordance with aspects of the present technology. -
FIG. 34 illustrates an example branching dataflow utilizing a full feature-map buffer, in accordance with aspects of the present technology. -
FIG. 35 illustrates an exemplary branching dataflow utilizing a partial feature-map buffer, in accordance with aspects of the present technology. -
FIG. 36 illustrates an exemplary branching dataflow utilizing a partial feature-map buffer, in accordance with aspects of the present technology. -
FIG. 37 shows a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 38 shows an inter-layer-communication method, in accordance with aspects of the present technology. -
FIG. 39 shows respective shared buffers and corresponding respective ILC entry indexes, in accordance with aspects of the present technology. -
FIG. 40 illustrates tracking of access to a shared respective buffer in a respective ILC entry index, in accordance with aspects of the present technology. -
FIG. 41 illustrates a 4-dimension array, in accordance with aspects of the present technology. -
FIG. 42 illustrates a 3-dimension array, in accordance with aspects of the present technology. -
FIG. 43 illustrates a 2-dimension array, in accordance with aspects of the present technology. -
FIG. 44 shows a memory macro of a memory processing unit (MPU), in accordance with aspects of the present technology. -
FIG. 45 shows a method of fitting arrays into a 2-dimension memory, in accordance with aspects of the present technology. -
FIG. 46 illustrates expansion of a 3-dimension array, in accordance with aspects of the present technology. -
FIG. 47 illustrates expansion of a 2-dimension array, in accordance with aspects of the present technology. -
FIG. 48 illustrates quantization of an array, in accordance with aspects of the present technology. -
FIG. 49 illustrates flattening of a quantized array, in accordance with aspects of the present technology. -
FIG. 50 illustrates reshaping of a flattened array, in accordance with aspects of the present technology. -
FIG. 51 illustrates rotating of a reshaped array, in accordance with aspects of the present technology. -
FIG. 52 illustrates loading virtual channels of the reshaped array into physical channels of memory, in accordance with aspects of the present technology. -
FIGS. 53A-53D illustrate fetching from a wide memory block, in accordance with aspects of the present technology. -
FIGS. 54A-54C illustrate a write back to a wide memory block, in accordance with aspects of the present technology. -
FIG. 55 illustrates a reshape function, in accordance with aspects of the present technology. -
FIGS. 56A-56D illustrate a deconvolution function, in accordance with aspects of the present technology. -
FIG. 57 illustrates a deconvolution function, in accordance with aspects of the present technology. -
FIG. 58 illustrates a sigmoid function, in accordance with aspects of the present technology. -
FIG. 59 illustrates a feature map, in accordance with aspects of the present technology. -
FIG. 60 illustrates a B-float encoded data value, in accordance with aspects of the present technology. -
FIG. 61 illustrates a logical view of a feature map encoded in Group B-float from an output side of a compute core, in accordance with aspects of the present technology. -
FIG. 62 illustrates a logical view of a feature map encoded in Group B-float from an input side of a compute core, in accordance with aspects of the present technology. -
FIG. 63 illustrates another logical view of a feature map and weights encoded in Group B-float from an input side of a compute core, in accordance with aspects of the present technology. -
FIG. 64 illustrates storage of B-float encoded feature map data in a narrow flat memory organization, in accordance with aspects of the present technology. -
FIG. 65 illustrates storage of B-float encoded feature map data in a wide memory organization, in accordance with aspects of the present technology. -
FIG. 66 illustrates storage of Group B-float encoded feature map data in a wide memory organization, in accordance with aspects of the present technology. -
FIG. 67 illustrates accuracy of calculations on Group B-float encoded ResNet-50 feature map pixel values for different group sizes, in accordance with aspects of the present technology. -
FIG. 68 illustrates accuracy of calculations on Group B-float encoded MobileNet feature map pixel values for different group sizes, in accordance with aspects of the present technology. - Reference will now be made in detail to the embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the present technology will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the technology to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present technology, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, it is understood that the present technology may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present technology.
- Some embodiments of the present technology which follow are presented in terms of routines, modules, logic blocks, and other symbolic representations of operations on data within one or more electronic devices. The descriptions and representations are the means used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A routine, module, logic block and/or the like, is herein, and generally, conceived to be a self-consistent sequence of processes or instructions leading to a desired result. The processes are those including physical manipulations of physical quantities. Usually, though not necessarily, these physical manipulations take the form of electric or magnetic signals capable of being stored, transferred, compared and otherwise manipulated in an electronic device. For reasons of convenience, and with reference to common usage, these signals are referred to as data, bits, values, elements, symbols, characters, terms, numbers, strings, and/or the like with reference to embodiments of the present technology.
- It should be borne in mind, however, that these terms are to be interpreted as referencing physical manipulations and quantities and are merely convenient labels and are to be interpreted further in view of terms commonly used in the art. Unless specifically stated otherwise as apparent from the following discussion, it is understood that through discussions of the present technology, discussions utilizing the terms such as “receiving,” and/or the like, refer to the actions and processes of an electronic device such as an electronic computing device that manipulates and transforms data. The data is represented as physical (e.g., electronic) quantities within the electronic device's logic circuits, registers, memories and/or the like, and is transformed into other data similarly represented as physical quantities within the electronic device.
- In this application, the use of the disjunctive is intended to include the conjunctive. The use of definite or indefinite articles is not intended to indicate cardinality. In particular, a reference to “the” object or “a” object is intended to denote also one of a possible plurality of such objects. The use of the terms “comprises,” “comprising,” “includes,” “including” and the like specify the presence of stated elements, but do not preclude the presence or addition of one or more other elements and/or groups thereof. It is also to be understood that although the terms first, second, etc. may be used herein to describe various elements, such elements should not be limited by these terms. These terms are used herein to distinguish one element from another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of embodiments. It is also to be understood that when an element is referred to as being “coupled” to another element, it may be directly or indirectly connected to the other element, or an intervening element may be present. In contrast, when an element is referred to as being “directly connected” to another element, there are no intervening elements present. It is also to be understood that the term “and/or” includes any and all combinations of one or more of the associated elements. It is also to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
- Referring now to
FIG. 1, a memory processing unit, in accordance with aspects of the present technology, is shown. The memory processing unit 100 can include a plurality of memory regions 110-130, a plurality of processing regions 135-150, one or more communication links 155, and one or more centralized or distributed control circuitry 160. The plurality of memory regions 110-130 can also be referred to as activation memory. The plurality of processing regions 135-150 can be interleaved between the plurality of memory regions 110-130. The processing regions 135-150 can be interleaved in an alternating regular pattern of a processing region 135, a memory region 115, a processing region 140, a memory region 120, a processing region 145, and so on. In one implementation, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 can have respective predetermined sizes. The plurality of processing regions 135-150 can have the same design. Similarly, the plurality of memory regions 110-130 can also have the same design. In one implementation, the plurality of memory regions 110-130 can be static random access memory (SRAM), and the plurality of processing regions 135-150 can include one or more arrays of resistive random access memory (ReRAM), magnetic random access memory (MRAM), phase change random access memory (PCRAM), Flash memory (FLASH), or the like. - One or more of the plurality of processing regions 135-150 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a
first processing region 135 can be configured to perform two computation functions, and a second processing region 140 can be configured to perform a third computation function. In another example, the first processing region 135 can be configured to perform three instances of a first computation function, and the second processing region 140 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of the one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more of the plurality of processing regions 135-150. The processing regions 135-150 can each include one or more memory processing units, memory processing unit cores, or the like. The memory processing units and/or cores can implement computation functions in arrays of memory cells without changing the basic memory array structure. The one or more centralized or distributed control circuitry 160 can configure the one or more computation functions of the one or more of the plurality of processing regions 135-150. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and/or the like. - A central data flow direction can be utilized with the plurality of memory regions 110-130 and plurality of processing regions 135-150. The one or more centralized or distributed
control circuitry 160 can control data flow into each given one of the plurality of processing regions 135-150 from a first adjacent one of the plurality of memory regions 110-130 to a second adjacent one of the plurality of memory regions 110-130. For example, the one or more control circuitry 160 can configure data to flow into a first processing region 135 from a first memory region 110 and out to a second memory region 115. Similarly, the one or more control circuitry 160 can configure data to flow into a second processing region 140 from the second memory region 115 and out to a third memory region 120. The control circuitry 160 can include a centralized control circuitry, distributed control circuitry or a combination thereof. If distributed, the control circuitry 160 can be local to the plurality of memory regions 110-130, the plurality of processing regions 135-150, and/or one or more communication links 155. - In one implementation, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 can be columnal interleaved with each other. The data can be configured by the one or more centralized or distributed
control circuitry 160 to flow between adjacent columnal interleaved processing regions 135-150 and memory regions 110-130 in a cross-columnal direction. In one implementation, the data can flow in a unidirectional cross-columnal direction between adjacent processing regions 135-150 and memory regions 110-130. For example, data can be configured to flow from a first memory region 110 into a first processing region 135, from the first processing region 135 out to a second memory region 115, from the second memory region 115 into a second processing region 140, and so on. In another implementation, the data can flow in a bidirectional cross-columnal direction between adjacent processing regions 135-150 and memory regions 110-130. In addition or alternatively, data within respective ones of the processing regions 135-150 can flow between functions within the same processing region. For example, for a first processing region 135 configured to perform two computation functions, data can flow from the first computation function directly to the second computation function without being written to or read from an adjacent memory region. - The one or
more communication links 155 can be coupled between the interleaved plurality of memory regions 110-130 and plurality of processing regions 135-150. The one or more communication links 155 can be configured for moving data between non-adjacent ones of the plurality of memory regions 110-130, between non-adjacent ones of the plurality of processing regions 135-150, or between non-adjacent ones of a given memory region and a given processing region. For example, the one or more communication links 155 can be configured for moving data between the second memory region 115 and a fourth memory region 125. In another example, the one or more communication links 155 can be configured for moving data between the first processing region 135 and a third processing region 145. In another example, the one or more communication links 155 can be configured for moving data between the second memory region 115 and the third processing region 145, or between the second processing region 140 and a fourth memory region 125. - Generally, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 are configured such that partial sums move in a given direction through a given processing region. In addition, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 are generally configured such that edge outputs move in a given direction from a given processing region to an adjacent memory region. The terms partial sums and edge outputs are used herein to refer to the results of a given computation function or a segment of a computation function. The computation functions of the plurality of processing regions 135-150 and the dataflow between the plurality of processing regions 135-150 and the memory regions 110-130 can be conceptualized as a plurality of producers and consumers. Computation functions of a given processing region can consume data from a given memory region and produce output data to a next memory region.
The output data stored in the given memory region can then be consumed by computation functions of a next given processing region. Accordingly, producers and consumers communicate through shared memory regions 110-130. The computation functions and dataflow between adjacent processing regions 135-150 and memory regions 110-130 can be mapped to ensure adjacency requirements are met. The shared data can therefore be synchronized in a dataflow manner without a global centralized control unit.
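This producer-consumer hand-off through a shared memory region can be sketched in a few lines of Python. The sketch below is illustrative only (the class name, the bounded-buffer model, and the counter scheme are assumptions for the example, not the patented circuitry): each side checks only its own produced/consumed counts against the buffer capacity, so the two sides stay synchronized without a global centralized control unit.

```python
# Illustrative model of a shared memory region between a producer compute
# core and a consumer compute core. Capacity-bounded; no central controller.

class SharedRegion:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.written = 0  # total values produced so far
        self.read = 0     # total values consumed so far

    def can_produce(self):
        # Producer may write while unconsumed entries still fit in the buffer.
        return self.written - self.read < len(self.buf)

    def can_consume(self):
        # Consumer may read while the producer is ahead of it.
        return self.read < self.written

    def produce(self, value):
        assert self.can_produce(), "producer must wait"
        self.buf[self.written % len(self.buf)] = value
        self.written += 1

    def consume(self):
        assert self.can_consume(), "consumer must wait"
        value = self.buf[self.read % len(self.buf)]
        self.read += 1
        return value
```

A producer core that finds `can_produce()` false simply stalls until the consumer catches up, mirroring the dataflow-style synchronization described above.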
- Referring to
FIG. 2, a memory processing unit (MPU), in accordance with aspects of the present technology, is shown. The memory processing unit 200 can include a first memory 202-210 and a plurality of processing regions 212-216. The first memory 202-210 can include a plurality of memory regions. The plurality of processing regions 212-216 can be interleaved between the plurality of regions 202-210 of the first memory. The processing regions 212-216 and plurality of first memory regions 202-210 can be interleaved in an alternating regular pattern of a processing region 212, a memory region 204, a processing region 214, a memory region 206, a processing region, and so on. The plurality of first memory regions 202-210 can be volatile memory, such as static random-access memory (SRAM) or the like. The processing regions 212-216 can include a plurality of compute cores 220-232. The plurality of compute cores 220-232 of respective ones of the plurality of processing regions 212-216 can be coupled between adjacent ones of the plurality of regions of the first memory 202-210. For example, the compute cores 220-228 of a first processing region 212 can be coupled between a first region 202 and a second region 204 of the first memory 202-210. The compute cores 220-232 in each respective processing region 212-216 can be configurable in one or more clusters 234-238. For example, a first set of compute cores in a first processing region 212 can be configurable in a first cluster 234. Similarly, a second set of compute cores 224-228 in the first processing region can be configurable in a second cluster 236. The plurality of compute cores 220-232 of respective ones of the plurality of processing regions 212-216 can also be configurably couplable in series.
For example, a set of compute cores 220-224 in a first processing region 212 can be communicatively coupled in series, with a second compute core 222 receiving data and/or instructions from a first compute core 220, and a third compute core 224 receiving data and/or instructions from the second compute core 222. - The
memory processing unit 200 can also include a second memory 218. The second memory 218 can be coupled to the plurality of processing regions 212-216. The second memory 218 can optionally be logically or physically organized into a plurality of regions. The plurality of regions of the second memory 218 can be associated with corresponding ones of the plurality of processing regions 212-216. In addition, the plurality of regions of the second memory 218 can include a plurality of blocks organized in one or more macros. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory. - One or more of the plurality of processing regions 212-216 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a
first processing region 212 can be configured to perform two computation functions, and a second processing region 214 can be configured to perform a third computation function. In another example, the first processing region 212 can be configured to perform three instances of a first computation function, and the second processing region 214 can be configured to perform a second and third computation function. Similarly, the compute cores 220-232 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. The compute cores 220-232 of the plurality of processing regions 212-216 can each include one or more memory processing units, memory processing unit cores, or the like. The memory processing units and/or cores can implement computation functions in arrays of memory cells without changing the basic memory array structure. - The
memory processing unit 200 can further include an inter-layer-communication (ILC) unit 240. The ILC unit 240 can be global or distributed across the plurality of processing regions 212-216. In one implementation, the ILC unit 240 can include a plurality of ILC modules 242-246, wherein each ILC module can be coupled to a respective processing region 212-216. Each ILC module can also be coupled to the respective regions of the first memory 202-210 adjacent the corresponding respective processing regions 212-216. The inter-layer-communication unit 240 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. - The
memory processing unit 200 can further include one or more input/output (I/O) stages. The one or more input/output stages can be coupled to respective regions of the first memory 202-210 and can be configured to stream data into or out of the memory processing unit 200. For example, one or more of the input/output (I/O) stages can be configured to stream data into a first one of the plurality of regions of the first memory 202-210. Similarly, one or more input/output (I/O) stages can be configured to stream data out of a last one of the plurality of regions of the first memory 202-210. - The plurality of processing regions 212-216 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 202-210 to one or more cores 220-232 within adjacent ones of the plurality of processing regions 212-216. The plurality of processing regions 212-216 can also be configurable for core-to-memory dataflow from one or more cores 220-232 within ones of the plurality of processing regions 212-216 to adjacent ones of the plurality of regions of the first memory 202-210. The plurality of processing regions 212-216 and plurality of regions of the first memory 202-210 can also be configured for memory-to-core-to-memory dataflow. For example, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 202-210 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 202-210. In one implementation, the computation functions of compute cores and dataflow between processing regions 212-216 and first memory 202-210 can be organized to ensure adjacency requirements are met, so that dataflow of shared data can be synchronized therebetween without a global centralized control unit.
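The memory-to-core-to-memory dataflow just described can be approximated with a toy pipeline in Python. This is a sketch under stated assumptions (memory regions modeled as lists, each processing region as a single hypothetical computation function), not the hardware behavior:

```python
# Toy model: data flows in one direction through alternating memory (M) and
# processing (P) regions, e.g. M0 -> P0 -> M1 -> P1 -> M2.

def run_pipeline(input_data, compute_fns):
    """compute_fns[i] stands in for the computation configured on processing
    region i; memory regions are modeled as plain lists."""
    memories = [list(input_data)] + [[] for _ in compute_fns]
    for i, fn in enumerate(compute_fns):
        # Processing region i consumes from the first adjacent memory region
        # and produces into the second adjacent memory region.
        memories[i + 1] = [fn(x) for x in memories[i]]
    return memories[-1]

# Two hypothetical computation functions: scale, then offset.
result = run_pipeline([1, 2, 3], [lambda x: x * 2, lambda x: x + 1])
# result == [3, 5, 7]
```

Each intermediate list plays the role of an activation memory region shared between the producer stage before it and the consumer stage after it.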
- The plurality of processing regions 212-216 can also be configurable for memory-to-core data flow from the
second memory 218 to one or more cores 220-232 of corresponding ones of the plurality of processing regions 212-216. If the second memory 218 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 218 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 212-216. - The plurality of processing regions 212-216 can be further configurable for core-to-core data flow between select adjacent compute cores 220-232 in respective ones of the plurality of processing regions 212-216. For example, a given
core 224 can be configured to share data, accessed from an adjacent portion of the first memory 202, with one or more other cores 226-228 configurably coupled in series with the given compute core 224. In another example, a given core 220 can be configured to pass data accessed from the second memory 218 to one or more other cores 222 configurably coupled in series with the given compute core 220. In yet another example, a given compute core 220 can pass a result, such as a partial sum, computed by the given compute core 220, to one or more other cores 222 configurably coupled in series with the given compute core 220. - Referring to
FIG. 3, a memory processing unit (MPU), in accordance with aspects of the present technology, is shown. The memory processing unit 300 can include a first memory and a plurality of processing regions 312-316. The first memory can include a plurality of regions 302-310. The plurality of processing regions 312-316 can be interleaved between the plurality of regions of the first memory 302-310. The processing regions 312-316 can include a plurality of compute cores 320-332. The plurality of compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can be coupled between adjacent ones of the plurality of regions of the first memory 302-310. For example, the compute cores 320-328 of a first processing region 312 can be coupled between a first region 302 and a second region 304 of the first memory 302-310. The compute cores 320-332 in each respective processing region 312-316 can be configurable in one or more clusters 334-338. For example, a first set of compute cores 320, 322 in a first processing region 312 can be configurable in a first cluster 334. Similarly, a second set of compute cores 324-328 in the first processing region can be configurable in a second cluster 336. The plurality of compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can also be configurably couplable in series. For example, a set of compute cores 320-324 in a first processing region 312 can be communicatively coupled in series, with a second compute core 322 receiving data and/or instructions from a first compute core 320, and a third compute core 324 receiving data and/or instructions from the second compute core 322. - The
memory processing unit 300 can also include a second memory 318. The second memory 318 can be coupled to the plurality of processing regions 312-316. The second memory 318 can optionally be logically or physically organized into a plurality of regions. The plurality of regions of the second memory 318 can be associated with corresponding ones of the plurality of processing regions 312-316. In addition, the plurality of regions of the second memory 318 can include a plurality of blocks organized in one or more macros. The first memory 302-310 can be volatile memory, such as static random-access memory (SRAM) or the like. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory. - The
memory processing unit 300 can further include an inter-layer-communication (ILC) unit 340. The ILC unit 340 can be global or distributed across the plurality of processing regions 312-316. In one implementation, the ILC unit 340 can include a plurality of ILC modules, wherein each ILC module can be coupled to a respective processing region 312-316. Each ILC module can also be coupled to the respective regions of the first memory 302-310 adjacent the corresponding respective processing regions 312-316. The inter-layer-communication unit 340 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. The inter-layer communication unit 340 can map the computation functions of compute cores and dataflow between processing regions 312-316 and first memory 302-310 on an adjacency basis so that dataflow of shared data can be synchronized therebetween. - The
memory processing unit 300 can further include one or more input/output (I/O) stages. The one or more input/output stages can be coupled to respective regions of the first memory 302-310 and can be configured to stream data into or out of the memory processing unit 300. For example, one or more of the input/output (I/O) stages can be configured to stream data into a first one of the plurality of regions of the first memory 302-310. Similarly, one or more input/output (I/O) stages can be configured to stream data out of a last one of the plurality of regions of the first memory 302-310. - The plurality of processing regions 312-316 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 302-310 to one or more cores 320-332 within adjacent ones of the plurality of processing regions 312-316. The plurality of processing regions 312-316 can also be configurable for core-to-memory dataflow from one or more cores 320-332 within ones of the plurality of processing regions 312-316 to adjacent ones of the plurality of regions of the first memory 302-310. In one implementation, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 302-310 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 302-310.
- The plurality of processing regions 312-316 can also be configurable for memory-to-core data flow from the
second memory 318 to one or more cores 320-332 of corresponding ones of the plurality of processing regions 312-316. If the second memory 318 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 318 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 312-316. -
first memory 302, with one or more other cores 326-328 configurably coupled in series with the given compute core 324. In another example, a given core 320 can be configured to pass data, accessed from thesecond memory 318, with one or more other cores 322 configurably coupled in series with the given compute core 320. In yet another example, a given compute core 320 can pass a result, such as a partial sum, computed by the given compute core 320, to one or more other cores 322 configurably coupled in series with the given compute core 320. - The plurality of processing regions 312-316 can include one or more near memory (M) compute cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. For example, the one or more near memory (M) compute cores can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and or partial products thereof.
- The plurality of processing regions 312-316 can also include one or more arithmetic (A) compute cores. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. For example, the arithmetic (A) compute cores can be configured to compute merge operations, arithmetic calculations that are not supported by the near memory (M) compute cores, and or the like.
- The plurality of input and output regions can also include one or more input/output (I/O) cores. The one or more input/output (I/O) cores can be configured to access input and/or output ports of the memory processing unit (MPU). - The compute cores 320-332 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective processing regions 312-316, and/or simultaneously with other cores in other processing regions 312-316. The compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can be associated with one or more blocks of the
second memory 318. The compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can be associated with respective slices of the second plurality of memory regions. The cores 320-332 can also include a plurality of configurable virtual channels. - Referring now to
FIG. 4, a memory processing unit, in accordance with aspects of the present technology, is shown. The memory processing unit 400 can include a first memory and a plurality of processing regions 410-414. The first memory can include a plurality of memory regions 402-408. The plurality of processing regions 410-414 can be interleaved between the plurality of memory regions 402-408 of the first memory. In one implementation, the plurality of memory regions 402-408 and the plurality of processing regions 410-414 can have respective predetermined sizes. One or more of the plurality of memory regions 402-408 can include a plurality of memory blocks 416-432. One or more processing regions 410-414 can also include a plurality of core groups 434-448. A core group 434-448 can include one or more compute cores. The compute cores in a respective core group can be arranged in one or more compute clusters. One or more of the plurality of core groups of a respective one of the plurality of processing regions can be coupled between adjacent ones of the plurality of memory regions of the first memory. In one implementation, a given core group can be coupled to a set of directly adjacent memory blocks, while not coupled to the other memory blocks of the adjacent memory regions. In other words, a core group of a respective processing region can be coupled to a set of memory blocks that are proximate to the given core group, while not coupled to memory blocks in the adjacent memory regions that are distal from the given core group. For example, a first core group 434 of a first processor region 410 can be coupled between a first memory block 416 of a first memory region 402 and a first memory block 422 of a second memory region 404. A second core group 436 of the first processor region 410 can be coupled to the first and second memory blocks 416, 418 of the first memory region 402 and the first and second memory blocks 422, 424 of the second memory region 404.
The second core group 436 of the first processor region 410 can also be coupled between the first and a third core group of the first processor region 410. - One or more of the compute cores, and/or one or more core groups of the plurality of processing regions 410-414 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first compute core, a
first core group 434 or a first processing region 410 can be configured to perform two computation functions, and a second compute core, second core group or second processing region 412 can be configured to perform a third computation function. In another example, a first compute core, the first core group 434 or the first processing region 410 can be configured to perform three instances of a first computation function, and a second compute core, second core group or second processing region 412 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of a compute core, core group or one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more compute cores, one or more core groups or one or more of the processing regions 410-414. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and/or the like. - The
memory processing unit 400 can also include one or more inter-layer communication (ILC) units 450-456. The ILC unit 450-456 can be global or distributed across the plurality of processing regions 410-414. In one implementation, the ILC unit 450-456 can include a plurality of ILC modules 450-456, wherein each ILC module can be coupled to adjacent respective processing regions 410-414. Each ILC module 450-456 can also be coupled to adjacent respective regions of the first memory 402-408. The inter-layer-communication units 450-456 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. Again, the inter-layer communication unit 450-456 can map the computation functions of compute cores and dataflow between processing regions 410-414 and first memory 402-408 based on adjacency so that dataflow of shared data can be synchronized therebetween. - The compute cores of the core groups 434-448 of the processing regions 410-414 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective core groups 434-448 and/or processing regions 410-414, and/or simultaneously with other cores in other core groups 434-448 and/or processing regions 410-414. The compute cores can also include a plurality of configurable virtual channels.
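The producer/consumer synchronization performed by the ILC units can be pictured with a simple counter model: a producing core's writeback increments a counter when it commits data to a first-memory region, and a consuming core's fetch decrements the counter before reading, stalling while the counter is zero. The following is a software analogy only; the class and method names are invented for the sketch and do not appear in the specification.

```python
# Minimal, hypothetical model of an inter-layer-communication (ILC)
# counter: the producer increments on writeback, the consumer
# decrements on fetch, and a zero counter means the consumer must stall.

from collections import deque

class ILCCounter:
    def __init__(self):
        self.count = 0
        self.buffer = deque()

    def produce(self, item):
        """Writeback unit: commit data, then increment the counter."""
        self.buffer.append(item)
        self.count += 1

    def consume(self):
        """Fetch unit: decrement the counter, then read; None means stall."""
        if self.count == 0:
            return None  # no data committed yet: consumer stalls
        self.count -= 1
        return self.buffer.popleft()

ilc = ILCCounter()
assert ilc.consume() is None  # nothing produced yet, so the consumer stalls
ilc.produce("tile0")
ilc.produce("tile1")
print(ilc.consume())  # tile0 — consumed in production order
```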
- Relatively large flat memory regions such as the plurality of first memory regions described above with reference to
FIGS. 1-3 may not be able to provide enough memory bandwidth to achieve a target performance level. Therefore, organizing each of the memory regions 402-408 into a plurality of memory blocks 416-432 and coupling a core group 436 of a respective processing region 410 to a set of memory blocks 416, 418, 422, 424 that are proximate to the given core group 436, while not coupled to memory blocks in the adjacent memory regions that are distal from the given core group, as described above with reference to FIG. 4, can increase memory bandwidth throughput. Accordingly, providing more, but smaller, flat memory blocks by organizing each of the plurality of memory regions 402-408 into respective sets of a plurality of memory blocks 416-432 can provide increased memory bandwidth for increased performance. The smaller flat memory blocks can also provide the potential for better chip layout as compared to larger flat memory organizations. However, the increased number of the smaller flat memory blocks can make adjacency mapping for dataflow more challenging. - Referring now to
FIG. 5, a memory processing unit, in accordance with aspects of the present technology, is shown. The memory processing unit 500 can include a first memory 402-408 and a plurality of processing regions 410-414. The first memory can include a plurality of memory regions 402-408. The plurality of processing regions 410-414 can be interleaved between the plurality of memory regions 402-408 of the first memory. In one implementation, the plurality of first memory regions 402-408 and the plurality of processing regions 410-414 can have respective predetermined sizes. One or more of the plurality of memory regions 402-408 can include a plurality of memory blocks 416-432. One or more processing regions 410-414 can also include a plurality of core groups 434-448. A core group 434-448 can include one or more compute cores. The compute cores in a respective core group can be arranged in one or more compute clusters. One or more of the plurality of core groups of a respective one of the plurality of processing regions can be coupled between adjacent memory blocks of adjacent ones of the plurality of memory regions of the first memory. In one implementation, a given core group can be coupled to a set of directly adjacent memory blocks, while not coupled to the other memory blocks of the adjacent memory regions. In other words, a core group of a respective processing region can be coupled to a set of memory blocks that are proximate to the given core group, while not coupled to memory blocks in the adjacent memory regions that are distal from the given core group. For example, a first core group 434 of a first processor region 410 can be coupled between a first memory block 416 of a first memory region 402 and a first memory block 422 of a second memory region 404. A second core group 436 of the first processor region 410 can be coupled to the first and second memory blocks 416, 418 of the first memory region 402 and the first and second memory blocks 422, 424 of the second memory region 404.
The second core group 436 of the first processor region 410 can also be coupled between the first and a third core group of the first processor region 410. - The
memory processing unit 500 can also include a second memory 510. The second memory 510 can be coupled to the plurality of processing regions 410-414. The second memory 510 can optionally be logically or physically organized into a plurality of regions (not shown). The plurality of regions of the second memory 510 can be associated with corresponding ones of the plurality of processing regions 410-414. In addition, the plurality of regions of the second memory 510 can include a plurality of blocks organized in one or more macros. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory. - One or more of the compute cores, and/or one or more core groups of the plurality of processing regions 410-414 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first compute core, a
first core group 434 or a first processing region 410 can be configured to perform two computation functions, and a second compute core, second core group or second processing region 412 can be configured to perform a third computation function. In another example, the first compute core, the first core group 434 or the first processing region 410 can be configured to perform three instances of a first computation function, and the second compute core, second core group or the second processing region 412 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of a compute core, core group or one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more compute cores, one or more core groups or one or more of the processing regions 410-414. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and/or the like. - The dataflow can be configured by one or more centralized or distributed control circuitry, such as the inter-layer communication (ILC) units 450-456, to flow between adjacent columnal interleaved processing regions 410-414 and memory regions 402-408 in a cross-columnal direction. In one implementation, one or more communication links can be coupled between the interleaved plurality of memory regions 402-408 and plurality of processing regions 410-414. The one or more communication links can also be configured for moving data between non-adjacent ones of the plurality of memory regions 402-408, between non-adjacent ones of the plurality of processing regions 410-414, or between non-adjacent ones of a given memory region and a given processing region.
- The plurality of processing regions 410-414 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 402-408 to one or more cores within adjacent ones of the plurality of processing regions 410-414. The plurality of processing regions 410-414 can also be configurable for core-to-memory dataflow from one or more cores within ones of the plurality of processing regions 410-414 to adjacent ones of the plurality of regions of the first memory 402-408. In one implementation, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 402-408 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 402-408.
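A minimal software analogy of this one-directional dataflow, under the simplifying assumptions that each processing region applies one function and each first-memory region buffers one intermediate result, might look like the following sketch. The function and variable names are invented for illustration.

```python
# Hedged sketch of the interleaved memory/processing layout: data flows
# in one direction, from first-memory region N, through processing
# region N, into first-memory region N+1.

def run_pipeline(input_data, layer_fns):
    """Propagate data through interleaved memory and processing regions."""
    # memory_regions[0] holds the input; processing region i reads
    # memory_regions[i] and writes memory_regions[i + 1].
    memory_regions = [input_data] + [None] * len(layer_fns)
    for i, fn in enumerate(layer_fns):
        memory_regions[i + 1] = [fn(v) for v in memory_regions[i]]
    return memory_regions[-1]

def double(v): return 2 * v
def relu(v): return max(v, 0)

print(run_pipeline([-1, 2, 3], [double, relu]))  # [0, 4, 6]
```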
- The plurality of processing regions 410-414 can also be configurable for memory-to-core data flow from the
second memory 510 to one or more cores of corresponding ones of the plurality of processing regions 410-414. If the second memory 510 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 510 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 410-414. - The plurality of processing regions 410-414 can be further configurable for core-to-core data flow between select adjacent compute cores in respective ones of the plurality of processing regions 410-414. For example, a given core can be configured to share data accessed from an adjacent portion of the
first memory 402 with one or more other cores configurably coupled in series with the given compute core. In another example, a given core can be configured to share data accessed from the second memory 510 with one or more other cores configurably coupled in series with the given compute core. In yet another example, a given compute core can pass a result, such as a partial sum, computed by the given compute core to one or more other cores configurably coupled in series with the given compute core. - Again, relatively large flat memory regions, such as the plurality of first memory regions described above with reference to
FIGS. 1-3, may not be able to provide enough memory bandwidth to achieve a target performance level. Therefore, organizing each of the memory regions 402-408 into a plurality of memory blocks 416-432 and coupling a core group 436 of a respective processing region 410 to a set of memory blocks 416, 418, 422, 424 that are proximate to the given core group 436, while not coupled to memory blocks in the adjacent memory regions that are distal from the given core group, as described above with reference to FIG. 4, can increase memory bandwidth throughput. Accordingly, providing more, but smaller, flat memory blocks by organizing each of the plurality of memory regions 402-408 into respective sets of a plurality of memory blocks 416-432 can provide increased memory bandwidth for increased performance. The smaller flat memory blocks can also provide the potential for better chip layout as compared to larger flat memory organizations. However, the increased number of the smaller flat memory blocks can make adjacency mapping for dataflow more challenging. - The plurality of processing regions 410-414 can include one or more near memory (M) compute cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. For example, the one or more near memory (M) compute cores can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and/or partial products thereof.
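One way to picture the proximate-block coupling rule described above is to assume that "proximate" means a fixed window of block rows around a core group's position in its processing region. The sketch below encodes that assumption; the `reach` parameter and the indexing scheme are illustrative inventions, not taken from the specification.

```python
# Illustrative model of block-level adjacency: a core group at index
# `group_index` is coupled only to the memory blocks of each adjacent
# memory region that fall within `reach` block rows of its position.

def coupled_blocks(group_index, blocks_per_region, reach=1):
    """Return the proximate block indices in one adjacent memory region."""
    lo = max(group_index - reach, 0)
    hi = min(group_index + reach, blocks_per_region - 1)
    return list(range(lo, hi + 1))

# Edge groups see a truncated window; interior groups see a full one.
print(coupled_blocks(0, 4))  # [0, 1]
print(coupled_blocks(2, 4))  # [1, 2, 3]
```

Narrowing each core group's reach to a window of nearby blocks is what lets many small blocks be accessed in parallel, which is the bandwidth benefit described above, at the cost of harder adjacency mapping.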
- The plurality of processing regions 410-414 can also include one or more arithmetic (A) compute cores. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. For example, the arithmetic (A) compute cores can be configured to compute merge operations, arithmetic calculations that are not supported by the near memory (M) compute cores, and/or the like.
- A plurality of input and output regions (not shown) can also include one or more input/output (I/O) cores. The one or more input/output (I/O) cores can be configured to access input and/or output ports of the memory processing unit (MPU) 500. The term input/output (I/O) core as used herein can refer to cores configured to access input ports, cores configured to access output ports, or cores configured to access both input and output ports.
- The compute cores can also include other types of compute cores such as graph processing cores or the like. The compute cores of the core groups 434-448 of the processing regions 410-414 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective core groups 434-448 and/or processing regions 410-414, and/or simultaneously with other cores in other core groups 434-448 and/or processing regions 410-414. The compute cores can also include a plurality of configurable virtual channels.
- The plurality of memory regions 402-408 can also be organized into a plurality of memory blocks arranged in a plurality of columns and rows for each memory region 402-408. For example, each given
memory region 404 can be organized into a plurality of memory blocks, m blocks wide and n blocks long, wherein m and n can be different or equal. A fetch unit for a respective processing region, core group or compute core can be configured to fetch from sets of memory blocks of respective adjacent memory regions. Similarly, a writeback unit for a respective processing region, core group or compute core can be configured to write back to a set of memory blocks of respective adjacent memory regions. The organization of the plurality of memory blocks in a plurality of columns and rows can provide further increased memory bandwidth for increased performance. The organization of the plurality of memory blocks arranged in a plurality of columns and rows is further explained below with reference to FIGS. 53A-53D and 54A-54C. - In accordance with aspects of the present technology, a neural network layer, a part of a neural network layer, or a plurality of fused neural network layers can be mapped to a single cluster of compute cores or a core group as a mapping unit. A cluster of compute cores is a set of cores of a given processing region that are configured to work together to compute a mapping unit. For example, the nodes of a
first layer 610 of a neural network can be mapped as a mapping unit to a first set of compute cores, the nodes of a second layer 620 can be mapped to a second set of compute cores, while the nodes of a third layer 630 can be mapped to a third set of compute cores, as illustrated in FIG. 6. Furthermore, a mapping unit 710 can be computed by a compute core cluster 720 as illustrated in FIG. 7. Optionally, more compute cores than are needed to compute a mapping unit can be configured in a compute cluster to improve computing performance. - Referring now to
FIGS. 8A-8B, an exemplary computation of multiple output feature map pixels, in accordance with aspects of the present technology, is illustrated. One or more compute cores can be configured to compute a corresponding output feature map pixel from an input feature map pixel value and a kernel data (weight) value. As illustrated, compute cores can be configured as three pixel workers to compute output feature map pixel values for each of the output channels. For example, a given pixel worker can compute output feature map pixel values 810-850 for each of the output channels of the output feature map. The pixel workers can then step to the next set of three pixel values to compute the corresponding output channels of the output feature map, as illustrated in FIG. 8B. In a polymorphic implementation, multiple compute cores can work together as pixel workers. The maximum number of pixel workers for a given layer is limited to the output feature map width of the given layer. The kernel, weight data or the like can be reused without reloading it from the second memory region. - Referring now to
FIG. 9, configuration of dataflows in the memory processing unit, in accordance with aspects of the present technology, is illustrated. The dataflow between the first memory and the compute cores can be configured from a first region of the first memory 960, through the compute cores 950-956, to a second region of the first memory 962. Alternatively, the dataflow can be configured from the second region of the first memory 962, through the compute cores 950-956, to the first region of the first memory 960. In one implementation, the dataflow between the compute cores 950-956 of the processing regions and adjacent regions of the first memory 960-962 can provide a direct route to access feature map data or the like. - The dataflow 930 from the
second memory 970 to the compute cores of the processing regions can also be configured. In one implementation, the dataflow from the second memory 970 to the compute cores 950-956 can provide a direct route to access kernel data, weight data or the like. The dataflow 940 between the compute cores 950-956 can also be configured. In one implementation, the dataflow between the compute cores 950-956 can provide for the sharing of data from the second memory with others of the compute cores 950-956 in a corresponding core group and/or processing region. - The plurality of processing regions can include one or more near memory (M) compute cores, one or more arithmetic (A) compute cores, and one or more input/output (I/O) cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. The one or more input/output (I/O) cores can be configured to access input and/or output ports of the memory processing unit (MPU).
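The near memory (M) compute core's role, fetching an input from one first-memory region, running it through a multiply-and-accumulate stage against weights held in the second memory, and writing the activated result to the next first-memory region, can be modeled in software roughly as follows. The dictionary-based memory model, the row-per-output-channel weight layout, and the fused ReLU are simplifying assumptions for illustration.

```python
# Hypothetical software model of a near-memory (M) compute core's
# fetch -> MAC array -> writeback sequence.

def near_memory_core(first_memory, region_in, region_out, weights):
    x = first_memory[region_in]                        # fetch unit
    y = [sum(xi * wi for xi, wi in zip(x, row))        # MAC array:
         for row in weights]                           # vector-matrix product
    first_memory[region_out] = [max(v, 0) for v in y]  # writeback w/ fused ReLU

mem = {"region0": [1, -2, 3], "region1": None}
W = [[1, 0, 0], [0, 1, 0], [1, 1, 1]]  # each row: one output channel's weights
near_memory_core(mem, "region0", "region1", W)
print(mem["region1"])  # [1, 0, 2]
```

Keeping the fetch and writeback targets as separate regions mirrors the one-directional region-to-region dataflow described earlier.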
- Referring now to
FIG. 10, a near memory (M) compute core, in accordance with aspects of the present technology, is shown. The near memory (M) compute core 1000 can include a fetch unit 1005, a multiply-and-accumulate (MAC) array unit 1010, a writeback unit 1015 and a switch 1020. The fetch unit 1005 can be configured to fetch data from an Nth portion of the first memory for the multiply-and-accumulate (MAC) array unit 1010. The fetch unit 1005 can also be configured to receive data from an N−1th compute core and/or pass data to an N+1th compute core within a respective processing region. The fetch unit 1005 can also be configured to receive data from the second memory. The fetch unit 1005 can also be configured to synchronize data movement of the Nth portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the fetch unit 1005 can be configured to control an operation sequence of the near memory (M) compute core 1000, to fetch data from the second memory or an adjacent one of a sequence of the plurality of compute cores in a respective processing region, to fetch data from an adjacent one of the plurality of regions of the first memory, to decrement an inter-layer-communication (ILC) counter, and to trigger other units of the near memory (M) core. - The multiply-and-accumulate (MAC)
array unit 1010 can be configured to compute neural network functions. For example, the multiply-and-accumulate (MAC) array unit 1010 can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and/or partial products thereof. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform per-channel and bias scaling. In one implementation, the multiply-and-accumulate (MAC) array unit 1010 can be configured to perform main operations such as, but not limited to, dense or fully connected convolutions, two-dimensional convolutions, depth-wise convolutions, and separable convolutions. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform fused operations such as, but not limited to, max pooling, average pooling, rectified linear (ReLU) activation, ReLU-x activation, and up-sampling. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform virtually fused operations such as, but not limited to, zero padding (folded into kernel corners), average pooling (folded into weights and biases), ReLU activation, ReLU-x activation, and up-sampling. - The
writeback unit 1015 can be configured to write data to an N+1th portion of the first memory for the multiply-and-accumulate (MAC) array unit 1010. The writeback unit 1015 can also be configured to synchronize data movement of the N+1th portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the writeback unit 1015 can be configured to perform a fuse operation, send data to an adjacent region of the first memory or adjacent compute core in the respective processing region, and to increment an inter-layer-communication (ILC) counter. - The
switch 1020 can configure memory accesses, chain directions and interfaces of the fetch and writeback units to ports of the respective near memory (M) compute core based on configuration information. The switch 1020 can be preconfigured with memory access and chain directions. The switch 1020 can therefore interface the fetch unit 1005 and writeback unit 1015 based on the data-flow configuration. - The near memory (M)
compute core 1000 can include a plurality of physical channels configurable to perform computations simultaneously. The near memory (M) compute core 1000 can also be associated with one or more blocks of the second memory. The physical channels of the near memory (M) compute core 1000 can be associated with respective slices of the second plurality of memory regions. The near memory (M) compute core 1000 can also include a plurality of configurable virtual channels. - Referring now to
FIG. 11, an arithmetic (A) compute core, in accordance with aspects of the present technology, is shown. The arithmetic (A) compute core 1100 can include a fetch unit 1105, an arithmetic unit 1110, a writeback unit 1115 and a switch 1120. Again, the fetch unit 1105 can be configured to fetch data from an Nth portion of the first memory for the arithmetic unit 1110. The fetch unit 1105 can also be configured to synchronize data movement of the Nth portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the fetch unit 1105 can be configured to control an operation sequence of the arithmetic unit 1110, to fetch data from an adjacent one of the plurality of regions of the first memory, decrement an inter-layer-communication (ILC) counter, and trigger other units of the arithmetic (A) compute core 1100. - The
arithmetic unit 1110 can be configured to compute arithmetic operations not supported by the multiply-and-accumulate (MAC) array unit 1010. For example, the arithmetic unit 1110 can be configured to compute merge operations and/or the like. The arithmetic unit 1110 can compute one or more output channels at a time. The arithmetic unit 1110 may not have access to the second memory. The arithmetic unit 1110 may have no means to pass data between adjacent cores in the same processing region. In one implementation, the arithmetic unit 1110 can be configured to perform main operations such as, but not limited to, add, multiply and bypass. The arithmetic unit 1110 can also be configured to perform fused operations such as, but not limited to, ReLU activation, ReLU-x activation, and leaky ReLU-x activation. - The
writeback unit 1115 can be configured to write data to an N+1th portion of the first memory for the arithmetic unit 1110. The writeback unit 1115 can also be configured to synchronize data movement of the N+1th portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the writeback unit 1115 can be configured to perform a fuse operation, send data to an adjacent region of the first memory or an adjacent compute core in the respective processing region, and to increment an inter-layer-communication (ILC) counter. - The
switch 1120 can configure memory accesses, chain directions and interfaces of the fetch and writeback units to ports of the arithmetic compute core based on configuration information. - Referring now to
FIG. 12, an input (I) core, in accordance with aspects of the present technology, is shown. The input (I) core 1200 can include an input port 1205, a writeback unit 1210 and a switch 1215. The input port 1205 can be configured to receive data into the memory processing unit and trigger the writeback unit 1210. The writeback unit 1210 can be configured to stream the received data into a first portion of the first memory and increment an inter-layer-communication (ILC) counter. The switch 1215 can be configured to connect the writeback unit 1210 to the adjacent regions of the first memory based on configuration information. In one implementation, an input stage can comprise a single or multiple input (I) cores 1200. - Referring now to
FIG. 13, an output (O) core, in accordance with aspects of the present technology, is shown. The output (O) core 1300 can include a fetch port 1305, an output unit 1310 and a switch 1315. The fetch port 1305 can be configured to stream data out from a last portion of the first memory and trigger the output unit 1310. The output unit 1310 can be configured to output data out of the memory processing unit. The switch 1315 can be configured to connect the fetch port 1305 to the adjacent regions of the first memory and the inter-layer-communication (ILC) unit based on configuration information. In one implementation, an output stage can comprise a single or multiple output (O) cores 1300. - Referring now to
FIG. 14, a whole channel compute core configuration, in accordance with aspects of the present technology, is shown. The compute cores of a given processing region can be configured in whole channel mode, wherein one or more compute cores perform computations independently of the other compute cores in a respective processing region. In the whole channel mode, the compute cores do not pass data 1410 sequentially from a given compute core to an adjacent compute core. Referring now to FIG. 15, in the whole channel mode, each compute core in the cluster computes a designated number of channels. Each of the cores is responsible for reading data and writing the output result on its own. For example, a whole channel mode configured compute core reads data from the Xth portion of the first memory region, and optionally the second memory region, performs a corresponding calculation and stores the result in the (X+1)th portion of the first memory region. The compute cores in whole channel mode do not share data with other compute cores and work as standalone compute cores. Referring now to FIG. 16, an exemplary whole channel compute core configuration is illustrated. In the illustrated example, the mapping unit has 22 output channels 1610 and is mapped to a three-compute-core cluster 1620-1640. Each compute core has four output physical channels. An input feature map 1650 is stored in an adjacent first portion of the first memory region, and an output feature map 1660 is stored in an adjacent second portion of the first memory region. As further illustrated in FIG. 17, each compute core 1620-1640 is configured to access weights for the respective output channels. Each compute core is configured to compute a product of the input feature map and the weights of respective sets of the 22 output channels 1710 of the output feature map. Each compute core is responsible for almost one-third of the computation workload.
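The whole channel mapping in this example, 22 output channels across a three-core cluster with four physical channels per core, can be sketched as a simple partitioning, with each core then stepping through its share of channels one physical-channel-wide pass at a time. The partitioning scheme (contiguous ranges, with the remainder given to the leading cores) is an assumption for illustration, not the mapping mandated by the specification.

```python
# Hedged sketch of whole channel mode: divide the output channels among
# the cores of a cluster, then chunk each core's share into passes of
# at most `phys_channels` channels (its physical channel count).

def split_channels(total_channels, num_cores, phys_channels):
    """Assign each core a contiguous channel range, chunked into passes."""
    base, extra = divmod(total_channels, num_cores)
    plan, start = [], 0
    for core in range(num_cores):
        n = base + (1 if core < extra else 0)  # leading cores take remainder
        channels = list(range(start, start + n))
        passes = [channels[i:i + phys_channels]
                  for i in range(0, n, phys_channels)]
        plan.append(passes)
        start += n
    return plan

plan = split_channels(22, 3, 4)
for core, passes in enumerate(plan):
    print(f"core {core}: {passes}")
```

Each core ends up with seven or eight channels, matching the "almost one-third of the computation workload" observation above, and covers them in two passes of its four physical channels.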
The second memory region can be organized based on output channels, resulting in the 22 output channels 1710 being mapped into five and a half virtual channel rows. Although the compute core cluster is illustrated as mapped over a single macro of the second memory region, the compute core cluster can also be mapped over a plurality of macros of the second memory region. - Referring now to
FIG. 18, a polymorphic second memory compute core configuration, in accordance with aspects of the present technology, is shown. The compute cores of a given processing region can be configured in a polymorphic configuration, wherein one or more compute cores share data from a given portion of the second memory region 1810 with adjacent compute cores. In the polymorphic second memory compute core configuration, each compute core of the cluster can compute all the output channels, but works on different pixels of an output feature map. Accordingly, the other compute cores in the cluster operate as workers for the first compute core. The maximum number of compute cores that can be assigned is the number of output feature map pixels of the mapping unit. The compute cores of the cluster access a different sequence of data in the second memory region since they are working on different pixels. Such a configuration can be used to reduce the number of accesses to the second memory region by sharing the data among cores in the cluster. The first compute core 1910 in a polymorphic second memory cluster has access to data in the corresponding portion of the second memory region 1940 and can share the data with the other compute cores 1920, 1930. All of the compute cores 1910-1930 can read data from an adjacent portion of the first memory region 1950, and all of the compute cores 1910-1930 can write results to the other adjacent portion of the first memory region 1960, as illustrated in FIG. 19. Referring now to FIGS. 20 and 21, an exemplary polymorphic second memory compute core configuration is illustrated. In the illustrated example, the compute cores 2010-2030 of a cluster can all access input feature map data in a first adjacent portion of the first memory region 2040, as illustrated in FIG. 20. The first compute core 2010 can access data in the second memory region 2110, and share the data with the other compute cores 2020, 2030 of the cluster, as illustrated in FIG. 21. In one implementation, the cluster can include 3 compute cores 2010-2030 mapped with a total of 22 output channels.
Each compute core can have four physical channels 2120. The top compute core 2010 of the chain is assigned the whole portion of the second memory region 2110 needed by the mapping, and accesses the whole 22 output channels of data. Each compute core computes all 22 output channels, but for different pixels. The other two compute cores 2020, 2030 access the first compute core 2010 rather than the second memory region 2110 to get weight data. The neighbor access can be done in a dataflow manner without special synchronization. Each compute core 2010-2030 in the cluster can then perform a respective computation and write the results as output feature map data to the other adjacent portion of the first memory region 2050, as illustrated in FIG. 20. - Referring now to
FIG. 22, a polymorphic first memory compute core configuration, in accordance with aspects of the present technology, is shown. The compute cores of a given processing region can be configured in a polymorphic configuration, wherein one or more cores share data from a given portion of the first memory region 2210 with adjacent compute cores. The polymorphic first memory compute core configured cluster is equivalent to a wider core with more physical channels. Such a configuration can be used to improve reuse of data in the first memory region and reduce the total number of accesses to the corresponding portion of the first memory region. It should also be noted that reuse of data in the first memory region is also an inherent property of the compute core configuration of the plurality of processing regions in accordance with aspects of the present technology, because the compute cores can share data among the physical channels. The first compute core 2310 in a polymorphic first memory compute cluster has access to data in the corresponding portion of the first memory region 2340 and can share the data with the other compute cores 2320, 2330. All of the compute cores 2310-2330 can read data from the second memory region 2350, and all of the compute cores 2310-2330 can write results to the other adjacent portion of the first memory region 2360, as illustrated in FIG. 23. Referring now to FIGS. 24 and 25, an exemplary polymorphic first memory region compute core configuration is illustrated. In the illustrated example, the first compute core 2410 of a cluster can access input feature map data in a first adjacent portion of the first memory region 2440. The first compute core 2410 can share the data of the input feature map with the other compute cores 2420, 2430, as illustrated in FIG. 24. Each compute core 2410-2430 in the cluster can also access data in the second memory region 2510, as illustrated in FIG. 25.
Each compute core 2410-2430 in the cluster can then perform a respective computation and write the results as output feature map data to the other adjacent portion of the first memory region 2450, as illustrated in FIG. 24. The polymorphic first memory compute cluster can be configured by a mapping algorithm that starts by creating a whole-channel cluster and then converts it to a first memory region polymorphic compute cluster. In the illustrated three compute core cluster, each core can be responsible for up to one third of the compute workload. The second memory region 2510 can be configured to have four output channels, which can be mapped into five and a half virtual channel rows in the second memory region 2510, as illustrated in FIG. 25. - Referring now to
FIG. 26, a compound compute core configuration, in accordance with aspects of the present technology, is shown. Each compute core in a cluster of a given processing region can access an adjacent portion of the first memory region. The compute cores can also be configured to share data from a given portion of the second memory region. Referring now to FIG. 27, an exemplary compound compute core configuration is illustrated. In the illustrated example, the mapping unit has 22 output channels and is mapped to a four-compute core cluster 2710-2740 including two sets of two compute cores each. For example, a first set can include first and second compute cores 2710, 2720, and a second set can include third and fourth compute cores 2730, 2740. Each of the compute cores 2710-2740 can access input feature map data in an adjacent portion of the first memory 2750, as illustrated in FIG. 28. The first compute cores 2710, 2730 of the respective sets can access weight data in the second memory 2770, as illustrated in FIG. 29. The first compute core 2710 in a first set can be configured to share data from the second memory 2770 with the other compute core 2720 in the first set. Similarly, a first compute core 2730 in a second set can be configured to share data from the second memory 2770 with the other compute core 2740 in the second set. Each compute core 2710-2740 of each set can store results back as output feature map data to the other adjacent portion of the first memory 2760. Accordingly, each set of two compute cores acts as a stand-alone pixel computing group. However, the whole result is computed using the two sets of pixel computing groups. At a top level, each of the pixel computing groups can be treated as a standalone compute core set, and the workload can be distributed between them in a whole-channel way. - Referring now to
FIG. 30, a first memory region sharing feature of the memory processing unit (MPU), in accordance with aspects of the present technology, is shown. As illustrated, the dataflow of computations by the MPU can be visualized as a series of producers 3010-3040 and consumers 3050-3070. For example, a compute core cluster 3010-3040 can consume input feature map data from a first portion of the first memory region and produce feature map data that can be an input for a next compute core cluster 3050-3070 to use. It is to be appreciated that data sharing between conventional computing units tends to be a significant obstacle for conventional dataflow accelerators. Therefore, conventional processing units may utilize network-on-chip communication and/or data duplication. In contrast, the MPU in accordance with aspects of the present technology enables a much simpler data sharing technique, wherein producers and consumers write and read to a shared memory buffer 3080. The buffers 3080 are interleaved portions of the first memory between the plurality of processing regions. Accordingly, data can flow between clusters in the same processing region and/or adjacent processing regions. In one implementation, a software layer can be configured to organize the clusters to ensure such adjacency. In the example of FIG. 30, two compute core clusters 3010-3040 and 3050-3070 in two different processing regions share a buffer 3080 in a portion of the first memory region. It is to be appreciated that there is no direct communication between the producer and the consumer compute cores. Compute cores in a compute cluster do not directly synchronize with each other. However, compute cores in a compute cluster can be configured to directly communicate data with each other. - In one implementation, data can be shared between processing regions by assigning a large enough buffer in the corresponding portion of the first memory.
For example, the buffer can be allocated to carry a whole feature map shared between adjacent processing regions. The size of the buffer can be calculated in accordance with Equation 1:
- S_b = ∏∀i F[i]   (1), where F is the vector of the feature map size.
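Equation 1 is simply the product of all feature map dimensions. A minimal sketch (the example shape is hypothetical):

```python
from math import prod

def full_feature_map_buffer_size(feature_map_shape) -> int:
    """Equation 1: S_b is the product of all feature map dimensions F[i]."""
    return prod(feature_map_shape)

# e.g., a hypothetical 32x32 feature map with 22 channels:
size = full_feature_map_buffer_size((32, 32, 22))  # 32 * 32 * 22 entries
```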
- However, assigning the whole feature map size as a buffer is not enough for the data to flow correctly. Consumers need to avoid reading a buffer entry that is not yet filled by the producer. Assuming coarse-grain synchronization at the feature map row level, the consumer cannot read from a feature map row that is still being produced. For the sake of simplicity, each feature map row will be illustrated as a single buffer entry in
FIGS. 31-36. However, it is appreciated that a single row may require the storage of hundreds, thousands, or even more entries. Referring now to FIGS. 31A and 31B, an exemplary buffer utilization by a consumer and a producer is illustrated. The illustrated buffer 3110 is sized to store a full feature map. The producer 3120, for example, can be performing a two-dimensional convolution, and the consumer 3130 can be performing a two-dimensional convolution having a 3×3 kernel size. The producer core 3120 can generate the pixels of a given feature map row before producing the pixels of a next row. In such case, the producer core 3120 only blocks a single row entry, as illustrated in FIG. 31A. As the producer core 3120 generates the pixels of a given feature map row, the consumer core 3130 can access the pixel values of the previous three rows. After the producer core 3120 is done generating the pixels of the given row, the producer core 3120 can move on to generate the pixels of the next row, as illustrated in FIG. 31B. At that point, the consumer core 3130 can shift its consumption to the next three-row window if the consumer core 3130 is ready to start processing the next three-row window. Furthermore, it is noted that the rows that have already been consumed can remain in the buffer 3110 until overwritten by the producer core 3120 as processing continues. It is appreciated that the consumer 3130 of a 3×3 kernel consumes three buffer entries simultaneously while the producer 3120 generates data for one entry before moving to the next one. Furthermore, a number of entries in the buffer 3110 are not in use at any given time. Therefore, the full feature-map-sized buffer 3110 can waste resources in the memory processing unit (MPU). - In another implementation, a smaller partial buffer can be sufficient for the dataflow to support the computations. For example, a circular queue can be utilized as a partial buffer.
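The row-to-slot bookkeeping of such a circular-queue partial buffer can be sketched as follows. This is an illustrative model only, assuming a consumer window of three rows (a 3×3 kernel) plus a one-row pipeline margin; the class and method names are assumptions:

```python
class CircularRowBuffer:
    """Circular-queue partial buffer shared by one producer and one consumer.

    Sized for a consumer that reads `window` rows at a time (a 3x3 kernel
    reads three rows) plus a `margin` of extra rows (the pipeline margin)
    so the producer can write while the consumer reads.
    """
    def __init__(self, window: int, margin: int = 1):
        self.window = window
        self.slots = window + margin  # e.g., 3 + 1 = 4 rows

    def producer_slot(self, row: int) -> int:
        # Feature map row `row` is written into this buffer slot.
        return row % self.slots

    def consumer_slots(self, top_row: int) -> list:
        # Slots holding the consumer's current window of rows.
        return [(top_row + i) % self.slots for i in range(self.window)]

buf = CircularRowBuffer(window=3, margin=1)
# Cycle 1: the consumer reads rows 0-2 (slots 0, 1, 2) while the producer
# fills row 3 (slot 3); for row 4 the producer wraps back to slot 0.
```

The four-slot wrap-around reproduces the cycle pattern of a four-row shared partial buffer: the producer is always exactly one slot ahead of the consumer's three-row window.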
The partial buffer can be configured to carry enough data for the consumer to operate, with extra entries to allow the producer to generate data while the consumer is working. For example, the partial buffer can include three feature map rows in the case where the consumer is performing a convolution having a 3×3 kernel size. The partial buffer can also include extra entries, referred to as a pipeline margin. Without such a margin, the dataflow performance will degrade significantly since the producer and consumer will not be able to work concurrently. The producer cannot overwrite data that is not yet consumed, and the consumer needs to wait for the producer to finish writing a new row in the partial buffer before starting to consume it. Referring now to
FIGS. 32A-32D, an exemplary shared partial buffer 3210 for a 3×3 kernel size is illustrated. As illustrated, a producer 3220 generates pixel data for a given row before moving on to the next row, and the consumer 3230 accesses three rows of data at a time. By utilizing a partial buffer 3210, the size of the shared buffer 3210 can be reduced to as little as four rows. For example, in a first cycle the consumer 3230 can be accessing the first three rows of pixel data, while the producer 3220 can be generating data for storing in the fourth row. In a second cycle, the consumer 3230 can be accessing the second through fourth rows of data, while the producer 3220 is storing data in the first row. In a third cycle, the consumer 3230 can access data in the third, fourth and first rows, while the producer 3220 stores data in the second row. In a fourth cycle, the consumer 3230 can access the fourth, first and second rows, while the producer 3220 stores data in the third row. Thereafter, the first through fourth cycles can be iteratively repeated any number of times. Accordingly, the four-row shared partial buffer can allow the producer and consumer to work smoothly. - Referring now to
FIGS. 33A and 33B, an exemplary shared partial buffer for a 3×3 kernel size with a 2×2 stride is illustrated. A consumer 3330 having a stride of 2×2 moves its window two rows at a time. Therefore, a pipeline margin of two is needed to allow the producer to generate the necessary rows for the consumer window shift. For example, a producer 3320 can store data in a fourth and fifth row, while the consumer 3330 accesses data in the first through third rows. After the producer 3320 stores data in the fourth and fifth rows, the consumer 3330 can move to accessing data in the third through fifth rows, while the producer 3320 stores data in the first and second rows. - For ease of explanation, aspects of the present technology have been described with regard to a single producing cluster and a single consuming cluster. However, dataflow in the memory processing unit (MPU) can involve branching into multiple paths that can, for example, end as different outputs, merge again, and the like. While branching output can be treated the same as multiple single dataflow paths, merging branches can involve additional considerations. If a neural network with merging branches, for example, is not allocated the correct buffer size, the dataflow pipeline might end up in a deadlock or produce incorrect data. With data having multiple consumers, the data validity should be set by the slowest consumer. Typically, a longer data lifetime results in a need for a larger buffer size. Referring now to
FIG. 34, an example branching dataflow utilizing a full feature-map buffer is illustrated. As illustrated, a first producer 3410 can perform a convolution (Conv2D) operation, which is consumed by two branches. A first branch can, for example, include a series of two convolution (Conv2D) operations 3420, 3430. A second branch can be a skip connection 3440, for example. The two branches can then be merged together, for example, with the aid of an addition (Add) operation 3450. Each of the convolution (Conv2D) operations 3420, 3430 consumes a window of producer rows according to its kernel size, while the add operation 3450 does not have any kernels and therefore only needs a single ready row to operate. However, the producer data cannot be outdated based on the convolution (Conv2D) consumers 3420, 3430 alone; the data must remain valid until the Add merge node 3450 is ready to use it. - Referring now to
FIG. 35, an exemplary branching dataflow utilizing a partial feature-map buffer is illustrated. As illustrated, the producer 3510 at the start of the branch produces two sets of data for the consumers (with the aid of bypass operations) of the two branches to facilitate data synchronization. The faster branch is configured to buffer 3520 more data to align with the slower branch, which can be referred to as the branch delay data. It is to be appreciated that not all branches require a delay buffer. For example, balanced branches do not require extra data storage, as illustrated in FIG. 36. As illustrated, each of the two branches can be configured with a typical size of partial buffer as if each branch were the only data path. - The inter-layer-communication (ILC) unit can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. Data communication within the memory processing unit can include direct and indirect connections between two modules. Direct synchronization can be implemented by direct wire connections with a producer/consumer handshake. The direct synchronization can be implemented by polymorphic connections between compute cores.
- The inter-layer-communication unit can also synchronize indirect connections between two modules. Indirect synchronization can be implemented by use of a buffer between two modules. Indirect synchronization by the inter-layer-communication unit can be implemented as communication between compute cores and volatile memory (e.g., SRAM). In such an implementation, a producer compute core can write to a shared buffer in a corresponding first memory region and a consumer compute core can read from the shared buffer. The data can be synchronized to avoid data hazards that can occur in the buffer. Exemplary data hazards can include a producer core overwriting data to a buffer before a consumer core can read data from the buffer, or a consumer core reading data from a buffer before the producer core can write the data to the buffer. In one implementation, indirect synchronization can be implemented by the compute cores sending appropriate signals to the buffer to provide visible synchronization. In visible indirect synchronization, the buffers between the compute cores can act as a simple memory used for writing and reading data. The producer core can be configured to ensure that the consumer core is ready for data, and the consumer core can be configured to ensure that there is enough data in the memory so that it can perform a computation operation.
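The visible indirect synchronization described above, in which the cores themselves verify readiness and data availability, can be sketched as follows. This is an illustrative software model only (the class, its methods, and the row-count bookkeeping are assumptions; in visible synchronization the shared buffer itself is plain memory):

```python
class VisibleSyncBuffer:
    """Model of visible indirect synchronization over a shared buffer.

    The producer core checks that the consumer has freed space before
    writing, and the consumer core checks that enough rows exist before
    computing; the buffer itself performs no synchronization.
    """
    def __init__(self, capacity_rows: int):
        self.capacity = capacity_rows
        self.rows_ready = 0  # rows written but not yet consumed

    def producer_can_write(self) -> bool:
        # Producer-side check: is the consumer ready for more data?
        return self.rows_ready < self.capacity

    def write_row(self) -> None:
        assert self.producer_can_write()
        self.rows_ready += 1

    def consumer_can_read(self, needed_rows: int) -> bool:
        # Consumer-side check: is there enough data for one computation?
        return self.rows_ready >= needed_rows

    def read_rows(self, n: int) -> None:
        assert self.consumer_can_read(n)
        self.rows_ready -= n
```

A consumer performing a 3×3 convolution, for instance, would call `consumer_can_read(3)` before each output row.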
- In another implementation, indirect synchronization can be implemented by the ILC unit to provide invisible synchronization. In invisible indirect synchronization, the ILC unit is responsible for keeping producer compute cores and consumer compute cores in synchronization.
- Referring now to
FIG. 37, a memory processing unit (MPU), in accordance with aspects of the present technology, is shown. The memory processing unit can include a first memory including a plurality of regions 3705-3710, a plurality of compute cores 3715-3755 organized in a plurality of processing regions, a second memory (not shown) and an inter-layer-communication (ILC) unit 3760-3765. The memory processing unit (MPU) can be arranged as described above with reference to FIGS. 2-5. In one implementation, the inter-layer-communication (ILC) unit 3760-3765 can include a plurality of inter-layer-communication (ILC) modules, wherein each inter-layer-communication (ILC) module 3760, 3765 can be associated with a corresponding region of the first memory 3705, 3710. - In one implementation, data flow between compute cores 3715-3725 of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the
first memory 3705 can be configured utilizing direct synchronization between the compute cores and the first memory. Similarly, data flow between the second memory (not shown) and the compute cores 3715-3755 of the one or more of the plurality of processing regions can be configured utilizing direct synchronization between the compute cores 3715-3755 and the second memory. Data flow between compute cores 3715-3725 within respective ones of the one or more of the plurality of processing regions can also be configured utilizing direct synchronization between adjacent compute cores within the respective processing region. - The inter-layer-communication (ILC) unit 3760-3765 can synchronize data movement between one or more compute cores 3715-3725 producing given data and one or more other compute cores 3730-3740 consuming the given data utilizing indirect invisible synchronization. Data movement synchronization by the inter-layer-communication (ILC) unit 3760-3765 will be further described with reference to
FIGS. 38-40. Referring now to FIG. 38, an inter-layer-communication method, in accordance with aspects of the present technology, is shown. The inter-layer-communication (ILC) unit 3760-3765 can be configured to receive synchronization commands related to respective buffers 3770 of respective ones of the plurality of regions of the first memory 3705 from respective compute cores 3715-3755 of the plurality of processing regions, at 3810. For example, the inter-layer-communication (ILC) unit 3760-3765 can receive synchronization commands from a first one 3720 of the plurality of compute cores 3715-3755 related to writing data to a shared buffer 3770 in a first portion of the first memory 3705. In one implementation, a producer compute core can send an increment synchronization command when it finishes writing a whole feature-map row to the buffer. The inter-layer-communication (ILC) unit 3760-3765 can also receive access commands from a second one 3730 of the plurality of compute cores 3715-3755 related to reading data from the shared buffer 3770 in the first portion of the first memory 3705. In one implementation, a consumer compute core can send a decrement synchronization command when it finishes reading a whole feature-map row from the buffer. - At 3820, the inter-layer-communication (ILC) unit 3760-3765 can track read and write accesses to the respective buffers of respective ones of the plurality of regions of the first memory. In one implementation, tracking is done on a coarse grain level, such as a whole feature-map row level. In one implementation, the inter-layer-communication (ILC) unit 3760-3765 can track access to respective buffers with corresponding respective indexes that point to an ILC entry. The inter-layer-communication (ILC) unit 3760-3765 does not need to store buffer region boundaries or other information about the buffer.
Instead, the compute cores 3715-3755 can be responsible for accessing the correct ILC entry index that corresponds to a respective shared buffer. In one implementation, an identifier of a given
compute core 3720 received in a synchronization command can be mapped to a count associated with a given region (e.g., buffer) of a given portion of the first memory 3705. - Referring now to
FIG. 39, respective shared buffers 3910-3930 and corresponding respective ILC entry indexes 3940-3960, in accordance with aspects of the present technology, are shown. Each ILC entry index can include a count of the number of synchronization units that one or more producer compute cores have produced (e.g., written) to the corresponding respective shared buffer, and that one or more consumer compute cores have yet to consume (e.g., read) from the corresponding respective shared buffer. In one implementation, the ILC entry index can include a current unit count (ic), a maximum count (ix), a minimum count (in), and an initial count (io). - At 3830, the inter-layer-communication (ILC) unit 3760-3765 can control access to the buffers of the respective one of the plurality of regions of the
first memory 3705, 3710. For example, access to a given shared buffer 3770 from one or more respective producer compute cores 3720 and one or more respective consumer compute cores 3740 can be controlled based on the corresponding ILC entry index. For example, the inter-layer-communication (ILC) unit 3760-3765 can allow write access to a respective shared buffer 3770 as long as the current unit count (ic) in the corresponding ILC entry index is less than the maximum count (ix). If the given write access is allowed, the inter-layer-communication (ILC) unit 3760-3765 increments the current unit count (ic) by the amount of units (i+) for the given write access, as illustrated in FIG. 40. If the current unit count (ic) in the corresponding ILC entry index is greater than or equal to the maximum count (ix), the inter-layer-communication (ILC) unit 3760-3765 blocks the given write access to the respective shared buffer 3770, and does not increment the current unit count (ic). Similarly, the inter-layer-communication (ILC) unit 3760-3765 can allow read access to a respective shared buffer 3770 as long as the current unit count (ic) in the corresponding ILC entry index is greater than the minimum count (in). If the given read access is allowed, the inter-layer-communication (ILC) unit 3760-3765 decrements the current unit count (ic) by the amount of units (i−) for the given read access. If the current unit count (ic) in the corresponding ILC entry index is less than or equal to the minimum count (in), the inter-layer-communication (ILC) unit 3760-3765 blocks the given read access to the respective shared buffer 3770, and does not decrement the current unit count (ic). The difference between the initial count (io) and the minimum count (in) represents the amount of data that must be produced (written to the corresponding shared buffer) by one or more producer compute cores before one or more consumer compute cores may start to consume data from the corresponding shared buffer.
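The ILC entry index behavior just described (allow a write while ic < ix, allow a read while ic > in) can be sketched as a small state machine. The field names follow the text (`in_` avoids the Python keyword); the class and method names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class IlcEntryIndex:
    """Sketch of one ILC entry index: a current unit count (ic) bounded
    by a maximum count (ix) and a minimum count (in), with an initial
    count (io)."""
    ic: int   # current unit count
    ix: int   # maximum count: writes blocked once ic reaches it
    in_: int  # minimum count: reads blocked once ic falls to it
    io: int   # initial count

    def try_write(self, units: int = 1) -> bool:
        # Producer write allowed while ic < ix; increment by i+ units.
        if self.ic < self.ix:
            self.ic += units
            return True
        return False  # producer blocked

    def try_read(self, units: int = 1) -> bool:
        # Consumer read allowed while ic > in; decrement by i- units.
        if self.ic > self.in_:
            self.ic -= units
            return True
        return False  # consumer blocked

entry = IlcEntryIndex(ic=0, ix=4, in_=0, io=0)
# A consumer is blocked until a producer has written at least one unit.
```

With io = in_, consumers may start as soon as a single unit is produced; a larger io − in_ gap would require that much data to be produced first.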
If there are multiple producer compute cores writing to the same shared buffer, the inter-layer-communication (ILC) unit 3760-3765 may require multiple increment synchronization commands from the compute cores before incrementing the current unit count (ic). Furthermore, the inter-layer-communication (ILC) unit 3760-3765 may need to know from the corresponding compute core when a new data set, such as a new feature map, is received, to reset the counter values. Similarly, as compute cores reach the end of a data set, such as a feature map, as indicated by the current unit count (ic) reaching a "0" value, the inter-layer-communication (ILC) unit 3760-3765 can consider the next write command to be the start of a new data set, such as a feature map frame. - Referring now to
FIG. 41, a 4-dimension array, in accordance with aspects of the present technology, is illustrated. In one implementation, the 4-dimension array may be a weight array utilized in artificial intelligence computations, such as but not limited to convolution neural network computations. In one implementation, the 4-dimension array can be utilized in 2-dimension convolution layers of a neural network model. The 4-dimension array can be characterized by a kernel width (S), a kernel height (R), input channels (C) and output channels (M) (e.g., number of kernels per layer). Accordingly, the filters (or kernels) have a dimension of R×S×C, and there are M filters. - Referring now to
FIG. 42, a 3-dimension array, in accordance with aspects of the present technology, is illustrated. In one implementation, the 3-dimension array can be utilized in a 2-dimension depth-wise convolution layer of a neural network model. The 3-dimension array can be characterized by a kernel width (S), a kernel height (R) and input channels (C). Each kernel has a dimension of R×S, and acts on each input channel separately to produce an output feature map with C output channels. - Referring now to
FIG. 43, a 2-dimension array, in accordance with aspects of the present technology, is shown. In one implementation, the 2-dimension array can be a dense weight array utilized in a fully connected layer of a neural network model. The 2-dimension array can be characterized by flattened input channels (C) and output channels (M). The 2-dimension weight array is typically used at the end of a neural network model for classification layers. - Referring to
FIG. 44, a memory macro of a memory processing unit (MPU), in accordance with aspects of the present technology, is shown. The memory macro appears as a large 2-dimension memory array. The memory macro can be characterized by a height and a width. The width of the memory macro can be configured to provide a very wide word fetch. The width of the memory macro can be many words per read, which can be determined by the read access bandwidth needed for the weight arrays. In an exemplary implementation, the access bandwidth of a memory macro can be up to 1024 bits. The height of the memory macro can be a 1-dimension addressable space. The height of the memory macro can be determined by the total size of the memory macro divided by the width of the memory macro. The memory macro can be logically split into a plurality of physical channels 4410. Each physical channel can be considered to be a "weight prefetch" 4420 wide. - Storage of weight arrays in the memory macros, in accordance with aspects of the present technology, can be configured to improve the performance of the memory processing unit (MPU). One or more memory macros can be configured to store all the weights needed for access by the compute cores of a given group. The one or more memory macros can be configured to provide enough memory access bandwidth for the compute cores in a given group. The memory macros can be optimized for read access by the compute cores. The number of internal memory banks, the arrangement of the memory, and the like can be transparent to the architectural design of the memory processing unit (MPU).
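The macro geometry relations above (height equals total size divided by width; the width is split into physical channels, each one "weight prefetch" wide) can be sketched with hypothetical sizes (only the 1024-bit width comes from the text):

```python
def macro_geometry(total_bits: int, width_bits: int, prefetch_bits: int):
    """Derive macro height and physical channel count from the relations
    in the text. All concrete sizes passed in are assumptions.
    """
    assert total_bits % width_bits == 0 and width_bits % prefetch_bits == 0
    height = total_bits // width_bits       # 1-dimension addressable rows
    channels = width_bits // prefetch_bits  # physical channels per row
    return height, channels

# e.g., a hypothetical 1-Mibit macro, 1024-bit fetch width,
# 256-bit weight prefetch per physical channel:
height, channels = macro_geometry(1 << 20, 1024, 256)
```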
- Referring again to
FIGS. 41-43, the weight arrays can be organized for storage in memory macros to improve performance of a memory processing unit (MPU). The arrangement of weight arrays can impact data throughput, memory utilization, data reuse, memory access pattern, and mapping. Aspects of the present technology can fit a 4-dimension weight array into a 2-dimension memory macro. Aspects of the present technology can also expand 3-dimension and 2-dimension arrays to look like 4-dimension arrays for storage in 2-dimension memory macros. - Referring now to
FIG. 45, a method of fitting arrays into a 2-dimension memory, in accordance with aspects of the present technology, is shown. In one implementation, the array can be a 4-dimension, 3-dimension or 2-dimension weight array, and the 2-dimension memory can be a memory macro. The method of fitting the array into a 2-dimension memory will be explained with reference to FIGS. 46-52. The method can include expanding the dimension of a 3-dimension or a 2-dimension array, at 4510. If the array is a 3-dimension array of kernel width (S), kernel height (R) and input channels (C), the array can be expanded to a 4-dimension array of kernel width (S), kernel height (R), one input channel and output channels (C), as illustrated in FIG. 46. If the array is a 2-dimension array of input channels (C) and output channels (M), the array can be expanded to a 4-dimension array of a single kernel width, a single kernel height, input channels (C) and output channels (M), as illustrated in FIG. 47. - At 4520, the 4-dimension array, expanded 3-dimension array or expanded 2-dimension array can be quantized, as illustrated in
FIG. 48. Each array element can be quantized to an 8-bit value. Each filter can also include a single bias value (b) 4810, 4820 and one scaling exponent (exp) 4830. - At 4530, the filters of the quantized array can be unrolled and the bias value and scaling exponent can be appended, as illustrated in
FIG. 49. In one implementation, corresponding entries from each channel can be sequentially arranged after the bias value and scaling exponent. - At 4540, the unrolled and appended filters can be reshaped to fit into a physical channel of a memory, as illustrated in
FIG. 50. The reshaped filters can be characterized by a weight prefetch height and an entries-per-virtual-channel width. The reshaped filters can be padded with zero element values if necessary to fit the physical channel of the memory. In one implementation, the physical channel of the memory can be the physical channel of a memory macro. - At 4550, the reshaped filters can be rotated, as illustrated in
FIG. 51 . The rotated filters can comprise M virtual channels (e.g., output filters). At 4560, virtual channels of the rotated filters can be packed into physical channels of the memory, as illustrated in FIG. 52 . The M virtual channels of the rotated filters can be sequentially stored in the plurality of physical channels of the memory. Physical channels of the memory can be padded with zero (0) values if necessary, such that a weight array for a new layer starts at a first physical channel boundary of the memory. - Again, organizing each of the memory regions into a plurality of memory blocks and coupling a given core group of a respective processing region to a set of memory blocks that are proximate to the given core group, while not coupling it to memory blocks in the adjacent memory regions that are distal from the given core group, can increase memory bandwidth throughput. Providing more, but smaller, flat memory blocks by organizing each of the plurality of memory regions into respective sets of memory blocks can provide increased memory bandwidth for increased performance. Further increasing the number of memory blocks in each of the plurality of first memory regions can further increase the memory bandwidth. Referring to
FIGS. 53A-53D , organization of each of the plurality of first memory regions into a plurality of columns and rows, in accordance with aspects of the present technology, is shown. Each memory region 5310 can be organized into a plurality of memory blocks m blocks wide and n blocks long, wherein m and n can be different or equal. In one implementation, memory regions 110-130, 202-210, 302-310, 402-408, as described above with reference to FIGS. 1-5 , can be between 2 and 128 channels wide. In another implementation, the memory regions 110-130, 202-210, 302-310, 402-408 can be between 2 and 128 words wide. A fetch/write back unit can fetch sets of memory blocks from an adjacent one of the plurality of first memory regions and write back sets of memory blocks to another adjacent one of the first memory regions in accordance with a dataflow configuration. For example, a fetch unit of a respective compute core can be configured to fetch from a set of memory blocks of a respective adjacent one of the plurality of first memory regions. In one implementation, the set of memory blocks can correspond to a channel width of the compute core or cache width of the fetch unit, as illustrated in FIG. 53A . Additional data from sets of memory blocks can then be fetched into the cache of the fetch unit, as illustrated in FIGS. 53B-53D respectively. Similarly, a write back unit of a respective compute core can be configured to write data back to sets of memory blocks of a respective adjacent one of the plurality of first memory regions, as illustrated in FIG. 54A . For example, data can be written back to a first set of memory blocks of a respective adjacent one of the plurality of memory regions. Additional data can then be written back to the next set of memory blocks of the respective adjacent one of the plurality of memory regions.
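The array-fitting flow of FIG. 45 (expand to four dimensions, quantize with a per-filter bias and scaling exponent, unroll, pad, and form one virtual channel per filter) can be sketched as follows. This is an illustrative NumPy sketch only: the function name, the quantization rule, the zero bias placeholder and the `channel_width` parameter are assumptions for exposition, not the literal hardware behavior.

```python
import numpy as np

def fit_weights(w, channel_width):
    """Sketch of the FIG. 45 flow: expand to 4 dimensions, quantize,
    unroll each filter, append bias and exponent, then pad so each
    filter fits the physical channel width of the memory."""
    if w.ndim == 2:                        # (C, M) -> (1, 1, C, M)
        w = w[np.newaxis, np.newaxis, :, :]
    elif w.ndim == 3:                      # (S, R, C) -> (S, R, 1, C)
        w = w[:, :, np.newaxis, :]
    S, R, C, M = w.shape

    virtual_channels = []
    for m in range(M):                     # one virtual channel per filter
        f = w[:, :, :, m]
        # Quantize to 8-bit values with one scaling exponent per filter.
        exp = int(np.ceil(np.log2(np.abs(f).max() + 1e-12)))
        q = np.clip(np.round(f / 2.0**exp * 127), -128, 127).astype(np.int8)
        bias = np.int8(0)                  # placeholder single bias value
        # Unroll the filter and append the bias value and exponent.
        entries = np.concatenate(([bias, np.int8(exp)], q.ravel()))
        # Zero-pad so the filter fits the physical channel of the memory.
        pad = (-len(entries)) % channel_width
        entries = np.concatenate((entries, np.zeros(pad, dtype=np.int8)))
        # Reshape to (weight prefetch height, entries per virtual channel).
        virtual_channels.append(entries.reshape(-1, channel_width))
    return virtual_channels
```

Each returned block stands in for one virtual channel; per the description, the virtual channels would then be rotated and packed sequentially into physical channels, with zero padding so that the weights of a new layer start at a physical channel boundary.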
The wide plurality of first memory regions organized into a plurality of columns and rows, in accordance with aspects of the present technology, advantageously reduces the number of memory access cycles, which can smooth the pipeline, improve arbitration and better hide latency. However, some compute functions, such as reshape, may need to be based on multiples of the memory block line width. - Referring again to
FIGS. 3-5, 10 and 11 , the compute cores can be configured to compute functions including, but not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling and the like. For example, near memory (M) compute cores can compute up-sampling, deconvolution, separable convolution, pointwise convolution, MP convolution and the like functions. The arithmetic (A) compute cores can be configured to compute maximum, minimum, subtract, multiply, concatenate, sigmoid/logistic activation, hyperbolic tangent, mish, swish, constant add, constant multiply, clip and the like functions. In yet another example, rescale or the like functions can be supported by a graph processing core. - Compute functions such as the reshape function can be implemented by the control circuitry and/or inter-layer communication (ILC) units 450-456. Reshaping can be supported by adjusting corresponding increment and decrement counts of the inter-layer communication unit. For example, the increment count can be set to +4 and the decrement count can be set to −6 to reshape a 6×4 producer output to a 4×6 consumer input in a per row ILC synchronization scheme, as illustrated in
FIG. 55 . A deconvolution, also known as a two-dimension transpose convolution (Conv2Dtranspose), can include kernel transformation and up-sampling, as illustrated in FIGS. 56A-56D . For example, the Conv2Dtranspose can be implemented by transposing (e.g., flipping) the kernel weights, while the strides happen on the output feature map instead of the input. This is equivalent to inserting an up-sampling layer with inserted zeros before the two-dimension convolution with transposed kernel weights, as illustrated in FIG. 57 . A sigmoid function is defined as:
sigmoid(x) = 1 / (1 + e^(-x))

The sigmoid function can be approximated in the compute cores using the piecewise equation:
-
- as illustrated in
FIG. 58 . - Generally, feature maps can be encoded as integer data, B-float data, group B-float or the like. Referring now to
FIG. 59 , a feature map of kernel width (X), a kernel height (Y) and channels (Z), in accordance with aspects of the present technology, is shown. For integer data, feature map pixels can be encoded as n-bit integer values. For example, the feature map pixels can be represented as 8-bit integers. The fixed-point location of the integer can be estimated offline using a pilot data set. The pilot data set can be utilized to encode the data so that the entries share the same exponent (e.g., static exponent). However, if the runtime conditions or data set differs from the pilot data set, the effective precision is significantly degraded, and network branching can be difficult. - In another implementation, the feature map pixels can be encoded as Brain Floating Point (B-float) values, including a base and exponent. For example, the feature map pixels can be represented by 16 bits, including an 8-bit signed fraction and an 8-bit exponent. The 8-bit signed fraction can include a sign bit, 7 explicitly stored fraction bits and 1 hidden fraction bit, as illustrated in
FIG. 60 . Each B-float encoded entry can have its own dynamic exponent. The B-float encoding advantageously does not need a pilot data phase, and advantageously adapts to runtime conditions. However, B-float encoded data utilizes double the memory storage and memory bandwidth as compared to integer encoded feature map data. - In yet another implementation, the feature map pixels can be represented as B-float values, wherein each group of n-channels of pixels has its own dynamic exponent. The n-channels should be less than or equal to the number of physical channels. B-float encoding wherein groups of n-channels are encoded with a given dynamic exponent is referred to herein as Group B-float encoding. Group B-float encoding advantageously does not need a pilot data phase, and advantageously adapts to runtime conditions. In most cases, Group B-float encoding can advantageously utilize substantially the same memory storage and memory bandwidth as integer encoded data by storing the Group B-float encoded data, in accordance with aspects of the present technology.
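To make the grouping concrete, the following is a minimal NumPy sketch of Group B-float encoding and decoding, assuming (as an illustration, not the claimed memory layout) that each group of `group` channels at a given pixel shares one dynamic 8-bit exponent over signed 8-bit bases:

```python
import numpy as np

def group_bfloat_encode(fmap, group=8):
    """Hedged sketch: for each pixel, every group of `group` channels
    shares one dynamic 8-bit exponent over signed 8-bit bases."""
    X, Y, Z = fmap.shape
    ngroups = -(-Z // group)                        # ceil(Z / group)
    pad = ngroups * group - Z
    f = np.pad(fmap, ((0, 0), (0, 0), (0, pad)))    # zero-pad last group
    f = f.reshape(X, Y, ngroups, group)
    # Pick the exponent at runtime so the largest magnitude in the
    # group still fits in a signed 8-bit base (no pilot data needed).
    e = np.ceil(np.log2(np.abs(f).max(axis=3, keepdims=True) + 1e-12))
    bases = np.clip(np.round(f / 2.0**e * 127), -128, 127).astype(np.int8)
    return bases, e.astype(np.int8), Z

def group_bfloat_decode(bases, exps, z):
    """Invert the sketch above: scale each group by its shared exponent."""
    out = bases.astype(np.float64) * 2.0**exps.astype(np.float64) / 127
    X, Y, ngroups, group = bases.shape
    return out.reshape(X, Y, ngroups * group)[:, :, :z]
```

The round trip loses only the 8-bit quantization error per entry, while storing a single exponent per channel group instead of one per entry, which is the source of the storage and bandwidth advantage described above.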
- Referring now to
FIG. 61 , a logical view of a feature map encoded in Group B-float from an output side of a compute core, in accordance with aspects of the present technology, is illustrated. For a layer 'i', the output channels 0-8 can be encoded by 8-bit exponents 'e' and 8-bit fractions 'm'. The exponent is shared across multiple channels, since each output channel has one output entry per pixel. Referring now to FIG. 62 , a logical view of a feature map encoded in Group B-float from an input side of a compute core, in accordance with aspects of the present technology, is illustrated. For a layer 'i+1', the exponent of the feature map 6210 is shared along the same axis as the weights' per-channel quantization 6220. Referring now to FIG. 63 , another logical view of a feature map and weights encoded in Group B-float from an input side of a compute core, in accordance with aspects of the present technology, is illustrated. Referring now to FIG. 64 , storage of B-float encoded feature map data in a narrow flat memory organization, in accordance with aspects of the present technology, is shown. In the narrow flat first memory organization, the base and exponent of the B-float encoded value for feature map pixels can be stored in corresponding word lines. Referring now to FIG. 65 , storage of B-float encoded feature map data in a wide memory organization, in accordance with aspects of the present technology, is illustrated. In the wide memory organization, the base and exponent of the B-float encoded value for each of a plurality of feature map pixels can be stored in corresponding word lines. For B-float encoding, each pixel entry can have its own dynamic exponent. Therefore, the exponent for each pixel entry needs to be stored with the respective base. Referring now to FIG. 66 , storage of Group B-float encoded feature map data in a wide memory organization, in accordance with aspects of the present technology, is illustrated.
In the wide memory organization, an exponent is the same for the pixels of a group of channels. Therefore, the bases of the pixel values can be stored with one instance of the dynamic exponent for the given group of channels. Referring now to FIG. 67 , accuracy of calculations on Group B-float encoded ResNet-50 feature map pixel values for different group sizes is illustrated. Referring now to FIG. 68 , accuracy of calculations on Group B-float MobileNet feature map pixel values for different group sizes is illustrated. The combination of Group B-float encoding and wide memory organization for use in memory regions 110-130, 202-210, 302-310, 402-408, as described above with reference to FIGS. 1-5 , can advantageously provide almost two times (2×) the memory bandwidth for the same memory width. The combination of Group B-float encoding and wide memory organization can also advantageously reduce the on-chip memory storage needed by almost one half (½). The accuracy achievable for computations utilizing Group B-float encoded values, including but not limited to neural network computations, can be substantially equal to that of B-float encoded values. - The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present technology to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
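The roughly 2× bandwidth and storage advantage described above can be checked with simple byte counting. The sketch below (function name and parameters are illustrative) counts the footprint of an X×Y×Z feature map under the three encodings: 8-bit integer entries, 16-bit B-float entries (base plus per-entry exponent), and Group B-float (8-bit bases plus one 8-bit exponent per pixel per channel group):

```python
def feature_map_bytes(x, y, z, group=8):
    """Back-of-envelope footprint, per the description above:
    integer uses 1 byte per entry, B-float stores a per-entry
    exponent (2 bytes per entry), and Group B-float adds one
    8-bit exponent per pixel per group of `group` channels."""
    integer = x * y * z
    bfloat = 2 * x * y * z
    groups = -(-z // group)                    # ceil(z / group)
    group_bfloat = x * y * z + x * y * groups  # bases + shared exponents
    return integer, bfloat, group_bfloat
```

For a 56×56×64 feature map with groups of 8 channels, Group B-float needs about 1.125× the integer footprint versus 2× for per-entry B-float, consistent with the almost two times (2×) bandwidth and almost one half (½) storage figures stated above.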
Claims (22)
1. A memory processing unit (MPU) comprising:
a first memory including a plurality of memory regions, wherein one or more of the plurality of memory regions are configured in corresponding pluralities of memory blocks, wherein the memory blocks are configured to store Brain Floating Point (B-float) encoded data; and
a plurality of processing regions interleaved between the plurality of regions of the first memory, wherein the processing regions include a plurality of core groups, wherein the core groups include one or more compute cores.
2. The MPU of claim 1 , wherein the memory blocks are further configured to store Group B-float encoded data.
3. The MPU of claim 2 , wherein the Group B-float encoded data comprises Group B-float encoded feature map pixel values.
4. The MPU of claim 2 , wherein the plurality of memory blocks of each of the plurality of regions of the first memory are arranged in a plurality of columns and rows.
5. The MPU of claim 4 , wherein a plurality of bases and an instance of a given exponent for a corresponding group of channels of the Group B-float encoded data are stored in a corresponding row of memory blocks of a corresponding region of the first memory.
6. The MPU of claim 5 , wherein the exponent for corresponding groups of channels of the Group B-float encoded data are dynamic.
7. The MPU of claim 1 , wherein one or more of the plurality of core groups of a respective one of the plurality of processing regions are coupled between adjacent ones of the plurality of memory regions of the first memory, and between adjacent core groups of the respective one of the plurality of processing regions.
8. The MPU of claim 1 , wherein the plurality of core groups of respective ones of the plurality of processing regions are coupled between adjacent ones of the plurality of memory regions of the first memory.
9. The MPU of claim 1 , wherein compute cores of respective ones of the core groups are configured in one or more compute clusters, wherein compute cores in a given compute cluster are configured to compute a given compute function.
10. The MPU of claim 9 , wherein one or more compute groups include one or more memory M-cores and one or more arithmetic A-Cores.
11. A memory processing unit (MPU) comprising:
a first memory including a plurality of memory regions, wherein the plurality of memory regions are configured in corresponding pluralities of memory blocks, and wherein the memory blocks are configured to store Group B-float encoded feature map pixels; and
a plurality of processing regions columnally interleaved between the plurality of regions of the first memory, wherein a plurality of core groups of respective ones of the plurality of processing regions are coupled between adjacent ones of the plurality of memory regions of the first memory and between adjacent core groups within the respective processing region.
12. The MPU of claim 11 , wherein the plurality of memory blocks of each of the plurality of regions of the first memory are arranged in a plurality of columns and rows.
13. The MPU of claim 11 , wherein a plurality of bases and an instance of a given exponent for a corresponding group of channels of the Group B-float encoded data are stored in a corresponding row of memory blocks of a corresponding region of the first memory.
14. The MPU of claim 10 , further comprising one or more memory regions of a second memory coupled to the plurality of processing regions.
15. The MPU of claim 14 , wherein the second memory is configured to store weight values.
16. The MPU of claim 14 , wherein respective ones of the second memory regions are coupled to respective ones of the plurality of processing regions.
17. The MPU of claim 14 , further wherein the compute cores in corresponding core groups of the plurality of processing regions are:
configurable for core-to-core dataflow between adjacent compute groups in respective ones of the plurality of processing regions through one or more corresponding memory blocks of a corresponding memory region;
configurable for memory-to-core dataflow from respective ones of memory blocks of the plurality of regions of the first memory to one or more cores within adjacent ones of core groups of the plurality of processing regions;
configurable for core-to-memory dataflow from one or more cores within ones of the plurality of core groups of the plurality of processing regions to adjacent ones of the memory blocks of the plurality of regions of the first memory; and
configurable for memory-to-core dataflow from the second memory region to one or more core groups of corresponding ones of the plurality of processing regions.
18. The MPU of claim 11 , wherein:
the first memory comprises a static volatile memory; and
the second memory comprises a non-volatile memory.
19. A memory processing method comprising:
configuring a first memory to store Group B-float encoded data, wherein the first memory includes a plurality of regions;
configuring data flow between compute cores of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the first memory;
configuring data flow between a second memory and the compute cores of the one or more of the plurality of processing regions;
configuring data flow between compute cores within respective ones of the one or more of the plurality of processing regions;
configuring one or more sets of compute cores of one or more of the plurality of processing regions to perform respective compute functions of a neural network model;
loading weights for the neural network model into the second memory;
loading activation data for the neural network model into one or more of the plurality of regions of the first memory; and
synchronizing data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data based on the neural network model.
20. The memory processing method according to claim 19 , wherein the plurality of regions of the first memory each include a plurality of memory blocks arranged in a plurality of columns and rows.
21. The memory processing method according to claim 20 , further comprising configuring the first memory to store a plurality of bases and an instance of a given exponent for a corresponding group of channels of the Group B-float encoded data in a corresponding row of memory blocks of a corresponding region of the first memory.
22. The memory processing method according to claim 21 , wherein the exponent for corresponding groups of channels of the Group B-float encoded data are dynamic.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/109,788 US20230273729A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processing with group b-float encoding |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263310031P | 2022-02-14 | 2022-02-14 | |
US18/109,788 US20230273729A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processing with group b-float encoding |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230273729A1 true US20230273729A1 (en) | 2023-08-31 |
Family
ID=87558507
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/109,788 Pending US20230273729A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processing with group b-float encoding |
US18/109,790 Pending US20230305807A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processsing with mac reuse |
US18/109,736 Pending US20230259282A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processsing unit architectures and configurations |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/109,790 Pending US20230305807A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processsing with mac reuse |
US18/109,736 Pending US20230259282A1 (en) | 2022-02-14 | 2023-02-14 | Core group memory processsing unit architectures and configurations |
Country Status (1)
Country | Link |
---|---|
US (3) | US20230273729A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220342590A1 (en) * | 2021-04-27 | 2022-10-27 | Microchip Technology Inc. | Method and Apparatus for Gather/Scatter Operations in a Vector Processor |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190155574A1 (en) * | 2017-11-20 | 2019-05-23 | Intel Corporation | Integrated circuits with machine learning extensions |
CN112346652A (en) * | 2019-08-09 | 2021-02-09 | 爱思开海力士有限公司 | Memory controller and operating method thereof |
CN112445526A (en) * | 2019-08-29 | 2021-03-05 | 英特尔公司 | Multivariable stride read operation for accessing matrix operands |
CN116893912A (en) * | 2023-08-01 | 2023-10-17 | 广州汽车集团股份有限公司 | Inter-core communication method, system, device, equipment and medium for vehicle-mounted software |
-
2023
- 2023-02-14 US US18/109,788 patent/US20230273729A1/en active Pending
- 2023-02-14 US US18/109,790 patent/US20230305807A1/en active Pending
- 2023-02-14 US US18/109,736 patent/US20230259282A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190155574A1 (en) * | 2017-11-20 | 2019-05-23 | Intel Corporation | Integrated circuits with machine learning extensions |
CN112346652A (en) * | 2019-08-09 | 2021-02-09 | 爱思开海力士有限公司 | Memory controller and operating method thereof |
CN112445526A (en) * | 2019-08-29 | 2021-03-05 | 英特尔公司 | Multivariable stride read operation for accessing matrix operands |
CN116893912A (en) * | 2023-08-01 | 2023-10-17 | 广州汽车集团股份有限公司 | Inter-core communication method, system, device, equipment and medium for vehicle-mounted software |
Also Published As
Publication number | Publication date |
---|---|
US20230305807A1 (en) | 2023-09-28 |
US20230259282A1 (en) | 2023-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11775313B2 (en) | Hardware accelerator for convolutional neural networks and method of operation thereof | |
CN108268945B (en) | Neural network unit and operation method thereof | |
CN108268932B (en) | Neural network unit | |
CN108268944B (en) | Neural network unit with remodelable memory | |
CN114391135A (en) | Method for performing in-memory processing operations on contiguously allocated data, and related memory device and system | |
Mittal | A survey of accelerator architectures for 3D convolution neural networks | |
US20160224465A1 (en) | Hybrid processor | |
CN112513885A (en) | Neural processor | |
CN110476212B (en) | Apparatus and method for in-memory data switching network | |
EP4010793A1 (en) | Compiler flow logic for reconfigurable architectures | |
US10114795B2 (en) | Processor in non-volatile storage memory | |
US20230061711A1 (en) | Inter-layer communication techniques for memory processing unit architectures | |
CN104571949A (en) | Processor for realizing computing and memory integration based on memristor and operation method thereof | |
US11705207B2 (en) | Processor in non-volatile storage memory | |
KR102450508B1 (en) | Clock signal generation device and memory device including the same | |
KR20220051006A (en) | Method of performing PIM (PROCESSING-IN-MEMORY) operation, and related memory device and system | |
US20230273729A1 (en) | Core group memory processing with group b-float encoding | |
US11823771B2 (en) | Streaming access memory device, system and method | |
CN114830082A (en) | SIMD operand arrangement selected from multiple registers | |
WO2021046566A1 (en) | Spatiotemporal fused-multiply-add, and related systems, methods and devices | |
US20220284274A1 (en) | Neural processing device and operation method of the neural processing device | |
US11488650B2 (en) | Memory processing unit architecture | |
WO2020226903A1 (en) | Memory processing unit architecture | |
US20170139606A1 (en) | Storage processor array for scientific computations | |
Gu | Architecture Supports and Optimizations for Memory-Centric Processing System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |