WO2017185387A1 - Apparatus and method for performing the forward operation of a fully connected layer neural network - Google Patents

Apparatus and method for performing the forward operation of a fully connected layer neural network

Info

Publication number
WO2017185387A1
WO2017185387A1 (PCT/CN2016/080968)
Authority
WO
WIPO (PCT)
Prior art keywords
module
instruction
storage unit
unit
vector
Prior art date
Application number
PCT/CN2016/080968
Other languages
English (en)
French (fr)
Inventor
刘少礼
兰慧盈
郭崎
陈云霁
陈天石
Original Assignee
北京中科寒武纪科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京中科寒武纪科技有限公司
Priority to EP16899898.7A priority Critical patent/EP3451236A4/en
Priority to KR1020187033949A priority patent/KR102486030B1/ko
Publication of WO2017185387A1 publication Critical patent/WO2017185387A1/zh
Priority to US16/174,185 priority patent/US11373084B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0875Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001Arithmetic instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30036Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/45Caching of specific data in cache memory
    • G06F2212/452Instruction code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3885Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Definitions

  • The present invention relates generally to artificial neural networks, and in particular to an apparatus and method for performing the forward operation of a fully connected layer artificial neural network.
  • Multi-layer artificial neural networks are widely used in pattern recognition, image processing, function approximation, optimization, and related fields. Owing to their high recognition accuracy and good parallelism, they have attracted increasing attention from both academia and industry in recent years. Artificial neural networks involve a variety of algorithms; among them, the fully connected layer is an important algorithm that is widely used in various artificial neural network models.
  • One known method of supporting the forward operation of the fully connected layer of a multi-layer artificial neural network is to use a general-purpose processor.
  • The method supports the above algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units.
  • One disadvantage of this approach is that the computational performance of a single general-purpose processor is low and cannot meet the performance requirements of typical multi-layer artificial neural network operations.
  • When multiple general-purpose processors execute in parallel, communication between them becomes a performance bottleneck.
  • In addition, the general-purpose processor must decode the multi-layer artificial neural network forward operation into a long sequence of arithmetic and memory-access instructions, and the processor's front-end decoding incurs a large power overhead.
  • Another known method of supporting the multi-layer artificial neural network forward operation is to use a graphics processing unit (GPU).
  • The method supports the above algorithm by executing generic SIMD instructions using a general-purpose register file and general-purpose stream processors. Since the GPU is a device dedicated to graphics, image, and scientific computation, with no dedicated support for multi-layer artificial neural network operations, a large amount of front-end decoding work is still required to perform such operations, incurring substantial additional overhead.
  • Moreover, the GPU has only a small on-chip cache, so the model data (weights) of a multi-layer artificial neural network must be transferred from off-chip repeatedly, and off-chip bandwidth becomes the main performance bottleneck.
  • One aspect of the present invention provides an apparatus for performing the forward operation of an artificial neural network fully connected layer, comprising an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main operation module, and a plurality of slave operation modules, where:
  • the instruction storage unit reads instructions in through the data access unit and stores the read instructions;
  • the controller unit reads an instruction from the instruction storage unit and translates it into control signals that govern the behavior of the other modules, the other modules including the data access unit, the main operation module, and the plurality of slave operation modules;
  • the data access unit performs data or instruction read/write operations between the external address space and the apparatus;
  • the interconnection module connects the main operation module and the slave operation modules;
  • the main operation module implements the activation-function operation of the artificial neural network fully connected layer algorithm;
  • the slave operation modules implement the multiplication and addition of input neurons and weight parameters in the artificial neural network fully connected layer algorithm;
  • the interconnection module handles data transfer between the main operation module and the slave operation modules: before the forward operation of the fully connected layer begins, the main operation module sends the input neuron vector to every slave operation module through the interconnection module;
  • after the computation of the slave operation modules ends, the interconnection module splices, stage by stage, the output neuron values of the slave operation modules into an intermediate result vector and sends it back to the main operation module for subsequent calculation.
  • Another aspect of the present invention provides a method of performing the forward operation of a single-layer artificial neural network fully connected layer using the above apparatus.
  • A further aspect of the present invention provides a method of performing the forward operation of a multi-layer artificial neural network fully connected layer using the above apparatus.
  • The apparatus can be applied in, but is not limited to, the following scenarios: data processing; electronic products of all kinds, such as robots, computers, printers, scanners, phones, tablets, smart terminals, mobile phones, dashboard cameras, navigators, sensors, webcams, cloud servers, cameras, camcorders, projectors, watches, earphones, mobile storage, and wearable devices; means of transportation of all kinds, such as aircraft, ships, and vehicles; household appliances of all kinds, such as televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods; and medical equipment of all kinds, including nuclear magnetic resonance instruments, B-mode ultrasound scanners, and electrocardiographs.
  • FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
  • FIG. 2 schematically shows the structure of the H-tree module (one implementation of the interconnection module) in such an apparatus according to an embodiment of the present invention.
  • FIG. 3 shows an example block diagram of the structure of the main operation module in such an apparatus according to an embodiment of the present invention.
  • FIG. 4 shows an example block diagram of the structure of a slave operation module in such an apparatus according to an embodiment of the present invention.
  • FIG. 5 shows an example block diagram of the forward operation process of a neural network fully connected layer according to an embodiment of the present invention.
  • FIG. 6 shows one implementation of the forward operation of a single-layer artificial neural network fully connected layer according to one embodiment.
  • FIG. 7 is a flowchart of a single-layer artificial neural network fully connected layer operation according to an embodiment of the present invention.
  • The forward operation of a multi-layer artificial neural network fully connected layer according to embodiments of the present invention involves two or more layers of multiple neurons.
  • For each layer, a dot product is first computed between the input neuron vector and the weight vector; the result is passed through bias addition and an activation function to obtain the output neurons.
  • Bias addition and activation are both optional operations, and the activation function can be any of sigmoid, tanh, relu, and softmax, as illustrated in the sketch below.
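  • For illustration, the per-layer computation just described can be written as a minimal NumPy sketch (the function and parameter names here are illustrative assumptions, not taken from the patent):

```python
import numpy as np

# The patent allows sigmoid, tanh, relu, or softmax; bias addition and
# activation are both optional.
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(x, 0.0),
    "softmax": lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum(),
}

def fc_forward(w, in_vec, b=None, act="sigmoid"):
    """Fully connected layer forward pass: out = f(w @ in + b)."""
    y = w @ in_vec              # dot product of each weight row with the input
    if b is not None:           # bias addition is optional
        y = y + b
    if act is not None:         # activation is optional
        y = ACTIVATIONS[act](y)
    return y
```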
  • FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
  • As shown in FIG. 1, the apparatus includes an instruction storage unit 1, a controller unit 2, a data access unit 3, an interconnection module 4, a main operation module 5, and a plurality of slave operation modules 6.
  • The instruction storage unit 1, controller unit 2, data access unit 3, interconnection module 4, main operation module 5, and slave operation modules 6 can all be implemented as hardware circuits (including but not limited to FPGAs, CGRAs, application-specific integrated circuits (ASICs), analog circuits, or memristors).
  • The instruction storage unit 1 reads instructions in through the data access unit 3 and stores them.
  • The instruction storage unit 1 can be implemented with various memory devices (SRAM, DRAM, memristor, 3D-DRAM, non-volatile memory, etc.).
  • The controller unit 2 reads an instruction from the instruction storage unit 1 and translates it into control signals that govern the behavior of the other modules, for example the data access unit 3, the main operation module 5, and the slave operation modules 6.
  • The data access unit 3 can access the external address space, writing data or instructions directly to each storage unit inside the apparatus, or writing data from each internal storage unit to the external address space.
  • The interconnection module 4 connects the main operation module and the slave operation modules and can be implemented with different interconnection topologies (such as a tree structure, a ring structure, a grid structure, hierarchical interconnection, or a bus structure).
  • FIG. 2 schematically shows one implementation of the interconnection module 4: an H-tree structure.
  • The H-tree module 4 forms the data path between the main operation module 5 and the plurality of slave operation modules 6 and has an H-tree structure.
  • The H-tree is a binary-tree path composed of multiple nodes; each node forwards upstream data identically to its two downstream nodes, merges the data returned by the two downstream nodes, and returns the result to its upstream node.
  • For example, at the start of the fully connected layer computation, the neuron data in the main operation module 5 is sent to each slave operation module 6 through the H-tree module 4; when the computation of the slave operation modules 6 is complete, the neuron values output by each slave operation module are spliced, stage by stage in the H-tree module, into a complete vector of neurons that serves as the intermediate result vector. For example, assuming the apparatus has N slave operation modules, the intermediate result vector is divided into segments of N elements each, and the i-th slave operation module computes the i-th element of every segment.
  • The N elements of each segment are spliced into a vector of length N by the H-tree module and returned to the main operation module. Thus if the network has only N output neurons, each slave module outputs the value of a single neuron; if the network has m*N output neurons, each slave module outputs m neuron values. This mapping is sketched below.
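  • A small sketch (illustrative Python; N and m are free parameters) of this mapping: slave i produces the i-th element of every N-element segment, i.e. output neurons i, N+i, 2N+i, ..., and the H-tree splices the per-slave values back into segment order:

```python
import numpy as np

def slave_outputs(i, full_output, N):
    # Slave i computes the i-th element of every N-element segment.
    return full_output[i::N]

def htree_gather(per_slave, N):
    # Splice per-slave values back into order: segment s of the result is
    # [slave_0[s], slave_1[s], ..., slave_{N-1}[s]].
    m = len(per_slave[0])
    out = np.empty(m * N)
    for i in range(N):
        out[i::N] = per_slave[i]
    return out

N, m = 4, 3
full = np.arange(N * m, dtype=float)   # stand-in intermediate result vector
parts = [slave_outputs(i, full, N) for i in range(N)]
assert np.allclose(htree_gather(parts, N), full)
```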
  • FIG. 3 shows an example block diagram of the structure of the main operation module 5 in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
  • As shown in FIG. 3, the main operation module 5 includes a first operation unit 51, a first data dependency determination unit 52, and a first storage unit 53.
  • The first operation unit 51 includes a vector addition unit 511 and an activation unit 512.
  • The first operation unit 51 receives control signals from the controller unit and carries out the various operation functions of the main operation module 5. The vector addition unit 511 implements the bias-addition operation of the fully connected layer forward computation:
  • this component adds the bias vector to the intermediate result vector transferred back from the slave operation modules 6 through the interconnection module 4, and its output is the vector sum.
  • The activation unit 512 implements the activation-function operation of the artificial neural network fully connected layer.
  • The input of this component is the intermediate result transferred back from the slave operation modules 6 through the interconnection module 4, or the output of the vector addition unit 511, and its output is the neuron vector after activation.
  • The bias vector can be read in from the external address space or stored locally.
  • The first data dependency determination unit 52 is the port through which the first operation unit 51 reads and writes the first storage unit 53, and it guarantees read/write consistency of the data in the first storage unit 53. At the same time, the first data dependency determination unit 52 is responsible for sending data read from the first storage unit 53 to the slave operation modules through the interconnection module 4, while output data from the slave operation modules 6 is sent directly to the first operation unit 51 through the interconnection module 4.
  • Instructions output by the controller unit 2 are sent to the first operation unit 51 and the first data dependency determination unit 52 to control their behavior.
  • The first storage unit 53 caches the input and output data used by the main operation module 5 during computation.
  • As shown in FIG. 4, each slave operation module 6 includes a second operation unit 61, a second data dependency determination unit 62, a second storage unit 63, and a third storage unit 64.
  • The second operation unit 61 receives control signals issued by the controller unit 2 and performs the dot-product operation; it includes a vector multiplication unit 611 and an accumulation unit 612.
  • The vector multiplication unit 611 implements the element-wise multiplication of the neuron vector and the weight vector;
  • the accumulation unit 612 implements the operation of summing all elements of the resulting vector.
  • The second data dependency determination unit 62 is responsible for reads and writes to the second storage unit 63 during computation. Before performing a read or write, it first ensures that there is no read/write consistency conflict among the data used by different instructions. For example, all control signals sent to the data dependency unit 62 are stored in an instruction queue inside the unit; if the read range of a read instruction in that queue conflicts with the write range of a write instruction earlier in the queue, the read instruction must wait until the write instruction it depends on has been executed, as sketched below.
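  • A minimal sketch of that consistency check (illustrative Python; tracking conflicts at the granularity of half-open address ranges is an assumption made for this example):

```python
def ranges_overlap(a, b):
    # a and b are (start, end) half-open address ranges.
    return a[0] < b[1] and b[0] < a[1]

def can_issue_read(read_range, queue):
    """queue holds earlier, not-yet-executed instructions as
    ('read'|'write', (start, end)) tuples, oldest first."""
    return not any(op == "write" and ranges_overlap(read_range, rng)
                   for op, rng in queue)

pending = [("write", (0, 64)), ("read", (64, 128))]
assert not can_issue_read((32, 96), pending)  # overlaps the queued write: wait
assert can_issue_read((128, 192), pending)    # no conflict: may issue
```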
  • The second storage unit 63 caches the input neuron vector data and the output neuron value data of the slave operation module 6.
  • The third storage unit 64 caches the weight data needed by the slave operation module 6 during computation.
  • According to embodiments of the present invention, each slave operation module 6 may store only the weights between all input neurons and a subset of the output neurons.
  • The output neurons are partitioned according to the number N of slave operation modules, and the weights corresponding to the n-th output neuron of each segment are stored in the n-th slave operation module.
  • The forward operation of the fully connected layer is y = f(wx + b), in which the multiplication of the weight matrix w and the input neuron (column) vector in can be divided into independent parallel subtasks: each slave operation module 6 computes only the dot products of the in column vector with its corresponding rows of the weight matrix w, each result being a one-dimensional component of the intermediate result vector; these one-dimensional components are spliced together stage by stage in the interconnection module 4 to obtain the intermediate result vector. The computation thus becomes a parallel partial-sum phase followed by a splicing phase.
  • Each slave operation module 6 computes output neuron values, and all of the output neuron values are spliced together in the interconnection module 4 into the intermediate result vector. Each slave operation module 6 only needs to compute the output neuron values of the intermediate result vector y corresponding to that module, as simulated in the sketch below.
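  • The row partitioning and splicing can be simulated end to end (illustrative Python; the interleaved row assignment follows the segment mapping above, and tanh stands in for whichever activation is configured):

```python
import numpy as np

def device_forward(w, in_vec, b, N, act=np.tanh):
    """Simulate N slave modules: slave i locally stores weight rows
    i, N+i, 2N+i, ..., computes their dot products with the broadcast
    input, and the interconnect splices the one-dimensional components
    into the intermediate result vector."""
    intermediate = np.empty(w.shape[0])
    for i in range(N):                          # each slave runs in parallel in hardware
        intermediate[i::N] = w[i::N] @ in_vec   # dot products only, no bias/activation
    return act(intermediate + b)                # main module: bias addition + activation

rng = np.random.default_rng(0)
w, x, b = rng.normal(size=(8, 5)), rng.normal(size=5), rng.normal(size=8)
assert np.allclose(device_forward(w, x, b, N=4), np.tanh(w @ x + b))
```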
  • According to embodiments of the present invention, an instruction set for performing the artificial neural network forward operation on the aforementioned apparatus includes the CONFIG, COMPUTE, IO, NOP, JUMP, and MOVE instructions, where:
  • the CONFIG instruction configures, before each layer's computation begins, the various constants needed by the current layer's computation;
  • the COMPUTE instruction carries out the arithmetic and logic computation of each layer of the artificial neural network;
  • the IO instruction reads the input data needed for computation in from the external address space and stores data back to the external space after computation completes;
  • the NOP instruction clears the control signals in all of the apparatus's control-signal buffer queues, guaranteeing that all instructions before the NOP have finished executing; the NOP instruction itself contains no operation;
  • the JUMP instruction controls the jump of the address of the next instruction to be read from the instruction storage unit, implementing control-flow jumps;
  • the MOVE instruction moves data at one address in the apparatus's internal address space to another address in the internal address space; this process is independent of the operation units and occupies no operation-unit resources while executing. The instruction set is summarized in the sketch below.
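  • A schematic Python rendering of the six instructions (the patent defines their semantics but no concrete encoding; the enum below is an illustrative assumption):

```python
from enum import Enum, auto

class Op(Enum):
    CONFIG = auto()   # set the per-layer constants before a layer starts
    COMPUTE = auto()  # arithmetic/logic work of one fully connected layer
    IO = auto()       # move data between the external address space and the device
    NOP = auto()      # drain all control-signal queues; carries no operation itself
    JUMP = auto()     # redirect the address of the next instruction (control flow)
    MOVE = auto()     # copy data between internal addresses without using the ALU
```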
  • FIG. 5 shows an example block diagram of the artificial neural network fully connected layer forward operation process according to an embodiment of the present invention.
  • In each slave operation module 6, the input neuron vector undergoes a dot-product operation with that module's weight vector to obtain the corresponding output neuron value, and all of these output neuron values form the intermediate result vector.
  • The intermediate result vector then undergoes an activation operation, or bias-vector addition followed by an activation operation, to yield the final output neuron vector of this layer of the neural network: out = f(w*in + b), where out is the output neuron vector, in the input neuron vector, b the bias vector, w the weight matrix, and f the activation function.
  • The weight vector of each slave operation module 6 is the row vector of the weight matrix corresponding to that slave operation module 6.
  • The interconnection module sends the input neuron vector [in0, ..., inN] to all slave operation modules, where it is temporarily stored in the neuron caching unit. The i-th slave operation module computes the dot product of its weight vector [w_i0, ..., w_iN] with the input neuron vector.
  • The results output by the slave operation modules are spliced into the complete intermediate result vector by the interconnection module and returned to the main operation module, where the activation operation, or bias addition and activation, is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
  • FIG. 6 shows one implementation of the forward operation of a single-layer artificial neural network fully connected layer according to one embodiment.
  • The flowchart describes the process of implementing the single-layer fully connected layer forward operation shown in FIG. 1 using the apparatus and instruction set of the present invention.
  • Step S1.1: store the initial instructions in the instruction storage unit 1;
  • Step S1.2: read one instruction from the instruction storage unit 1;
  • Step S1.3: decode the instruction;
  • Step S1.4: perform the corresponding operations according to the decoded control signals;
  • Step S1.5: write the operation results back to the corresponding storage.
  • In step S1.1, an initialization IO instruction may be stored, used to load the subsequent instructions.
  • In step S1.2, the readable instructions include, but are not limited to, the CONFIG, COMPUTE, IO, NOP, JUMP, and MOVE instructions.
  • In step S1.3, the control signals for the corresponding module are obtained by decoding according to the operation type of the instruction (CONFIG, COMPUTE, IO, NOP, JUMP, MOVE, etc.).
  • For a CONFIG instruction, decoding yields the configuration information for the other modules.
  • For a COMPUTE instruction, decoding yields the control signals for the main and slave operation modules.
  • For an IO instruction, decoding yields the control signals for the data access module.
  • For a NOP instruction, no actual control signal is produced; the instruction is only used to clear the control signals in all of the apparatus's control-signal queues, guaranteeing that all instructions before the NOP have finished executing.
  • For a JUMP instruction, a control signal for jumping the instruction stream is obtained.
  • For a MOVE instruction, a control signal for moving data inside the apparatus is obtained.
  • In step S1.4, the above modules 2-6 perform the corresponding operations according to the control signals. Taking the COMPUTE instruction for the fully connected layer forward operation as an example:
  • the interconnection module sends the input neuron vector [in0, ..., inN] to all the slave operation modules, where it is temporarily stored in the second storage unit 63.
  • The i-th slave operation module computes the dot product of its weight vector [w_i0, ..., w_iN] with the input neuron vector.
  • The results output by the slave operation modules are spliced into the complete output vector by the interconnection module and returned to the main operation module,
  • where the activation operation, or bias addition and activation, is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
  • In step S1.5, each module writes its operation results back to the corresponding storage unit. Taking the fully connected layer forward operation as an example, the output neuron vector obtained by the main operation module is written back to the first storage unit 53. The overall loop is sketched below.
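  • Steps S1.2-S1.5 amount to a fetch-decode-execute loop; a schematic Python rendering (the handler table and the `execute`/`write_back` interface are illustrative assumptions, not the patent's design):

```python
def decode(op, args):
    # S1.3: map the operation type to control signals for the target modules.
    targets = {"CONFIG": "all_modules", "COMPUTE": "main_slave",
               "IO": "data_access", "MOVE": "data_access"}
    if op == "NOP":                  # no real control signal; just drains queues
        return []
    return [(targets[op], args)]

def run(instruction_storage, modules):
    pc = 0                                       # S1.1: program already stored
    while pc < len(instruction_storage):
        op, args = instruction_storage[pc]       # S1.2: read one instruction
        if op == "JUMP":                         # redirect next instruction address
            pc = args["target"]
            continue
        for target, signal in decode(op, args):  # S1.3: decode to control signals
            result = modules[target].execute(signal)  # S1.4: perform the operation
            modules[target].write_back(result)        # S1.5: result back to storage
        pc += 1
```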
  • FIG. 7 shows another, more detailed implementation of the single-layer artificial neural network fully connected layer forward operation.
  • In step S2.1, an IO instruction is pre-stored in the instruction storage unit 1.
  • In step S2.2, the operation starts: the controller unit 2 reads this IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads all of the corresponding artificial neural network fully connected layer operation instructions from the external address space and stores them in the instruction storage unit 1.
  • In step S2.3, the controller unit 2 then reads the next IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads all the data needed by the main operation module 5 (for example, the input neuron vector, interpolation tables, constant tables, biases, etc.) from the external address space into the first storage unit 53 of the main operation module 5.
  • In step S2.4, the controller unit 2 then reads the next IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads the weight matrix data needed by the slave operation modules 6 from the external address space.
  • In step S2.5, the controller unit 2 then reads the next CONFIG instruction from the instruction storage unit and, according to the decoded control signals, configures the various constants needed for this layer's computation.
  • For example, the operation units 51 and 61 set the values of their internal registers according to parameters in the control signals, such as the data needed by the activation function.
  • Step S2.5 is optional; in some cases it can be skipped and the following steps executed directly.
  • In step S2.6, the controller unit 2 then reads the next COMPUTE instruction from the instruction storage unit.
  • According to the decoded control signals, the main operation module 5 first sends the input neuron vector to each slave operation module 6 through the interconnection module 4, to be saved in the second storage unit 63 of each slave operation module 6.
  • In step S2.7, according to the control signals decoded from the COMPUTE instruction, the second operation unit 61 of each slave operation module 6 reads the weight vector (the row vector of the weight matrix corresponding to that slave operation module 6) from the third storage unit 64, reads the input neuron vector from the second storage unit 63, completes the dot-product operation of the weight vector and the input neuron vector, and returns the intermediate result through the interconnection module.
  • In step S2.8, in the interconnection module 4, the intermediate results returned by the slave operation modules 6 are spliced stage by stage into the complete intermediate result vector.
  • In step S2.9, the main operation module 5 obtains the value returned by the interconnection module 4 and, according to the control signals decoded from the COMPUTE instruction, reads the bias vector from the first storage unit 53 and adds it to the vector returned by the interconnection module 4 in the vector addition unit 511;
  • the activation unit 512 then activates the sum, and the final output neuron vector is written back to the first storage unit 53.
  • In step S2.10, the controller unit 2 then reads the next IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit 3 stores the output neuron vector in the first storage unit 53 to the designated address in the external address space; the operation then ends. The whole sequence is summarized in the sketch below.
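  • Written out as an instruction program, steps S2.1-S2.10 correspond to something like the following schematic list (the argument names are illustrative, not an encoding defined by the patent):

```python
single_layer_program = [
    ("IO", {"load": "layer instructions", "to": "instruction storage unit"}),  # S2.1-S2.2
    ("IO", {"load": "neurons, tables, bias", "to": "first storage unit 53"}),  # S2.3
    ("IO", {"load": "weight matrix", "to": "slave third storage units 64"}),   # S2.4
    ("CONFIG", {"constants": "activation parameters"}),                        # S2.5 (optional)
    ("COMPUTE", {"op": "fully connected forward"}),                            # S2.6-S2.9
    ("IO", {"store": "output neurons", "to": "external address space"}),       # S2.10
]
```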
  • For a multi-layer neural network fully connected layer, the implementation process is similar to the single-layer case.
  • After the previous artificial neural network fully connected layer finishes executing, the operation instructions of the next layer use the output neuron address of the previous layer, stored in the main operation module, as the input neuron address of this layer.
  • Likewise, the weight address and bias address in the instructions are changed to the addresses corresponding to this layer, as sketched below.
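  • A sketch of that chaining (illustrative Python): each layer's COMPUTE instruction reuses the previous layer's output address as its input address, with per-layer weight and bias addresses:

```python
def multilayer_program(layers, first_input_addr):
    """layers: list of dicts with 'weight_addr', 'bias_addr', 'output_addr'."""
    program, in_addr = [], first_input_addr
    for layer in layers:
        program.append(("COMPUTE", {
            "input_addr": in_addr,          # previous layer's output address
            "weight_addr": layer["weight_addr"],
            "bias_addr": layer["bias_addr"],
            "output_addr": layer["output_addr"],
        }))
        in_addr = layer["output_addr"]      # becomes the next layer's input
    return program
```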
  • The processes or methods depicted in the preceding figures may be performed by processing logic comprising hardware (e.g., circuitry, dedicated logic, etc.), firmware, software (e.g., software embodied on a non-transitory computer-readable medium), or a combination of the two.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Image Analysis (AREA)
  • Advance Control (AREA)

Abstract

The present invention provides an apparatus for performing the forward operation of an artificial neural network fully connected layer, comprising an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main operation module, and a plurality of slave operation modules. The apparatus can be used to implement the forward operation of one or more fully connected layers of an artificial neural network. For each layer, the input neuron vector is first weighted and summed to compute the intermediate result vector of the layer; the intermediate result vector is then biased and activated to obtain the output neuron vector, which serves as the input neuron vector of the next layer.

Description

Apparatus and method for performing the forward operation of a fully connected layer neural network

Technical Field
The present invention relates generally to artificial neural networks, and in particular to an apparatus and method for performing the forward operation of a fully connected layer artificial neural network.
Background
Multi-layer artificial neural networks are widely used in pattern recognition, image processing, function approximation, optimization, and related fields. Owing to their high recognition accuracy and good parallelism, multi-layer artificial networks have attracted increasingly broad attention from academia and industry in recent years. Artificial neural networks involve a variety of algorithms, among which the fully connected layer, an important algorithm in artificial neural networks, is widely used in various artificial neural network models.
One known method of supporting the forward operation of the fully connected layer of a multi-layer artificial neural network is to use a general-purpose processor. That method supports the algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units. One of its disadvantages is that the computational performance of a single general-purpose processor is low and cannot meet the performance requirements of typical multi-layer artificial neural network operations. When multiple general-purpose processors execute in parallel, communication between them in turn becomes a performance bottleneck. In addition, the general-purpose processor must decode the multi-layer artificial neural network forward operation into a long sequence of arithmetic and memory-access instructions, and the processor's front-end decoding incurs a large power overhead.
Another known method of supporting the multi-layer artificial neural network forward operation is to use a graphics processing unit (GPU). That method supports the algorithm by executing generic SIMD instructions using a general-purpose register file and general-purpose stream processors. Since the GPU is a device dedicated to graphics, image, and scientific computation, with no dedicated support for multi-layer artificial neural network operations, a large amount of front-end decoding work is still required to perform such operations, incurring substantial additional overhead. Moreover, the GPU has only a small on-chip cache; the model data (weights) of a multi-layer artificial neural network must be transferred from off-chip repeatedly, and off-chip bandwidth becomes the main performance bottleneck.
Summary of the Invention
One aspect of the present invention provides an apparatus for performing the forward operation of an artificial neural network fully connected layer, comprising an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main operation module, and a plurality of slave operation modules, wherein:
the instruction storage unit reads instructions in through the data access unit and stores the read instructions;
the controller unit reads instructions from the instruction storage unit and translates them into control signals that govern the behavior of other modules, the other modules including the data access unit, the main operation module, and the plurality of slave operation modules;
the data access unit performs data or instruction read/write operations between the external address space and the apparatus;
the interconnection module connects the main operation module and the slave operation modules;
the main operation module implements the activation-function operation of the artificial neural network fully connected layer algorithm;
the slave operation modules implement the multiplication and addition of input neurons and weight parameters in the artificial neural network fully connected layer algorithm;
the interconnection module handles data transfer between the main operation module and the slave operation modules: before the forward operation of the neural network fully connected layer begins, the main operation module sends the input neuron vector to every slave operation module through the interconnection module; after the computation of the slave operation modules ends, the interconnection module splices, stage by stage, the output neuron values of the slave operation modules into an intermediate result vector and sends it back to the main operation module for subsequent computation.
Another aspect of the present invention provides a method of performing the forward operation of a single-layer artificial neural network fully connected layer using the above apparatus.
A further aspect of the present invention provides a method of performing the forward operation of a multi-layer artificial neural network fully connected layer using the above apparatus.
The apparatus can be applied in, but is not limited to, the following scenarios: data processing; electronic products of all kinds, such as robots, computers, printers, scanners, phones, tablets, smart terminals, mobile phones, dashboard cameras, navigators, sensors, webcams, cloud servers, cameras, camcorders, projectors, watches, earphones, mobile storage, and wearable devices; means of transportation of all kinds, such as aircraft, ships, and vehicles; household appliances of all kinds, such as televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods; and medical equipment of all kinds, including nuclear magnetic resonance instruments, B-mode ultrasound scanners, and electrocardiographs.
Brief Description of the Drawings
For a more complete understanding of the present invention and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
FIG. 2 schematically shows the structure of the H-tree module (one implementation of the interconnection module) in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
FIG. 3 shows an example block diagram of the structure of the main operation module in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
FIG. 4 shows an example block diagram of the structure of a slave operation module in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention.
FIG. 5 shows an example block diagram of the forward operation process of a neural network fully connected layer according to an embodiment of the present invention.
FIG. 6 shows one implementation of the forward operation of a single-layer artificial neural network fully connected layer according to one embodiment.
FIG. 7 is a flowchart of a single-layer artificial neural network fully connected layer operation according to an embodiment of the present invention.
Throughout the drawings, the same devices, components, units, etc. are denoted by the same reference numerals.
Detailed Description
Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description of exemplary embodiments of the invention taken in conjunction with the accompanying drawings.
In the present invention, the terms "comprising" and "containing" and their derivatives are intended to be inclusive rather than limiting; the term "or" is inclusive, meaning and/or.
In this specification, the various embodiments described below for explaining the principles of the present invention are illustrative only and should not be construed in any way as limiting the scope of the invention. The following description with reference to the accompanying drawings is intended to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. The description includes various specific details to aid understanding, but these details are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. Furthermore, the same reference numerals are used for similar functions and operations throughout the drawings.
The forward operation of a multi-layer artificial neural network fully connected layer according to embodiments of the present invention involves two or more layers of multiple neurons. For each layer, a dot product is first computed between the input neuron vector and the weight vector, and the result is passed through bias addition and an activation function to obtain the output neurons. Bias addition and activation are both optional operations, and the activation function can be any of sigmoid, tanh, relu, and softmax.
FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention. As shown in FIG. 1, the apparatus includes an instruction storage unit 1, a controller unit 2, a data access unit 3, an interconnection module 4, a main operation module 5, and a plurality of slave operation modules 6. The instruction storage unit 1, controller unit 2, data access unit 3, interconnection module 4, main operation module 5, and slave operation modules 6 can all be implemented as hardware circuits (including but not limited to FPGAs, CGRAs, application-specific integrated circuits (ASICs), analog circuits, or memristors).
The instruction storage unit 1 reads instructions in through the data access unit 3 and stores them. The instruction storage unit 1 can be implemented with various memory devices (SRAM, DRAM, memristor, 3D-DRAM, non-volatile memory, etc.).
The controller unit 2 reads instructions from the instruction storage unit 1 and translates them into control signals that govern the behavior of the other modules, including for example the data access unit 3, the main operation module 5, and the slave operation modules 6.
The data access unit 3 can access the external address space, writing data or instructions directly to each storage unit inside the apparatus, or writing data from each internal storage unit to the external address space.
The interconnection module 4 connects the main operation module and the slave operation modules and can be implemented with different interconnection topologies (such as a tree structure, a ring structure, a grid structure, hierarchical interconnection, or a bus structure).
FIG. 2 schematically shows one implementation of the interconnection module 4: an H-tree structure. The H-tree module 4 forms the data path between the main operation module 5 and the plurality of slave operation modules 6 and has an H-tree structure. The H-tree is a binary-tree path composed of multiple nodes; each node forwards upstream data identically to its two downstream nodes, merges the data returned by the two downstream nodes, and returns the result to its upstream node. For example, at the start of the artificial neural network fully connected layer computation, the neuron data in the main operation module 5 is sent through the H-tree module 4 to each slave operation module 6; when the computation of the slave operation modules 6 is complete, the neuron values output by each slave operation module are spliced, stage by stage in the H-tree module, into a complete vector of neurons that serves as the intermediate result vector. For example, assuming the apparatus has N slave operation modules, the intermediate result vector is divided into segments of N elements each, and the i-th slave operation module computes the i-th element of every segment. The N elements are spliced by the H-tree module into a vector of length N and returned to the main operation module. Thus if the network has only N output neurons, each slave operation module outputs the value of a single neuron; if the network has m*N output neurons, each slave operation module outputs m neuron values.
FIG. 3 shows an example block diagram of the structure of the main operation module 5 in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention. As shown in FIG. 3, the main operation module 5 includes a first operation unit 51, a first data dependency determination unit 52, and a first storage unit 53.
The first operation unit 51 includes a vector addition unit 511 and an activation unit 512. The first operation unit 51 receives control signals from the controller unit and carries out the various operation functions of the main operation module 5. The vector addition unit 511 implements the bias-addition operation of the fully connected layer forward computation: this component adds the bias vector to the intermediate result vector transferred back from the slave operation modules 6 through the interconnection module 4, and its output is the vector sum. The activation unit 512 implements the activation-function operation of the artificial neural network fully connected layer: its input is the intermediate result transferred back from the slave operation modules 6 through the interconnection module 4, or the output of the vector addition unit 511, and its output is the neuron vector after activation. The bias vector can be read in from the external address space or stored locally.
The first data dependency determination unit 52 is the port through which the first operation unit 51 reads and writes the first storage unit 53, guaranteeing read/write consistency of the data in the first storage unit 53. At the same time, the first data dependency determination unit 52 is responsible for sending data read from the first storage unit 53 to the slave operation modules through the interconnection module 4, while output data from the slave operation modules 6 is sent directly to the first operation unit 51 through the interconnection module 4. Instructions output by the controller unit 2 are sent to the first operation unit 51 and the first data dependency determination unit 52 to control their behavior.
The first storage unit 53 caches the input and output data used by the main operation module 5 during computation.
FIG. 4 shows an example block diagram of the structure of a slave operation module 6 in an apparatus for performing the forward operation of an artificial neural network fully connected layer according to an embodiment of the present invention. As shown in FIG. 4, each slave operation module 6 includes a second operation unit 61, a second data dependency determination unit 62, a second storage unit 63, and a third storage unit 64.
The second operation unit 61 receives control signals issued by the controller unit 2 and performs the dot-product operation; it includes a vector multiplication unit 611 and an accumulation unit 612. The vector multiplication unit 611 implements the element-wise multiplication of the neuron vector and the weight vector, and the accumulation unit 612 implements the operation of accumulating all elements of the vector.
The second data dependency determination unit 62 is responsible for reads and writes to the second storage unit 63 during computation. Before performing a read or write, the second data dependency determination unit 62 first ensures that there is no read/write consistency conflict among the data used by different instructions. For example, all control signals sent to the data dependency unit 62 are stored in an instruction queue inside the unit; in that queue, if the read range of a read instruction conflicts with the write range of a write instruction earlier in the queue, the read instruction may execute only after the write instruction it depends on has been executed.
The second storage unit 63 caches the input neuron vector data and the output neuron value data of the slave operation module 6.
The third storage unit 64 caches the weight data needed by the slave operation module 6 during computation. According to embodiments of the present invention, each slave operation module 6 may store only the weights between all input neurons and a subset of the output neurons. The output neurons are partitioned according to the number N of slave operation modules, and the weights corresponding to the n-th output neuron of each segment are stored in the n-th slave operation module.
The slave operation modules 6 realize the parallelism of the dot-product operation in the fully connected layer forward computation. The forward operation of the fully connected layer is y = f(wx + b), in which the multiplication of the weight matrix w and the input neuron vector in can be divided into unrelated parallel subtasks: in is a column vector, and each slave operation module 6 computes only the dot products of the in column vector with the corresponding rows of the weight matrix w, each output being a one-dimensional component of the intermediate result vector; these one-dimensional components are spliced together stage by stage in the interconnection module 4 to obtain the intermediate result vector. The computation thus becomes a parallel partial-sum phase followed by a splicing phase. Each slave operation module 6 computes output neuron values, and all the output neuron values are spliced together in the interconnection module 4 into the intermediate result vector. Each slave operation module 6 only needs to compute the output neuron values of the intermediate result vector y corresponding to that module.
According to embodiments of the present invention, an instruction set for performing the artificial neural network forward operation on the aforementioned apparatus is also provided. The instruction set includes the CONFIG, COMPUTE, IO, NOP, JUMP, and MOVE instructions, where:
the CONFIG instruction configures, before each layer's computation begins, the various constants needed by the current layer's computation;
the COMPUTE instruction carries out the arithmetic and logic computation of each layer of the artificial neural network;
the IO instruction reads the input data needed for computation in from the external address space and stores data back to the external space after computation completes;
the NOP instruction clears the control signals in all of the apparatus's internal control-signal buffer queues, guaranteeing that all instructions before the NOP have finished executing; the NOP instruction itself contains no operation;
the JUMP instruction controls the jump of the address of the next instruction to be read from the instruction storage unit, implementing control-flow jumps;
the MOVE instruction moves data at one address in the apparatus's internal address space to another address in the internal address space; this process is independent of the operation units and occupies no operation-unit resources while executing.
FIG. 5 shows an example block diagram of the artificial neural network fully connected layer forward operation process according to an embodiment of the present invention. In the different slave operation modules 6, the input neuron vector undergoes a dot-product operation with each module's weight vector to obtain the corresponding output neuron value; all of these output neuron values form the intermediate result vector. The intermediate result vector, through an activation operation, or through bias-vector addition followed by an activation operation, yields the final output neuron vector of this layer of the neural network, described by the formula out = f(w*in + b), where out is the output neuron vector, in the input neuron vector, b the bias vector, w the weight matrix, and f the activation function (active). The weight vector of each slave operation module 6 is the row vector of the weight matrix corresponding to that slave operation module 6. The interconnection module sends the input neuron vector [in0, ..., inN] to all the slave operation modules, where it is temporarily stored in the neuron caching unit. The i-th slave operation module computes the dot product of its weight vector [w_i0, ..., w_iN] with the input neuron vector. The results output by the slave operation modules are spliced into the complete intermediate result vector by the interconnection module and returned to the main operation module, where the activation operation, or bias addition and activation, is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
FIG. 6 shows one implementation of the forward operation of a single-layer artificial neural network fully connected layer according to one embodiment. The flowchart describes the process of implementing the single-layer neural network fully connected layer forward operation shown in FIG. 1 using the apparatus and instruction set of the present invention.
Step S1.1: store the initial instructions in the instruction storage unit 1;
Step S1.2: read one instruction from the instruction storage unit 1;
Step S1.3: decode the instruction;
Step S1.4: perform the corresponding operations according to the decoded control signals;
Step S1.5: write the operation results back to the corresponding storage.
In step S1.1, an initialization IO instruction may be stored, used to load the subsequent instructions.
In step S1.2, the readable instructions include, but are not limited to, the CONFIG, COMPUTE, IO, NOP, JUMP, and MOVE instructions.
In step S1.3, the control signals for the corresponding modules are obtained by decoding according to the operation type of the instruction (CONFIG, COMPUTE, IO, NOP, JUMP, MOVE, etc.). For a CONFIG instruction, decoding yields the configuration information for the other modules. For a COMPUTE instruction, decoding yields the control signals for the main and slave operation modules. For an IO instruction, decoding yields the control signals for the data access module. For a NOP instruction, no actual control signal is produced; the instruction is only used to clear the control signals in all of the apparatus's internal control-signal queues, guaranteeing that all instructions before the NOP have finished executing. For a JUMP instruction, a control signal for jumping the instruction stream is obtained. For a MOVE instruction, a control signal for moving data inside the apparatus is obtained.
In step S1.4, the above modules 2-6 perform the corresponding operations according to the control signals. Taking the COMPUTE instruction for the neural network fully connected layer forward operation as an example: the interconnection module sends the input neuron vector [in0, ..., inN] to all the slave operation modules, where it is temporarily stored in the second storage unit 63. The i-th slave operation module computes the dot product of its weight vector [w_i0, ..., w_iN] with the input neuron vector. The results output by the slave operation modules are spliced into the complete output vector by the interconnection module and returned to the main operation module, where the activation operation, or bias addition and activation, is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
In step S1.5, each module writes its operation results back to the corresponding storage unit. Taking the neural network fully connected layer forward operation as an example, the output neuron vector obtained by the main operation module is written back to the first storage unit 53.
FIG. 7 shows another, more detailed implementation of the single-layer artificial neural network fully connected layer forward operation.
In step S2.1, an IO instruction is pre-stored in the instruction storage unit 1.
In step S2.2, the operation starts: the controller unit 2 reads this IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads all of the corresponding artificial neural network fully connected layer operation instructions from the external address space and stores them in the instruction storage unit 1.
In step S2.3, the controller unit 2 then reads the next IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads all the data needed by the main operation module 5 (for example, the input neuron vector, interpolation tables, constant tables, biases, etc.) from the external address space into the first storage unit 53 of the main operation module 5.
In step S2.4, the controller unit 2 then reads the next IO instruction from the instruction storage unit 1, and according to the decoded control signals, the data access unit 3 reads the weight matrix data needed by the slave operation modules 6 from the external address space.
In step S2.5, the controller unit 2 then reads the next CONFIG instruction from the instruction storage unit and, according to the decoded control signals, configures the various constants needed for this layer of the neural network's computation. For example, the operation units 51 and 61 set the values of their internal registers according to parameters in the control signals, including for example the data needed by the activation function. Step S2.5 is optional; in some cases it can be skipped and the following steps executed directly.
In step S2.6, the controller unit 2 then reads the next COMPUTE instruction from the instruction storage unit, and according to the decoded control signals, the main operation module 5 first sends the input neuron vector to each slave operation module 6 through the interconnection module 4, to be saved in the second storage unit 63 of each slave operation module 6.
In step S2.7, according to the control signals decoded from the COMPUTE instruction, the second operation unit 61 of each slave operation module 6 reads the weight vector (the row vector of the weight matrix corresponding to that slave operation module 6) from the third storage unit 64, reads the input neuron vector from the second storage unit 63, completes the dot-product operation of the weight vector and the input neuron vector, and returns the intermediate result through the interconnection module.
In step S2.8, in the interconnection module 4, the intermediate results returned by the slave operation modules 6 are spliced stage by stage into the complete intermediate result vector.
In step S2.9, the main operation module 5 obtains the value returned by the interconnection module 4 and, according to the control signals decoded from the COMPUTE instruction, reads the bias vector from the first storage unit 53 and adds it to the vector returned by the interconnection module 4 in the vector addition unit 511; the activation unit 512 then activates the sum, and the final output neuron vector is written back to the first storage unit 53.
In step S2.10, the controller unit 2 then reads the next IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit 3 stores the output neuron vector in the first storage unit 53 to the designated address in the external address space; the operation then ends.
For a multi-layer neural network fully connected layer, the implementation process is similar to the single-layer case: after the previous artificial neural network fully connected layer finishes executing, the operation instructions of the next layer use the output neuron address of the previous layer, stored in the main operation module, as the input neuron address of this layer. Likewise, the weight address and bias address in the instructions are changed to the addresses corresponding to this layer.
By employing an apparatus and instruction set dedicated to the forward operation of the artificial neural network fully connected layer, the problems of insufficient CPU and GPU computational performance and high front-end decoding overhead are solved, and support for the forward operation of multi-layer artificial neural network fully connected layers is effectively improved.
By employing dedicated on-chip storage for the forward operation of the multi-layer artificial neural network fully connected layer, the reuse of input neurons and weight data is fully exploited, avoiding repeated reads of these data from memory, reducing memory access bandwidth, and preventing memory bandwidth from becoming a bottleneck of the forward operation performance of the multi-layer artificial neural network fully connected layer.
The processes or methods depicted in the preceding figures may be performed by processing logic comprising hardware (e.g., circuitry, dedicated logic, etc.), firmware, software (e.g., software embodied on a non-transitory computer-readable medium), or a combination of the two. Although the processes or methods have been described above in terms of certain sequential operations, it should be understood that some of the described operations can be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
In the foregoing specification, embodiments of the present invention have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made to the embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (12)

  1. An apparatus for performing the forward operation of an artificial neural network fully connected layer, comprising an instruction storage unit, a controller unit, a data access unit, an interconnection module, a main operation module, and a plurality of slave operation modules, wherein:
    the instruction storage unit reads instructions in through the data access unit and stores the read instructions;
    the controller unit reads instructions from the instruction storage unit and translates them into control signals that govern the behavior of other modules, the other modules including the data access unit, the main operation module, and the plurality of slave operation modules;
    the data access unit performs data or instruction read/write operations between the external address space and the apparatus;
    the interconnection module connects the main operation module and the slave operation modules;
    the main operation module implements the activation-function operation of the artificial neural network fully connected layer algorithm;
    the slave operation modules implement the multiplication and addition of input neurons and weight parameters in the artificial neural network fully connected layer algorithm;
    the interconnection module handles data transfer between the main operation module and the slave operation modules: before the forward operation of the neural network fully connected layer begins, the main operation module sends the input neuron vector to every slave operation module through the interconnection module; after the computation of the slave operation modules ends, the interconnection module splices, stage by stage, the output neuron values of the slave operation modules into an intermediate result vector and sends it back to the main operation module for subsequent computation.
  2. The apparatus of claim 1, wherein the plurality of slave operation modules compute their respective output neuron values in parallel using the same input neuron vector and their respective weight vectors, and the weight vector of each slave operation module is the row vector of the weight matrix corresponding to that slave operation module.
  3. The apparatus of claim 1, wherein the activation function active used by the main operation module is any of the nonlinear functions sigmoid, tanh, relu, and softmax, or a linear function.
  4. The apparatus of claim 1, wherein the main operation module adds a bias to the intermediate result vector and then performs the activation operation.
  5. The apparatus of claim 1, wherein the interconnection module forms a data path for continuous or discretized data between the main operation module and the plurality of slave operation modules, and the interconnection module is any one of the following structures: a tree structure, a ring structure, a grid structure, hierarchical interconnection, or a bus structure.
  6. The apparatus of claim 1, wherein the main operation module includes a first storage unit, a first operation unit, and a first data dependency determination unit, wherein:
    the first storage unit (neuron caching unit) caches the input data and output data used by the main operation module during computation;
    the first operation unit carries out the various operation functions of the main operation module;
    the first data dependency determination unit is the port through which the first operation unit reads and writes the first storage unit, guarantees that there are no consistency conflicts in reading data from and writing data to the first storage unit, and is responsible for reading the input neuron vector from the first storage unit and sending it to the slave operation modules through the interconnection module; and
    the intermediate result vector from the interconnection module is sent to the first operation unit.
  7. The apparatus of claim 1, wherein each slave operation module includes a second operation unit, a second data dependency determination unit, a second storage unit, and a third storage unit, wherein:
    the second operation unit receives control signals issued by the controller unit and performs arithmetic and logic operations;
    the second data dependency determination unit is responsible for reads from and writes to the second storage unit and the third storage unit during computation, guaranteeing that there are no consistency conflicts in reading from and writing to the second storage unit and the third storage unit;
    the second storage unit caches the data of the input neuron vector and the output neuron values computed by that slave operation module; and
    the third storage unit caches the weight vector needed by that slave operation module during computation.
  8. The apparatus of claim 6 or 7, wherein the first and second data dependency determination units guarantee the absence of read/write consistency conflicts in the following way: determining whether a dependency exists between the data of a control signal that has not yet been executed and the data of a control signal that is currently being executed; if not, the control signal is allowed to issue immediately; otherwise, the control signal is allowed to issue only after all control signals on which it depends have finished executing.
  9. A method of performing the forward operation of a single-layer artificial neural network fully connected layer using the apparatus of any one of claims 1-8, comprising:
    step S1.1: storing the initial instructions in the instruction storage unit;
    step S1.2: reading one instruction from the instruction storage unit;
    step S1.3: decoding the read instruction;
    step S1.4: performing the corresponding operations according to the decoded control signals;
    step S1.5: writing the operation results back to the corresponding storage units.
  10. A method of performing the forward operation of a single-layer artificial neural network fully connected layer using the apparatus of any one of claims 1-8, comprising:
    in step S2.1, an IO instruction is pre-stored in the instruction storage unit;
    in step S2.2, the operation starts: the controller unit reads this IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit reads all of the corresponding artificial neural network fully connected layer operation instructions from the external address space and stores them in the instruction storage unit;
    in step S2.3, the controller unit then reads the next IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit reads all the data needed by the main operation module from the external address space into the first storage unit of the main operation module;
    in step S2.4, the controller unit then reads the next IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit reads the weight matrix data needed by the slave operation modules from the external address space;
    in step S2.6, the controller unit then reads the next COMPUTE instruction from the instruction storage unit, and according to the decoded control signals, the main operation module first sends the input neuron vector to each slave operation module through the interconnection module, to be saved in the second storage unit of each slave operation module;
    in step S2.7, according to the control signals decoded from the COMPUTE instruction, the second operation unit of each slave operation module reads the weight vector from the third storage unit, reads the input neuron vector from the second storage unit, completes the dot-product operation of the weight vector and the input neuron vector, and returns the intermediate result through the interconnection module;
    in step S2.8, in the interconnection module, the intermediate results returned by the slave operation modules are spliced stage by stage into the complete intermediate result vector;
    in step S2.9, the main operation module obtains the value returned by the interconnection module and, according to the control signals decoded from the COMPUTE instruction, reads the bias vector from the first storage unit and adds it to the vector returned by the interconnection module in the vector addition unit, after which the activation unit activates the sum and the final output neuron vector is written back to the first storage unit;
    in step S2.10, the controller unit then reads the next IO instruction from the instruction storage unit, and according to the decoded control signals, the data access unit stores the output neuron vector in the storage unit to the designated address in the external address space, and the operation ends.
  11. The method of claim 10, further comprising, between step S2.4 and step S2.6:
    step S2.5, in which the controller unit then reads the next CONFIG instruction from the instruction storage unit and, according to the decoded control signals, configures the various constants needed for this layer of the neural network's computation.
  12. A method of performing the forward operation of a multi-layer artificial neural network fully connected layer, comprising:
    for each artificial neural network fully connected layer, performing the method of claim 10, wherein, after the previous artificial neural network fully connected layer finishes executing, the operation instructions of the next layer take the output neuron address of the previous layer, stored in the main operation module, as the input neuron address of this layer, and change the weight address and/or bias address in the instructions to the addresses corresponding to this layer.
PCT/CN2016/080968 2016-04-27 2016-05-04 Apparatus and method for performing the forward operation of a fully connected layer neural network WO2017185387A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP16899898.7A EP3451236A4 (en) 2016-04-27 2016-05-04 METHOD AND DEVICE FOR CARRYING OUT A FORWARDING OPERATION OF A FULLY CONNECTED LAYERED NEURONAL NETWORK
KR1020187033949A KR102486030B1 (ko) 2016-04-27 2016-05-04 Apparatus and method for executing the forward operation of a fully connected layer neural network
US16/174,185 US11373084B2 (en) 2016-04-27 2018-10-29 Apparatus and methods for forward propagation in fully connected layers of convolutional neural networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610270004.0 2016-04-27
CN201610270004.0A CN107315571B (zh) 2016-04-27 2016-04-27 Apparatus and method for performing the forward operation of a fully connected layer neural network

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/174,185 Continuation-In-Part US11373084B2 (en) 2016-04-27 2018-10-29 Apparatus and methods for forward propagation in fully connected layers of convolutional neural networks
US16/174,185 Continuation US11373084B2 (en) 2016-04-27 2018-10-29 Apparatus and methods for forward propagation in fully connected layers of convolutional neural networks

Publications (1)

Publication Number Publication Date
WO2017185387A1 (zh)

Family

ID=60160564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/080968 WO2017185387A1 (zh) 2016-04-27 2016-05-04 Apparatus and method for performing the forward operation of a fully connected layer neural network

Country Status (5)

Country Link
US (1) US11373084B2 (zh)
EP (1) EP3451236A4 (zh)
KR (1) KR102486030B1 (zh)
CN (3) CN109375951B (zh)
WO (1) WO2017185387A1 (zh)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242294B (zh) * 2017-12-14 2023-08-25 中科寒武纪科技股份有限公司 集成电路芯片装置及相关产品
CN107957975B (zh) * 2017-12-15 2021-01-05 安徽寒武纪信息科技有限公司 一种计算方法及相关产品
CN109976809B (zh) * 2017-12-28 2020-08-25 中科寒武纪科技股份有限公司 调度方法及相关装置
CN109976887B (zh) * 2017-12-28 2020-03-24 中科寒武纪科技股份有限公司 调度方法及相关装置
CN109993289B (zh) * 2017-12-30 2021-09-21 中科寒武纪科技股份有限公司 集成电路芯片装置及相关产品
CN110097181B (zh) * 2018-01-30 2023-07-11 上海寒武纪信息科技有限公司 用于执行人工神经网络正向运算的装置和方法
CN110147249B (zh) * 2018-02-12 2021-02-09 上海寒武纪信息科技有限公司 一种网络模型的计算方法及装置
CN110163363B (zh) * 2018-02-13 2021-05-11 上海寒武纪信息科技有限公司 一种计算装置及方法
CN110264229A (zh) * 2018-03-12 2019-09-20 优估(上海)信息科技有限公司 基于全连接神经网络的二手车定价方法,装置,及***
CN110728364A (zh) * 2018-07-17 2020-01-24 上海寒武纪信息科技有限公司 一种运算装置和运算方法
US11138350B2 (en) 2018-08-09 2021-10-05 Zoox, Inc. Procedural world generation using tertiary data
CN111079925B (zh) * 2018-10-19 2021-04-09 中科寒武纪科技股份有限公司 运算方法、装置及相关产品
CN111078286B (zh) * 2018-10-19 2023-09-01 上海寒武纪信息科技有限公司 数据通信方法、计算***和存储介质
CN109711539B (zh) * 2018-12-17 2020-05-29 中科寒武纪科技股份有限公司 运算方法、装置及相关产品
CN110020720B (zh) * 2019-04-01 2021-05-11 中科寒武纪科技股份有限公司 算子拼接方法及装置
CN110032450B (zh) * 2019-04-17 2021-04-20 中山大学 一种基于固态盘扩展内存的大规模深度学习方法及***
CN111831328A (zh) * 2019-04-18 2020-10-27 华为技术有限公司 数据处理的方法及装置
WO2020240525A1 (en) * 2019-05-31 2020-12-03 Georgetown University Assessing diseases by analyzing gait measurements
CN112348177B (zh) * 2019-07-05 2024-01-09 安徽寒武纪信息科技有限公司 神经网络模型验证方法、装置、计算机设备和存储介质
CN112070220B (zh) * 2020-08-06 2023-01-17 北京大学 一种基于非线性器件的原位自激活神经网络电路及神经网络运算方法
CN113791996B (zh) * 2021-09-10 2024-02-06 中科寒武纪科技股份有限公司 集成电路装置、电子设备、板卡和计算方法
KR20240085458A (ko) * 2022-12-08 2024-06-17 재단법인대구경북과학기술원 Ssd 오프로딩을 이용한 인공지능 추론 및 학습 시스템 및 방법

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5065339A (en) * 1990-05-22 1991-11-12 International Business Machines Corporation Orthogonal row-column neural processor
JPH06195322A (ja) * 1992-10-29 1994-07-15 Hitachi Ltd 汎用型ニューロコンピュータとして用いられる情報処理装置
JP2000322400A (ja) * 1999-05-10 2000-11-24 Fuji Xerox Co Ltd 情報処理装置
CN100545804C (zh) * 2003-08-18 2009-09-30 上海海尔集成电路有限公司 一种基于cisc结构的微控制器及其指令集的实现方法
FR2884008A1 (fr) * 2005-03-31 2006-10-06 France Telecom Systeme et procede de localisation de points d'interet dans une image d'objet mettant en oeuvre un reseau de neurones
US7747070B2 (en) * 2005-08-31 2010-06-29 Microsoft Corporation Training convolutional neural networks on graphics processing units
JP5171118B2 (ja) * 2007-06-13 2013-03-27 キヤノン株式会社 演算処理装置及びその制御方法
US20100312736A1 (en) * 2009-06-05 2010-12-09 The Regents Of The University Of California Critical Branching Neural Computation Apparatus and Methods
US8655815B2 (en) * 2010-05-19 2014-02-18 The Regents Of The University Of California Neural processing unit
US9015092B2 (en) * 2012-06-04 2015-04-21 Brain Corporation Dynamically reconfigurable stochastic learning apparatus and methods
US8918351B2 (en) * 2012-07-30 2014-12-23 International Business Machines Corporation Providing transposable access to a synapse array using column aggregation
US9147153B2 (en) * 2012-11-06 2015-09-29 Rockwell Automation Technologies, Inc. Empirical modeling with globally enforced general constraints
US9190053B2 (en) * 2013-03-25 2015-11-17 The Governing Council Of The Univeristy Of Toronto System and method for applying a convolutional neural network to speech recognition
CN104077842B (zh) * 2014-07-02 2017-02-15 浙江大学 基于图像识别的自选餐厅自助付费装置及其使用方法
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision
US9411726B2 (en) * 2014-09-30 2016-08-09 Samsung Electronics Co., Ltd. Low power computation architecture
CN105488565A (zh) * 2015-11-17 2016-04-13 中国科学院计算技术研究所 加速深度神经网络算法的加速芯片的运算装置及方法
CN111353589B (zh) * 2016-01-20 2024-03-01 中科寒武纪科技股份有限公司 用于执行人工神经网络正向运算的装置和方法
CN105512723B (zh) * 2016-01-20 2018-02-16 南京艾溪信息科技有限公司 一种用于稀疏连接的人工神经网络计算装置和方法
CN109358900B (zh) * 2016-04-15 2020-07-03 中科寒武纪科技股份有限公司 支持离散数据表示的人工神经网络正向运算装置和方法
CN109375951B (zh) 2016-04-27 2020-10-09 中科寒武纪科技股份有限公司 Apparatus and method for performing the forward operation of a fully connected layer neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103680496A (zh) * 2013-12-19 2014-03-26 百度在线网络技术(北京)有限公司 基于深层神经网络的声学模型训练方法、主机和***
CN104376389A (zh) * 2014-12-10 2015-02-25 国电南京自动化股份有限公司 基于负载均衡的主从式微电网功率负荷预测***及其方法
CN105184366A (zh) * 2015-09-15 2015-12-23 中国科学院计算技术研究所 一种时分复用的通用神经网络处理器

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3451236A4 *
ZHANG, XIAN: "An Algorithm for Training Back-propagation Neural Networks Based on Data Parallelism", CHINA MASTER'S THESES FULL-TEXT DATABASE INFORMATION TECHNOLOGY, vol. 2010, no. 05, 10 May 2015 (2015-05-10), XP009513124, ISSN: 1674-0246 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11373084B2 (en) 2016-04-27 2022-06-28 Cambricon Technologies Corporation Limited Apparatus and methods for forward propagation in fully connected layers of convolutional neural networks
US11423284B2 (en) * 2018-09-07 2022-08-23 Black Sesame Technologies, Inc Subgraph tile fusion in a convolutional neural network
US11977928B2 (en) 2018-12-12 2024-05-07 Samsung Electronics Co., Ltd. Apparatus and method for performing a recognition operation in a neural network

Also Published As

Publication number Publication date
KR102486030B1 (ko) 2023-01-06
US20190065934A1 (en) 2019-02-28
CN109375951B (zh) 2020-10-09
CN111860811B (zh) 2024-01-16
US11373084B2 (en) 2022-06-28
CN107315571A (zh) 2017-11-03
EP3451236A1 (en) 2019-03-06
EP3451236A4 (en) 2019-12-25
CN111860811A (zh) 2020-10-30
KR20190003611A (ko) 2019-01-09
CN109375951A (zh) 2019-02-22
CN107315571B (zh) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2017185387A1 (zh) Apparatus and method for performing the forward operation of a fully connected layer neural network
WO2017185386A1 (zh) Apparatus and method for performing the forward operation of a convolutional neural network
KR102470264B1 (ko) Apparatus and method for executing backward training of a fully connected layer neural network
CN111860812B (zh) Apparatus and method for performing convolutional neural network training
CN109358900B (zh) Apparatus and method for the forward operation of an artificial neural network supporting discrete data representation
CN106991476B (zh) Apparatus and method for performing the forward operation of an artificial neural network
CN109284825B (zh) Apparatus and method for performing LSTM operations
WO2017185347A1 (zh) Apparatus and method for performing recurrent neural network and LSTM operations
WO2017124641A1 (zh) Apparatus and method for performing backward training of an artificial neural network
CN107886166B (zh) Apparatus and method for performing artificial neural network operations
WO2017185336A1 (zh) Apparatus and method for performing a pooling operation
WO2017185248A1 (zh) Apparatus and method for performing artificial neural network self-learning operations
WO2018058452A1 (zh) Apparatus and method for performing artificial neural network operations
WO2017177446A1 (zh) Apparatus and method for backward training of an artificial neural network supporting discrete data representation
WO2017185335A1 (zh) Apparatus and method for performing a batch normalization operation
CN111860772B (zh) Apparatus and method for performing an artificial neural network pooling operation

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20187033949

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2016899898

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16899898

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016899898

Country of ref document: EP

Effective date: 20181127