WO2017124642A1 - Apparatus and method for performing artificial neural network forward operations - Google Patents

Apparatus and method for performing artificial neural network forward operations

Info

Publication number
WO2017124642A1
WO2017124642A1 (PCT/CN2016/078281)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
module
neuron
vector
data
Prior art date
Application number
PCT/CN2016/078281
Other languages
English (en)
French (fr)
Inventor
刘少礼
郭崎
陈云霁
陈天石
Original Assignee
北京中科寒武纪科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京中科寒武纪科技有限公司 filed Critical 北京中科寒武纪科技有限公司
Priority to EP21202782.5A priority Critical patent/EP3971789B1/en
Priority to EP16885906.4A priority patent/EP3407265B1/en
Priority to KR1020187015434A priority patent/KR102203746B1/ko
Priority to KR1020207034359A priority patent/KR102331978B1/ko
Publication of WO2017124642A1 publication Critical patent/WO2017124642A1/zh
Priority to US16/039,567 priority patent/US10410112B2/en
Priority to US16/441,025 priority patent/US10860917B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3885Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention generally relates to artificial neural networks, and in particular to an apparatus and method for performing artificial neural network forward operations.
  • Multi-layer artificial neural networks are widely used in the fields of pattern recognition, image processing, function approximation and optimization calculation.
  • In recent years, multi-layer artificial neural networks have received increasingly broad attention from academia and industry due to their high recognition accuracy and good parallelism.
  • One known method of supporting multi-layer artificial neural network forward operations is to use a general purpose processor.
  • the method supports the above algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units.
  • One of the disadvantages of this approach is that the performance of a single general purpose processor is low and cannot meet the performance requirements of conventional multi-layer artificial neural network operations.
  • when multiple general-purpose processors execute in parallel, communication between the general-purpose processors becomes a performance bottleneck.
  • the general-purpose processor needs to decode the multi-layer artificial neural network forward operation into a long sequence of arithmetic and memory-access instructions, and the processor front-end decoding brings a large power consumption overhead.
  • Another known method of supporting multi-layer artificial neural network reverse training is to use a graphics processing unit (GPU).
  • the method supports the above algorithm by executing a generic SIMD instruction using a general purpose register file and a generic stream processing unit.
  • since the GPU is a device dedicated to performing graphics and image operations and scientific computation, it has no dedicated support for multi-layer artificial neural network operations, so a large amount of front-end decoding work is still required to perform multi-layer artificial neural network operations, which brings a large additional overhead.
  • the GPU has only a small on-chip cache, so the model data (weights) of the multi-layer artificial neural network need to be repeatedly transferred from off-chip; the off-chip bandwidth becomes the main performance bottleneck and also brings a huge power consumption overhead.
  • An aspect of the present invention provides an apparatus for performing an artificial neural network forward operation, including an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules,
  • the instruction cache unit is used to read in instructions through the direct memory access unit and to cache the read-in instructions;
  • the controller unit is configured to read instructions from the instruction cache unit and decode them into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules;
  • the direct memory access unit is used for writing data from the external address space into the corresponding data cache units of the main operation module and of each slave operation module, or for reading data from the data cache units to the external address space;
  • the H-tree module is used so that, at the stage when the calculation of each layer of the neural network starts, the main operation module transmits the input neuron vector of this layer to all the slave operation modules through the H-tree module.
  • Another aspect of the present invention provides a method of performing a single layer artificial neural network forward operation using the above apparatus.
  • Another aspect of the present invention provides a method of performing a multi-layer artificial neural network forward operation using the above apparatus.
  • FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing an artificial neural network forward operation in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagram schematically showing the structure of an H-tree module in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention.
  • FIG. 3 illustrates an example block diagram of a main operational module structure in an apparatus for performing artificial neural network forward operations in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates an example block diagram of a slave arithmetic module structure in an apparatus for performing artificial neural network forward operations in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates an example block diagram of a neural network forward operation process in accordance with an embodiment of the present invention.
  • FIG. 6 shows a flow chart of a single layer artificial neural network operation in accordance with an embodiment of the present invention.
  • the forward operation of a multi-layer artificial neural network comprises two or more layers, each containing multiple neurons.
  • the input neuron vector is first subjected to a dot product with the weight vector, and the result is the output neuron through the activation function.
  • the activation function can be a sigmoid function, a tanh, a relu, a softmax function, and the like.
  • the apparatus includes an instruction cache unit 1, a controller unit 2, a direct memory access unit 3, an H-tree module 4, a main operation module 5, and a plurality of slave operation modules 6.
  • the instruction cache unit 1, the controller unit 2, the direct memory access unit 3, the H-tree module 4, the main arithmetic module 5, and the slave arithmetic module 6 can all be implemented by a hardware circuit (for example, an application-specific integrated circuit ASIC).
  • the instruction cache unit 1 reads in an instruction through the direct memory access unit 3 and caches the read instruction.
  • the controller unit 2 reads instructions from the instruction cache unit 1 and translates the instructions into micro-instructions that control the behavior of other modules, such as the direct memory access unit 3, the main arithmetic module 5, and the slave arithmetic module 6.
  • the direct memory access unit 3 can access the external address space, directly read and write data to each cache unit inside the device, and complete data loading and storage.
  • FIG. 2 schematically shows the structure of the H-tree module 4.
  • the H-tree module 4 constitutes a data path between the main arithmetic module 5 and the plurality of slave arithmetic modules 6, and has an H-tree structure.
  • the H-tree is a binary tree path composed of multiple nodes. Each node sends the upstream data to the downstream two nodes in the same way, and the data returned by the two downstream nodes are combined and returned to the upstream node.
  • the neuron data in the main operation module 5 are sent to each slave operation module 6 through the H-tree module 4; after the calculation process of the slave operation modules 6 is completed, the neuron values output by each slave operation module are combined stage by stage in the H-tree into a complete vector composed of neurons, which serves as the intermediate result vector.
  • taking the fully connected layer of a neural network as an illustration, assuming that there are N slave operation modules in the device, the intermediate result vector is segmented by N, each segment has N elements, and the i-th slave operation module calculates the i-th element in each segment.
  • the N elements are assembled into a vector of length N through the H-tree module and returned to the main arithmetic module. So if the network has only N output neurons, each slave unit only needs to output the value of a single neuron. If the network has m*N output neurons, each slave unit needs to output m neuron values.
  • FIG. 3 shows an example block diagram of the structure of the main operation module 5 in the apparatus for performing artificial neural network forward operation according to an embodiment of the present invention.
  • the main operation module 5 includes an operation unit 51, a data dependency determination unit 52, and a neuron buffer unit 53.
  • the neuron buffer unit 53 is configured to buffer input data and output data used by the main operation module 5 in the calculation process, the operation unit 51 performs the various operation functions of the main operation module 5, and the data dependency determination unit 52 is the port through which the operation unit 51 reads and writes the neuron buffer unit 53, while also ensuring read/write consistency of the data in the neuron cache unit.
  • the data dependency determining unit 52 is also responsible for transmitting the read data to the slave computing module through the H-tree module 4, and the output data from the computing module 6 is directly sent to the computing unit 51 via the H-tree module 4.
  • the command output from the controller unit 2 is sent to the calculation unit 51 and the data dependency determination unit 52 to control its behavior.
  • each slave arithmetic module 6 includes an arithmetic unit 61, a data dependency determining unit 62, a neuron buffer unit 63, and a weight buffer unit 64.
  • the arithmetic unit 61 receives the microinstructions issued by the controller unit 2 and performs an arithmetic logic operation.
  • the data dependency judging unit 62 is responsible for reading and writing operations on the neuron cache unit in the calculation process. Before the data dependency determination unit 62 performs a read/write operation, it first ensures that there is no read/write consistency conflict among the data used by the instructions. For example, all microinstructions sent to the data dependency unit 62 are stored in an instruction queue inside the data dependency unit 62; in this queue, if the read range of a read instruction conflicts with the write range of a write instruction earlier in the queue, the read instruction must wait until the write instruction it depends on has been executed.
  • the neuron buffer unit 63 buffers the input neuron vector data and the output neuron value data of the slave arithmetic module 6.
  • the weight buffer unit 64 buffers the weight data required by the slave computing module 6 in the calculation process. Each slave arithmetic module 6 stores only the weights between all input neurons and a subset of the output neurons. Taking the fully connected layer as an example, the output neurons are segmented according to the number N of slave operation units, and the weights corresponding to the n-th output neuron of each segment are stored in the n-th slave operation unit.
  • the slave operation modules 6 implement the first half of the forward operation of each layer of the artificial neural network that can be performed in parallel.
  • the partial sums are added step by step in the H-tree module 4 to obtain the final result, so the computational process becomes a parallel partial-sum computation followed by an accumulation process.
  • Each of the slave arithmetic modules 6 calculates an output neuron value, and all of the output neuron values are assembled in the H-tree module 4 to obtain an intermediate result vector.
  • Each slave arithmetic module 6 only needs to calculate the output neuron value corresponding to the module in the intermediate result vector y.
  • the H-tree module 4 sums all the neuron values output from the arithmetic module 6 to obtain a final intermediate result vector y.
  • the main operation module 5 performs subsequent calculations based on the intermediate result vector y, such as adding offset, pooling (for example, MAXPOOLING or AVGPOOLING, etc.), performing activation and sampling.
  • an instruction set for performing an artificial neural network forward operation on the aforementioned apparatus includes the CONFIG instruction, the COMPUTE instruction, the IO instruction, the NOP instruction, the JUMP instruction, and the MOVE instruction, where:
  • the CONFIG command configures various constants required for current layer calculation before each layer of artificial neural network calculation begins;
  • the COMPUTE instruction completes the arithmetic logic calculation of each layer of artificial neural network
  • the IO instruction realizes reading input data required for calculation from the external address space and storing the data back to the external space after the calculation is completed;
  • the NOP instruction is responsible for clearing the microinstructions currently loaded into all internal microinstruction buffer queues, ensuring that all instructions preceding the NOP instruction are completed.
  • the NOP instruction itself does not contain any operations;
  • the JUMP instruction is responsible for jumping the address of the next instruction that the controller will read from the instruction cache unit, and is used to implement jumps in the control flow;
  • the MOVE instruction is responsible for carrying data of an address in the internal address space of the device to another address in the internal address space of the device.
  • the process is independent of the operation unit and does not occupy the resources of the operation unit during execution.
  • FIG. 5 illustrates an example block diagram of a neural network forward operation process in accordance with an embodiment of the present invention.
  • the input neuron vector is subjected to a dot product with the weight vector of each slave operation module 6, respectively, to obtain the corresponding output neuron values; all of these output neuron values form an intermediate result vector, and this intermediate result vector, after the offset vector is added and the activation operation is applied, yields the final output neuron vector of this layer of the neural network.
  • the weight vector of each slave arithmetic module 6 is a column vector corresponding to the slave arithmetic module 6 in the weight matrix.
  • the H-tree module sends the input neuron vectors [in0,...,inN] to all slave arithmetic units, temporarily stored in the neuron cache unit.
  • for the i-th slave operation unit, the dot product of its corresponding weight vector [w_i0, ..., w_iN] and the input neuron vector is calculated.
  • the results output by the slave operation units are assembled into the complete output vector through the H-tree module and returned to the main operation unit, where the activation operation is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
  • FIG. 6 is a flow chart showing a single-layer artificial neural network forward operation according to an embodiment.
  • the flowchart depicts the process of implementing the single-layer neural network forward operation shown in FIG. 5 using the apparatus and instruction set of the present invention.
  • step S1 an IO instruction is pre-stored at the first address of the instruction cache unit 1.
  • step S2 the operation starts: the controller unit 2 reads the IO instruction from the first address of the instruction cache unit 1, and according to the translated microinstruction, the direct memory access unit 3 reads all the corresponding artificial neural network operation instructions from the external address space and caches them in the instruction cache unit 1.
  • step S3 the controller unit 2 then reads the next IO instruction from the instruction cache unit, and according to the translated microinstruction, the direct memory access unit 3 reads all the data required by the main operation module 5 (for example, including the input neuron vector, the interpolation table, the constant table, and the offset) from the external address space into the neuron buffer unit 53 of the main operation module 5.
  • step S4 the controller unit 2 then reads the next IO instruction from the instruction cache unit, and according to the translated microinstruction, the direct memory access unit 3 reads the weight matrix data required by the slave operation modules 6 from the external address space.
  • step S5 the controller unit 2 then reads the next CONFIG command from the instruction cache unit, and according to the translated microinstruction, the device configures various constants required for the calculation of the layer neural network.
  • the operation units 51 and 61 configure the values of their internal registers according to the parameters in the microinstruction; the parameters include, for example, the precision setting of this layer's calculation and the data of the activation function (for example, the precision bits of this layer's calculation, the rang parameter of the LRN layer algorithm, the reciprocal of the window size of the AveragePooling layer algorithm, etc.).
  • step S6 the controller unit 2 then reads the next COMPUTE instruction from the instruction cache unit.
  • the main operation module 5 first sends the input neuron vector to each slave operation module 6 through the H-tree module 4, and the vector is saved to the neuron buffer unit 63 of the slave operation module 6.
  • step S7 according to the microinstruction decoded from the COMPUTE instruction, the operation unit 61 of the slave operation module 6 reads the weight vector (the column vector in the weight matrix corresponding to that slave operation module 6) from the weight buffer unit 64, reads the input neuron vector from the neuron cache unit, completes the dot product of the weight vector and the input neuron vector, and returns the intermediate result through the H-tree.
  • step S8 in the H-tree module 4, the intermediate results returned from each of the arithmetic modules 6 are successively assembled into a complete intermediate result vector.
  • step S9 the main operation module 5 obtains the value returned by the H-tree module 4, reads the offset vector from the neuron buffer unit 53 according to the microinstruction decoded from the COMPUTE instruction, adds it to the vector returned by the H-tree module 4, then applies the activation to the addition result, and writes the final output neuron vector back to the neuron buffer unit 53.
  • step S10 the controller unit then reads the next IO instruction from the instruction cache unit, and according to the translated microinstruction, the direct memory access unit 3 stores the output neuron vector in the neuron buffer unit 53 to the specified address in the external address space, and the operation ends.
  • for a multi-layer artificial neural network, the implementation process is similar to that of a single-layer neural network: after the previous layer of the artificial neural network has finished executing, the operation instruction of the next layer takes the output neuron address of the previous layer stored in the main operation unit as the input neuron address of this layer. Similarly, the weight address and offset address in the instruction are also changed to the addresses corresponding to this layer.
  • the processes or methods depicted in the preceding figures may be performed by processing logic comprising hardware (e.g., circuitry, dedicated logic), firmware, software (e.g., software embodied on a non-transitory computer-readable medium), or a combination of both.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Image Analysis (AREA)
  • Advance Control (AREA)

Abstract

The present invention provides an apparatus for performing an artificial neural network forward operation, comprising an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules. The apparatus can be used to implement the forward operation of a multi-layer artificial neural network. For each layer, a weighted sum of the input neuron vector is first computed to obtain the intermediate result vector of this layer; a bias is added to the intermediate result vector and an activation is applied to obtain the output neuron vector; the output neuron vector then serves as the input neuron vector of the next layer.

Description

Apparatus and method for performing artificial neural network forward operations
Technical field
The present invention relates generally to artificial neural networks, and in particular to an apparatus and method for performing artificial neural network forward operations.
Background art
Multi-layer artificial neural networks are widely used in fields such as pattern recognition, image processing, function approximation and optimization calculation. In recent years, multi-layer artificial networks have received increasingly broad attention from academia and industry due to their high recognition accuracy and good parallelism.
One known method of supporting multi-layer artificial neural network forward operations is to use a general-purpose processor. This method supports the above algorithm by executing general-purpose instructions using a general-purpose register file and general-purpose functional units. One of the disadvantages of this method is that the operational performance of a single general-purpose processor is low and cannot meet the performance requirements of typical multi-layer artificial neural network operations. When multiple general-purpose processors execute in parallel, communication between the general-purpose processors in turn becomes a performance bottleneck. In addition, the general-purpose processor needs to decode the multi-layer artificial neural network forward operation into a long sequence of arithmetic and memory-access instructions, and the processor front-end decoding brings a large power consumption overhead.
Another known method of supporting multi-layer artificial neural network backward training is to use a graphics processing unit (GPU). This method supports the above algorithm by executing generic SIMD instructions using a general-purpose register file and generic stream processing units. Since the GPU is a device dedicated to performing graphics and image operations and scientific computation, without dedicated support for multi-layer artificial neural network operations, a large amount of front-end decoding work is still required to perform multi-layer artificial neural network operations, which brings a large additional overhead. In addition, the GPU has only a small on-chip cache, so the model data (weights) of the multi-layer artificial neural network need to be repeatedly transferred from off-chip; the off-chip bandwidth becomes the main performance bottleneck and at the same time brings a huge power consumption overhead.
Summary of the invention
One aspect of the present invention provides an apparatus for performing an artificial neural network forward operation, comprising an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules, wherein: the instruction cache unit is used to read in instructions through the direct memory access unit and cache the read-in instructions; the controller unit is used to read instructions from the instruction cache unit and decode the instructions into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules; the direct memory access unit is used to write data from the external address space into the corresponding data cache units of the main operation module and of each slave operation module, or to read data from said data cache units to the external address space; the H-tree module is used so that, at the stage when the calculation of each layer of the neural network starts, the main operation module transmits the input neuron vector of this layer to all the slave operation modules through the H-tree module, and after the calculation process of the slave operation modules is completed, the H-tree module combines, stage by stage, the output neuron values of the slave operation modules into an intermediate result vector; and the main operation module is used to complete subsequent calculations using the intermediate result vector.
Another aspect of the present invention provides a method of performing a single-layer artificial neural network forward operation using the above apparatus.
Another aspect of the present invention provides a method of performing a multi-layer artificial neural network forward operation using the above apparatus.
Brief description of the drawings
For a more complete understanding of the present invention and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention.
FIG. 2 schematically shows the structure of the H-tree module in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention.
FIG. 3 shows an example block diagram of the structure of the main operation module in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention.
FIG. 4 shows an example block diagram of the structure of a slave operation module in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention.
FIG. 5 shows an example block diagram of a neural network forward operation process according to an embodiment of the present invention.
FIG. 6 shows a flow chart of a single-layer artificial neural network operation according to an embodiment of the present invention.
Throughout the drawings, the same devices, components, units, etc. are denoted by the same reference numerals.
Detailed description of the embodiments
Other aspects, advantages and salient features of the present invention will become apparent to those skilled in the art from the following detailed description of exemplary embodiments of the present invention taken in conjunction with the accompanying drawings.
In the present invention, the terms "comprising" and "containing" and their derivatives are intended to be inclusive rather than limiting; the term "or" is inclusive, meaning and/or.
In this specification, the various embodiments described below for explaining the principles of the present invention are illustrative only and should not be construed in any way as limiting the scope of the invention. The following description with reference to the accompanying drawings is intended to aid a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. The following description includes various specific details to aid understanding, but these details should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. Furthermore, the same reference numerals are used for similar functions and operations throughout the drawings.
The forward operation of a multi-layer artificial neural network according to an embodiment of the present invention comprises two or more layers of multiple neurons. For each layer, the input neuron vector is first subjected to a dot product with the weight vector, and the result passes through an activation function to obtain the output neuron. The activation function may be a sigmoid function, tanh, relu, softmax function, etc.
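As a point of reference only, the activation functions named above can be written as a minimal NumPy sketch; this is an illustration, not a prescribed implementation of the apparatus.

```python
import numpy as np

# Reference definitions of the activation functions mentioned above
# (illustrative only; the text does not prescribe an implementation).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()
```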
FIG. 1 shows an example block diagram of the overall structure of an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention. As shown in FIG. 1, the apparatus comprises an instruction cache unit 1, a controller unit 2, a direct memory access unit 3, an H-tree module 4, a main operation module 5 and a plurality of slave operation modules 6. The instruction cache unit 1, the controller unit 2, the direct memory access unit 3, the H-tree module 4, the main operation module 5 and the slave operation modules 6 can all be implemented by hardware circuits (for example, application-specific integrated circuits, ASICs).
The instruction cache unit 1 reads in instructions through the direct memory access unit 3 and caches the read-in instructions.
The controller unit 2 reads instructions from the instruction cache unit 1 and translates the instructions into microinstructions that control the behavior of the other modules, such as the direct memory access unit 3, the main operation module 5 and the slave operation modules 6.
The direct memory access unit 3 can access the external address space, directly read and write data to each cache unit inside the apparatus, and complete the loading and storing of data.
FIG. 2 schematically shows the structure of the H-tree module 4. The H-tree module 4 constitutes the data path between the main operation module 5 and the plurality of slave operation modules 6 and has an H-tree structure. The H-tree is a binary tree path composed of multiple nodes; each node sends upstream data identically to its two downstream nodes, merges the data returned by the two downstream nodes, and returns the merged data to the upstream node. For example, at the stage when each layer of the artificial neural network starts calculation, the neuron data in the main operation module 5 are sent to each slave operation module 6 through the H-tree module 4; after the calculation process of the slave operation modules 6 is completed, the neuron values output by each slave operation module are combined stage by stage in the H-tree into a complete vector composed of neurons, which serves as the intermediate result vector. Taking the fully connected layer of a neural network as an illustration, assuming that there are N slave operation modules in the apparatus in total, the intermediate result vector is segmented by N, each segment has N elements, and the i-th slave operation module computes the i-th element in each segment. The N elements are assembled through the H-tree module into a vector of length N and returned to the main operation module. So if the network has only N output neurons, each slave operation unit only needs to output the value of a single neuron; if the network has m*N output neurons, each slave operation unit needs to output m neuron values.
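The stage-by-stage combination performed by the H-tree can be pictured as a binary-tree gather. The sketch below is only an illustrative software model of that data movement, assuming per-slave result segments that are merged pairwise at each tree level; it is not the hardware implementation.

```python
def h_tree_gather(slave_outputs):
    """Combine per-slave result segments pairwise, level by level,
    into one complete vector (illustrative model of the H-tree)."""
    level = [list(seg) for seg in slave_outputs]  # one segment per slave module
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                merged.append(level[i] + level[i + 1])  # a node merges its two children
            else:
                merged.append(level[i])                 # odd node passes through
        level = merged
    return level[0]

# e.g. four slave modules, each returning one output neuron value
print(h_tree_gather([[0.1], [0.2], [0.3], [0.4]]))  # [0.1, 0.2, 0.3, 0.4]
```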
FIG. 3 shows an example block diagram of the structure of the main operation module 5 in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention. As shown in FIG. 3, the main operation module 5 comprises an operation unit 51, a data dependency judgment unit 52 and a neuron cache unit 53.
The neuron cache unit 53 is used to cache the input data and output data used by the main operation module 5 in the calculation process. The operation unit 51 performs the various operation functions of the main operation module 5. The data dependency judgment unit 52 is the port through which the operation unit 51 reads and writes the neuron cache unit 53, and at the same time it can guarantee read/write consistency of the data in the neuron cache unit. Meanwhile, the data dependency judgment unit 52 is also responsible for sending the read data to the slave operation modules through the H-tree module 4, while the output data of the slave operation modules 6 are sent directly to the operation unit 51 through the H-tree module 4. The instructions output by the controller unit 2 are sent to the operation unit 51 and the data dependency judgment unit 52 to control their behavior.
FIG. 4 shows an example block diagram of the structure of a slave operation module 6 in an apparatus for performing an artificial neural network forward operation according to an embodiment of the present invention. As shown in FIG. 4, each slave operation module 6 comprises an operation unit 61, a data dependency judgment unit 62, a neuron cache unit 63 and a weight cache unit 64.
The operation unit 61 receives the microinstructions issued by the controller unit 2 and performs arithmetic and logic operations.
The data dependency judgment unit 62 is responsible for the read and write operations on the neuron cache unit during the calculation process. Before performing read and write operations, the data dependency judgment unit 62 first ensures that there is no read/write consistency conflict among the data used by the instructions. For example, all microinstructions sent to the data dependency unit 62 are stored in an instruction queue inside the data dependency unit 62; in this queue, if the read range of a read instruction conflicts with the write range of a write instruction that is earlier in the queue, the instruction can only be executed after the write instruction on which it depends has been executed.
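To visualize this rule, the following sketch checks a pending read against the write ranges of instructions earlier in the queue; the address-range representation and field names are assumptions made purely for illustration.

```python
from collections import namedtuple

# Hypothetical micro-instruction record: kind is "read" or "write",
# (start, end) is the address range it touches in the neuron cache.
MicroOp = namedtuple("MicroOp", ["kind", "start", "end"])

def ranges_overlap(a, b):
    return a.start < b.end and b.start < a.end

def can_issue(read_op, queue):
    """A read may issue only if no earlier write in the queue overlaps
    its address range (a read-after-write dependency)."""
    for earlier in queue:
        if earlier.kind == "write" and ranges_overlap(read_op, earlier):
            return False    # must wait until that write has executed
    return True

queue = [MicroOp("write", 0, 16)]
print(can_issue(MicroOp("read", 8, 12), queue))   # False: overlaps the pending write
print(can_issue(MicroOp("read", 32, 40), queue))  # True: no conflict
```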
The neuron cache unit 63 caches the input neuron vector data and the output neuron value data of the slave operation module 6.
The weight cache unit 64 caches the weight data needed by the slave operation module 6 in the calculation process. For each slave operation module 6, only the weights between all input neurons and some of the output neurons are stored. Taking the fully connected layer as an example, the output neurons are segmented according to the number N of slave operation units, and the weights corresponding to the n-th output neuron of each segment are stored in the n-th slave operation unit.
The slave operation modules 6 implement the first half of the forward operation process of each layer of the artificial neural network that can be performed in parallel. Taking the fully connected layer (MLP) of an artificial neural network as an example, the process is y = f(wx + b), where the multiplication of the weight matrix w and the input neuron vector x can be divided into unrelated parallel computation subtasks; out and in are column vectors, and each slave operation module 6 only computes the products of the corresponding partial scalar elements of in with the corresponding columns of the weight matrix w. Each obtained output vector is a partial sum of the final result to be accumulated, and these partial sums are added pairwise, stage by stage, in the H-tree module 4 to obtain the final result. So the computation process becomes a parallel process of computing partial sums followed by an accumulation process. Each slave operation module 6 computes output neuron values, and all the output neuron values are assembled in the H-tree module 4 to obtain the intermediate result vector. Each slave operation module 6 only needs to compute the output neuron values in the intermediate result vector y that correspond to this module. The H-tree module 4 sums the neuron values output by all the slave operation modules 6 to obtain the final intermediate result vector y. The main operation module 5 performs subsequent calculations based on the intermediate result vector y, such as adding a bias, pooling (for example, max pooling (MAXPOOLING) or average pooling (AVGPOOLING)), activation and sampling.
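A minimal NumPy sketch of this work split is given below, under the assumption that each slave module holds a slice of the columns of w and the matching slice of in; the H-tree accumulation is modeled simply as a sum of the partial results, and the main module then adds the bias and applies the activation. This is an illustrative model, not the hardware design.

```python
import numpy as np

def forward_fc(w, x, b, f, num_slaves):
    """Illustrative model of y = f(w @ x + b) split over slave modules:
    each slave multiplies its columns of w by its slice of x (a partial sum),
    the H-tree accumulates the partial sums, and the main module adds the
    bias and applies the activation."""
    col_slices = np.array_split(np.arange(w.shape[1]), num_slaves)
    partial_sums = [w[:, cols] @ x[cols] for cols in col_slices]  # per-slave work
    y = sum(partial_sums)          # stage-by-stage accumulation in the H-tree
    return f(y + b)                # subsequent computation in the main module

rng = np.random.default_rng(0)
w, x, b = rng.standard_normal((8, 16)), rng.standard_normal(16), rng.standard_normal(8)
relu = lambda v: np.maximum(0.0, v)
assert np.allclose(forward_fc(w, x, b, relu, 4), relu(w @ x + b))  # matches the direct formula
```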
According to an embodiment of the present invention, an instruction set for performing an artificial neural network forward operation on the aforementioned apparatus is also provided. The instruction set comprises the CONFIG instruction, the COMPUTE instruction, the IO instruction, the NOP instruction, the JUMP instruction and the MOVE instruction, wherein:
the CONFIG instruction configures, before the calculation of each layer of the artificial neural network begins, the various constants required by the calculation of the current layer;
the COMPUTE instruction completes the arithmetic and logic calculation of each layer of the artificial neural network;
the IO instruction reads in, from the external address space, the input data required by the calculation and stores the data back to the external space after the calculation is completed;
the NOP instruction is responsible for clearing the microinstructions currently loaded into all internal microinstruction cache queues, ensuring that all instructions preceding the NOP instruction have completed; the NOP instruction itself does not contain any operation;
the JUMP instruction is responsible for jumping the address of the next instruction that the controller will read from the instruction cache unit, and is used to implement jumps in the control flow;
the MOVE instruction is responsible for moving data at one address in the internal address space of the apparatus to another address in the internal address space of the apparatus; this process is independent of the operation unit and does not occupy the resources of the operation unit during execution.
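No concrete encoding or assembly syntax is specified for these instructions. Purely as an illustration of how a single fully connected layer might be expressed with this instruction set, and anticipating the single-layer flow described below, a hypothetical sequence (mnemonics and operands invented for this sketch) could look as follows.

```python
# Hypothetical instruction stream for one fully connected layer (operands are
# invented placeholders; no encoding or syntax is defined by the text above).
layer_program = [
    "IO      ext[insn_base]    -> instruction_cache",   # fetch all layer instructions
    "IO      ext[neuron_base]  -> main.neuron_cache",   # input neurons, tables, bias
    "IO      ext[weight_base]  -> slave.weight_cache",  # weight matrix columns
    "CONFIG  precision, act_params",                    # per-layer constants
    "COMPUTE fc_forward",                               # dot products + bias + activation
    "IO      main.neuron_cache -> ext[output_base]",    # write back output neurons
]
```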
FIG. 5 shows an example block diagram of a neural network forward operation process according to an embodiment of the present invention. In the different slave operation modules 6, the input neuron vector is subjected to a dot product with the weight vector of that slave operation module 6, respectively, to obtain the corresponding output neuron values; all these output neuron values form an intermediate result vector, and this intermediate result vector, after a bias vector is added and an activation operation is performed, yields the final output neuron vector of this layer of the neural network. The formula is out = f(w*in + b), where out is the output neuron vector, in is the input neuron vector, b is the bias vector, w is the weight matrix, and f is the activation function. The weight vector of each slave operation module 6 is the column vector in the weight matrix corresponding to that slave operation module 6. The H-tree module sends the input neuron vector [in0, ..., inN] to all slave operation units, where it is temporarily stored in the neuron cache unit. For the i-th slave operation unit, the dot product of its corresponding weight vector [w_i0, ..., w_iN] and the input neuron vector is computed. The results output by the slave operation units are assembled through the H-tree module into the complete output vector and returned to the main operation unit, where the activation operation is performed to obtain the final output neuron vector [out0, out1, out2, ..., outN].
FIG. 6 is a flow chart of a single-layer artificial neural network forward operation according to an embodiment. The flow chart describes the process of implementing the single-layer neural network forward operation shown in FIG. 5 using the apparatus and instruction set of the present invention.
In step S1, an IO instruction is pre-stored at the first address of the instruction cache unit 1.
In step S2, the operation begins: the controller unit 2 reads this IO instruction from the first address of the instruction cache unit 1, and according to the decoded microinstruction, the direct memory access unit 3 reads all the corresponding artificial neural network operation instructions from the external address space and caches them in the instruction cache unit 1.
In step S3, the controller unit 2 then reads in the next IO instruction from the instruction cache unit, and according to the decoded microinstruction, the direct memory access unit 3 reads all the data required by the main operation module 5 (for example, including the input neuron vector, the interpolation table, the constant table and the bias) from the external address space into the neuron cache unit 53 of the main operation module 5.
In step S4, the controller unit 2 then reads in the next IO instruction from the instruction cache unit, and according to the decoded microinstruction, the direct memory access unit 3 reads the weight matrix data required by the slave operation modules 6 from the external address space.
In step S5, the controller unit 2 then reads in the next CONFIG instruction from the instruction cache unit, and according to the decoded microinstruction, the apparatus configures the various constants required by the calculation of this layer of the neural network. For example, the operation units 51 and 61 configure the values of their internal registers according to the parameters in the microinstruction; the parameters include, for example, the precision setting of this layer's calculation and the data of the activation function (for example, the precision bits of this layer's calculation, the rang parameter of the LRN layer algorithm, the reciprocal of the window size of the AveragePooling layer algorithm, etc.).
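As an illustration of the kind of per-layer constants a CONFIG instruction might carry, a hedged sketch follows; the field names are assumptions for this example, not the register layout of the apparatus.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LayerConfig:
    """Illustrative bundle of per-layer constants loaded by CONFIG
    (field names are hypothetical; the text only gives examples)."""
    precision_bits: int                        # precision setting of this layer's calculation
    activation: str                            # e.g. "sigmoid", "tanh", "relu", "softmax"
    lrn_range: Optional[int] = None            # 'rang' parameter of an LRN layer, if any
    avgpool_inv_window: Optional[float] = None # reciprocal of the AveragePooling window size

cfg = LayerConfig(precision_bits=16, activation="sigmoid")
```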
In step S6, the controller unit 2 then reads in the next COMPUTE instruction from the instruction cache unit, and according to the decoded microinstruction, the main operation module 5 first sends the input neuron vector to each slave operation module 6 through the H-tree module 4, and the vector is saved to the neuron cache unit 63 of the slave operation module 6.
In step S7, according to the microinstruction decoded from the COMPUTE instruction, the operation unit 61 of the slave operation module 6 reads the weight vector (the column vector in the weight matrix corresponding to that slave operation module 6) from the weight cache unit 64, reads the input neuron vector from the neuron cache unit, completes the dot product of the weight vector and the input neuron vector, and returns the intermediate result through the H-tree.
In step S8, in the H-tree module 4, the intermediate results returned by each slave operation module 6 are assembled stage by stage into a complete intermediate result vector.
In step S9, the main operation module 5 obtains the value returned by the H-tree module 4, reads the bias vector from the neuron cache unit 53 according to the microinstruction decoded from the COMPUTE instruction, adds it to the vector returned by the H-tree module 4, then applies the activation to the addition result, and writes the final output neuron vector back to the neuron cache unit 53.
In step S10, the controller unit then reads in the next IO instruction from the instruction cache unit, and according to the decoded microinstruction, the direct memory access unit 3 stores the output neuron vector in the neuron cache unit 53 to the specified address in the external address space, and the operation ends.
For a multi-layer artificial neural network, the implementation process is similar to that of a single-layer neural network. After the previous layer of the artificial neural network has finished executing, the operation instruction of the next layer takes the output neuron address of the previous layer stored in the main operation unit as the input neuron address of this layer. Likewise, the weight address and the bias address in the instruction are also changed to the addresses corresponding to this layer.
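Functionally, the multi-layer case is the single-layer procedure iterated, with each layer reading the previous layer's output neurons as its input. A minimal, self-contained sketch of that chaining (illustrative only, with the address switch modeled simply as passing the previous output vector forward) is:

```python
import numpy as np

def forward_layer(w, x, b, f):
    # single-layer forward operation: out = f(w*in + b)
    return f(w @ x + b)

def forward_multilayer(layers, x, f):
    """The output neuron vector of each layer becomes the input neuron
    vector of the next layer, mirroring the address switch described above."""
    for w, b in layers:                  # each layer has its own weights and bias
        x = forward_layer(w, x, b, f)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 16)), rng.standard_normal(8)),
          (rng.standard_normal((4, 8)),  rng.standard_normal(4))]
out = forward_multilayer(layers, rng.standard_normal(16), lambda v: np.maximum(0.0, v))
print(out.shape)  # (4,)
```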
By adopting the apparatus and instruction set for performing artificial neural network forward operations, the problems of insufficient CPU and GPU computing performance and large front-end decoding overhead are solved, and support for the forward operation of multi-layer artificial neural networks is effectively improved.
By adopting a dedicated on-chip cache for the forward operation of multi-layer artificial neural networks, the reusability of input neuron and weight data is fully exploited, repeated reading of these data from memory is avoided, the memory access bandwidth is reduced, and the problem of memory bandwidth becoming a performance bottleneck of the multi-layer artificial neural network forward operation is avoided.
The processes or methods depicted in the preceding figures may be performed by processing logic comprising hardware (e.g., circuitry, dedicated logic, etc.), firmware, software (e.g., software embodied on a non-transitory computer-readable medium), or a combination of both. Although the processes or methods are described above in terms of certain sequential operations, it should be understood that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
In the foregoing specification, embodiments of the present invention have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made to the embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (10)

  1. An apparatus for performing an artificial neural network forward operation, comprising an instruction cache unit, a controller unit, a direct memory access unit, an H-tree module, a main operation module, and a plurality of slave operation modules, wherein:
    the instruction cache unit is configured to read in instructions through the direct memory access unit and cache the read-in instructions;
    the controller unit is configured to read instructions from the instruction cache unit and decode the instructions into microinstructions that control the behavior of the H-tree module, the main operation module, and the slave operation modules;
    the direct memory access unit is configured to write data from an external address space into the corresponding data cache units of the main operation module and of each slave operation module, or to read data from said data cache units to the external address space;
    the H-tree module is configured so that, at the stage when the calculation of each layer of the neural network starts, the main operation module transmits the input neuron vector of this layer to all the slave operation modules through the H-tree module, and after the calculation process of the slave operation modules is completed, the H-tree module combines, stage by stage, the output neuron values of the slave operation modules into an intermediate result vector; and
    the main operation module is configured to complete subsequent calculations using the intermediate result vector.
  2. The apparatus according to claim 1, wherein the plurality of slave operation modules compute their respective output neuron values in parallel using the same input neuron vector and their respective different weight vectors.
  3. The apparatus according to claim 1, wherein the main operation module performs any one of the following operations on the intermediate result vector:
    a bias-adding operation, adding a bias to the intermediate result vector;
    activating the intermediate result vector, where the activation function active is any one of sigmoid, tanh, relu and softmax;
    a sampling operation, comparing the intermediate result vector with a random number and outputting 1 if it is greater than the random number and 0 if it is less than the random number; or
    a pooling operation, including max pooling or average pooling (AVGPOOLING).
  4. The apparatus according to claim 1, wherein the slave operation module comprises an input neuron cache unit configured to cache input neuron data.
  5. The apparatus according to claim 1, wherein the H-tree module constitutes the data path between the main operation module and the plurality of slave operation modules and has an H-tree structure, the H-tree being a binary tree path composed of multiple nodes, each node sending upstream data identically to its two downstream nodes, merging the data returned by the two downstream nodes, and returning the merged data to the upstream node.
  6. The apparatus according to claim 1, wherein the main operation module comprises an operation unit, a data dependency judgment unit and a neuron cache unit, wherein:
    the neuron cache unit is configured to cache the input data and output data used by the main operation module in the calculation process;
    the operation unit performs the various operation functions of the main operation module;
    the data dependency judgment unit is the port through which the operation unit reads and writes the neuron cache unit, ensures that there is no consistency conflict in reading and writing the data in the neuron cache unit, and is responsible for reading the input neuron vector from the neuron cache unit and sending it to the slave operation modules through the H-tree module; and
    the intermediate result vector from the H-tree module is sent to the operation unit.
  7. The apparatus according to claim 1, wherein each slave operation module comprises an operation unit, a data dependency judgment unit, a neuron cache unit and a weight cache unit, wherein:
    the operation unit receives the microinstructions issued by the controller unit and performs arithmetic and logic operations;
    the data dependency judgment unit is responsible for the read and write operations on the neuron cache unit and the weight cache unit during the calculation process, and ensures that there is no consistency conflict in reading and writing the neuron cache unit and the weight cache unit;
    the neuron cache unit caches the input neuron vector data and the output neuron values computed by the slave operation module; and
    the weight cache unit caches the weight vectors needed by the slave operation module in the calculation process.
  8. The apparatus according to claim 6 or 7, wherein the absence of consistency conflicts in reading and writing is ensured in the following manner: judging whether there is a dependency between the data of a microinstruction that has not yet been executed and the data of a microinstruction that is being executed; if there is not, the microinstruction is allowed to be issued immediately; otherwise, the microinstruction is allowed to be issued only after all the microinstructions on which it depends have been completely executed.
  9. A method of performing a single-layer artificial neural network forward operation using the apparatus according to any one of claims 1-7, comprising:
    the direct memory access unit reading, from the external address space, all the artificial neural network operation instructions related to the forward operation of this layer of the artificial neural network, and caching them in the instruction cache unit;
    the direct memory access unit reading, from the external address space, all the data required by the main operation module that are related to the forward operation of this layer of the artificial neural network into the neuron cache unit of the main operation module;
    the direct memory access unit reading, from the external address space, the weight matrix data required by the slave operation modules;
    configuring the various constants required by the forward operation of this layer of the neural network;
    the main operation module first sending the input neuron vector to each slave operation module through the H-tree module, and saving it to the neuron cache unit of the slave operation module;
    the operation unit of the slave operation module reading the weight vector from the weight cache unit and the input neuron vector from the neuron cache unit, completing the dot product of the weight vector and the input neuron vector, and returning the obtained neuron value through the H-tree module;
    in the H-tree module, the neuron values returned by each slave operation module being assembled stage by stage into a complete intermediate result vector;
    the main operation module reading the bias vector from the neuron cache unit, adding it to the intermediate result vector returned by the H-tree module, then applying an activation to the addition result to obtain the output neuron vector, and writing it back to the neuron cache unit; and
    the direct memory access unit storing the output neuron vector in the neuron cache unit to the specified address in the external address space.
  10. A method of performing a multi-layer artificial neural network forward operation, comprising:
    for each layer, performing the method according to claim 9, wherein:
    after execution for the previous layer of the artificial neural network is completed, the output neuron address of the previous layer stored in the main operation module is used as the input neuron address of this layer, and the method according to claim 9 is performed again for said layer.
PCT/CN2016/078281 2016-01-20 2016-04-01 用于执行人工神经网络正向运算的装置和方法 WO2017124642A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP21202782.5A EP3971789B1 (en) 2016-01-20 2016-04-01 Device and method for executing forward calculation of artificial neural network
EP16885906.4A EP3407265B1 (en) 2016-01-20 2016-04-01 Device and method for executing forward calculation of artificial neural network
KR1020187015434A KR102203746B1 (ko) 2016-01-20 2016-04-01 인공 신경망 정방향 연산 실행용 장치와 방법
KR1020207034359A KR102331978B1 (ko) 2016-01-20 2016-04-01 인공 신경망 정방향 연산 실행용 장치와 방법
US16/039,567 US10410112B2 (en) 2016-01-20 2018-07-19 Apparatus and method for performing a forward operation of artificial neural networks
US16/441,025 US10860917B2 (en) 2016-01-20 2019-06-14 Apparatus and method for performing a forward operation of artificial neural networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610037645.1 2016-01-20
CN201610037645.1A CN106991476B (zh) 2016-01-20 2016-01-20 用于执行人工神经网络正向运算的装置和方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/039,567 Continuation-In-Part US10410112B2 (en) 2016-01-20 2018-07-19 Apparatus and method for performing a forward operation of artificial neural networks

Publications (1)

Publication Number Publication Date
WO2017124642A1 true WO2017124642A1 (zh) 2017-07-27

Family

ID=59361382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078281 WO2017124642A1 (zh) 2016-01-20 2016-04-01 用于执行人工神经网络正向运算的装置和方法

Country Status (5)

Country Link
US (2) US10410112B2 (zh)
EP (2) EP3407265B1 (zh)
KR (2) KR102331978B1 (zh)
CN (5) CN111353589B (zh)
WO (1) WO2017124642A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11663461B2 (en) 2018-07-05 2023-05-30 International Business Machines Corporation Instruction distribution in an array of neural network cores

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353589B (zh) * 2016-01-20 2024-03-01 中科寒武纪科技股份有限公司 用于执行人工神经网络正向运算的装置和方法
CN111353588B (zh) * 2016-01-20 2024-03-05 中科寒武纪科技股份有限公司 用于执行人工神经网络反向训练的装置和方法
CN109375951B (zh) * 2016-04-27 2020-10-09 中科寒武纪科技股份有限公司 一种用于执行全连接层神经网络正向运算的装置和方法
EP3451239A4 (en) * 2016-04-29 2020-01-01 Cambricon Technologies Corporation Limited APPARATUS AND METHOD FOR PERFORMING RECURRENT NEURONAL NETWORK AND LTSM CALCULATIONS
EP3654210A1 (en) 2017-08-31 2020-05-20 Cambricon Technologies Corporation Limited Chip device and related products
CN107748914A (zh) * 2017-10-19 2018-03-02 珠海格力电器股份有限公司 人工神经网络运算电路
CN107993206A (zh) * 2017-10-30 2018-05-04 上海寒武纪信息科技有限公司 一种信息处理方法及相关产品
CN109726807B (zh) * 2017-10-31 2023-11-24 上海寒武纪信息科技有限公司 神经网络处理器、运算方法及存储介质
TW201926147A (zh) * 2017-12-01 2019-07-01 阿比特電子科技有限公司 電子裝置、加速器、適用於神經網路運算的加速方法及神經網路加速系統
WO2019114842A1 (zh) 2017-12-14 2019-06-20 北京中科寒武纪科技有限公司 一种集成电路芯片装置
CN109961138B (zh) * 2017-12-14 2020-04-14 中科寒武纪科技股份有限公司 神经网络训练方法及相关产品
CN110097181B (zh) * 2018-01-30 2023-07-11 上海寒武纪信息科技有限公司 用于执行人工神经网络正向运算的装置和方法
CN110163363B (zh) * 2018-02-13 2021-05-11 上海寒武纪信息科技有限公司 一种计算装置及方法
CN110472734B (zh) * 2018-05-11 2024-03-29 上海寒武纪信息科技有限公司 一种计算装置及相关产品
US20210133854A1 (en) 2018-09-13 2021-05-06 Shanghai Cambricon Information Technology Co., Ltd. Information processing method and terminal device
US20220004854A1 (en) * 2018-10-08 2022-01-06 Deeper-I Co., Inc. Artificial neural network computation acceleration apparatus for distributed processing, artificial neural network acceleration system using same, and artificial neural network acceleration method therefor
CN111079925B (zh) * 2018-10-19 2021-04-09 中科寒武纪科技股份有限公司 运算方法、装置及相关产品
CN111176582A (zh) * 2019-12-31 2020-05-19 北京百度网讯科技有限公司 矩阵存储方法、矩阵访问方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524175A (en) * 1992-10-29 1996-06-04 Hitachi, Ltd. Neuro-computer system for executing a plurality of controlling algorithms
CN201927073U (zh) * 2010-11-25 2011-08-10 福建师范大学 一种可编程硬件bp神经元处理器
CN104145281A (zh) * 2012-02-03 2014-11-12 安秉益 神经网络计算装置和***及其方法
CN105095966A (zh) * 2015-07-16 2015-11-25 清华大学 人工神经网络和脉冲神经网络的混合计算***
CN105184366A (zh) * 2015-09-15 2015-12-23 中国科学院计算技术研究所 一种时分复用的通用神经网络处理器

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9205587D0 (en) * 1992-03-13 1992-04-29 Pilkington Micro Electronics Improved artificial digital neuron,neuron network and network algorithm
JP3172352B2 (ja) * 1993-12-27 2001-06-04 松下電器産業株式会社 ニューラルネットワーク回路
JPH0934858A (ja) * 1995-07-14 1997-02-07 Hitachi Ltd 人工ニューロン
JPH09101944A (ja) * 1995-10-06 1997-04-15 Nippon Telegr & Teleph Corp <Ntt> ニューラルネットワーク回路
GB9902115D0 (en) * 1999-02-01 1999-03-24 Axeon Limited Neural networks
CN1516070A (zh) * 2003-01-08 2004-07-28 剑 王 一种联想记忆神经网络
US7747070B2 (en) * 2005-08-31 2010-06-29 Microsoft Corporation Training convolutional neural networks on graphics processing units
AU2015207873B2 (en) * 2005-11-15 2017-05-04 Bernadette Garner Method for training neural networks
CN101527010B (zh) * 2008-03-06 2011-12-07 上海理工大学 人工神经网络算法的硬件实现方法及其***
US20100312736A1 (en) * 2009-06-05 2010-12-09 The Regents Of The University Of California Critical Branching Neural Computation Apparatus and Methods
CN101639901A (zh) * 2009-09-03 2010-02-03 王连明 基于多核技术的前馈神经网络硬件实现方法
US8515885B2 (en) * 2010-10-29 2013-08-20 International Business Machines Corporation Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation
CN102004446A (zh) * 2010-11-25 2011-04-06 福建师范大学 具有多层结构的bp神经元自适应方法
CN102012893B (zh) * 2010-11-25 2012-07-18 中国人民解放军国防科学技术大学 一种可扩展向量运算装置
US9222348B2 (en) * 2011-08-05 2015-12-29 Halliburton Energy Services, Inc. Methods for monitoring the formation and transport of an acidizing fluid using opticoanalytical devices
US9092729B2 (en) * 2011-08-11 2015-07-28 Greenray Industries, Inc. Trim effect compensation using an artificial neural network
US8442825B1 (en) * 2011-08-16 2013-05-14 The United States Of America As Represented By The Director, National Security Agency Biomimetic voice identifier
CN102426293A (zh) * 2011-09-08 2012-04-25 天津理工大学 基于神经网络最小方均根的apf谐波检测***及检测方法
CN102866982A (zh) * 2012-09-14 2013-01-09 复旦大学 基于fpga的8位复杂指令集中央处理器
CN103971163B (zh) * 2014-05-09 2017-02-15 哈尔滨工程大学 一种基于归一化最小均方自适应滤波的自适应学习率小波神经网络控制方法
CN104036451B (zh) * 2014-06-20 2018-12-11 深圳市腾讯计算机***有限公司 基于多图形处理器的模型并行处理方法及装置
CN104297504A (zh) * 2014-10-22 2015-01-21 上海申腾信息技术有限公司 一种自动化气相色谱控制***
CN104463324A (zh) * 2014-11-21 2015-03-25 长沙马沙电子科技有限公司 一种基于大规模高性能集群的卷积神经网络并行处理方法
CN104612898B (zh) * 2014-11-27 2017-09-08 江苏科技大学 一种风电变桨距多变量模糊神经网络pid控制方法
CN104376262B (zh) * 2014-12-08 2018-01-09 中国科学院深圳先进技术研究院 一种基于Dalvik指令和权限组合的安卓恶意软件检测方法
CN105095967B (zh) * 2015-07-16 2018-02-16 清华大学 一种多模态神经形态网络核
CN111353589B (zh) * 2016-01-20 2024-03-01 中科寒武纪科技股份有限公司 用于执行人工神经网络正向运算的装置和方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524175A (en) * 1992-10-29 1996-06-04 Hitachi, Ltd. Neuro-computer system for executing a plurality of controlling algorithms
CN201927073U (zh) * 2010-11-25 2011-08-10 福建师范大学 一种可编程硬件bp神经元处理器
CN104145281A (zh) * 2012-02-03 2014-11-12 安秉益 神经网络计算装置和***及其方法
CN105095966A (zh) * 2015-07-16 2015-11-25 清华大学 人工神经网络和脉冲神经网络的混合计算***
CN105184366A (zh) * 2015-09-15 2015-12-23 中国科学院计算技术研究所 一种时分复用的通用神经网络处理器

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3407265A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11663461B2 (en) 2018-07-05 2023-05-30 International Business Machines Corporation Instruction distribution in an array of neural network cores

Also Published As

Publication number Publication date
CN111353589B (zh) 2024-03-01
CN111353589A (zh) 2020-06-30
US10410112B2 (en) 2019-09-10
KR102203746B1 (ko) 2021-01-15
CN109993285A (zh) 2019-07-09
CN106991476B (zh) 2020-04-10
EP3407265A1 (en) 2018-11-28
CN109242094A (zh) 2019-01-18
CN106991476A (zh) 2017-07-28
KR102331978B1 (ko) 2021-12-01
CN111340200B (zh) 2024-05-03
CN109993285B (zh) 2020-02-07
US10860917B2 (en) 2020-12-08
EP3407265A4 (en) 2019-09-04
KR20180102059A (ko) 2018-09-14
EP3971789A1 (en) 2022-03-23
EP3407265B1 (en) 2021-11-10
US20180322381A1 (en) 2018-11-08
CN109242094B (zh) 2020-05-08
KR20200136514A (ko) 2020-12-07
US20190294951A1 (en) 2019-09-26
EP3971789B1 (en) 2024-05-29
CN111340200A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2017124642A1 (zh) 用于执行人工神经网络正向运算的装置和方法
WO2017124641A1 (zh) 用于执行人工神经网络反向训练的装置和方法
WO2017185387A1 (zh) 一种用于执行全连接层神经网络正向运算的装置和方法
WO2017185391A1 (zh) 一种用于执行卷积神经网络训练的装置和方法
WO2017185347A1 (zh) 用于执行循环神经网络和lstm运算的装置和方法
WO2017124644A1 (zh) 一种人工神经网络压缩编码装置和方法
CN110929863B (zh) 用于执行lstm运算的装置和方法
WO2017185386A1 (zh) 一种用于执行卷积神经网络正向运算的装置和方法
JP2020508532A (ja) 加速化ディープラーニング
WO2017177442A1 (zh) 支持离散数据表示的人工神经网络正向运算装置和方法
WO2017185336A1 (zh) 用于执行pooling运算的装置和方法
WO2017185248A1 (zh) 用于执行人工神经网络自学习运算的装置和方法
WO2018058452A1 (zh) 一种执行人工神经网络运算的装置和方法
WO2017177446A1 (zh) 支持离散数据表示的人工神经网络反向训练装置和方法
WO2017185335A1 (zh) 一种用于执行batch normalization运算的装置和方法
WO2017181336A1 (zh) maxout层运算装置和方法
CN111860772B (zh) 一种用于执行人工神经网络pooling运算的装置和方法
CN109993276B (zh) 用于执行人工神经网络反向训练的装置和方法
WO2021072060A1 (en) Method and system for executing neural network
CN110097181B (zh) 用于执行人工神经网络正向运算的装置和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16885906

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20187015434

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE