CN107832841B - Power consumption optimization method and circuit of neural network chip - Google Patents


Info

Publication number
CN107832841B
CN107832841B (application CN201711121900.1A)
Authority
CN
China
Prior art keywords: convolution, unit, power domain, matrix, control unit
Prior art date
Legal status
Active
Application number
CN201711121900.1A
Other languages
Chinese (zh)
Other versions
CN107832841A (en)
Inventor
廖裕民
陈幸
Current Assignee
Rockchip Electronics Co Ltd
Original Assignee
Fuzhou Rockchip Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Fuzhou Rockchip Electronics Co Ltd
Priority to CN201711121900.1A
Publication of CN107832841A
Application granted
Publication of CN107832841B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 - Physical realisation using electronic means
    • G06N 3/08 - Learning methods
    • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Abstract

The invention provides a power consumption optimization method and circuit for a neural network chip. A power domain is provided independently for each convolution calculation network layer and for each convolution operation unit, the data blocks of the matrix to be convolved are organized in rows, and the clock of each row of data blocks is connected to a gated clock unit. A matrix analysis unit analyzes the n rows of data blocks of the matrix to be convolved; according to the analysis result, a power consumption control unit drives the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit, thereby switching each convolution calculation network layer power domain, each convolution power domain and each gated clock unit on or off. By organizing the individual neuron processing units and the whole neural network layers into multi-level power domains that can be dynamically shut off on demand, the invention effectively reduces the power consumed during operation of the convolutional neural network circuit.

Description

Power consumption optimization method and circuit of neural network chip
Technical Field
The invention relates to the technical field of chips, in particular to a power consumption optimization method and circuit of a neural network chip.
Background
With the rise of the artificial intelligence industry, chips dedicated to artificial intelligence have developed rapidly. A major problem with current artificial intelligence chips, however, is that the complexity of deep learning neural networks makes the arithmetic circuitry very large, which leads to high chip cost and high power consumption. Exploiting the characteristics of deep learning to further reduce the cost and power consumption of deep-learning artificial intelligence chips is therefore very worthwhile.
The information in the world is complex, but the information processed by the human brain is sparse. Perceptually complex input cannot be processed directly; a process of information extraction is required, which in the human brain is called abstraction. Deep learning works because it mimics, to some extent, the abstraction process by which the human brain handles information. Neuroscientists have likewise observed sparse activation of neurons: in 2001, Attwell et al. inferred from observations of brain energy consumption that neural coding is sparse and distributed, and in 2003 Lennie et al. estimated that only 1-4% of the neurons in the brain are activated simultaneously, further indicating the sparsity of neural activity. In signal terms, neurons respond selectively to only a small fraction of their input signals at any one time and deliberately shield the large majority, which improves learning precision and extracts sparse features better and faster. In computation, this sparsity maps onto matrices with many zero elements. Sparsity is therefore a major characteristic of neural network operation, and the present invention provides a targeted power consumption optimization method for the sparse matrix operations of a convolutional neural network, which can effectively reduce the power consumed while the convolutional neural network circuit is running.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a power consumption optimization method and circuit for a neural network chip, so as to effectively reduce the power consumption consumed in the operation process of a convolutional neural network circuit.
The method of the invention is realized as follows. A power consumption optimization method of a neural network chip: the neural network chip comprises a plurality of convolution calculation network layers; each convolution calculation network layer comprises a plurality of convolution operation units; each convolution operation unit is responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved; and the matrix to be convolved comprises n rows of data blocks and is stored in a corresponding hidden layer matrix storage unit. The power consumption optimization method comprises the following steps:
Step S1: a power domain is set independently for each convolution calculation network layer, as the convolution calculation network layer power domain, and is connected to a convolution calculation network layer power domain switch control unit;
a power domain is set independently for each convolution operation unit, as a convolution power domain, and is connected to a convolution unit power domain switch control unit;
the data blocks of the matrix to be convolved are organized in rows, the clock of each row of data blocks is connected to a gated clock unit, and each gated clock unit is connected to a convolution unit clock switch control unit.
Step S2: the n rows of data blocks of the matrix to be convolved are analyzed by a matrix analysis unit, and according to the analysis result a power consumption control unit controls the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit, so as to switch each convolution calculation network layer power domain, each convolution power domain and each gated clock unit on or off.
The matrix analysis unit analyzes the n rows of data blocks of the matrix to be convolved as follows:
(1) the matrix to be convolved is scanned row by row according to the convolution kernel size, and each data block in a whole row is judged one by one for being all zero; if a data block is all zero, it is marked so that its clock can be shut off;
(2) after a whole row of data blocks has been judged, it is further judged whether all the data blocks in that row are all zero; if so, the convolution operation unit that operates on that row is marked so that its convolution power domain can be closed;
(3) finally, it is judged whether the data blocks of the entire matrix to be convolved are all zero; if so, the power domain of the whole convolution calculation network layer is marked as closable.
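As a concrete illustration, the three-level analysis above can be sketched in Python. The function name, the list-of-lists matrix layout and the stride-1 horizontal scan are illustrative assumptions, not details taken from the patent:

```python
def analyze_matrix(matrix, k):
    """Three-level sparsity analysis sketch for a k x k convolution kernel.
    Returns: per-block clock-gate marks, per-unit power-domain marks,
    and a whole-layer power-down mark."""
    h, w = len(matrix), len(matrix[0])
    n = h // k                          # one convolution unit per k-row band
    block_clock_off = []                # (1) per-block clock-gating marks
    unit_power_off = []                 # (2) per-unit power-domain marks
    for band in range(n):
        rows = matrix[band * k:(band + 1) * k]
        marks = []
        for col in range(w - k + 1):    # slide one element at a time
            block = [row[col:col + k] for row in rows]
            marks.append(all(v == 0 for r in block for v in r))
        block_clock_off.append(marks)
        unit_power_off.append(all(marks))       # whole band is all zero
    layer_power_off = all(unit_power_off)       # (3) whole matrix is all zero
    return block_clock_off, unit_power_off, layer_power_off
```

A band whose every window is all zero lets its convolution unit power down; a fully zero matrix lets the whole layer power down.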
The circuit of the invention is realized as follows. A power consumption optimization circuit of a neural network chip: the neural network chip comprises a plurality of convolution calculation network layers; each convolution calculation network layer comprises a plurality of convolution operation units; each convolution operation unit is responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved; and the matrix to be convolved comprises n rows of data blocks and is stored in a corresponding hidden layer matrix storage unit.
The power consumption optimization circuit comprises power domain control circuits arranged in one-to-one correspondence with the plurality of convolution calculation network layers. Each power domain control circuit comprises a matrix analysis unit, a power consumption control unit, a convolution calculation network layer power domain switch control unit, a convolution unit power domain switch control unit, a convolution unit clock switch control unit, a convolution calculation network layer power domain, n convolution power domains and n gated clock units.
The matrix analysis unit is connected to the corresponding hidden layer matrix storage unit and to the power consumption control unit. The power consumption control unit is connected to the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit. The convolution calculation network layer power domain switch control unit is connected to the convolution calculation network layer power domain; the convolution unit power domain switch control unit is connected to the n convolution power domains; the convolution unit clock switch control unit is connected to the n gated clock units; and the n gated clock units are correspondingly connected to the n rows of data blocks.
Furthermore, the matrix analysis units of all the power domain control circuits may be one and the same matrix analysis unit, and the power consumption control units of all the power domain control circuits may be one and the same power consumption control unit.
The invention has the following advantages: the individual neuron processing units and the whole neural network layers are organized into multi-level power domains that can be dynamically shut off on demand, thereby effectively reducing the power consumed during operation of the convolutional neural network circuit.
Drawings
The invention is further described below by way of embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of the circuit of the present invention.
Detailed Description
The invention relates to a power consumption optimization method of a neural network chip. The neural network chip comprises a plurality of convolution calculation network layers; each convolution calculation network layer comprises a plurality of convolution operation units; each convolution operation unit is responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved; and the matrix to be convolved comprises n rows of data blocks and is stored in a corresponding hidden layer matrix storage unit. The power consumption optimization method comprises the following steps:
Step S1: a power domain is set independently for each convolution calculation network layer, as the convolution calculation network layer power domain, and is connected to a convolution calculation network layer power domain switch control unit;
a power domain is set independently for each convolution operation unit, as a convolution power domain, and is connected to a convolution unit power domain switch control unit;
the data blocks of the matrix to be convolved are organized in rows, the clock of each row of data blocks is connected to a gated clock unit, and each gated clock unit is connected to a convolution unit clock switch control unit.
Step S2: the n rows of data blocks of the matrix to be convolved are analyzed by a matrix analysis unit, and according to the analysis result a power consumption control unit controls the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit, so as to switch each convolution calculation network layer power domain, each convolution power domain and each gated clock unit on or off.
The matrix analysis unit analyzes the n rows of data blocks of the matrix to be convolved as follows:
(1) the matrix to be convolved is scanned row by row according to the convolution kernel size, and each data block in a whole row is judged one by one for being all zero; if a data block is all zero, it is marked so that its clock can be shut off;
(2) after a whole row of data blocks has been judged, it is further judged whether all the data blocks in that row are all zero; if so, the convolution operation unit that operates on that row is marked so that its convolution power domain can be closed;
(3) finally, it is judged whether the data blocks of the entire matrix to be convolved are all zero; if so, the power domain of the whole convolution calculation network layer is marked as closable.
Referring to fig. 1, the neural network chip provided by the invention comprises a plurality of convolution calculation network layers; each convolution calculation network layer comprises a plurality of convolution operation units; each convolution operation unit is responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved; and the matrix to be convolved comprises n rows of data blocks and is stored in a corresponding hidden layer matrix storage unit. The neural network chip further comprises a neuron synapse input unit, an input layer convolution operation unit and a convolution kernel, and each convolution calculation network layer is also provided with an activation function operation unit, a pooling processing unit and a hidden layer matrix storage unit.
The power consumption optimization circuit comprises power domain control circuits arranged in one-to-one correspondence with the plurality of convolution calculation network layers. Each power domain control circuit comprises a matrix analysis unit, a power consumption control unit, a convolution calculation network layer power domain switch control unit, a convolution unit power domain switch control unit, a convolution unit clock switch control unit, a convolution calculation network layer power domain, n convolution power domains and n gated clock units. In the illustrated embodiment, each power domain control circuit includes its own matrix analysis unit and power consumption control unit, but the invention is not limited thereto: the power domain control circuits may instead share one and the same matrix analysis unit and one and the same power consumption control unit.
The matrix analysis unit is connected to the corresponding hidden layer matrix storage unit and to the power consumption control unit. The power consumption control unit is connected to the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit. The convolution calculation network layer power domain switch control unit is connected to the convolution calculation network layer power domain; the convolution unit power domain switch control unit is connected to the n convolution power domains; the convolution unit clock switch control unit is connected to the n gated clock units; and the n gated clock units are correspondingly connected to the n rows of data blocks.
Wherein:
the neuron synapse input unit is responsible for sending the values acquired by the neuron synapses to the input layer convolution operation unit;
the input layer convolution operation unit is responsible for performing convolution processing on the data input by the neuron synapses according to the convolution kernel, and, after the convolution is finished, sending the convolution values to the activation function operation unit;
the activation function operation unit performs the activation function operation on the convolution values; owing to the characteristics of the activation function, the generated data matrix is already a sparse matrix;
the pooling processing unit is responsible for pooling the activated matrix and then sending the matrix to be convolved to the hidden layer matrix storage unit for storage; matrices produced by different convolution kernels are stored at different addresses, such as matrix A and matrix B in the figure;
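The sparsity that the matrix analysis unit later exploits comes from the activation step: a ReLU-style activation zeroes every negative convolution response, and pooling preserves all-zero neighbourhoods. A minimal sketch follows; the choice of ReLU and 2x2 max pooling is an assumption for illustration, since the patent does not name a specific activation or pooling function:

```python
def relu(m):
    # ReLU keeps only positive responses; every negative entry becomes 0
    return [[max(v, 0.0) for v in row] for row in m]

def max_pool_2x2(m):
    # 2x2 max pooling; an all-zero 2x2 neighbourhood stays zero
    return [[max(m[i][j], m[i][j + 1], m[i + 1][j], m[i + 1][j + 1])
             for j in range(0, len(m[0]), 2)]
            for i in range(0, len(m), 2)]

# Hypothetical convolution output: the left half is entirely negative
conv_out = [[-1.0, -2.0, 0.5, -0.3],
            [-0.5, -1.5, 1.0, -0.2],
            [-3.0, -0.1, -0.4, -0.6],
            [-0.9, -2.2, -1.1, -0.7]]
pooled = max_pool_2x2(relu(conv_out))
# Only one positive region survives, so most of the pooled matrix is zero,
# giving the analysis unit all-zero blocks to clock-gate or power down.
```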
the convolution calculation network layer is responsible for the convolution operation of the corresponding hidden layer of the neural network and has a parallel processing structure, i.e. each convolution operation unit is responsible for the operation of a whole row corresponding to the convolution kernel height of the matrix to be convolved;
the convolution kernel has a given size, for example 4x4; in a 320x180 image, if whole row bands are to be swept, convolution operation unit 1 is responsible for the whole 320x4 band (rows 1 to 4) and convolution operation unit 2 for rows 5 to 8 of the image, each such band corresponding to the height of the convolution kernel; in the present invention, a "row" of the n rows of data blocks of the matrix to be convolved is such a whole band corresponding to the convolution kernel height;
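The row-band assignment described above can be expressed as a small calculation; the function name is illustrative, not from the patent:

```python
def band_for_unit(unit_index, kernel_height):
    # Rows (0-based) handled by a given convolution operation unit:
    # unit 0 covers rows 0..k-1, unit 1 covers rows k..2k-1, and so on.
    start = unit_index * kernel_height
    return start, start + kernel_height - 1

# Worked example from the description: a 4x4 kernel over a 320x180 image.
# The number of convolution units n is the matrix height divided by the
# kernel height: 180 / 4 = 45 row bands, each of size 320x4.
n_units = 180 // 4
first_band = band_for_unit(0, 4)    # rows 0-3 (rows 1 to 4, 1-based)
second_band = band_for_unit(1, 4)   # rows 4-7 (rows 5 to 8, 1-based)
```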
the matrix analysis unit is responsible for analyzing the hidden layer matrices in sequence, and for judging whether, when the convolution operation is performed on a matrix, the clock of a given data block needs to be shut off, whether the power domain of the corresponding convolution operation unit needs to be shut off, or whether the power domain of the whole neural network layer convolution circuit needs to be shut off;
the power consumption control unit is responsible, when the convolution operation on the matrix starts, for controlling the convolution unit power domain switch control unit, the convolution unit clock switch control unit and the whole-layer power domain switch control unit, thereby finely controlling the clock and power domain of each convolution operation unit and the power domain of the whole network layer;
the convolution operation unit starts the convolution operation on the matrix after the matrix analysis unit has completed its analysis; during the operation, the power consumption control unit shuts off the clocks and power domains of individual convolution operation units and the power domain of the whole network layer according to the analysis and marking results, thereby saving power.
accordingly, the optimization process of the power consumption optimization circuit of the invention is as follows:
1. The neuron synapse input unit transmits the values acquired by the neuron synapses to the input layer convolution operation unit. The input layer convolution operation unit performs convolution processing on the data input by the neuron synapses according to the convolution kernel. After the convolution is finished, the convolution values are sent to the first convolution calculation network layer (i.e. the first hidden layer), that is, to the first activation function operation unit for the activation function operation. The first pooling processing unit pools the activated matrix and then sends the matrix to be convolved to the first hidden layer matrix storage unit for storage.
2. The matrix analysis unit then analyzes the matrices of the first convolution calculation network layer in sequence. Taking a 4x4 convolution kernel as an example, the analysis proceeds as follows:
Starting from the upper left corner of the matrix, it is judged whether the first 4x4 data block is all zero; if so, the first data block is marked so that convolution operation unit 1 can shut off its clock for that block when operating on the matrix. The window then moves one data element to the right, the next 4x4 data block is checked in the same way, and so on until the last 4x4 data block of the row has been judged. It is then judged whether every data block corresponding to convolution operation unit 1 can have its clock shut off; if all of them can, convolution operation unit 1 can close its power domain when the matrix is operated on. Convolution operation unit 2 is then judged by the same method as convolution operation unit 1, and so on until convolution operation unit n has been judged and marked. Finally, it is judged whether the data blocks of the whole matrix are all zero; if so, the power domain of the whole convolution calculation network layer can be closed. The marking result is then sent to the power consumption control unit, which controls the on or off state of each convolution calculation network layer power domain, each convolution power domain and each gated clock unit by driving the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit, thereby optimizing power consumption.
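The dispatch of the marking result to the three switch-control units can be sketched as follows. The callback names, and the rule that coarser gating skips finer gating, are illustrative assumptions; the patent names the control units but not their interfaces:

```python
def apply_marks(block_clock_off, unit_power_off, layer_power_off,
                switch_layer_power, switch_unit_power, gate_block_clock):
    """Dispatch analysis marks to the three switch-control units (sketch).
    If a whole layer (or unit) is powered down, its finer switches are
    skipped, since their logic is already off."""
    if layer_power_off:
        switch_layer_power(False)            # network-layer power domain off
        return
    for unit, power_off in enumerate(unit_power_off):
        if power_off:
            switch_unit_power(unit, False)   # convolution power domain off
        else:
            for block, clock_off in enumerate(block_clock_off[unit]):
                if clock_off:
                    gate_block_clock(unit, block, False)  # gate this block's clock
```

A caller would pass in callbacks that drive the actual switch-control units; the sketch only fixes the dispatch order.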
3. After the convolution operation of the current layer is completed, the neural network operation of the next layer starts: once the convolution of the first convolution calculation network layer is finished, the convolution result is sent to the second convolution calculation network layer, i.e. to the second activation function operation unit and then to the second pooling processing unit, and the operation of the second layer proceeds. The operation of each hidden layer can be the same as that of the first layer, until the last fully connected layer completes and the result is determined.
In this way, the individual neuron processing units and the whole neural network layers are organized into multi-level power domains that can be dynamically shut off on demand, so that the power consumed during operation of the convolutional neural network circuit can be effectively reduced.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, and that equivalent modifications and variations can be made by those skilled in the art without departing from the spirit of the invention, which is to be limited only by the appended claims.

Claims (4)

1. A power consumption optimization method of a neural network chip, the neural network chip comprising a plurality of convolution calculation network layers, each convolution calculation network layer comprising a plurality of convolution operation units, each convolution operation unit being responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved, the matrix to be convolved comprising n rows of data blocks and being stored in a corresponding hidden layer matrix storage unit, where n is the height of the whole matrix to be convolved divided by the height of a single data block processed by the convolution kernel, characterized in that the power consumption optimization method comprises the following steps:
Step S1: a power domain is set independently for each convolution calculation network layer, as the convolution calculation network layer power domain, and is connected to a convolution calculation network layer power domain switch control unit;
a power domain is set independently for each convolution operation unit, as a convolution power domain, and is connected to a convolution unit power domain switch control unit;
the data blocks of the matrix to be convolved are organized in rows, the clock of each row of data blocks is connected to a gated clock unit, and each gated clock unit is connected to a convolution unit clock switch control unit.
Step S2: the n rows of data blocks of the matrix to be convolved are analyzed by a matrix analysis unit, and according to the analysis result a power consumption control unit controls the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit, so as to switch each convolution calculation network layer power domain, each convolution power domain and each gated clock unit on or off.
2. The power consumption optimization method of a neural network chip according to claim 1, characterized in that:
the matrix analysis unit analyzes the n rows of data blocks of the matrix to be convolved as follows:
(1) the matrix to be convolved is scanned row by row according to the convolution kernel size, and each data block in a whole row is judged one by one for being all zero; if a data block is all zero, it is marked so that its clock can be shut off;
(2) after a whole row of data blocks has been judged, it is further judged whether all the data blocks in that row are all zero; if so, the convolution operation unit that operates on that row is marked so that its convolution power domain can be closed;
(3) finally, it is judged whether the data blocks of the entire matrix to be convolved are all zero; if so, the power domain of the whole convolution calculation network layer is marked as closable.
3. A power consumption optimization circuit of a neural network chip, the neural network chip comprising a plurality of convolution calculation network layers, each convolution calculation network layer comprising a plurality of convolution operation units, each convolution operation unit being responsible for the operation of a whole row of data blocks corresponding to the convolution kernel height of the matrix to be convolved, the matrix to be convolved comprising n rows of data blocks and being stored in a corresponding hidden layer matrix storage unit, characterized in that:
the power consumption optimization circuit comprises power domain control circuits arranged in one-to-one correspondence with the plurality of convolution calculation network layers, each power domain control circuit comprising a matrix analysis unit, a power consumption control unit, a convolution calculation network layer power domain switch control unit, a convolution unit power domain switch control unit, a convolution unit clock switch control unit, a convolution calculation network layer power domain, n convolution power domains and n gated clock units;
the matrix analysis unit is connected to the corresponding hidden layer matrix storage unit and to the power consumption control unit; the power consumption control unit is connected to the convolution calculation network layer power domain switch control unit, the convolution unit power domain switch control unit and the convolution unit clock switch control unit; the convolution calculation network layer power domain switch control unit is connected to the convolution calculation network layer power domain; the convolution unit power domain switch control unit is connected to the n convolution power domains; the convolution unit clock switch control unit is connected to the n gated clock units; and each gated clock unit is correspondingly connected to one row of data blocks.
4. The power consumption optimization circuit of a neural network chip according to claim 3, characterized in that: the matrix analysis units in the power domain control circuits are one and the same matrix analysis unit, and the power consumption control units in the power domain control circuits are one and the same power consumption control unit.
CN201711121900.1A 2017-11-14 2017-11-14 Power consumption optimization method and circuit of neural network chip Active CN107832841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711121900.1A CN107832841B (en) 2017-11-14 2017-11-14 Power consumption optimization method and circuit of neural network chip


Publications (2)

Publication Number Publication Date
CN107832841A CN107832841A (en) 2018-03-23
CN107832841B true CN107832841B (en) 2020-05-05

Family

ID=61655395


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108712630A (en) * 2018-04-19 2018-10-26 安凯(广州)微电子技术有限公司 A kind of internet camera system and its implementation based on deep learning
CN108647774B (en) * 2018-04-23 2020-11-20 瑞芯微电子股份有限公司 Neural network method and circuit for optimizing sparsity matrix operation
CN109948775B (en) * 2019-02-21 2021-10-19 山东师范大学 Configurable neural convolution network chip system and configuration method thereof
CN111199273B (en) * 2019-12-31 2024-03-26 深圳云天励飞技术有限公司 Convolution calculation method, device, equipment and storage medium
CN115017850A (en) * 2022-06-20 2022-09-06 东南大学 Digital integrated circuit optimization method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8977583B2 (en) * 2012-03-29 2015-03-10 International Business Machines Corporation Synaptic, dendritic, somatic, and axonal plasticity in a network of neural cores using a plastic multi-stage crossbar switching
JP6387913B2 (en) * 2015-07-08 2018-09-12 株式会社デンソー Arithmetic processing unit
CN106127302A (en) * 2016-06-23 2016-11-16 杭州华为数字技术有限公司 Process the circuit of data, image processing system, the method and apparatus of process data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 350000 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Patentee after: Ruixin Microelectronics Co., Ltd

Address before: 350000 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Patentee before: Fuzhou Rockchips Electronics Co.,Ltd.