CN108717570A - Spiking neural network parameter quantization method - Google Patents

Spiking neural network parameter quantization method

Info

Publication number
CN108717570A
CN108717570A CN201810501442.2A
Authority
CN
China
Prior art keywords
parameter
neural network
training
neural networks
impulsive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810501442.2A
Other languages
Chinese (zh)
Inventor
胡绍刚
乔冠超
张成明
罗鑫
刘夏凯
宁宁
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201810501442.2A priority Critical patent/CN108717570A/en
Publication of CN108717570A publication Critical patent/CN108717570A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

The present invention relates to the field of neural network technology, and in particular to a spiking neural network parameter quantization method. In the method of the present invention, a trained original spiking neural network is obtained by offline mapping or online training, and the parameters of the trained spiking neural network, such as weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays, are quantized; all layers of the network may share one group of quantized values, or each layer may use its own group. After parameter quantization, the spiking neural network needs only a small number of parameters to realize high-precision spiking neural network functionality. While maintaining high accuracy, the method effectively saves parameter storage space, increases computation speed, and reduces computation power consumption.

Description

Spiking neural network parameter quantization method
Technical field
The present invention relates to the field of neural network technology, and in particular to a spiking neural network parameter quantization method.
Background technology
Spiking neural networks (SNNs) are known as the third generation of neural networks; they process information in a way that is closer to the human brain and represent the development direction of future neural network technology. An SNN receives information as spike trains, and many coding schemes can interpret a spike train as a real number; common schemes are temporal (pulse) coding and rate (frequency) coding. Communication between neurons is also carried out through spikes: when the membrane potential of a neuron exceeds its threshold, the neuron generates a spike that is passed to other neurons, increasing or decreasing their membrane potentials. Hardware platforms for SNNs are called neuromorphic or brain-inspired chips. Such chips depart completely from the traditional von Neumann architecture, feature low power consumption and low resource usage, and are expected to significantly outperform traditional chips in brain-like tasks such as classification and recognition. There are two main ways to train an SNN. One is to train a corresponding artificial neural network (ANN) under certain constraints and then map the trained parameters into the SNN, which often requires transferring a large number of parameters. The other is direct online learning in the SNN itself, which likewise produces a large number of parameters. Storing these parameters in conventional memories (such as SRAM or DRAM) requires enormous storage space, while storing them in emerging devices such as memristors makes it difficult to realize so many parameters accurately and stably; at the same time, the huge parameter count reduces computation speed and increases power consumption. At present there is no method for compressing the large number of parameters in an SNN.
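The leaky integrate-and-fire behavior described above (integrate weighted input spikes, fire and reset when the membrane potential crosses the threshold, then hold during a refractory period) can be sketched in a few lines of Python. All numeric constants here are illustrative placeholders, not values from the patent:

```python
# Minimal LIF neuron sketch; threshold, leak, reset voltage and refractory
# period are illustrative defaults, not values specified by the patent.
def lif_step(v, spike_in, weight, threshold=1.0, leak=0.9,
             v_reset=0.0, refractory=0, t_ref=2):
    """Advance one time step; returns (new_v, fired, new_refractory)."""
    if refractory > 0:                 # inside refractory period: ignore input
        return v_reset, False, refractory - 1
    v = v * leak + weight * spike_in   # leak, then integrate the weighted spike
    if v >= threshold:                 # membrane potential crossed the threshold
        return v_reset, True, t_ref    # emit a spike, reset, start refractory
    return v, False, 0

def run_lif(spike_train, weight=0.5):
    """Feed a binary spike train into one LIF neuron; return its output spikes."""
    v, refractory, out = 0.0, 0, []
    for s in spike_train:
        v, fired, refractory = lif_step(v, s, weight, refractory=refractory)
        out.append(fired)
    return out
```

With the defaults above, a constant input train makes the neuron fire every few steps: the potential accumulates past the threshold, the neuron spikes, resets, and sits out its refractory period before integrating again.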
Invention content
In order to solve the problems in the prior art, the present invention provides a method for reducing the parameter storage space of an SNN.
The technical scheme of the present invention is as follows:
A spiking neural network parameter quantization method, characterized by comprising the following steps:
Obtain a trained original SNN. The neurons in the SNN are spiking neurons (such as LIF neurons) with input-spike integration and spike-emission functions; the SNN takes spike trains as input, and its main parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays. The trained SNN performs functions such as high-precision classification and recognition. There are two main ways to obtain the trained network. The first is offline mapping: an ANN (e.g. an MLP, CNN, RNN, or LSTM) is trained with a common method such as stochastic gradient descent; once the ANN meets the target requirements (e.g. classification or recognition accuracy), its parameters are mapped into an SNN with the same topology, and the ANN input is encoded as spike trains (e.g. Poisson-distributed spike trains) to serve as the SNN input, yielding the trained SNN. The second is online training: an SNN (e.g. a self-organizing SNN or another structure) is built and trained by online learning with learning rules such as spike-timing-dependent plasticity (STDP), using spike trains (e.g. Poisson-distributed or time-coded spike trains) as input; during training the weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays of the SNN are adjusted, and once the target requirements (e.g. classification or recognition accuracy) are met, training ends and the parameters are fixed, yielding the trained SNN;
Choose one or more parameters to quantize. The quantizable parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays.
Separately count the parameter distribution of a certain layer, of several layers, or of all layers;
Attempt an interval division of the parameter. Choose an interval division method and the number of intervals; the division method may be equal division, unequal division, confidence-interval division, etc. The method and the number of intervals can be tried and adjusted, based on tuning experience, according to the specific network structure, task type, and target requirements;
Quantize the parameter by interval. Traverse all intervals and quantize all parameters falling in the same interval to a single value (the quantized value); the magnitude and sign of each quantized value are tried and adjusted, based on tuning experience, according to the specific network structure, task type, and target requirements;
Replace the corresponding parameters in the original SNN with the quantized parameters, obtaining the parameter-quantized SNN;
Test the parameter-quantized SNN with the input of the original SNN. If the test result meets the target requirements, the procedure ends; otherwise, return to the selection of the interval division method and number of intervals and repeat the interval division and subsequent steps.
The beneficial effects of the present invention are as follows. The method can convert an ANN into an SNN and quantize the SNN parameters. The quantization method is simple and flexible to operate, supports quantization in many forms, has almost no effect on network performance, saves storage resources, and increases computation speed. In particular, when the SNN is implemented in hardware, the method can reduce the consumption of on-chip resources such as RAM and the computational complexity, improving hardware computation speed and performance.
Description of the drawings
Fig. 1 is a schematic diagram of an SNN parameter quantization method provided by an example of the present invention;
Fig. 2 is a schematic diagram of an MLP, one of the ANN examples in Fig. 1;
Fig. 3 is a schematic diagram of a CNN, one of the ANN examples in Fig. 1;
Fig. 4 is a schematic diagram of an RNN, one of the ANN examples in Fig. 1;
Fig. 5 is a schematic diagram of an LSTM, one of the ANN examples in Fig. 1;
Fig. 6 is a schematic diagram of a self-organizing network, one of the SNN examples in Fig. 1;
Fig. 7 is a weight distribution map and its interval division, one of the quantized-parameter examples in Fig. 1;
Fig. 8 is a threshold distribution map and its interval division, one of the quantized-parameter examples in Fig. 1;
Fig. 9 is a leakage-constant distribution map and its interval division, one of the quantized-parameter examples in Fig. 1;
Fig. 10 is a reset-voltage distribution map and its interval division, one of the quantized-parameter examples in Fig. 1;
Fig. 11 is a refractory-period distribution map and its interval division, one of the quantized-parameter examples in Fig. 1;
Fig. 12 is a synaptic-delay distribution map and its interval division, one of the quantized-parameter examples in Fig. 1.
Fig. 13 is a schematic diagram of an example method by which the SNN in Fig. 1 realizes parameter quantization.
Specific implementation mode
The present invention is described in detail below with reference to the accompanying drawings so that those skilled in the art can better understand it. Note that in the following description, detailed descriptions of known functions and designs are omitted where they would dilute the main content of the invention.
As shown in Fig. 1, an SNN parameter quantization method comprises the following steps:
S1: Obtain a trained original SNN.
The neurons in the SNN are spiking neurons (such as LIF neurons) with input-spike integration and spike-emission functions; the SNN takes spike trains as input, and its main parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays. The trained SNN performs functions such as high-precision classification and recognition. There are two main ways to obtain the trained network. The first is offline mapping: an ANN such as an MLP (see Fig. 2), CNN (see Fig. 3), RNN (see Fig. 4), or LSTM (see Fig. 5) is trained with a common method such as stochastic gradient descent; once the ANN meets the target requirements (e.g. classification or recognition accuracy), its parameters are mapped into an SNN with the same topology, and the ANN input is encoded as spike trains (e.g. Poisson-distributed spike trains) to serve as the SNN input, yielding the trained SNN. The second is online training: an SNN such as a self-organizing SNN (see Fig. 6) or another structure is built and trained by online learning with learning rules such as spike-timing-dependent plasticity (STDP), using spike trains (e.g. Poisson-distributed or time-coded spike trains) as input; during training the weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays of the SNN are adjusted, and once the target requirements (e.g. classification or recognition accuracy) are met, training ends and the parameters are fixed, yielding the trained SNN.
S2: Choose one or more parameters to quantize.
A single parameter can be quantized at a time, or several parameters can be quantized at once. The quantizable parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays.
S3: Separately count the parameter distribution of a certain layer, of several layers, or of all layers.
For each parameter to be quantized, count its values over a certain layer, several layers, or all layers of the SNN and draw a parameter distribution map.
S4: Attempt an interval division of the parameter.
Choose an interval division method and the number of intervals; the division method may be equal division, unequal division, confidence-interval division, etc. The method and the number of intervals can be tried and adjusted, based on tuning experience, according to the specific network structure, task type, and target requirements. On the basis of the distribution maps described in S3, the results of equal-division interval division of the weights (see Fig. 7), thresholds (see Fig. 8), leakage constants (see Fig. 9), reset voltages (see Fig. 10), refractory periods (see Fig. 11), and synaptic delays (see Fig. 12) are given in the accompanying drawings.
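As an illustration of the equal-division scheme, the interval boundaries can be computed over a parameter's range as follows. This is a minimal sketch; the function name and the pure-Python list representation are illustrative, not from the patent:

```python
def equal_intervals(params, n_intervals):
    """Divide the parameter range [min, max] into n_intervals equal-width
    intervals and return the n_intervals + 1 boundary values."""
    lo, hi = min(params), max(params)
    width = (hi - lo) / n_intervals
    return [lo + i * width for i in range(n_intervals + 1)]
```

For example, dividing parameters spanning [0, 4] into 4 intervals yields the boundaries 0, 1, 2, 3, 4; unequal or confidence-interval divisions would instead place the boundaries according to the observed distribution.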
S5: Quantize the parameter by interval.
Traverse all intervals and quantize all parameters falling in the same interval to a single value (the quantized value); the magnitude and sign of each quantized value are tried and adjusted, based on tuning experience, according to the specific network structure, task type, and target requirements.
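A minimal sketch of this per-interval quantization step, assuming the interval boundaries and the per-interval quantized values have already been chosen (both names below are illustrative):

```python
def quantize(params, edges, values):
    """Replace every parameter falling in interval i
    (edges[i] <= p < edges[i+1]) by the single quantized value values[i].
    The top edge is inclusive so the maximum parameter is not dropped."""
    out = []
    for p in params:
        for i in range(len(values)):
            in_interval = edges[i] <= p < edges[i + 1]
            at_top_edge = (i == len(values) - 1) and p == edges[-1]
            if in_interval or at_top_edge:
                out.append(values[i])
                break
    return out
```

After this step, each chosen parameter takes at most as many distinct values as there are intervals, which is what allows the quantized network to be stored with a small lookup table instead of one full-precision value per synapse.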
S6: Obtain the parameter-quantized SNN.
Replace the corresponding parameters in the original SNN with the quantized parameters, obtaining the parameter-quantized SNN.
S7: Test the parameter-quantized SNN.
Test the parameter-quantized SNN with the input of the original SNN. If the test result meets the target requirements, the procedure ends; otherwise, return to the selection of the interval division method and number of intervals and repeat the interval division and subsequent steps.
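The S4–S7 retry loop (divide, quantize, test, and retry with a different division if the target is missed) can be sketched abstractly. Here `snn`, `evaluate`, and `quantize_params` are hypothetical placeholders standing in for a concrete SNN, its accuracy test, and steps S4–S6; the list of candidate interval counts is illustrative:

```python
def quantize_until_target(snn, evaluate, quantize_params, target_acc,
                          interval_counts=(16, 8, 4, 2)):
    """Try progressively coarser interval divisions and keep the coarsest
    quantized network that still meets the accuracy target (S4-S7 loop).
    Returns None if no candidate meets the target."""
    best = None
    for n in interval_counts:
        candidate = quantize_params(snn, n)   # steps S4-S6 for n intervals
        if evaluate(candidate) >= target_acc: # step S7: test against the target
            best = candidate                  # coarser passing candidate wins
    return best
```

The design choice here is to prefer the coarsest passing division, since fewer intervals means fewer distinct stored values; the patent leaves the search order to the practitioner's tuning experience.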
Referring to Fig. 13, the quantization method is further explained below for an MNIST handwritten digit recognition task using an MLP; it comprises the following steps:
S8: Set the target recognition accuracy.
That is, the target requirement described in S1: set the target recognition accuracy of the neural network system on the MNIST test set.
S9: Train the MLP using the BP algorithm and map the trained weights directly into an SNN.
That is, the offline training method described in S1. In this example, training the MLP must satisfy certain conditions: (1) all units of the MLP use the ReLU function as the activation function; (2) during training, the biases of the neurons are fixed at 0.
S10: Set the threshold and maximum frequency of the SNN.
These are parameters specific to the SNN. The input of the SNN must be converted into spike form, so the input image needs to be encoded. In this example, each pixel of the image is encoded by Poisson-distributed frequency coding, with the spike frequency proportional to the pixel intensity. The maximum spike frequency and the threshold of the LIF neurons are tried and adjusted, based on tuning experience, according to the magnitudes of the mapped parameters and the recognition rate fed back in later steps.
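The Poisson-style frequency coding of pixels can be sketched as follows. This is a simplified Bernoulli-per-time-step approximation of a Poisson process; treating `max_rate` as the spike probability per step at full intensity is an assumption of this sketch, not the patent's exact formulation:

```python
import random

def poisson_encode(pixels, max_rate, n_steps, seed=0):
    """Frequency-code 8-bit pixels (step S10 sketch): each pixel in [0, 255]
    maps to a per-step spike probability proportional to its intensity, with
    max_rate being the probability at full intensity. Returns one boolean
    spike train of length n_steps per pixel."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    trains = []
    for p in pixels:
        rate = max_rate * p / 255.0
        trains.append([rng.random() < rate for _ in range(n_steps)])
    return trains
```

A zero-intensity pixel never fires and a full-intensity pixel fires at the maximum rate, so the mean spike count over the window is proportional to pixel brightness, which is what the downstream LIF neurons integrate.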
S11: Test the SNN recognition rate.
If the target recognition accuracy set in S8 is met, proceed to the next step; otherwise return to S9, retrain the MLP, and repeat the subsequent steps.
S12: Obtain the weight distribution maps of all layers and divide the weights into intervals.
That is, counting the parameter distribution of all layers as described in S3 and attempting the interval division described in S4. In this example only the weights are counted and divided; equal division is attempted, with 4 intervals.
S13: Quantize the weights by interval (here, using the interval midpoints).
That is, quantizing the parameter by interval as described in S5. In this example the interval midpoints are used as the quantized values.
S14: Traverse all weights of the SNN to find the maximum wmax and the minimum wmin, and take their average, denoted w0.
This is the first step of the midpoint quantization.
S15: Take the average of wmin and w0, denoted w-1; take the average of wmax and w0, denoted w1.
This is the second step of the midpoint quantization.
S16: Traverse all weights again. If a weight lies between wmin and w-1, set it to the average x1 of the two; if it lies between w-1 and w0, set it to the average x2 of the two; x3 and x4 are obtained in the same way for the remaining two intervals.
This is the final step of the midpoint quantization.
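Steps S14–S16 amount to a four-level midpoint quantization of the weight range: the range [wmin, wmax] is split into four equal intervals by w-1, w0, and w1, and every weight is replaced by the midpoint of its interval. A minimal sketch (the function name is illustrative):

```python
def midpoint_quantize(weights):
    """Four-level midpoint quantization following steps S14-S16."""
    wmin, wmax = min(weights), max(weights)   # S14: range extremes
    w0 = (wmin + wmax) / 2                    # S14: midpoint of the full range
    wm1 = (wmin + w0) / 2                     # S15: boundary of intervals 1/2
    w1 = (w0 + wmax) / 2                      # S15: boundary of intervals 3/4
    bounds = [wmin, wm1, w0, w1, wmax]
    # S16: the quantized values x1..x4 are the midpoints of the 4 intervals
    mids = [(bounds[i] + bounds[i + 1]) / 2 for i in range(4)]
    out = []
    for w in weights:
        if w < wm1:
            out.append(mids[0])   # x1
        elif w < w0:
            out.append(mids[1])   # x2
        elif w < w1:
            out.append(mids[2])   # x3
        else:
            out.append(mids[3])   # x4
    return out
```

After this step every weight in the network takes one of only four values, so the weight memory reduces to a 2-bit index per synapse plus a four-entry table.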
S17: Test the SNN recognition rate.
That is, testing the parameter-quantized SNN as described in S7. If the performance target is met, the procedure ends; otherwise return to S12, choose a new interval division, and repeat the subsequent steps.

Claims (4)

1. A spiking neural network parameter quantization method, characterized by comprising the following steps:
S1, obtaining a trained original spiking neural network, the parameters of which include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays;
S2, choosing one or more parameters to be quantized;
S3, counting the distribution of the chosen parameters in the neural network;
S4, performing an interval division on the chosen parameters;
S5, quantizing the parameters by interval, that is, quantizing all parameters falling in the same interval to a single value;
S6, obtaining the parameter-quantized spiking neural network, that is, replacing the corresponding original parameters of the original spiking neural network with the quantized values obtained in step S5;
S7, testing the parameter-quantized spiking neural network obtained in step S6: taking the input of the original spiking neural network as input, the parameter-quantized network is tested; if the test result meets the target requirements, the procedure ends; otherwise, return to step S3.
2. The spiking neural network parameter quantization method according to claim 1, characterized in that the specific method of obtaining the trained original spiking neural network in step S1 is:
training a corresponding artificial neural network, the artificial neural network being one of a multilayer perceptron, a convolutional neural network, a recurrent neural network, and a long short-term memory network, then mapping the trained parameters into a spiking neural network with the same topology and encoding the input data as spike trains, to obtain the original spiking neural network;
or,
establishing a spiking neural network, taking spike trains as input, training it online with a spike-timing-dependent plasticity learning mechanism, and fixing the parameters of the spiking neural network after training, to obtain the trained original spiking neural network.
3. The spiking neural network parameter quantization method according to claim 2, characterized in that the specific method of step S3 is:
counting the distribution of the chosen parameters in a certain layer of the network;
or,
counting the distribution of the chosen parameters in several layers of the network;
or,
counting the distribution of the chosen parameters in each layer of the network.
4. The spiking neural network parameter quantization method according to claim 3, characterized in that the specific method of step S4 is:
dividing the parameter range into intervals using one of equal division, unequal division, and confidence-interval division, and determining the number of intervals.
CN201810501442.2A 2018-05-23 2018-05-23 Spiking neural network parameter quantization method Pending CN108717570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810501442.2A CN108717570A (en) 2018-05-23 2018-05-23 Spiking neural network parameter quantization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810501442.2A CN108717570A (en) 2018-05-23 2018-05-23 Spiking neural network parameter quantization method

Publications (1)

Publication Number Publication Date
CN108717570A true CN108717570A (en) 2018-10-30

Family

ID=63900490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810501442.2A Pending CN108717570A (en) 2018-05-23 2018-05-23 Spiking neural network parameter quantization method

Country Status (1)

Country Link
CN (1) CN108717570A (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635938A (en) * 2018-12-29 2019-04-16 电子科技大学 Weight quantization method for an autonomous-learning spiking neural network
CN110059822A (en) * 2019-04-24 2019-07-26 苏州浪潮智能科技有限公司 Low-bit neural network parameter compression and quantization method based on channel grouping
CN110364232A (en) * 2019-07-08 2019-10-22 河海大学 High-performance concrete strength prediction method based on memristor-gradient descent method neural network
CN110796231A (en) * 2019-09-09 2020-02-14 珠海格力电器股份有限公司 Data processing method, data processing device, computer equipment and storage medium
WO2020155741A1 (en) * 2019-01-29 2020-08-06 清华大学 Fusion structure and method of convolutional neural network and pulse neural network
CN112085190A (en) * 2019-06-12 2020-12-15 上海寒武纪信息科技有限公司 Neural network quantization parameter determination method and related product
WO2020253692A1 (en) * 2019-06-17 2020-12-24 浙江大学 Quantification method for deep learning network parameters
WO2021036890A1 (en) * 2019-08-23 2021-03-04 安徽寒武纪信息科技有限公司 Data processing method and apparatus, computer device, and storage medium
WO2021036908A1 (en) * 2019-08-23 2021-03-04 安徽寒武纪信息科技有限公司 Data processing method and apparatus, computer equipment and storage medium
CN113111758A (en) * 2021-04-06 2021-07-13 中山大学 SAR image ship target identification method based on pulse neural network
CN113111997A (en) * 2020-01-13 2021-07-13 中科寒武纪科技股份有限公司 Method, apparatus and computer-readable storage medium for neural network data quantization
CN113974607A (en) * 2021-11-17 2022-01-28 杭州电子科技大学 Sleep snore detection system based on a spiking neural network
US11397579B2 (en) 2018-02-13 2022-07-26 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11437032B2 (en) 2017-09-29 2022-09-06 Shanghai Cambricon Information Technology Co., Ltd Image processing apparatus and method
US11442785B2 (en) 2018-05-18 2022-09-13 Shanghai Cambricon Information Technology Co., Ltd Computation method and product thereof
US11513586B2 (en) 2018-02-14 2022-11-29 Shanghai Cambricon Information Technology Co., Ltd Control device, method and equipment for processor
US11544059B2 (en) 2018-12-28 2023-01-03 Cambricon (Xi'an) Semiconductor Co., Ltd. Signal processing device, signal processing method and related products
US11609760B2 (en) 2018-02-13 2023-03-21 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11630666B2 (en) 2018-02-13 2023-04-18 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11676028B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11703939B2 (en) 2018-09-28 2023-07-18 Shanghai Cambricon Information Technology Co., Ltd Signal processing device and related products
US11762690B2 (en) 2019-04-18 2023-09-19 Cambricon Technologies Corporation Limited Data processing method and related products
US11789847B2 (en) 2018-06-27 2023-10-17 Shanghai Cambricon Information Technology Co., Ltd On-chip code breakpoint debugging method, on-chip processor, and chip breakpoint debugging system
US11847554B2 (en) 2019-04-18 2023-12-19 Cambricon Technologies Corporation Limited Data processing method and related products
US11966583B2 (en) 2018-08-28 2024-04-23 Cambricon Technologies Corporation Limited Data pre-processing method and device, and related computer device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022614A (en) * 2016-05-22 2016-10-12 广州供电局有限公司 Data mining method of neural network based on nearest neighbor clustering
CN107704917A (en) * 2017-08-24 2018-02-16 北京理工大学 A method for effectively training deep convolutional neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022614A (en) * 2016-05-22 2016-10-12 广州供电局有限公司 Data mining method of neural network based on nearest neighbor clustering
CN107704917A (en) * 2017-08-24 2018-02-16 北京理工大学 A method for effectively training deep convolutional neural networks

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11437032B2 (en) 2017-09-29 2022-09-06 Shanghai Cambricon Information Technology Co., Ltd Image processing apparatus and method
US11663002B2 (en) 2018-02-13 2023-05-30 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11609760B2 (en) 2018-02-13 2023-03-21 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11740898B2 (en) 2018-02-13 2023-08-29 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11720357B2 (en) 2018-02-13 2023-08-08 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11630666B2 (en) 2018-02-13 2023-04-18 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11704125B2 (en) 2018-02-13 2023-07-18 Cambricon (Xi'an) Semiconductor Co., Ltd. Computing device and method
US11397579B2 (en) 2018-02-13 2022-07-26 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11507370B2 (en) 2018-02-13 2022-11-22 Cambricon (Xi'an) Semiconductor Co., Ltd. Method and device for dynamically adjusting decimal point positions in neural network computations
US11709672B2 (en) 2018-02-13 2023-07-25 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11620130B2 (en) 2018-02-13 2023-04-04 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11513586B2 (en) 2018-02-14 2022-11-29 Shanghai Cambricon Information Technology Co., Ltd Control device, method and equipment for processor
US11442786B2 (en) 2018-05-18 2022-09-13 Shanghai Cambricon Information Technology Co., Ltd Computation method and product thereof
US11442785B2 (en) 2018-05-18 2022-09-13 Shanghai Cambricon Information Technology Co., Ltd Computation method and product thereof
US11789847B2 (en) 2018-06-27 2023-10-17 Shanghai Cambricon Information Technology Co., Ltd On-chip code breakpoint debugging method, on-chip processor, and chip breakpoint debugging system
US11966583B2 (en) 2018-08-28 2024-04-23 Cambricon Technologies Corporation Limited Data pre-processing method and device, and related computer device and storage medium
US11703939B2 (en) 2018-09-28 2023-07-18 Shanghai Cambricon Information Technology Co., Ltd Signal processing device and related products
US11544059B2 (en) 2018-12-28 2023-01-03 Cambricon (Xi'an) Semiconductor Co., Ltd. Signal processing device, signal processing method and related products
CN109635938A (en) * 2018-12-29 2019-04-16 电子科技大学 Weight quantization method for an autonomous-learning spiking neural network
CN109635938B (en) * 2018-12-29 2022-05-17 电子科技大学 Weight quantization method for autonomous learning impulse neural network
WO2020155741A1 (en) * 2019-01-29 2020-08-06 清华大学 Fusion structure and method of convolutional neural network and pulse neural network
US11847554B2 (en) 2019-04-18 2023-12-19 Cambricon Technologies Corporation Limited Data processing method and related products
US11762690B2 (en) 2019-04-18 2023-09-19 Cambricon Technologies Corporation Limited Data processing method and related products
US11934940B2 (en) 2019-04-18 2024-03-19 Cambricon Technologies Corporation Limited AI processor simulation
CN110059822A (en) * 2019-04-24 2019-07-26 苏州浪潮智能科技有限公司 Low-bit neural network parameter compression and quantization method based on channel grouping
CN112085190B (en) * 2019-06-12 2024-04-02 Shanghai Cambricon Information Technology Co., Ltd Method for determining quantization parameters of a neural network and related products
US11675676B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11676029B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11676028B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
CN112085190A (en) * 2019-06-12 2020-12-15 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
WO2020253692A1 (en) * 2019-06-17 2020-12-24 Zhejiang University Quantization method for deep learning network parameters
CN110364232B (en) * 2019-07-08 2021-06-11 Hohai University High-performance concrete strength prediction method based on a memristor-gradient descent neural network
CN110364232A (en) * 2019-07-08 2019-10-22 Hohai University High-performance concrete strength prediction method based on a memristor-gradient descent neural network
WO2021036908A1 (en) * 2019-08-23 2021-03-04 Anhui Cambricon Information Technology Co., Ltd Data processing method and apparatus, computer equipment and storage medium
WO2021036890A1 (en) * 2019-08-23 2021-03-04 Anhui Cambricon Information Technology Co., Ltd Data processing method and apparatus, computer device, and storage medium
CN110796231A (en) * 2019-09-09 2020-02-14 Gree Electric Appliances, Inc. of Zhuhai Data processing method, data processing device, computer equipment and storage medium
CN113111997B (en) * 2020-01-13 2024-03-22 Cambricon Technologies Corporation Limited Method, apparatus and related products for neural network data quantization
CN113111997A (en) * 2020-01-13 2021-07-13 Cambricon Technologies Corporation Limited Method, apparatus and computer-readable storage medium for neural network data quantization
CN113111758B (en) * 2021-04-06 2024-01-12 Sun Yat-sen University SAR image ship target recognition method based on spiking neural network
CN113111758A (en) * 2021-04-06 2021-07-13 Sun Yat-sen University SAR image ship target recognition method based on spiking neural network
CN113974607A (en) * 2021-11-17 2022-01-28 Hangzhou Dianzi University Sleep snore detection system based on spiking neural network
CN113974607B (en) * 2021-11-17 2024-04-26 Hangzhou Dianzi University Sleep snore detection system based on spiking neural network

Similar Documents

Publication Publication Date Title
CN108717570A (en) Spiking neural network parameter quantization method
CN109635917B (en) Multi-agent cooperation decision and training method
Stromatias et al. Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on SpiNNaker
CN105095961B (en) Hybrid system of artificial neural networks and spiking neural networks
CN107247989A (en) Neural network training method and device
CN107358293A (en) Neural network training method and device
Cheng et al. Predicting productivity loss caused by change orders using the evolutionary fuzzy support vector machine inference model
CN110222760B (en) Fast image processing method based on the Winograd algorithm
CN106982359A (en) Binocular video monitoring method, system and computer-readable storage medium
CN105095967A (en) Multi-mode neuromorphic network core
CN109165730B (en) State quantization network implementation method in crossbar array neuromorphic hardware
CN110223515B (en) Vehicle track generation method
Zambrano et al. Efficient computation in adaptive artificial spiking neural networks
KR970008532B1 (en) Neural network
CN113706151A (en) Data processing method and device, computer equipment and storage medium
CN107609637A (en) Method for improving pattern recognition accuracy by combining data representation with pseudo-inverse learning autoencoders
CN110084371A (en) Model iteration update method, device and computer equipment based on machine learning
CN115018039A (en) Neural network distillation method, target detection method and device
Gao et al. Road Traffic Freight Volume Forecast Using Support Vector Machine Combining Forecasting.
CN112446462A (en) Generation method and device of target neural network model
AU2021100614A4 (en) A novel regression prediction method for electronic nose based on broad learning system
CN108073985A (en) Ultra-deep learning method for artificial intelligence speech recognition
CN106357437A (en) Web service QoS prediction method based on multivariate time series
Lee et al. Semi-supervised learning for spiking neural networks based on spike-timing-dependent plasticity
KR102535635B1 (en) Neuromorphic computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181030