CN114220089A - Method for carrying out pattern recognition based on segmented progressive pulse neural network - Google Patents

Method for carrying out pattern recognition based on segmented progressive pulse neural network

Info

Publication number
CN114220089A
CN114220089A (application CN202111436510.XA)
Authority
CN
China
Prior art keywords
pulse
layer
neural network
neurons
neuron
Prior art date
Legal status
Pending
Application number
CN202111436510.XA
Other languages
Chinese (zh)
Inventor
杨旭
雷云霖
王淼
蔡建
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202111436510.XA
Publication of CN114220089A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for pattern recognition based on a segmented progressive spiking neural network comprises: obtaining samples of a pattern recognition task, establishing an encoding layer, and processing and encoding the samples into a form the spiking neural network can process; establishing an input layer that converts the output of the encoding layer into a pulse sequence; establishing a memory layer whose spiking neurons are used to build memory; then carrying out a childhood learning stage; establishing an output layer whose synapses with the memory layer are created by a heuristic method, so that the memory of the memory layer can be extracted accurately and decisions made; then carrying out a precise learning stage in which all samples of the pattern recognition task are input, only synaptic weights are adjusted, and teacher signals guide the adjustment, yielding after learning a spiking neural network usable for the current pattern recognition task; and fixing the learned synaptic weights and synaptic structure of the spiking neural network and inputting the data to be recognized to obtain the pattern recognition result.

Description

Method for carrying out pattern recognition based on segmented progressive pulse neural network
Technical Field
The invention belongs to the technical field of artificial intelligence and neural networks, and particularly relates to a method for pattern recognition based on a segmented progressive spiking neural network.
Background
Spiking neural networks are considered to have great potential for artificial intelligence because their neuron models are closer to real neurons; they are therefore called the third generation of artificial neural networks, following deep neural networks. In practice, however, existing spiking neural networks lack efficient learning methods for problems such as pattern recognition. Existing learning methods generally take one of two directions. The first trains the spiking neural network with existing deep-learning methods; although its results are not weaker than those of deep neural networks, this direction cannot fully exploit the biomimetic neurons of spiking neural networks, so their potential is not fully mined. The second draws inspiration from brain science and trains the spiking neural network with brain-like algorithms; this direction has more potential for intelligence and is the current hot spot of spiking neural network research, but it still lacks highly efficient algorithms. The brain has many synaptic-plasticity mechanisms for learning, among them long-term depression (LTD) and long-term potentiation (LTP). LTP can be described as follows: if a synapse is stimulated at high frequency, the membrane potential of its postsynaptic neuron population remains high for a long time, and under this mechanism synaptic efficacy and synapse number increase. LTD can be described as follows: if a synapse is stimulated at low frequency, the membrane potential of its postsynaptic neuron population remains low for a long time, and synaptic efficacy and number decrease. Meanwhile, human learning is staged: in childhood the brain learns driven by background knowledge and synapses grow in large numbers, but cognition is not yet strong and memory not yet precise; the brain then finely prunes and adjusts the synapses grown in large numbers, giving the human strong cognitive ability and precise memory.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a method for pattern recognition based on a segmented progressive spiking neural network that generates memory effectively, applies widely to pattern recognition tasks, and is robust; by introducing the brain's LTD and LTP mechanisms, the network performs precise structure learning, so the network scale is reduced with little loss of performance, learning is more effective, and operation is more efficient.
To achieve this purpose, the invention adopts the following technical scheme:
A method for pattern recognition based on a segmented progressive spiking neural network comprises the following steps:
Step 1: obtain samples of the pattern recognition task, establish the encoding layer of the spiking neural network, and process the samples at the encoding layer, encoding them into a form the spiking neural network can process;
Step 2: establish the input layer of the spiking neural network, which converts the output of the encoding layer into a pulse sequence;
Step 3: establish the memory layer of the spiking neural network, whose spiking neurons are used to build memory;
Step 4: perform the childhood learning stage
Input part of the samples of the pattern recognition task (i.e., samples from step 1 prior to processing), introduce the brain's long-term depression (LTD) and long-term potentiation (LTP) mechanisms, and use a synaptic weight adjustment algorithm and a progressive synaptic structure adjustment algorithm so that the spiking neural network generates a structure suited to the current task; this structure can be regarded as the network's initial memory of the task. This parallels human learning: the number of synapses in the human brain generally peaks in infancy, because the brain has just begun to learn about the world and synaptic plasticity is very strong, creating rough memories that are not necessarily useful later but establish an initial understanding of the world.
Step 5: establish the output layer of the spiking neural network, and create synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be extracted accurately and decisions made;
Step 6: perform the precise learning stage
Input all samples of the pattern recognition task (i.e., samples from step 1 prior to processing), adjust only the synaptic weights, and guide the adjustment with teacher signals. This parallels human learning: after the growth of the childhood period, the brain prunes synapses and produces precise memory, thereby completing tasks effectively. After learning, a spiking neural network usable for the current pattern recognition task is obtained;
Step 7: fix the learned synaptic weights and synaptic structure of the spiking neural network, and input the data to be recognized to obtain the pattern recognition result.
In one embodiment, the samples are images or audio, so that the pattern recognition task is image recognition or audio recognition.
In one embodiment, in step 1, several static convolution kernels are selected, each sample is processed by convolution and pooling operations, and the processed samples are normalized into the interval [0, T]; after processing, each sample has L values.
In one embodiment, in step 2, the input layer is constructed from as many spiking neurons as the sample length (i.e., L), which convert the output of the encoding layer into a pulse sequence, and the same number of pulse generators (i.e., L), which induce the spiking neurons to fire. The spiking neurons and pulse generators are connected one-to-one, and each pulse generator corresponds one-to-one with an output of the encoding layer. The time at which a pulse generator induces its spiking neuron to fire relates to the encoding-layer output as T_spike-i = Output_i, where T_spike-i is the induction time of pulse generator i and Output_i is the encoding-layer output at the corresponding position. A minimal sketch of steps 1 and 2 follows.
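As an illustration only, the following Python sketch shows one way steps 1 and 2 could be realized; the kernel set, the pooling size, and the value of T are assumptions for the example, not values fixed by the invention.

```python
import numpy as np

def encode_sample(image, kernels, T=255, pool=2):
    """Encoding layer (step 1): static (untrained) convolutions plus
    max pooling, flattened and normalized into [0, T] to give L values."""
    maps = []
    H, W = image.shape
    for k in kernels:
        kh, kw = k.shape
        conv = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(conv.shape[0]):
            for j in range(conv.shape[1]):
                conv[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        # non-overlapping max pooling
        ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
        pooled = conv[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        maps.append(pooled.ravel())
    out = np.concatenate(maps)
    return (out - out.min()) / (out.max() - out.min() + 1e-12) * T

def input_spike_times(encoded):
    """Input layer (step 2): pulse generator i induces its paired
    spiking neuron to fire at T_spike-i = Output_i."""
    return encoded  # spike time of input neuron i equals encoded value i
```

For a 28 x 28 image and a single 3 x 3 averaging kernel, encode_sample returns 13 x 13 = 169 values, so L = 169 and the input layer would hold 169 neuron/generator pairs.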
In one embodiment, in step 3, the memory layer is constructed from m layers of spiking neurons, with n spiking neurons per layer, where n and m are arbitrary positive integers. At network initialization, every spiking neuron except those in the last layer randomly selects x spiking neurons of the next layer and establishes synapses to them, with 0 < x ≤ n. The n × m neurons, together with the synaptic structures and weights among them, form the basis of memory; precise memory is formed through learning in the subsequent steps, as in the sketch below.
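A minimal sketch of this memory-layer initialization; the concrete weight-initialization scheme (w_init, jitter) is an assumption, since the text only requires the initial weights to be random.

```python
import numpy as np

def init_memory_layer(m, n, x, w_init=0.5, jitter=0.1, rng=None):
    """Memory layer (step 3): m layers of n spiking neurons; every
    neuron except those in the last layer grows synapses to x randomly
    chosen neurons of the next layer, with random initial weights."""
    rng = rng or np.random.default_rng()
    synapses = []  # synapses[l][(i, j)] = weight from neuron i in layer l to j in layer l+1
    for layer in range(m - 1):
        table = {}
        for i in range(n):
            targets = rng.choice(n, size=x, replace=False)
            for j in targets:
                table[(i, int(j))] = w_init + jitter * rng.standard_normal()
        synapses.append(table)
    return synapses
```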
In one embodiment, step 4 comprises:
Step 4.1: a discriminator is set for each memory-layer spiking neuron. If the neuron fired fewer than θ_LTD times in the past t_W, it enters the LTD state for the next t_W; if it fired more than θ_LTP times in the past t_W, it enters the LTP state for the next t_W. A neuron entering the LTP state has its synaptic weights set to ω_LTP, which is large enough that the neurons its synapses point to (called postsynaptic neurons) can be activated by it. A neuron entering the LTD state has its synaptic weights set to ω_LTD, which is small enough that its postsynaptic neurons are hard to activate; in addition, a spiking neuron entering the LTD state must exert an inhibitory effect on its synapses when exiting that state, and those synapses may be pruned or have their weights attenuated. Spiking neurons in the memory layer switch continuously among the LTD, LTP, non-LTD, and non-LTP states. The LTP mechanism can be seen as strengthening the connections among the neurons associated with a given pattern in the current pattern recognition task, while the LTD mechanism suppresses the connections between noise and neurons;
Step 4.2: input part of the samples of the pattern recognition task. The progressive synaptic structure adjustment algorithm is: if two spiking neurons have no synaptic connection between them and they always fire within Δt of each other, a new synapse is established between the two;
The synaptic weight adjustment algorithm adjusts weights using the spike-timing-dependent plasticity (STDP) principle; the synaptic structure adjustment follows the same principle as the Hebbian rule in the brain, and STDP is considered one of the key rules of synaptic plasticity in the brain. As the spiking neurons in the network switch continuously among the LTD, LTP, non-LTD, and non-LTP states, the network learns through the progressive structure adjustment and weight adjustment algorithms, so that a compact structure forms among the neurons associated with each pattern in the current task, making the overall structure well suited to that task. A sketch of the discriminator and the structure-growth rule follows.
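The following sketch illustrates the step 4.1 discriminator and the step 4.2 structure-growth rule under stated assumptions: the Neuron and Synapse containers, the pairing used to test "always fires within Δt", and the new-synapse weight w_new are illustrative choices, not details fixed by the invention.

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    weight: float

@dataclass
class Neuron:
    id: int
    spike_times: list = field(default_factory=list)
    out_synapses: list = field(default_factory=list)
    state: str = "NONE"        # "LTD", "LTP" or "NONE"
    state_until: float = 0.0

def update_state(neuron, now, t_w, theta_ltd, theta_ltp, w_ltd, w_ltp):
    """Step 4.1 discriminator: count firings in the past t_w; fewer than
    theta_ltd puts the neuron in the LTD state for the next t_w, more
    than theta_ltp puts it in the LTP state. Entering a state clamps
    the neuron's outgoing weights to w_ltd / w_ltp."""
    fired = sum(1 for t in neuron.spike_times if now - t_w <= t <= now)
    if fired < theta_ltd:
        neuron.state, neuron.state_until = "LTD", now + t_w
        for s in neuron.out_synapses:
            s.weight = w_ltd   # postsynaptic neurons become hard to activate
    elif fired > theta_ltp:
        neuron.state, neuron.state_until = "LTP", now + t_w
        for s in neuron.out_synapses:
            s.weight = w_ltp   # strong enough to activate postsynaptic neurons

def maybe_grow_synapse(pre, post, delta_t, connected, w_new=0.5):
    """Step 4.2 progressive structure adjustment (Hebbian): if the two
    neurons are unconnected and every firing of one is matched by a
    firing of the other within delta_t, grow a new synapse
    (postsynaptic-target bookkeeping omitted for brevity)."""
    if (pre.id, post.id) in connected or not pre.spike_times:
        return
    if all(any(abs(tp - tq) <= delta_t for tq in post.spike_times)
           for tp in pre.spike_times):
        connected.add((pre.id, post.id))
        pre.out_synapses.append(Synapse(w_new))
```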
In one embodiment, the inhibitory effect exerted on synapses when exiting the LTD state in step 4.1 is as follows:
If a spiking neuron is about to exit the LTD state, each of its synapses has probability ρ1 of having its weight attenuated by a factor α (α < 1), probability ρ2 of remaining unchanged, and probability (1 − ρ1 − ρ2) of being pruned; wherein
[the formula relating ρ2 to ρ1 is given only as an image in the original filing]
ρ, κ, and Ψ are constants that control the magnitude of the probability, and LTD_Count is the number of consecutive times the spiking neuron has entered the LTD state.
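Because the formula for ρ2 survives only as an image in the filing, the sketch below takes ρ1 and ρ2 as already-computed inputs and implements just the three-way outcome described in the text.

```python
import random

def on_exit_ltd(weights, rho1, rho2, alpha):
    """Apply the LTD-exit rule to each synapse weight of one neuron:
    attenuate by alpha (< 1) with probability rho1, leave unchanged
    with probability rho2, prune with probability 1 - rho1 - rho2."""
    survivors = []
    for w in weights:
        r = random.random()
        if r < rho1:
            survivors.append(w * alpha)   # weight attenuation
        elif r < rho1 + rho2:
            survivors.append(w)           # nothing happens
        # else: the synapse is pruned (dropped from the list)
    return survivors
```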
In one embodiment, step 5 establishes synapses between the memory layer and the output layer by a heuristic method, comprising:
Step 5.1: using the samples used in step 4, place samples with the same label in the same group, and input each group into the encoding layer in turn;
Step 5.2: for input from the same group, any spiking neuron in the last layer of the memory layer whose firing frequency exceeds θ_OUT has a synapse established between it and the output-layer spiking neuron corresponding to the current group. This resembles the brain's decision process: broadly, a human decision involves only the brain regions activated by the current external stimulus and not the inactive ones, so the output judgment is made from the currently most relevant activated memory. The sketch below illustrates this wiring.
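A sketch of the heuristic wiring, assuming a hypothetical run_network callable that returns the ids of last-memory-layer neurons that fired for a sample; whether the firing count is accumulated per sample or over the whole group is an interpretation.

```python
from collections import defaultdict

def wire_output_layer(samples_by_label, run_network, theta_out, w_out=1.0):
    """Step 5 heuristic: feed each label group through the trained
    memory layer; every last-layer memory neuron whose firing count for
    that group exceeds theta_out gets a synapse to the group's output
    neuron."""
    wiring = defaultdict(dict)        # label -> {memory neuron id: weight}
    for label, samples in samples_by_label.items():
        counts = defaultdict(int)
        for sample in samples:
            for neuron_id in run_network(sample):   # ids that fired
                counts[neuron_id] += 1
        for neuron_id, c in counts.items():
            if c > theta_out:
                wiring[label][neuron_id] = w_out
    return wiring
```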
In one embodiment, step 6 comprises:
Step 6.1: set an exciter for each output-layer spiking neuron, which activates the neuron upon receiving a teacher signal; then shuffle all samples and input them into the spiking neural network;
Step 6.2: for a given sample, let the output-layer spiking neuron corresponding to its label be neuron j. The teacher signal of neuron j is delivered at time t_e, and the teacher signals of the remaining output-layer neurons are delivered at time t_in, where t_in is earlier than the earliest firing time in the last layer of the memory layer and t_e is the moment at which the firing frequency of the last memory layer is highest. Under teacher-signal guidance, the synaptic weight adjustment algorithm of step 4 increases the weights of synapses between the memory layer and output neuron j, while the weights of memory-layer synapses onto output neurons other than j, which are made to fire before t_e, decrease;
Step 6.3: repeat steps 6.1 and 6.2 several times to complete learning, after which a spiking neural network usable for the current pattern recognition task is obtained; the spiking neurons in the network, together with the synaptic structures and weights among them, can be regarded as the components of its memory. A sketch of the teacher-signal timing follows.
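A sketch of the teacher-signal schedule of step 6.2; estimating t_e as the most populated (rounded) time bin, and the margin used for t_in, are interpretation choices rather than details given in the text.

```python
from collections import Counter

def teacher_signal_times(sample_label, all_labels, memory_spike_times, margin=1.0):
    """Step 6.2 schedule: the output neuron matching the sample's label
    is driven at t_e (the time bin in which the last memory layer fires
    most); every other output neuron is driven at t_in, earlier than the
    earliest memory-layer spike, so STDP strengthens synapses onto the
    correct neuron and weakens those onto the rest."""
    t_in = min(memory_spike_times) - margin
    t_e = Counter(round(t) for t in memory_spike_times).most_common(1)[0][0]
    return {lab: (t_e if lab == sample_label else t_in) for lab in all_labels}
```

For the digit example of the embodiment, teacher_signal_times(0, range(10), spikes) would drive output neuron 0 at t_e and the other nine at t_in.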
Compared with the prior art, the invention has the following beneficial effects:
1) The invention learns the spiking neural network effectively, applies widely to pattern recognition tasks, and is robust. The network generates an adaptive structure from the data, fully exploiting the brain-like characteristics of spiking neural networks and mining their potential for artificial intelligence as far as possible.
2) The invention introduces the LTP and LTD mechanisms and other brain learning mechanisms, so memory is generated effectively; introducing LTD and LTP lets the network perform precise structure learning, reducing the network scale with little loss of performance and making learning more effective and operation more efficient.
3) The method can be used to solve common pattern recognition problems with spiking neural networks, such as image classification, object detection, and speech recognition. In these problems the data are converted into pulse sequences and the spiking neural network is trained with this method, so that it generates structures and weights adapted to the current task, improving task performance.
Drawings
FIG. 1 is a diagram of the spiking neural network of the present invention.
FIG. 2 shows the relationship between the probabilities ρ2 and ρ1 mentioned in the invention.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
As shown in fig. 1, the present invention is a segmented progressive spiking neural network learning method for pattern recognition. The network is built from four layers: an encoding layer, an input layer, a memory layer, and an output layer. The encoding layer processes input samples and encodes them into a form the spiking neural network can process, i.e., pulse encoding. The input layer converts the output of the encoding layer into a pulse sequence. The memory layer learns with the synaptic weight adjustment and progressive synaptic structure adjustment algorithms augmented with the LTP and LTD mechanisms, and after learning produces a structure suited to the current task. The output layer's synapses with the memory layer are established by a heuristic method, so that the output of the memory layer can be used accurately to make decisions.

Like the gradual learning of the human brain, the learning method has two stages. The first is a "fuzzy" learning stage: part of the samples are input, new synapse growth is allowed, and the network generates a structure suited to the current task. The second is a "precise" learning stage: all samples are input, only synaptic weights are adjusted, and teacher signals guide the adjustment, so that the network optimizes precisely for the current task and reaches good efficiency. After the memory layer has produced its specific structure and weights, the synapses to the final layer are established by the heuristic method; the weights of the whole network are then tuned under teacher-signal guidance to reach the best performance, and finally the synaptic structure and weights are fixed, yielding a spiking neural network model applicable to practical pattern recognition problems. The invention generates memory effectively, applies widely to pattern recognition tasks, and is robust; introducing the brain's LTD and LTP mechanisms lets the network perform more targeted and precise structure learning, reducing the network scale and its complexity with little loss of performance, making learning more effective and operation more efficient.
Referring to fig. 1 and fig. 2, and taking handwritten-image recognition as an example pattern recognition problem, the method of the invention comprises the following steps:
Step 1: establish the encoding layer, processing each handwritten image and encoding it into a form the spiking neural network can process;
Step 1.1: select several two-dimensional static convolution kernels, perform convolution and pooling operations on each image, normalize the processed samples into the interval [0, 255], and reshape the processed data to one dimension of length L;
Step 2: establish the input layer, converting the output of the encoding layer into a pulse sequence;
Step 2.1: construct the input layer from L spiking neurons and L pulse generators. The spiking neurons are connected one-to-one with the pulse generators, and the pulse generators correspond one-to-one with the outputs of the encoding layer. For any pulse generator i, the time at which it induces its spiking neuron to fire relates to the encoding-layer output as T_spike-i = Output_i, where T_spike-i is the induction time of pulse generator i and Output_i is the encoding-layer output at the corresponding position.
And step 3: establishing a memory layer, wherein the pulse neurons in the memory layer can be used for constructing memory;
step 3.1: constructing a feature extraction layer by using a 3-layer pulse neural network, wherein each layer of network is provided with L pulse neurons, and at the beginning stage, except the pulse neuron at the last layer, the rest pulse neurons establish synapses to a plurality of pulse neurons at the next layer, and the synapse weights are random;
Step 4: perform the childhood learning stage. Input part of the original images, introduce the brain's long-term depression (LTD) and long-term potentiation (LTP) mechanisms, and use the synaptic weight adjustment algorithm and the progressive synaptic structure adjustment algorithm so that the network generates a structure suited to the handwritten-image recognition task; this structure is regarded as the network's initial memory of the current task. This parallels human learning: the number of synapses in the human brain generally peaks in infancy, because the brain has just begun to learn about the world and synaptic plasticity is very strong, creating rough memories that are not necessarily useful later but establish an initial understanding of the world;
Step 4.1: set a discriminator for each memory-layer spiking neuron. If the neuron fired fewer than θ_LTD times in the past t_W, it enters the LTD state for the next t_W; if it fired more than θ_LTP times in the past t_W, it enters the LTP state for the next t_W. A neuron entering the LTP state has its synaptic weights set to ω_LTP, large enough that the neurons its synapses point to (also called postsynaptic neurons) can be activated by it. A neuron entering the LTD state has its synaptic weights set to ω_LTD, small enough that its postsynaptic neurons are hard to activate, and it must exert an inhibitory effect on its synapses when exiting the LTD state: the synapses may be pruned or have their weights attenuated. The spiking neurons in the network switch among the LTD, LTP, non-LTD, and non-LTP states; the LTP mechanism can be viewed as reinforcing the connections between the neurons associated with a given pattern in the current pattern recognition task, while the LTD mechanism suppresses the connections between noise and neurons;
Step 4.1.1: if a neuron is about to exit the LTD state, each of its synapses has probability ρ1 of having its weight attenuated by a factor α (α < 1), probability ρ2 of remaining unchanged, and probability (1 − ρ1 − ρ2) of being pruned, where
[the formula giving ρ2 appears only as an image in the original filing]
ρ, κ, and Ψ are constants that control the magnitude of the probability, and LTD_Count is the number of consecutive times this spiking neuron has entered the LTD state. In this example, ρ = 0.1, κ = 0.05, and Ψ = 0.25; the relationship between ρ2 and ρ1 is shown in fig. 2;
Step 4.2: select a small subset of the images for input. The progressive synaptic structure adjustment algorithm establishes a new synapse between two neurons if there is no synaptic connection between them and they always fire within Δt of each other. The synaptic weight adjustment method uses the STDP principle, and the structure adjustment is consistent with the Hebbian rule in the brain; STDP is considered one of the key rules of brain synaptic plasticity. The spiking neurons in the network switch continuously among the LTD, LTP, non-LTD, and non-LTP states, and the network learns through the progressive structure adjustment and weight adjustment algorithms, so that a compact structure forms among the neurons associated with each pattern in the current task and the overall structure fits the current task well.
Step 4.2.1: the synaptic weight adjustment uses the STDP rule, with the specific formula shown in formula 1:
[formula 1 appears only as an image in the original filing]
where w_max is the upper limit of the weight, λ_STDP is the learning rate, μ+ and μ− are the weight-dependence coefficients during weight increase and decay respectively, α_STDP is the asymmetry factor, K+ and K− are the temporal convergence coefficients of weight decay and increase respectively, e is the natural constant, τ− and τ+ are the time scaling factors for weight increase and decay respectively, and w′ and w are the weights after and before the update respectively;
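Formula 1 is not recoverable from the text, so the sketch below assembles a conventional weight-dependent STDP rule from the symbols listed above; the functional form and all default parameter values are assumptions.

```python
import math

def stdp_update(w, dt, w_max=1.0, lam=0.01, mu_plus=1.0, mu_minus=1.0,
                alpha=1.05, k_plus=1.0, k_minus=1.0,
                tau_plus=20.0, tau_minus=20.0):
    """One STDP step for a single synapse, with dt = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates, saturating toward w_max;
    otherwise the weight is depressed, scaled by the asymmetry factor
    alpha. A conventional form built from the patent's listed symbols
    (w_max, lambda_STDP, mu+/-, alpha_STDP, K+/-, tau+/-); the patent's
    exact equation exists only as an image."""
    if dt > 0:
        dw = lam * k_plus * ((w_max - w) / w_max) ** mu_plus * math.exp(-dt / tau_plus)
    else:
        dw = -lam * alpha * k_minus * (w / w_max) ** mu_minus * math.exp(dt / tau_minus)
    return min(max(w + dw, 0.0), w_max)   # w', clipped to [0, w_max]
```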
Step 5: establish the output layer, built from 10 neurons representing the digit classes 0 to 9; establish synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be extracted accurately and decisions made;
Step 5.1: group the images used in step 4 by class, e.g., put all images of the digit 0 in one group and all images of the digit 1 in another, and input the groups into the encoding layer;
Step 5.2: for input from the same group, any neuron in the last layer of the memory layer whose firing frequency exceeds θ_OUT has a synapse established between it and the output-layer neuron corresponding to the current group. This resembles the brain's decision process: broadly, the brain's decisions involve only the brain regions activated by the current external stimulus and not the inactive ones, so the output judgment is made from the currently most relevant activated memory;
Step 6: perform the precise learning stage. Input all images, adjust only the synaptic weights, and guide the adjustment with teacher signals. This parallels human learning: after the growth of the childhood period, the brain prunes synapses and produces precise memory, thereby completing tasks effectively;
Step 6.1: set an exciter for each output-layer neuron, which activates the neuron upon receiving a teacher signal; then shuffle all samples and input them into the network;
Step 6.2: for a sample, say a handwritten image of the digit 0, the corresponding output-layer neuron is neuron No. 0, whose teacher signal is delivered at time t_e; the teacher signals of the remaining output-layer neurons are delivered at time t_in, where t_in is earlier than the earliest firing time of the last-layer memory neurons and t_e is the moment at which the firing frequency of the last memory layer is highest. Under teacher-signal guidance, the synaptic weight adjustment algorithm of step 4 increases the weights of synapses between the memory layer and neuron No. 0, while the weights of memory-layer synapses onto output neurons other than No. 0, which fire before t_e, decrease.
Step 6.3: the synaptic weight adjustment algorithm is the same as in step 4.2.1;
Step 6.4: repeat steps 6.1 and 6.2 several times to complete learning; the resulting spiking neural network can be used for the handwritten-image recognition task, and the neurons in the network, together with the synaptic structures and weights among them, can be regarded as its memory of that task.
And 7: and fixing the synaptic weight and the synaptic structure of the learnt pulse neural network, inputting a handwritten picture into the network, and representing the recognition result of the network by the neuron which is firstly excited by the output layer. Finally, the method can effectively identify the handwritten picture, the identification precision is not lower than that of the existing deep learning method, 98% is achieved, meanwhile, the number of parameters and energy consumption are greatly smaller than those of the existing work, and compared with a neural network of the same scale, the number of synapses is reduced by about 80%, so that the parameter quantity and the energy consumption caused by operation are greatly reduced.

Claims (9)

1. A method for pattern recognition based on a segmented progressive spiking neural network, characterized by comprising the following steps:
Step 1: obtain samples of the pattern recognition task, establish the encoding layer of the spiking neural network, and process the samples at the encoding layer, encoding them into a form the spiking neural network can process;
Step 2: establish the input layer of the spiking neural network, which converts the output of the encoding layer into a pulse sequence;
Step 3: establish the memory layer of the spiking neural network, whose spiking neurons are used to build memory;
Step 4: perform the childhood learning stage
Input part of the samples of the pattern recognition task, introduce the brain's long-term depression and long-term potentiation mechanisms, and use a synaptic weight adjustment algorithm and a progressive synaptic structure adjustment algorithm so that the spiking neural network generates a structure suited to the current task;
Step 5: establish the output layer of the spiking neural network, and create synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be extracted accurately and decisions made;
Step 6: perform the precise learning stage
Input all samples of the pattern recognition task, adjust only the synaptic weights, guide the adjustment with teacher signals, and obtain after learning a spiking neural network usable for the current pattern recognition task;
Step 7: fix the learned synaptic weights and synaptic structure of the spiking neural network, and input the data to be recognized to obtain the pattern recognition result.
2. The method of claim 1, wherein the samples are images or audio, and the pattern recognition task is correspondingly image recognition or audio recognition.
3. The method of claim 1, wherein in step 1, several static convolution kernels are selected, each sample is processed by convolution and pooling, and the processed samples are normalized into the interval [0, T], each sample having L values after processing.
4. The method for pattern recognition based on a segmented progressive spiking neural network according to claim 3, wherein in step 2, the input layer is constructed from as many spiking neurons as the sample length, which convert the output of the encoding layer into a pulse sequence, and the same number of pulse generators, which induce the spiking neurons to fire; the spiking neurons are connected one-to-one with the pulse generators, and the pulse generators correspond one-to-one with the outputs of the encoding layer; the time at which any pulse generator i sends a pulse relates to the output of the encoding layer as T_spike-i = Output_i, where T_spike-i is the induction time of pulse generator i and Output_i is the encoding-layer output at the corresponding position.
5. The method according to claim 4, wherein in step 3 the memory layer is constructed from m layers of spiking neurons with n spiking neurons per layer, n and m being arbitrary positive integers; at the network initialization stage, every spiking neuron except those in the last layer randomly selects x spiking neurons of the next layer and establishes synapses to them, with 0 < x ≤ n; the n × m neurons, together with the synaptic structures and weights among them, form the basis of memory, and precise memory is formed through learning in the subsequent steps.
6. The method for pattern recognition based on a segmented progressive spiking neural network according to claim 5, wherein step 4 comprises:
Step 4.1: a discriminator is set for each memory-layer spiking neuron; if the neuron fired fewer than θ_LTD times in the past t_W, it enters the LTD state for the next t_W, and if it fired more than θ_LTP times in the past t_W, it enters the LTP state for the next t_W; a neuron entering the LTP state has its synaptic weights set to ω_LTP, large enough that the neurons its synapses point to, called postsynaptic neurons, can be activated by it; a neuron entering the LTD state has its synaptic weights set to ω_LTD, small enough that its postsynaptic neurons are hard to activate, and a spiking neuron entering the LTD state must exert an inhibitory effect on its synapses when exiting that state, the synapses possibly being pruned or having their weights attenuated;
Step 4.2: input part of the samples of the pattern recognition task; the progressive synaptic structure adjustment algorithm is: if two spiking neurons have no synaptic connection between them and they always fire within Δt of each other, a new synapse is established between the two; the synaptic weight adjustment algorithm adjusts the synaptic weights using the spike-timing-dependent plasticity principle.
7. The method for pattern recognition based on a segmented progressive spiking neural network according to claim 6, wherein the inhibitory effect exerted on synapses when exiting the LTD state in step 4.1 is as follows:
if a spiking neuron is about to exit the LTD state, each of its synapses has probability ρ1 of having its weight attenuated by a factor α, α < 1, probability ρ2 of remaining unchanged, and probability (1 − ρ1 − ρ2) of being pruned; wherein
[the formula appears only as an image in the original filing]
ρ, κ, and Ψ are constants that control the magnitude of the probability, and LTD_Count is the number of consecutive times the spiking neuron has entered the LTD state.
8. The method for pattern recognition based on a segmented progressive spiking neural network according to claim 6, wherein step 5 establishes synapses between the memory layer and the output layer by a heuristic method, comprising:
Step 5.1: using the samples used in step 4, place samples with the same label in the same group, and input each group into the encoding layer in turn;
Step 5.2: for input from the same group, any spiking neuron in the last layer of the memory layer whose firing frequency exceeds θ_OUT has a synapse established between it and the output-layer spiking neuron corresponding to the current group.
9. The method of claim 8, wherein step 6 comprises:
Step 6.1: set an exciter for each output-layer spiking neuron, which activates the neuron upon receiving a teacher signal; then shuffle all samples and input them into the spiking neural network;
Step 6.2: for a given sample, let the output-layer spiking neuron corresponding to its label be neuron j; the teacher signal of neuron j is delivered at time t_e, and the teacher signals of the remaining output-layer neurons are delivered at time t_in, where t_in is earlier than the earliest firing time in the last layer of the memory layer and t_e is the moment at which the firing frequency of the last memory layer is highest; under teacher-signal guidance, the synaptic weight adjustment algorithm of step 4 increases the weights of synapses between the memory layer and output neuron j, while the weights of memory-layer synapses onto output neurons other than j that fire before t_e decrease;
Step 6.3: repeat steps 6.1 and 6.2 several times to complete learning; after learning, the spiking neurons in the spiking neural network, together with the synaptic structures and weights among them, are regarded as the components of memory.
CN202111436510.XA (priority and filing date 2021-11-29): Method for carrying out pattern recognition based on segmented progressive pulse neural network; status: Pending; published as CN114220089A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111436510.XA (CN114220089A) | 2021-11-29 | 2021-11-29 | Method for carrying out pattern recognition based on segmented progressive pulse neural network


Publications (1)

Publication Number | Publication Date
CN114220089A | 2022-03-22

Family

ID=80698860

Family Applications (1)

Application Number | Title | Status
CN202111436510.XA | Method for carrying out pattern recognition based on segmented progressive pulse neural network | Pending (published as CN114220089A)

Country Status (1)

Country Link
CN (1) CN114220089A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666518A (en) * 1995-06-26 1997-09-09 The United States Of America As Represented By The Secretary Of The Air Force Pattern recognition by simulated neural-like networks
US20050105463A1 (en) * 2002-02-05 2005-05-19 Gustavo Deco Method for classifying the traffic dynamism of a network communication using a network that contains pulsed neurons, neuronal network and system for carrying out said method
US20140122402A1 (en) * 2011-06-30 2014-05-01 Commissariat A L'energie Atomique Et Aux Energies Alternatives Network of artificial neurons based on complementary memristive devices
US20140129498A1 (en) * 2011-06-30 2014-05-08 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for non-supervised learning in an artificial neural network based on memristive nanodevices, and artificial neural network implementing said method
US20140172762A1 (en) * 2012-09-26 2014-06-19 Centre National De La Recherche Scientifique - Cnrs Unknown
CN108985447A (en) * 2018-06-15 2018-12-11 华中科技大学 A kind of hardware pulse nerve network system
CN110210563A (en) * 2019-06-04 2019-09-06 北京大学 The study of pattern pulse data space time information and recognition methods based on Spike cube SNN
US20210034962A1 (en) * 2019-08-01 2021-02-04 International Business Machines Corporation Learning and recall in spiking neural networks
CN111639754A (en) * 2020-06-05 2020-09-08 四川大学 Neural network construction, training and recognition method and system, and storage medium
CN112232440A (en) * 2020-11-10 2021-01-15 北京理工大学 Method for realizing information memory and distinction of impulse neural network by using specific neuron groups
CN112232494A (en) * 2020-11-10 2021-01-15 北京理工大学 Method for constructing pulse neural network for feature extraction based on frequency induction
CN112288078A (en) * 2020-11-10 2021-01-29 北京理工大学 Self-learning, small sample learning and transfer learning method and system based on impulse neural network


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination