CN114549852B - Impulse neural network training method based on color antagonism and attention mechanism - Google Patents

Impulse neural network training method based on color antagonism and attention mechanism

Info

Publication number
CN114549852B
CN114549852B (application CN202210174117.6A)
Authority
CN
China
Prior art keywords
pulse
characteristic diagram
weight
feature map
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210174117.6A
Other languages
Chinese (zh)
Other versions
CN114549852A (en)
Inventor
高绍兵 (Gao Shaobing)
姚智伟 (Yao Zhiwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202210174117.6A priority Critical patent/CN114549852B/en
Publication of CN114549852A publication Critical patent/CN114549852A/en
Application granted granted Critical
Publication of CN114549852B publication Critical patent/CN114549852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a spiking neural network training method based on color antagonism and an attention mechanism, which comprises the following steps: S1, performing visual-pathway processing based on color information and contour-pathway processing based on Gabor operators to obtain a spike feature map A and a spike feature map B; S2, obtaining a fused spike feature map C; S3, introducing an attention mechanism to obtain the weight of each feature map in C, adjusting the weights using a color antagonism mechanism and spike timing-dependent plasticity, assigning different weights to different feature maps with the attention mechanism, and multiplying the obtained weight of each feature map in C by the corresponding feature map in C to obtain a new feature map. The invention can effectively combine the color information and contour information in a color input image, obtaining a spike feature map with richer information for the spiking neural network to learn.

Description

Pulse neural network training method based on color antagonism and attention mechanism
Technical Field
The invention belongs to the technical field of computer vision and image processing, relates to unsupervised training of spiking neural networks, and in particular relates to a spiking neural network training method based on color antagonism and an attention mechanism.
Background
The spiking neural network, or impulse neural network, belongs to the third generation of neural networks; compared with an artificial neural network, its neurons are spiking neurons. A spiking neuron tracks its own membrane potential and, once that potential exceeds a threshold, fires a spike that is passed along its connections to the next layer of spiking neurons; however, because the spike firing function is non-differentiable, ordinary backpropagation cannot be used to train a spiking neural network. In recent years the field of spiking neural networks has become extremely active, and its training methods fall mainly into three categories: methods that convert a trained artificial neural network into a spiking neural network, backpropagation methods based on surrogate gradients, and unsupervised learning algorithms. The algorithm most commonly used in the unsupervised category is STDP, i.e. spike timing-dependent plasticity.
Most training methods based on STDP and its variants use grayscale images as the network input, discarding the color information of the images. Even the few methods that do consider extracting and learning color information do not start from biological vision or simulate the biological visual pathway, and therefore struggle to provide good biological interpretability.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a spiking neural network training method based on color antagonism and an attention mechanism, which can effectively combine the color information and contour information in a color input image to obtain a spike feature map with richer information for the spiking neural network to learn.
The purpose of the invention is achieved through the following technical scheme: the spiking neural network training method based on color antagonism and an attention mechanism comprises the following steps:
s1, performing visual path processing based on color information and contour path processing based on a Gabor operator in parallel to respectively obtain a pulse characteristic diagram A and a pulse characteristic diagram B;
s2, obtaining a fusion pulse characteristic diagram C: splicing the pulse characteristic diagram A and the pulse characteristic diagram B end to end, and then performing pooling operation to obtain a pulse characteristic diagram C;
s3, introducing an attention mechanism to obtain the weight of each feature map in the C; adjusting the weights using a color antagonistic mechanism and pulse timing dependent plasticity; allocating different weights to different feature maps by using an attention mechanism, multiplying the weight of each feature map in the C with the corresponding feature map in the C to serve as a new feature map, and then performing convolution operation on the new feature map and a convolution kernel; the color profile in C uses color antagonism and pulse timing dependent plasticity adjusted weights, and the profile in C uses only pulse timing dependent plasticity adjusted weights.
Further, step S1 is implemented as follows: the visual pathway based on color information converts the RGB color image into the LMS color space and determines the spike firing time of each pixel according to its value, yielding spike feature map A; the specific processing is:
[Equations (1) and (2) are given as images in the original: the conversion of the image from the RGB color space to the XYZ color space, and from the XYZ color space to the LMS color space.]
The quantities appearing in them denote the matrix representations of the original image in the RGB color space, the XYZ color space and the LMS color space respectively; equations (1) and (2) represent the process of converting the original RGB image into the LMS space;
The contour pathway based on Gabor operators extracts the image contours using several Gabor operators and determines the spike firing time according to the pixel value, yielding spike feature map B; specifically, contour information in the image is extracted by convolving 4 Gabor operators of different orientations (π/8, π/4 + π/8, π/2 + π/8, 3π/4 + π/8) with the grayscale image.
Further, step S3 is implemented as follows:
The attention mechanism processing is implemented as follows: first, a convolution operation is performed on spike feature map C to obtain the post-convolution spike feature map D, and the weight of each feature map in C is computed from C and D:
[Equation (3) is given as an image in the original: the computation of the attention weight a_f.]
where a_f denotes the weight of the f-th feature map, p+ and p- denote the parameters used to enhance or suppress a feature map, t̄_pre and t̄_post denote the average spike time over all points of the feature map before and after convolution respectively, and the Sigmoid() function is used to scale (t̄_pre - t̄_post);
the obtained weight of each feature map in C is multiplied by the corresponding feature map in C to obtain a new feature map;
the color antagonism mechanism is implemented as follows:
[Equations (4) to (9) are given as images in the original: the opponent terms D_SL and D_SM and the weight changes ΔW_S1, ΔW_S2, ΔW_S, ΔW_L and ΔW_M used by the color antagonism mechanism.]
where D_SL denotes the value of the weight change associated with the S feature map and the L feature map of the LMS space, D_SM denotes the value of the weight change associated with the S feature map and the M feature map of the LMS space, and T denotes the maximum time step; the parameter ρ is used to scale the exponent term and can take any real number in [0, 3]; |Δt_SL| denotes the absolute value of the difference between the spike times at each position of the S feature map and the L feature map; |Δt_SM| denotes the absolute value of the difference between the spike times at each position of the S feature map and the M feature map;
equations (6), (7), (8) and (9) give the weight changes finally applied to the S, L and M feature maps; ΔW_S = ΔW_S1 + ΔW_S2 denotes the change finally applied to the weights of the S feature map, ΔW_L denotes the change finally applied to the weights of the L feature map, and ΔW_M denotes the change finally applied to the weights of the M feature map; α+ and α- denote a positive and a negative parameter used to control the increase and decrease of the weights; Δt_pre-post denotes the time difference between the spikes of the pre-convolution neuron and the post-convolution neuron;
Spike timing-dependent plasticity is implemented as follows:
[Equation (10) is given as an image in the original: the STDP weight change ΔW.]
where ΔW denotes the computed change of a connection weight and W denotes the current value of that connection weight; p+ and p- denote a positive and a negative parameter used to control the increase and decrease of the weight, and here take the values 0.05 and -0.015 respectively; t_pre and t_post denote the spike times of the pre-synaptic and post-synaptic neurons of the connection;
The spiking neural network is then trained with the weights adjusted by color antagonism and spike timing-dependent plasticity:
[Equation (11) is given as an image in the original: the update of the convolution kernel W_f.]
where W_f denotes the convolution kernel corresponding to the f-th feature map.
The beneficial effects of the invention are: the method can effectively train a spiking neural network and can effectively combine the color information and contour information in a color input image, obtaining a spike feature map with richer information for the spiking neural network to learn. At the same time, by simulating the way the biological visual pathway processes color information and by the attention mechanism, the classification accuracy of the spiking neural network can be effectively improved, providing a reliable idea and method for subsequent exploration of the biological visual pathway and the training of spiking neural networks.
Drawings
FIG. 1 is a flow chart of the spiking neural network training method based on color antagonism and an attention mechanism according to the present invention;
FIG. 2 shows the dataset images used in this embodiment;
FIG. 3 shows the results of the visual-pathway processing based on color information and the contour-pathway processing based on Gabor operators in this embodiment;
FIG. 4 shows the result of pooling after concatenating the two spike feature maps in this embodiment.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
As shown in FIG. 1, the spiking neural network training method based on color antagonism and an attention mechanism of the present invention comprises the following steps:
S1, performing, in parallel, visual-pathway processing based on color information and contour-pathway processing based on Gabor operators, to obtain a spike feature map A and a spike feature map B respectively.
The specific implementation is as follows: the visual pathway based on color information converts the RGB color image into the LMS color space and determines the spike firing time according to the pixel value, yielding spike feature map A; the specific processing is:
[Equations (1) and (2) are given as images in the original: the conversion of the image from the RGB color space to the XYZ color space, and from the XYZ color space to the LMS color space.]
The quantities appearing in them denote the matrix representations of the original image in the RGB color space, the XYZ color space and the LMS color space respectively; equations (1) and (2) represent the process of converting the original RGB image into the LMS space;
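For illustration only, a minimal sketch of this color-pathway conversion is given below. The patent's own conversion matrices in equations (1) and (2) appear only as images, so the sketch substitutes the standard sRGB-to-XYZ (D65) and Hunt-Pointer-Estevez XYZ-to-LMS matrices; the function and variable names are likewise illustrative.

```python
import numpy as np

# Stand-in conversion matrices: standard sRGB -> XYZ (D65) and
# Hunt-Pointer-Estevez XYZ -> LMS. The patent's exact matrices in
# equations (1)-(2) are only given as images in the original.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
M_XYZ2LMS = np.array([[ 0.38971, 0.68898, -0.07868],
                      [-0.22981, 1.18340,  0.04641],
                      [ 0.00000, 0.00000,  1.00000]])

def rgb_to_lms(img_rgb):
    """img_rgb: float array of shape (3, H, W) with values in [0, 1]."""
    c, h, w = img_rgb.shape
    flat = img_rgb.reshape(3, -1)      # (3, H*W)
    xyz = M_RGB2XYZ @ flat             # role of equation (1): RGB -> XYZ
    lms = M_XYZ2LMS @ xyz              # role of equation (2): XYZ -> LMS
    return lms.reshape(3, h, w)
```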
The contour pathway based on Gabor operators extracts the image contours using several Gabor operators and determines the spike firing time according to the pixel value, yielding spike feature map B; specifically, contour information in the image is extracted by convolving 4 Gabor operators of different orientations (π/8, π/4 + π/8, π/2 + π/8, 3π/4 + π/8) with the grayscale image.
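A sketch of the Gabor contour pathway under the same caveat: only the four orientations π/8, π/4 + π/8, π/2 + π/8 and 3π/4 + π/8 come from the text, while the kernel size and the remaining Gabor parameters below are illustrative assumptions.

```python
import numpy as np
import cv2

# Orientations taken from the text; kernel size, sigma, wavelength and
# aspect ratio are illustrative guesses (not given in the patent).
THETAS = [np.pi / 8, np.pi / 4 + np.pi / 8, np.pi / 2 + np.pi / 8, 3 * np.pi / 4 + np.pi / 8]

def contour_pathway(img_gray):
    """img_gray: float array (H, W); returns (4, H-2, W-2) contour responses."""
    maps = []
    for theta in THETAS:
        kern = cv2.getGaborKernel(ksize=(3, 3), sigma=1.0, theta=theta,
                                  lambd=2.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(img_gray.astype(np.float32), cv2.CV_32F, kern)
        maps.append(np.abs(resp[1:-1, 1:-1]))   # crop the border, mimicking a 'valid' 3x3 convolution
    return np.stack(maps)                        # e.g. (4, 62, 62) for a (64, 64) input
```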
the method for determining the time of releasing the pulse according to the pixel value is as follows: using delay coding, each point calculates its pulse time. The larger the pixel value of a dot, the earlier the dot releases the pulse. The time interval here ranges from 0, 15, i.e. closer to 0 and vice versa. The specific method comprises the following steps: and sequencing the pixel values of the points, equally dividing the values into T time step lengths, generating a pulse characteristic graph which has the same size as the original graph and T times, and releasing pulses from the points with the maximum pixel values on the first graph. In this example, for an image with size (62, 62), there are 62 × 62=3844 pixels, and assuming that the time step length is set to 15, there are new 3844/15=256 pixels that release pulses at each time step (the larger the pixel value, the earlier the pulse is released), that is, the value is set to 1, and the remaining points are set to 0.
The finally generated spike feature map has size (15, 62, 62). The first frame has size (62, 62), with 256 points set to 1 and the rest set to 0; on the second frame (62, 62), not only do the 256 points from the first frame remain 1, but a further 256 new points (whose pixel values are smaller than those of the first 256) are also set to 1, with the remaining points set to 0; and so on.
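A minimal sketch of the latency coding just described, assuming T = 15 time steps and that ties between equal pixel values are broken by sort order; the function and variable names are illustrative.

```python
import numpy as np

def latency_encode(feature_map, T=15):
    """Convert an (H, W) intensity map into a (T, H, W) binary spike tensor.
    Larger values fire earlier; once a point has fired, its entry stays 1 on
    later frames, matching the cumulative description in the text."""
    h, w = feature_map.shape
    order = np.argsort(-feature_map.ravel())        # pixel indices from largest to smallest value
    time_step = (np.arange(h * w) * T) // (h * w)   # split the ranking into T roughly equal groups
    fire_time = np.empty(h * w, dtype=np.int64)
    fire_time[order] = time_step                    # firing time of every pixel
    spikes = np.zeros((T, h * w), dtype=np.uint8)
    for t in range(T):
        spikes[t][fire_time <= t] = 1               # cumulative: earlier spikes stay set
    return spikes.reshape(T, h, w)

# For a (62, 62) map and T = 15, about 3844 / 15 ≈ 256 new points fire per step.
```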
This embodiment uses an RGB image of size (3, 64, 64), shown in FIG. 2, taken from the ETH-80 image dataset; only two of the eight categories are shown. In step S1 (equations (1)-(2)), the visual-pathway processing based on color information is applied to an RGB image of the category "horse": the image is first cropped to an RGB image of size (3, 62, 62) and then converted into the LMS space; with the total number of time steps set to 15, a spike feature map A of size (15, 3, 62, 62) is generated according to the pixel values, as shown in FIG. 3(a). The contour pathway based on Gabor operators converts the RGB image of size (3, 64, 64) into a grayscale image of size (64, 64), extracts contours in different directions with 4 Gabor operators of different orientations to produce a feature map of size (4, 62, 62), and then converts it into a spike feature map B of size (15, 4, 62, 62), as shown in FIG. 3(b).
S2, obtaining the fused spike feature map C: spike feature map A and spike feature map B are concatenated end to end to obtain a spike feature map with richer information, and a pooling operation is then applied to reduce the size of the feature map and gain invariance to small translations and deformations, yielding spike feature map C. Spike feature maps A and B are concatenated into a fused spike feature map of size (15, 7, 62, 62), which is then pooled to obtain the fused spike feature map C of size (15, 7, 30, 30), as shown in FIG. 4.
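A sketch of the fusion and pooling step. The patent does not state the pooling type or window, so the max pooling with a window of 4 and stride of 2 below (which maps 62 to 30, matching the embodiment) is an assumption; names are illustrative.

```python
import numpy as np

def fuse_and_pool(spike_A, spike_B, win=4, stride=2):
    """spike_A: (T, 3, H, W) color spikes; spike_B: (T, 4, H, W) contour spikes.
    Concatenate along the feature axis, then max-pool each frame spatially.
    A window of 4 with stride 2 maps 62 -> 30 as in the embodiment, but the
    actual pooling parameters are not given in the text."""
    fused = np.concatenate([spike_A, spike_B], axis=1)           # (T, 7, H, W)
    T, F, H, W = fused.shape
    out_h = (H - win) // stride + 1
    out_w = (W - win) // stride + 1
    pooled = np.zeros((T, F, out_h, out_w), dtype=fused.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = fused[:, :, i * stride:i * stride + win, j * stride:j * stride + win]
            pooled[:, :, i, j] = patch.max(axis=(2, 3))
    return pooled                                                 # e.g. (15, 7, 30, 30)
```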
S3, introducing an attention mechanism to obtain the weight of each feature map in C; adjusting the weights using a color antagonism mechanism and spike timing-dependent plasticity; assigning different weights to different feature maps with the attention mechanism, multiplying the obtained weight of each feature map in C by the corresponding feature map in C to form a new feature map, and then convolving the new feature map with the convolution kernels; the color feature maps in C have their weights adjusted with both the color antagonism mechanism and spike timing-dependent plasticity, while the contour feature maps in C have their weights adjusted with spike timing-dependent plasticity only.
The specific implementation is as follows:
The attention mechanism processing is implemented as follows: first, a convolution operation is performed on spike feature map C to obtain the post-convolution spike feature map D, and the weight of each feature map in C is computed from C and D:
[Equation (3) is given as an image in the original: the computation of the attention weight a_f.]
where a_f denotes the weight of the f-th feature map, and p+ and p- denote the parameters used to enhance or suppress a feature map; in this embodiment p+ and p- take the values 0.005 and -0.001 respectively. t̄_pre and t̄_post denote the average spike time over all points of the feature map before and after convolution respectively, and the Sigmoid() function is used to scale (t̄_pre - t̄_post);
the obtained weight of each feature map in C is multiplied by the corresponding feature map in C to form a new feature map, which is then convolved with the convolution kernels;
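Because equation (3) is reproduced only as an image, the sketch below shows just one plausible reading of the described computation: the average spike time of each feature map before and after convolution is compared, the difference is passed through a Sigmoid, and p+ / p- modulate the resulting weight. The exact functional form, and the way D is summarized into a single average, are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weights(C, D, p_plus=0.005, p_minus=-0.001, T=15):
    """C: fused spike tensor (T, F, H, W); D: post-convolution spike tensor (T, N, h, w).
    Returns one weight per feature map of C.  Averaging the first-spike times and
    scaling the difference with Sigmoid() follows the text; the final combination
    '1 + p * Sigmoid(...)' is only an illustrative guess at equation (3)."""
    def mean_first_spike_time(S):
        # first frame index at which each point fires; points that never fire count as T
        first = np.where(S.any(axis=0), S.argmax(axis=0), T)
        return first.mean(axis=(1, 2))                 # one average time per feature map
    t_pre = mean_first_spike_time(C)                   # shape (F,)
    t_post = mean_first_spike_time(D).mean()           # scalar summary of D (assumption)
    scaled = sigmoid(t_pre - t_post)
    p = np.where(t_pre > t_post, p_plus, p_minus)      # enhance or suppress each map
    return 1.0 + p * scaled                            # shape (F,), one weight per feature map
```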
The color antagonism mechanism is implemented as follows:
[Equations (4) to (9) are given as images in the original: the opponent terms D_SL and D_SM and the weight changes ΔW_S1, ΔW_S2, ΔW_S, ΔW_L and ΔW_M used by the color antagonism mechanism.]
where D_SL denotes the value of the weight change associated with the S feature map and the L feature map of the LMS space, D_SM denotes the value of the weight change associated with the S feature map and the M feature map of the LMS space, and T denotes the maximum time step; the parameter ρ is used to scale the exponent term and can take any real number in [0, 3]; |Δt_SL| denotes the absolute value of the difference between the spike times at each position of the S feature map and the L feature map; |Δt_SM| denotes the absolute value of the difference between the spike times at each position of the S feature map and the M feature map;
equations (6), (7), (8) and (9) give the weight changes finally applied to the S, L and M feature maps; ΔW_S = ΔW_S1 + ΔW_S2 denotes the change finally applied to the weights of the S feature map, ΔW_L denotes the change finally applied to the weights of the L feature map, and ΔW_M denotes the change finally applied to the weights of the M feature map; α+ and α- denote a positive and a negative parameter used to control the increase and decrease of the weights; Δt_pre-post denotes the time difference between the spikes of the pre-convolution neuron and the post-convolution neuron. In this embodiment, the value of ρ is given as an image in the original, and α+ and α- take the values 0.1 and -0.1 respectively.
ΔW_S, ΔW_L and ΔW_M correspond to the changes on the convolution kernels of the S, L and M feature maps of the LMS space respectively, and each is added back to the corresponding convolution kernel; that is, ΔW_L is added back to the convolution kernel corresponding to the L feature map (i.e., the kernel that is convolved with the L feature map in two dimensions during the convolution stage), and the other two are handled in the same way.
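The opponent terms D_SL and D_SM in equations (4)-(9) are reproduced only as images, so the sketch below captures just the bookkeeping described in the text: |Δt_SL| and |Δt_SM| are formed from the spike times of the S, L and M maps, combined with α+ / α- into ΔW_S = ΔW_S1 + ΔW_S2, ΔW_L and ΔW_M, and each change is added back to the kernel convolved with the corresponding feature map. The opponent_term function and the sign handling are placeholders for the unpublished expressions.

```python
import numpy as np

def opponent_term(dt_abs, T=15, rho=1.0):
    """Placeholder for D_SL / D_SM in equations (4)-(5); the real expressions are
    only given as images.  An exponential decay in |Δt| / T is used here purely
    for illustration, with rho scaling the exponent as described."""
    return np.exp(-rho * dt_abs / T)

def color_antagonism_updates(t_S, t_L, t_M, alpha_plus=0.1, alpha_minus=-0.1, T=15):
    """t_S, t_L, t_M: per-position spike times of the S, L and M feature maps.
    Returns the weight changes to be added back to the S, L and M kernels."""
    d_SL = opponent_term(np.abs(t_S - t_L), T)    # S-L opponent pair
    d_SM = opponent_term(np.abs(t_S - t_M), T)    # S-M opponent pair
    # How alpha+/alpha- enter the formulas is part of the unpublished equations (6)-(9);
    # here the positive parameter strengthens and the negative one weakens, as described.
    dW_S1 = alpha_plus * d_SL.mean()
    dW_S2 = alpha_plus * d_SM.mean()
    dW_S = dW_S1 + dW_S2                          # structure of ΔW_S = ΔW_S1 + ΔW_S2
    dW_L = alpha_minus * d_SL.mean()              # change added back to the L kernel
    dW_M = alpha_minus * d_SM.mean()              # change added back to the M kernel
    return dW_S, dW_L, dW_M
```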
Spike timing-dependent plasticity is implemented as follows:
[Equation (10) is given as an image in the original: the STDP weight change ΔW.]
where ΔW denotes the computed change of a connection weight and W denotes the current value of that connection weight; p+ and p- denote a positive and a negative parameter used to control the increase and decrease of the weight, and here take the values 0.05 and -0.015 respectively; t_pre and t_post denote the times at which the pre-synaptic and post-synaptic neurons of the connection fire their spikes;
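Equation (10) is also reproduced only as an image; the sketch below uses the simplified, weight-dependent STDP rule commonly paired with parameter values such as p+ = 0.05 and p- = -0.015 in unsupervised spiking-network training, as an assumed stand-in for the patent's exact expression.

```python
def stdp_update(w, t_pre, t_post, p_plus=0.05, p_minus=-0.015):
    """Simplified STDP: potentiate when the pre-synaptic spike does not come after
    the post-synaptic spike, otherwise depress.  The multiplicative w * (1 - w)
    factor keeps weights in [0, 1]; this exact form is an assumption, since
    equation (10) is only shown as an image in the original."""
    if t_pre <= t_post:
        dw = p_plus * w * (1.0 - w)
    else:
        dw = p_minus * w * (1.0 - w)
    return w + dw
```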
The spiking neural network is then trained with the weights adjusted by color antagonism and spike timing-dependent plasticity:
[Equation (11) is given as an image in the original: the update of the convolution kernel W_f.]
W_f denotes the weight value, i.e. the convolution kernel corresponding to the f-th feature map; the W_f on the right-hand side is the original weight value and the W_f on the left-hand side is the weight value after training on the current image. That is: the convolution kernels corresponding to the extracted-contour feature maps are trained with STDP only (adjusted once using STDP), while the convolution kernels corresponding to the three LMS feature maps are adjusted with both STDP and color antagonism, and these two adjustments have no fixed order.
The weights of the spiking neural network are adjusted with the above method until the preset number of adjustments is reached and training ends, yielding the trained spiking neural network.
S4, outputting the convolution result to a classifier for classification: the convolution result is transformed into a spike sequence and handed to the classifier. First, the attention mechanism (equation (3)) is used to compute a feature weight of size (7, 1, 1), with one value per feature map; the fused spike feature map C of size (15, 7, 30, 30) is then multiplied elementwise by this feature weight of size (7, 1, 1); the result is convolved with N kernels of size (7, 30, 30) to obtain a post-convolution spike feature map of size (15, N, 1); and the weights are updated with the weight-change rules of S3.
After the network training is completed, step S4 converts the convolution result of S3, of size (15, N, 1), into a spike sequence tensor of length 15 × N and outputs it to a support vector machine for classification.
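A sketch of this classification stage under stated assumptions: the per-image convolution outputs are flattened into 15 × N-dimensional vectors and passed to a support vector machine; scikit-learn's SVC with a linear kernel is used here, although the patent does not name a specific SVM implementation or kernel.

```python
import numpy as np
from sklearn.svm import SVC

def classify_spike_outputs(train_outputs, train_labels, test_outputs):
    """train_outputs / test_outputs: per-image convolution results of shape
    (num_images, 15, N, 1) produced by the trained network; labels: class indices."""
    X_train = np.asarray(train_outputs).reshape(len(train_outputs), -1)  # vectors of length 15 * N
    X_test = np.asarray(test_outputs).reshape(len(test_outputs), -1)
    clf = SVC(kernel='linear')          # linear kernel is an assumption
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)
```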
The simple example above mainly illustrates the processing of a single image by the trained spiking neural network. In actual use, the training set is taken as input (one image is input and processed in the above manner, and the next image is input after the weights have been updated); training is stopped after 30 passes over the training set; after training is finished, the test set images are input one by one to the trained network in the same way, and the outputs of the network are passed to the classifier.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to the specific embodiments and examples described. Those skilled in the art may, in light of this disclosure, make various modifications and changes without departing from the scope of the invention.

Claims (1)

1. A spiking neural network training method based on color antagonism and an attention mechanism, characterized by comprising the following steps:
s1, performing visual path processing based on color information and contour path processing based on a Gabor operator in parallel to respectively obtain a pulse characteristic diagram A and a pulse characteristic diagram B; the specific implementation method comprises the following steps: converting the RGB color image into an LMS color space based on a visual access of color information, and determining the time for releasing the pulse according to the pixel value to obtain a pulse characteristic diagram A; the specific treatment method comprises the following steps:
[Equations (1) and (2) are given as images in the original: the conversion of the image from the RGB color space to the XYZ color space, and from the XYZ color space to the LMS color space.]
The quantities appearing in them denote the matrix representations of the original image in the RGB color space, the XYZ color space and the LMS color space respectively; equations (1) and (2) represent the process of converting the original RGB image into the LMS space;
the contour pathway based on Gabor operators extracts the image contours using several Gabor operators and determines the spike firing time according to the pixel value, yielding spike feature map B; specifically, contour information in the image is extracted by convolving 4 Gabor operators of different orientations (π/8, π/4 + π/8, π/2 + π/8, 3π/4 + π/8) with the grayscale image;
s2, obtaining a fusion pulse characteristic diagram C: splicing the pulse characteristic diagram A and the pulse characteristic diagram B end to end, and then performing pooling operation to obtain a pulse characteristic diagram C;
s3, introducing an attention mechanism to obtain the weight of each feature map in the C; adjusting the weights using a color antagonistic mechanism and pulse timing dependent plasticity; allocating different weights to different feature maps by using an attention mechanism, multiplying the weight of each feature map in the C with the corresponding feature map in the C to serve as a new feature map, and then performing convolution operation on the new feature map and a convolution kernel; the color feature graph in the graph C uses a color antagonism mechanism and pulse timing-dependent plasticity to adjust the weight, and the contour feature graph in the graph C only uses the pulse timing-dependent plasticity to adjust the weight; the specific implementation method comprises the following steps:
the attention mechanism processing is carried out, and the realization method comprises the following steps: firstly, performing convolution operation on the pulse characteristic diagram C to obtain a pulse characteristic diagram D after convolution, and calculating the weight of each characteristic diagram in the C according to the C and the D:
[Equation (3) is given as an image in the original: the computation of the attention weight a_f.]
where a_f denotes the weight of the f-th feature map, p+ and p- denote the parameters used to enhance or suppress a feature map, t̄_pre and t̄_post denote the average spike time over all points of the feature map before and after convolution respectively, and the Sigmoid() function is used to scale (t̄_pre - t̄_post);
the obtained weight of each feature map in C is multiplied by the corresponding feature map in C to obtain a new feature map;
the color antagonism mechanism is implemented as follows:
[Equations (4) to (9) are given as images in the original: the opponent terms D_SL and D_SM and the weight changes ΔW_S1, ΔW_S2, ΔW_S, ΔW_L and ΔW_M used by the color antagonism mechanism.]
where D_SL denotes the value of the weight change associated with the S feature map and the L feature map of the LMS space, D_SM denotes the value of the weight change associated with the S feature map and the M feature map of the LMS space, and T denotes the maximum time step; the parameter ρ is used to scale the exponent term and can take any real number in [0, 3]; |Δt_SL| denotes the absolute value of the difference between the spike times at each position of the S feature map and the L feature map; |Δt_SM| denotes the absolute value of the difference between the spike times at each position of the S feature map and the M feature map;
equations (6), (7), (8) and (9) give the weight changes finally applied to the S, L and M feature maps; ΔW_S = ΔW_S1 + ΔW_S2 denotes the change finally applied to the weights of the S feature map, ΔW_L denotes the change finally applied to the weights of the L feature map, and ΔW_M denotes the change finally applied to the weights of the M feature map; α+ and α- denote a positive and a negative parameter used to control the increase and decrease of the weights; Δt_pre-post denotes the time difference between the spikes of the pre-convolution neuron and the post-convolution neuron;
spike timing-dependent plasticity is implemented as follows:
[Equation (10) is given as an image in the original: the STDP weight change ΔW.]
where ΔW denotes the computed change of a connection weight and W denotes the current value of that connection weight; p+ and p- denote a positive and a negative parameter used to control the increase and decrease of the weight, and take the values 0.05 and -0.015 respectively; t_pre and t_post denote the spike times of the pre-synaptic and post-synaptic neurons of the connection;
the spiking neural network is then trained with the weights adjusted by color antagonism and spike timing-dependent plasticity:
[Equation (11) is given as an image in the original: the update of the convolution kernel W_f.]
where W_f denotes the convolution kernel corresponding to the f-th feature map.
CN202210174117.6A 2022-02-24 2022-02-24 Impulse neural network training method based on color antagonism and attention mechanism Active CN114549852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210174117.6A CN114549852B (en) 2022-02-24 2022-02-24 Impulse neural network training method based on color antagonism and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210174117.6A CN114549852B (en) 2022-02-24 2022-02-24 Impulse neural network training method based on color antagonism and attention mechanism

Publications (2)

Publication Number Publication Date
CN114549852A CN114549852A (en) 2022-05-27
CN114549852B true CN114549852B (en) 2023-04-18

Family

ID=81677813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210174117.6A Active CN114549852B (en) 2022-02-24 2022-02-24 Impulse neural network training method based on color antagonism and attention mechanism

Country Status (1)

Country Link
CN (1) CN114549852B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555523A (en) * 2019-07-23 2019-12-10 中建三局智能技术有限公司 short-range tracking method and system based on impulse neural network
CN112633497A (en) * 2020-12-21 2021-04-09 中山大学 Convolutional pulse neural network training method based on reweighted membrane voltage
CN113111758A (en) * 2021-04-06 2021-07-13 中山大学 SAR image ship target identification method based on pulse neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9367798B2 (en) * 2012-09-20 2016-06-14 Brain Corporation Spiking neuron network adaptive control apparatus and methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555523A (en) * 2019-07-23 2019-12-10 中建三局智能技术有限公司 short-range tracking method and system based on impulse neural network
CN112633497A (en) * 2020-12-21 2021-04-09 中山大学 Convolutional pulse neural network training method based on reweighted membrane voltage
CN113111758A (en) * 2021-04-06 2021-07-13 中山大学 SAR image ship target identification method based on pulse neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mantas Lukoševičius et al. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 2009, 127-149. *
Na Guo et al. A Neurally Inspired Pattern Recognition Approach with Latency-Phase Encoding and Precise-Spike-Driven Rule in Spiking Neural Network. 2017 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics, full text. *
Hu Weitai. Spiking neural network simulation of the color visual pathway and feature extraction. China Master's Theses Full-text Database (Information Science and Technology), 2020, I138-2199. *

Also Published As

Publication number Publication date
CN114549852A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN108717568B (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
WO2022252272A1 (en) Transfer learning-based method for improved vgg16 network pig identity recognition
CN110414377B (en) Remote sensing image scene classification method based on scale attention network
CN108520206B (en) Fungus microscopic image identification method based on full convolution neural network
CN112614077B (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN108665005B (en) Method for improving CNN-based image recognition performance by using DCGAN
CN107358626A (en) A kind of method that confrontation network calculations parallax is generated using condition
CN108629370B (en) Classification recognition algorithm and device based on deep belief network
CN110781897A (en) Semantic edge detection method based on deep learning
CN112307714A (en) Character style migration method based on double-stage deep network
CN112818764A (en) Low-resolution image facial expression recognition method based on feature reconstruction model
CN113554599B (en) Video quality evaluation method based on human visual effect
CN110852935A (en) Image processing method for human face image changing with age
CN114581560A (en) Attention mechanism-based multi-scale neural network infrared image colorizing method
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN110598737B (en) Online learning method, device, equipment and medium of deep learning model
CN109508640A (en) A kind of crowd's sentiment analysis method, apparatus and storage medium
CN109934835B (en) Contour detection method based on deep strengthening network adjacent connection
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN109284765A (en) The scene image classification method of convolutional neural networks based on negative value feature
CN114549852B (en) Impulse neural network training method based on color antagonism and attention mechanism
Wu et al. Remote sensing image colorization based on multiscale SEnet GAN
CN112270404A (en) Detection structure and method for bulge defect of fastener product based on ResNet64 network
CN115018729B (en) Content-oriented white box image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant