CN106560848A - Novel neural network model for simulating biological bidirectional cognition capability, and training method - Google Patents
- Publication number
- CN106560848A (application number CN201610891903.2A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- positive
- negative
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a novel neural network model for simulating biological bidirectional cognitive ability, and a training method. The model consists of a positive neural network and a negative neural network. The positive neural network simulates the forward cognitive process from input to output, and the negative neural network simulates the reverse cognitive process from output to input; the two network structures are symmetric and share their weights, the corresponding connection weight matrices of the positive and negative networks being transposes of each other. By building this coordinated structure of structurally symmetric, weight-sharing positive and negative neural networks, the invention achieves the simulation of biological bidirectional cognitive ability. A negative learning process is introduced into the standard BP algorithm, and a novel neural network training method is proposed.
Description
Technical field
The present invention relates to the field of artificial neural networks, and in particular to a novel neural network model for simulating biological bidirectional cognitive ability and a corresponding training method.
Background technology
A neural network is an information processing system that simulates the structure and function of the human brain. It consists mainly of artificial neurons and a network structure: the artificial neurons simulate the information processing of biological neurons, the network structure simulates the connection patterns of neurons in the biological nervous system, and the connection weights and biases store the corresponding synaptic connection states. As an active, cutting-edge interdisciplinary field, neural networks have become a research hotspot in machine learning, artificial intelligence, cognitive science, neurophysiology, nonlinear dynamics, and related areas.
(1) Artificial Neural Network Structures
After many years of development, hundreds of neural network models have been proposed, providing important impetus to neural network research. Representative models include the perceptron, the adaptive linear neuron, the cerebellar model articulation controller, back-propagation (BP) neural networks, adaptive resonance theory, brain-state-in-a-box (BSB) networks, the neocognitron, self-organizing feature maps, Hopfield networks, the Boltzmann machine, bidirectional associative memory, counter-propagation networks, and deep neural networks. These models have achieved good results in simulating human cognitive behavior and have promoted the rapid development of neural networks in brain-inspired intelligence research. However, in simulating human cognitive behavior, the construction strategy of most neural network models concentrates on extracting cause-to-effect cognitive relations from training data, i.e., simulating the human "forward cognitive process". Biological cognition, by contrast, is often bidirectional: organisms possess both the forward cognitive ability of inferring the effect from the cause and the reverse cognitive ability of inferring the cause from the effect. Building neural network models that target this bidirectional cognitive ability has rarely been mentioned in previous research.
(2) Neural network training methods
Another research hotspot in the neural network field is the training method of the network, also called the learning method. Broadly, neural network learning can be divided into four types: supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning. Since its proposal in the mid-1980s, the BP algorithm, as the most prominent supervised learning algorithm, has greatly promoted the development of neural networks and become a milestone in their history; when neural networks are used in practical tasks, they are mostly trained with the BP algorithm. Over nearly thirty years of development, research on the BP algorithm has made significant progress on issues such as network structure and parameter optimization. In particular, to address the classical BP algorithm's slow convergence, tendency to fall into local optima, and poor classification generalization, many improved learning methods have been proposed. For the slow convergence of BP, the improvements fall mainly into two directions: new parameter-adjustment strategies and new learning methods. Parameter adjustment dynamically tunes parameters such as the learning rate and step size during training to accelerate convergence, e.g., variable-learning-rate algorithms, variable-momentum algorithms, and variable-gradient algorithms. New learning methods mainly introduce optimization theory and techniques on top of the original BP algorithm to accelerate convergence, such as quasi-Newton algorithms with high-order convergence, Hessian-free algorithms, and fast learning algorithms such as the regularized extreme learning machine (RELM). For the poor generalization of BP-trained networks, proposed remedies include optimal weight initialization, weight penalties, weight elimination, combining neural networks with bio-inspired optimization algorithms such as genetic algorithms (GA), and combining principal component analysis with neural networks. Each of these improvements has its own strengths, but considerable room remains for improving the balance between convergence speed and generalization. Moreover, these methods all optimize on the basis of gradient descent (possibly using second-order information from the Hessian matrix) and therefore depend to some extent on the initial values. Seeking new learning methods with fast convergence and good generalization thus remains a major issue in BP algorithm research.
In summary, as an effective tool for brain-inspired intelligence research, existing neural network research has only just begun to form a system. The network structures adopted at the current stage are far less complex than biological neural networks and provide only a primary simulation of the information processing of the nervous system; there is still great room for improvement in drawing lessons from brain science and neuro-cognitive science to develop theoretically more powerful brain-inspired computational models, giving neural networks stronger abilities in environment perception, data understanding, and inductive decision-making. Biological cognition is often bidirectional, possessing both the forward cognitive ability of inferring the effect from the cause and the reverse cognitive ability of inferring the cause from the effect, yet the simulation of this bidirectional cognitive ability has rarely been mentioned in previous neural network research.
In addition, a prominent and critical property of neural networks is "learning". Broadly, neural network learning can be divided into four types: supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning. As the most prominent supervised learning algorithm, the BP algorithm has, since its proposal in the mid-1980s, greatly promoted the development of neural networks and become a milestone in their history; in practical tasks, neural networks are mostly trained with it. However, the BP algorithm has always suffered from slow convergence, a tendency to fall into local optima, and poor classification generalization, and although researchers have proposed many improved algorithms, no unified solution has ever been obtained.
Content of the invention
To solve the above problems, the invention provides a novel neural network model and training method for simulating biological bidirectional cognitive ability: it establishes a mutual learning neural network model capable of simulating the biological "bidirectional cognitive ability", and proposes a mutual learning neural network training method that addresses the slow convergence and poor generalization ability of traditional neural network training methods.
To achieve the above object, the technical scheme adopted by the present invention is as follows:
The novel neural network model for simulating biological bidirectional cognitive ability consists of a positive neural network and a negative neural network. The positive neural network simulates the forward cognitive process from input to output, and the negative neural network simulates the reverse cognitive process from output to input; the two network structures are symmetric, and the weights are shared.
Here, the positive neural network (Positive Neural Network) is a classical feedforward neural network with one hidden layer, responsible for learning the mapping model from the input space to the output space. Specifically, let the number of input-layer neurons of the positive neural network be r, with input vector x = [x1, x2, …, xr]^T ∈ R^(r×1); let the number of output-layer neurons be o, with output vector ŷ = [ŷ1, ŷ2, …, ŷo]^T ∈ R^(o×1); and let the number of hidden neurons be p. Its network structure is shown in the positive-network part of Fig. 1. The connection weight matrix between the input layer and the hidden layer is Pw1 ∈ R^(p×r), with bias term Pb1 ∈ R^(p×1); the connection weight matrix between the hidden layer and the output layer is Pw2 ∈ R^(o×p), with bias term Pb2 ∈ R^(o×1).
The negative neural network (Negative Neural Network) is structurally symmetric to the positive neural network and is responsible for learning the mapping model from the output space to the input space. Specifically, let the number of input-layer neurons of the negative neural network be o, with input vector y = [y1, y2, …, yo]^T ∈ R^(o×1); let the number of output-layer neurons be r, with output vector x̂ = [x̂1, x̂2, …, x̂r]^T ∈ R^(r×1); and let the number of hidden neurons be p. Its network structure is shown in the negative-network part of Fig. 1. The connection weight matrix between the input layer and the hidden layer is Nw1 ∈ R^(p×o), with bias term Nb1 ∈ R^(p×1); the connection weight matrix between the hidden layer and the output layer is Nw2 ∈ R^(r×p), with bias term Nb2 ∈ R^(r×1).
The positive neural network and the negative neural network are structurally symmetric. From the connection structure of the networks, the corresponding connection weight matrices of the positive and negative networks are transposes of each other, as shown below: Pw1 = (Nw2)^T ∈ R^(p×r), Pw2 = (Nw1)^T ∈ R^(o×p).
During network training, the originally independent positive and negative neural networks are integrated by sharing connection weights, and the connection weights of the network are trained jointly.
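The symmetric structure and transposed weight sharing described above can be illustrated with a small NumPy sketch. The dimensions, the sigmoid activation, and the function names F and G are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical layer sizes for illustration (not specified in the patent).
r, p, o = 4, 8, 3          # input, hidden, and output layer sizes

rng = np.random.default_rng(0)

# Positive network parameters: Pw1 in R^(p x r), Pw2 in R^(o x p).
Pw1 = rng.standard_normal((p, r))
Pb1 = np.zeros((p, 1))
Pw2 = rng.standard_normal((o, p))
Pb2 = np.zeros((o, 1))

# Negative network weights are obtained by transposition (biases stay
# independent), so Pw1 = (Nw2)^T and Pw2 = (Nw1)^T hold by construction.
Nw1 = Pw2.T                # shape (p, o)
Nw2 = Pw1.T                # shape (r, p)
Nb1 = np.zeros((p, 1))
Nb2 = np.zeros((r, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def F(x):
    """Positive network: input space R^(r x 1) -> output space R^(o x 1)."""
    h = sigmoid(Pw1 @ x + Pb1)
    return sigmoid(Pw2 @ h + Pb2)

def G(y):
    """Negative network: output space R^(o x 1) -> input space R^(r x 1)."""
    h = sigmoid(Nw1 @ y + Nb1)
    return sigmoid(Nw2 @ h + Nb2)

x = rng.standard_normal((r, 1))
y_hat = F(x)               # forward cognition: cause -> effect
x_hat = G(y_hat)           # reverse cognition: effect -> cause
```

Because the two weight sets are the same matrices viewed through transposition, updating one network's weights during training immediately determines the other's.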
The present invention also provides a training method for the above novel neural network model for simulating biological bidirectional cognitive ability. Let the total number of training samples be m, with the input and output of the j-th sample being xj and yj respectively; let the computed outputs of the positive and negative neural networks be F(x) and G(y) respectively; let the sub-batch mean errors during training of the positive and negative networks be Ep and En respectively; let the connection weights of the positive and negative networks be Pw and Nw respectively; let the learning rate be a and the iteration count be k. The training method comprises the following steps:
Step 1: Initialize the network structure and randomly initialize the weights of the positive neural network;
Step 2: Randomly shuffle all the training samples to rearrange their order; with s samples per group (batch), divide the samples evenly into t sub-batches, t = ⌈m/s⌉, where ⌈·⌉ denotes rounding up;
Step 3: Compute the positive neural network sub-batch mean error Ep over the current sub-batch (the mean error between the labels yj and the network outputs F(xj));
Step 4: Update the positive neural network connection weights by gradient descent, Pw(k+1) = Pw(k) − a·∂Ep/∂Pw;
Step 5: Repeat steps 3-4 for all t sub-batches;
Step 6: Assign the transpose of the positive network connection weights to the negative network: Nw(k+1) = [Pw(k+1)]^T;
Step 7: Compute the negative neural network sub-batch mean error En over the current sub-batch (the mean error between the data xj and the network outputs G(yj));
Step 8: Update the negative neural network connection weights by gradient descent, Nw(k+1) = Nw(k) − a·∂En/∂Nw;
Step 9: Repeat steps 7-8 for all t sub-batches;
Step 10: Assign the transpose of the negative network connection weights back to the positive network: Pw(k+1) = [Nw(k+1)]^T;
Step 11: The model completes one iteration; judge whether the convergence requirement is met according to the error results and the iteration count k. If so, the network finishes training; otherwise, repeat steps 2-10.
Here, steps 3-5 constitute the positive learning phase, and steps 7-9 constitute the negative learning phase.
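The eleven steps above can be sketched as a NumPy training loop. This is a hedged illustration rather than the patent's reference implementation: the sigmoid activation, squared-error loss, and the explicit BP gradient formulas are standard choices assumed here, since the patent's formula images are not reproduced in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mutual(X, Y, p=16, a=0.5, s=8, iters=100):
    """Alternating positive/negative training sketch.

    X: (m, r) input data, Y: (m, o) output labels. One hidden layer of
    size p, learning rate a, sub-batch size s. Weights are shared by
    transposition between the two phases; biases stay independent.
    """
    m, r = X.shape
    o = Y.shape[1]
    # Step 1: randomly initialize the positive network only.
    Pw1 = rng.standard_normal((p, r)) * 0.1
    Pw2 = rng.standard_normal((o, p)) * 0.1
    Pb1 = np.zeros(p); Pb2 = np.zeros(o)
    Nb1 = np.zeros(p); Nb2 = np.zeros(r)
    for k in range(iters):
        # Step 2: shuffle and split into t = ceil(m / s) sub-batches.
        idx = rng.permutation(m)
        batches = np.array_split(idx, int(np.ceil(m / s)))
        # Steps 3-5: positive learning phase (data X -> labels Y).
        for b in batches:
            x, y = X[b], Y[b]
            h = sigmoid(x @ Pw1.T + Pb1)
            f = sigmoid(h @ Pw2.T + Pb2)
            d2 = (f - y) * f * (1 - f)           # output-layer delta
            d1 = (d2 @ Pw2) * h * (1 - h)        # hidden-layer delta
            Pw2 -= a * d2.T @ h / len(b); Pb2 -= a * d2.mean(0)
            Pw1 -= a * d1.T @ x / len(b); Pb1 -= a * d1.mean(0)
        # Step 6: hand the weights to the negative net by transposition.
        Nw1, Nw2 = Pw2.T.copy(), Pw1.T.copy()
        # Steps 7-9: negative learning phase (labels Y -> data X).
        for b in batches:
            y, x = Y[b], X[b]
            h = sigmoid(y @ Nw1.T + Nb1)
            g = sigmoid(h @ Nw2.T + Nb2)
            d2 = (g - x) * g * (1 - g)
            d1 = (d2 @ Nw2) * h * (1 - h)
            Nw2 -= a * d2.T @ h / len(b); Nb2 -= a * d2.mean(0)
            Nw1 -= a * d1.T @ y / len(b); Nb1 -= a * d1.mean(0)
        # Step 10: hand the weights back to the positive net.
        Pw1, Pw2 = Nw2.T.copy(), Nw1.T.copy()
        # Step 11: a convergence test on the error would go here.
    return Pw1, Pb1, Pw2, Pb2
```

Note how steps 6 and 10 make the two phases operate on the same underlying matrices, so each phase starts from the weights the other phase just refined.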
From the perspective of simulating biological bidirectional cognitive ability, the invention constructs a mutual learning neural network model by introducing a negative neural network that simulates the reverse cognitive process, and on this basis proposes a mutual learning neural network training method. Theoretical analysis and numerical experiments demonstrate the validity of the technique of this patent; its notable effects are outstanding in the following respects:
(1) The "mutual learning neural network model", based on a symmetric structure and shared weights, is an effective tool for simulating biological bidirectional cognitive ability. The structurally symmetric positive and negative neural networks respectively simulate the forward cognitive process and the reverse cognitive process of organisms.
(2) The "mutual learning neural network training method" trains the positive neural network with input data and output labels, trains the negative neural network with output labels and input data, and alternates positive and negative learning by sharing the transposed weight matrices. This training method simulates how the human brain forms a concept through repeated comparison: roughly recognizing the extension of the concept, roughly recognizing its intension, then finely recognizing the extension and finely recognizing the intension, in a repeating cycle. Training the mutual learning neural network model with this method realizes the network's simulation of biological bidirectional cognitive ability and better matches actual biological cognitive behavior.
(3) The "mutual learning neural network training method" performs a bidirectional search using the gradient descent principle in the weight space and its dual space. Compared with traditional training methods, it lets the network explore the weight space more quickly toward a weight position with fast convergence, large gradient, and good classification generalization, enabling fast and stable convergence.
(4) Numerical experiments show that the mutual learning neural network training method can train the mutual learning neural network model simulating biological bidirectional cognitive ability, making the positive and negative neural networks converge simultaneously.
(5) Numerical experiments demonstrate the ability of negative neural network training to simulate the biological reverse cognitive process, further revealing that neural network training is a data-driven learning process whose result is a true reflection of the essential features and organizational form of the data. The internal relations among the data can be reflected not only by the forward learning process of the positive network but also portrayed by the reverse learning process of the negative network; this is determined by the inherent attributes of the data themselves. If the mutual learning training method is applied to deep-network image processing, with convolutional features as input, to train the last three fully connected classifier layers of a deep network, a mutual correspondence between convolutional features and image labels can be realized: a positive input yields a classification prediction, while a negative input reconstructs the convolutional features of the given category; a deconvolution network can then visualize the image reconstruction process of a particular category, realizing end-to-end (label end to input end) image reconstruction based on the deep network. This automatic "drawing" form of reconstruction fully demonstrates the broad application prospects of the mutual learning neural network training method in the field of artificial intelligence.
Description of the drawings
Fig. 1 is a schematic diagram of the structure and weight-sharing relation of the novel neural network model for simulating biological bidirectional cognitive ability in an embodiment of the present invention.
Fig. 2 is a flow chart of the mutual learning neural network training method in the embodiment of the present invention.
Fig. 3 shows the mean square error convergence results on the CMU PIE data set in the embodiment of the present invention.
Fig. 4 shows the training classification error rate on the CMU PIE data set in the embodiment of the present invention.
Fig. 5 shows the prediction classification error rate on the CMU PIE data set in the embodiment of the present invention.
Fig. 6 shows the mean square error convergence results on the ORL data set in the embodiment of the present invention.
Fig. 7 shows the training classification error rate on the ORL data set in the embodiment of the present invention.
Fig. 8 shows the prediction classification error rate on the ORL data set in the embodiment of the present invention.
Specific embodiment
To make the objects and advantages of the present invention clearer, the present invention is described in further detail below with reference to embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
An embodiment of the present invention provides a novel neural network model for simulating biological bidirectional cognitive ability, consisting of a positive neural network and a negative neural network. The positive neural network simulates the forward cognitive process from input to output, and the negative neural network simulates the reverse cognitive process from output to input; the two network structures are symmetric, and the weights are shared. The specific network structure and weight-sharing relation are shown in Fig. 1.
The positive neural network (Positive Neural Network) is a classical feedforward neural network with one hidden layer, responsible for learning the mapping model from the input space to the output space. Specifically, let the number of input-layer neurons of the positive neural network be r, with input vector x = [x1, x2, …, xr]^T ∈ R^(r×1); let the number of output-layer neurons be o, with output vector ŷ = [ŷ1, ŷ2, …, ŷo]^T ∈ R^(o×1); and let the number of hidden neurons be p. Its network structure is shown in the positive-network part of Fig. 1. The connection weight matrix between the input layer and the hidden layer is Pw1 ∈ R^(p×r), with bias term Pb1 ∈ R^(p×1); the connection weight matrix between the hidden layer and the output layer is Pw2 ∈ R^(o×p), with bias term Pb2 ∈ R^(o×1).
The negative neural network (Negative Neural Network) is structurally symmetric to the positive neural network and is responsible for learning the mapping model from the output space to the input space. Specifically, let the number of input-layer neurons of the negative neural network be o, with input vector y = [y1, y2, …, yo]^T ∈ R^(o×1); let the number of output-layer neurons be r, with output vector x̂ = [x̂1, x̂2, …, x̂r]^T ∈ R^(r×1); and let the number of hidden neurons be p. Its network structure is shown in the negative-network part of Fig. 1. The connection weight matrix between the input layer and the hidden layer is Nw1 ∈ R^(p×o), with bias term Nb1 ∈ R^(p×1); the connection weight matrix between the hidden layer and the output layer is Nw2 ∈ R^(r×p), with bias term Nb2 ∈ R^(r×1).
The positive neural network and the negative neural network are structurally symmetric. From the connection structure of the networks, the corresponding connection weight matrices of the positive and negative networks are transposes of each other, as shown below: Pw1 = (Nw2)^T ∈ R^(p×r), Pw2 = (Nw1)^T ∈ R^(o×p).
During network training, the originally independent positive and negative neural networks are integrated by sharing connection weights, and the connection weights of the network are trained jointly.
The core idea of the novel neural network model for simulating biological bidirectional cognitive ability in this embodiment is as follows:
1. A re-examination of the neural network simulation of cognitive processes
Biological cognition is often bidirectional: an organism has both the forward cognitive ability of inferring the effect from the cause and the reverse cognitive ability of inferring the cause from the effect. Take human cognition of concepts as an example. From the psychological point of view, a concept is the reflection of the essential attributes of things in the human brain; from the point of view of mathematical characterization, a concept in the brain reflects the properties of things and is a kind of "copy" of objective things. In general, people neither first possess the extension and then form the intension, nor first possess the intension and then form the extension. The brain forms a concept starting from comparison: through repeated comparison, it roughly recognizes the extension, roughly recognizes the intension, then finely recognizes the extension and finely recognizes the intension, in a repeating cycle, until the concept gradually takes shape.
When pictures of many different cats are continually presented to a trainee, the brain gradually forms the concept of a cat; after repeated training, the response pattern of the neurons in the corresponding regions of the nervous system becomes fixed, forming the forward cognition of the concept "cat". If the process is reversed, i.e., the trainee is asked to imagine and draw the image of a cat, the image drawn each time will differ; but as the number of training rounds increases, the key features of a cat, such as the head, neck, trunk, limbs, and tail, are retained in every drawing. This means that the association produced by the concept of a cat stimulates the neuronal cells of the same regions, generating the same responses as those produced when seeing a picture of a cat. The reverse cognitive process of "producing the image of a cat from the concept of a cat" and the forward cognitive process of "producing the concept of a cat from pictures of cats" are completed cooperatively by the excitation or inhibition of the neurons in the relevant regions of the nervous system; the two processes wholly or partially share the corresponding regions of the nervous system.
Analyzing from the cognitive angle the simulation that artificial neural networks perform of the human nervous system, we find that the supervised learning methods adopted during training simulate the conditioned-reflex model of human cognition. In supervised learning algorithms such as BP, the input data are the "cause" and the output labels are the "effect"; network training completes the simulation of the forward cognitive process from stimulus to response, realizing cognitive learning from cause (data) to effect (label), with the corresponding causal knowledge stored in the free parameters of the network (synaptic weights and bias values). This neural network simulation of the forward cognitive process resembles the extension-to-intension formation process of a concept in the human brain.
The biological reverse cognitive process is a process of inferring the cause from the effect; for human concept cognition, it is the formation process from the intension of a concept to its extension. If it is simulated by a neural network, it is a process of learning from output labels to input data.
Still taking the BP algorithm as an example, when labels are fed into the network as inputs, corresponding output data can also be produced by computation, and the network weights can be adjusted by back-propagating the mean square error between the output data and the target data. The network thus completes the simulation of the reverse cognitive process from label to data, realizing cognitive learning from effect (label) to cause (data), with the corresponding "effect-cause relation" stored in the free parameter values of the network.
2. Structure of the mutual learning neural network model
Previous neural network model research has emphasized the simulation of forward cognitive ability while neglecting the simulation of reverse cognitive ability. In fact, the powerful nonlinear transformation ability of neural networks can establish not only the mapping from the input space to the output space but also the mapping from the output space to the input space. In some specific neural network learning tasks, one needs both to extract information from the input space to recognize the output space and to extract features from the output space to recognize the input space.
Therefore, to simulate the biological bidirectional cognitive process, a new model composed of two neural network mapping models sharing network connection weights can be built to complete this process cooperatively. On the basis of a standard positive neural network, this patent introduces a negative neural network with a symmetric structure and combines the two by sharing connection weights, so that they jointly complete the simulation of the biological nervous system; this new neural network model is called the "mutual learning neural network model". The mutual learning neural network model is integrated from a positive neural network and a negative neural network, where the positive network simulates the forward cognitive process from input to output and the negative network simulates the reverse cognitive process from output to input; the two network structures are symmetric, and the weights are shared.
As shown in Fig. 2, an embodiment of the present invention also provides a training method for the above novel neural network model for simulating biological bidirectional cognitive ability. Let the total number of training samples be m, with the input and output of the j-th sample being xj and yj respectively; let the computed outputs of the positive and negative neural networks be F(x) and G(y); let the sub-batch mean errors of positive and negative network training be Ep and En; let the connection weights of the positive and negative networks be Pw and Nw; let the learning rate be a and the iteration count be k. The training method comprises the following steps:
Step 1: Initialize the network structure and randomly initialize the weights of the positive neural network;
Step 2: Randomly shuffle all the training samples to rearrange their order; with s samples per group (batch), divide the samples evenly into t sub-batches, t = ⌈m/s⌉, where ⌈·⌉ denotes rounding up;
Step 3: Compute the positive neural network sub-batch mean error Ep over the current sub-batch (the mean error between the labels yj and the network outputs F(xj));
Step 4: Update the positive neural network connection weights by gradient descent, Pw(k+1) = Pw(k) − a·∂Ep/∂Pw;
Step 5: Repeat steps 3-4 for all t sub-batches;
Step 6: Assign the transpose of the positive network connection weights to the negative network: Nw(k+1) = [Pw(k+1)]^T;
Step 7: Compute the negative neural network sub-batch mean error En over the current sub-batch (the mean error between the data xj and the network outputs G(yj));
Step 8: Update the negative neural network connection weights by gradient descent, Nw(k+1) = Nw(k) − a·∂En/∂Nw;
Step 9: Repeat steps 7-8 for all t sub-batches;
Step 10: Assign the transpose of the negative network connection weights back to the positive network: Pw(k+1) = [Nw(k+1)]^T;
Step 11: The model completes one iteration; judge whether the convergence requirement is met according to the error results and the iteration count k. If so, the network finishes training; otherwise, repeat steps 2-10.
Here, steps 3-5 constitute the positive learning phase, and steps 7-9 constitute the negative learning phase.
The training method of this embodiment builds on the "mutual learning neural network model" and, by introducing a negative learning process into the standard BP algorithm, proposes a new neural network training method, the "mutual learning neural network training method". The method trains the positive neural network with input data and output labels, trains the negative neural network with output labels and input data, and alternates positive and negative learning by sharing the transposed weight matrices, thereby simulating how the human brain forms a concept through repeated comparison: roughly recognizing the extension of the concept, roughly recognizing its intension, then finely recognizing the extension and finely recognizing the intension, in a repeating cycle. Its core ideas are as follows:
1. The mutual learning idea
Early theoretical research on mutual learning neural networks refers to two or more neural networks that, through supervised learning, act as each other's tutor and learn from each other until their connection weights reach a synchronized state over time.
On this basis, this patent proposes a new neural-network mutual learning (Mutual Learning) concept: the input data and the output labels of a neural network learn from each other under supervised learning rules. The new mutual learning concept comprises two processes, positive learning (Positive Learning) and negative learning (Negative Learning). Positive learning takes the data X as input and the labels Y as output and trains with a supervised learning algorithm; negative learning takes the former labels Y as input and the former data X as output and likewise trains with a supervised learning algorithm.
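As a minimal illustration of the two processes (not code from the patent; the names are ours), the same sample set yields both supervised tasks simply by swapping the roles of data and labels, with class labels one-hot encoded so they can also serve as network inputs:

```python
import numpy as np

def mutual_learning_tasks(X, labels, num_classes):
    """Build the two supervised tasks of the mutual learning concept:
    positive learning fits X -> Y, negative learning fits Y -> X."""
    Y = np.eye(num_classes)[labels]   # one-hot label matrix, one row per sample
    positive_task = (X, Y)            # data as input, labels as target
    negative_task = (Y, X)            # labels as input, data as target
    return positive_task, negative_task
```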
2. Description of the mutual learning neural network training method
Through the structurally symmetric positive and negative neural networks, the mutual learning neural network model establishes a basic model for simulating the bidirectional cognitive process of biological neural systems. Training the mutual learning neural network model requires a special method that takes "cause" and "effect" as each other's learning objects and performs bidirectional learning. For this purpose, this patent proposes the "mutual learning neural network training method" on the basis of the new mutual learning concept.
The mutual learning neural network training method trains the positive neural network of the mutual learning neural network model with the input data X and output labels Y; after the positive connection weight matrices are updated, their transposes are assigned to the negative neural network (the bias terms remain mutually independent). It then trains the negative neural network with the new input data Y (the former output labels) and the new output labels X (the former input data); after the negative connection weight matrices are updated, their transposes are assigned back to the positive neural network (bias terms again independent). In this reciprocal fashion, the positive and negative learning processes alternate until the iterations end.
Through the mutual learning neural network training method, the two oppositely directed neural networks cooperate and are trained simultaneously. Given input data, positive learning can be used to decide the class of the data, realizing the forward cognitive process of "inferring the effect from the cause"; given output labels, negative learning is a generative model that can reconstruct the input data, completing the reverse cognitive process of "inferring the cause from the effect". The alternation of positive and negative learning thus simulates the human cognitive process for a concept, alternately revising from the extension of the concept to its intension and from the intension back to the extension. Such bidirectional learning obtains characteristic information about the data from the data space and the label space respectively, and searches the weight space bidirectionally through weight sharing.
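The two directions of inference described above can be sketched as follows; this is an illustrative reading of the model (names and shapes follow the claims, but the functions themselves are ours), with the negative network reusing the transposed forward weights (Nw1 = Pw2^T, Nw2 = Pw1^T) while keeping its own bias terms:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_classify(x, Pw1, Pb1, Pw2, Pb2):
    """Forward cognition ("from cause to effect"): data -> class scores."""
    h = sigmoid(Pw1 @ x + Pb1)
    return sigmoid(Pw2 @ h + Pb2)

def reverse_generate(y, Pw1, Pw2, Nb1, Nb2):
    """Reverse cognition ("from effect to cause"): label -> reconstructed
    input, using the transposed forward weights with independent biases."""
    h = sigmoid(Pw2.T @ y + Nb1)
    return sigmoid(Pw1.T @ h + Nb2)
```

With r input, p hidden, and o output neurons, `forward_classify` maps an r-vector to an o-vector and `reverse_generate` maps an o-vector back to an r-vector, matching the symmetric structure of the model.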
Embodiment
(1) Introduction to the numerical experiment methods
Each training iteration of the mutual learning neural network training method (ML, Mutual Learning) contains both a positive and a negative learning process, whereas the standard positive training method STD-PL (Standard Positive Learning) [1] contains only one positive learning process. Under the same number of iterations, the number of learning processes and the training time of the mutual learning method are therefore twice those of standard positive training. To compare the mutual learning neural network training method and standard positive training fairly and comprehensively, the numerical experiments use four different training schemes: equal-process mutual learning EP-ML (Equal Process Mutual Learning), equal-process transformation learning EPT-ML (Equal Process Transformation Mutual Learning), equal-iteration mutual learning EI-ML (Equal Iteration Mutual Learning), and equal-iteration transformation learning EIT-ML (Equal Iteration Transformation Mutual Learning).
1. Equal-process mutual learning (EP-ML): the number of mutual learning iterations is set to half the number of standard positive training iterations, so that the mutual learning method and standard positive training have the same number of learning processes and the same training time.
2. Equal-process transformation learning (EPT-ML): a certain number of mutual learning iterations is performed first, after which the negative learning process is dropped and training switches to standard positive training. By limiting the number of mutual learning iterations, the method matches standard positive training in learning-process count and training time.
3. Equal-iteration mutual learning (EI-ML): the mutual learning method and standard positive training use the same number of iterations, so the mutual learning method performs twice as many learning processes as standard positive training.
4. Equal-iteration transformation learning (EIT-ML): a certain number of mutual learning iterations is performed first, followed by standard positive training, with the total iteration count equal to that of standard positive training.
Let the mutual learning conversion ratio during training be ε, the number of standard positive training iterations be K, and the standard positive training time be T. The iteration counts, learning-process counts, and training-time multiples (relative to standard positive training) of the four training schemes are shown in Table 1.
Table 1: Comparison of iteration counts, learning-process counts, and time multiples for the standard positive training method and the four mutual learning neural network training schemes
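Since the body of Table 1 is not reproduced here, the bookkeeping can be sketched as follows. This is our reading of the verbal descriptions, not the patent's exact formulas; function and key names are ours. One mutual learning iteration contains two learning processes, one standard iteration contains one:

```python
def training_budgets(K, eps):
    """Iteration and learning-process counts relative to standard positive
    training with K iterations. eps is the mutual learning conversion ratio:
    the transformation schemes run roughly eps*K mutual learning iterations
    before switching to standard positive training."""
    ml = round(eps * K)  # mutual learning iterations before conversion
    return {
        "STD-PL": {"iterations": K,      "processes": K},
        "EP-ML":  {"iterations": K // 2, "processes": 2 * (K // 2)},
        # EPT-ML: standard iterations chosen so total processes equal K
        "EPT-ML": {"iterations": ml + (K - 2 * ml),
                   "processes": 2 * ml + (K - 2 * ml)},
        "EI-ML":  {"iterations": K,      "processes": 2 * K},
        # EIT-ML: total iterations equal K, so processes exceed K by ml
        "EIT-ML": {"iterations": K,      "processes": 2 * ml + (K - ml)},
    }
```

For a 3000-process budget with 200 mutual learning iterations (the CMU PIE setting described in the embodiment), this gives EPT-ML 2800 iterations and 3000 processes, consistent with 200 mutual learning iterations followed by 2600 standard iterations.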
(2) Experiment parameter settings
The neuron activation function in the numerical experiments is the Sigmoid function, and the positive and negative learning processes use identical learning-rate and momentum parameters. The training and test data are normalized to the interval [0, 1] by the max-min method, and the learning rate is adjusted by a learning-rate reduction scheme. Let the learning-rate change scale parameter be ScaleIndex, the change magnitude parameter be ScaleLr, and the total number of changes be ChangeTimes.
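The patent names these three parameters without giving a formula; one hedged reading (an assumption on our part) is that milestones are placed at successive ScaleIndex powers of the total iteration count, and each milestone crossed multiplies the learning rate by ScaleLr, at most ChangeTimes times:

```python
def lr_schedule(base_lr, total_iters, scale_index, scale_lr, change_times):
    """Return a function it -> learning rate. Milestones sit at
    total_iters * scale_index**k for k = 1..change_times; every milestone
    already passed multiplies the base rate by scale_lr once."""
    milestones = sorted(int(total_iters * scale_index ** (k + 1))
                        for k in range(change_times))
    def lr_at(it):
        drops = sum(1 for ms in milestones if it >= ms)
        return base_lr * scale_lr ** drops
    return lr_at
```

With the classification-experiment settings (ScaleIndex = ScaleLr = 2/3, ChangeTimes = 4) and a base rate of 0.2, the rate decays stepwise to 0.2·(2/3)^4 by the end of training under this reading.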
(3) Numerical experiment cases
1. Classification experiments
To verify the effectiveness of the mutual learning neural network training method, 10 experimental data sets were chosen from the UCI classification repository for comparative classification tests (the information of the selected data sets is shown in Table 2).
Table 2: Attribute information of the UCI classification data sets
The abbreviated data set names in Table 2 correspond to the following full names in the UCI repository:
CMC/DRD/Glass/IP/Iris/ORH/Seeds/WF/WFN/Wine correspond to Contraceptive Method Choice/Diabetic Retinopathy Debrecen/Glass Identification/Ionosphere/Iris/Optical Recognition of Handwritten Digits/Seeds/Waveform Database Generator/Wine.
The standard positive training method and the four mutual learning neural network training schemes were trained under identical parameter settings and initial network weights, with the learning-rate parameters ScaleIndex and ScaleLr both set to 2/3 and ChangeTimes set to 4. Each data set was evaluated overall by the average of 30 runs; the detailed results are shown in Table 3. (test-Avg, test-Min, and test-Std denote the mean, minimum, and standard deviation of the test classification error rate, respectively; train-Avg denotes the mean training classification error rate.)
Table 3: Classification error rate comparison of the standard training method and the four mutual learning neural network training schemes
Analysis of the experimental statistics in Table 3 supports the following conclusions:
Comparing the means and minima of the classification error rates, the networks trained by the four mutual learning schemes generally achieve lower mean and minimum classification error rates than those trained by standard positive training, showing that the mutual learning neural network training method is effective and generalizes well. Comparing the standard deviations of the classification error rates, the networks trained by the four mutual learning schemes generally have smaller standard deviations than those trained by standard positive training, showing that networks trained by the mutual learning method fluctuate less and that it is a stable training method. Among the three schemes with equal training time, STD-PL, EP-ML, and EPT-ML, the networks trained by EPT-ML outperform those trained by a single scheme alone, whether STD-PL or EP-ML.
2. Image recognition experiment on the CMU PIE face data set
The experiment uses three different training schemes, STD-PL, EPT-ML, and EI-ML, where EPT-ML switches from mutual learning after a fixed number of mutual learning iterations rather than via the mutual learning conversion ratio used above. To make the networks converge stably, the learning-rate parameters ScaleIndex and ScaleLr are both set to 1/2 and ChangeTimes is set to 8.
From Table 1 above, under the same number of iterations the training time and learning-process count of EI-ML are twice those of STD-PL, while EPT-ML and STD-PL are equal in learning-process count and iteration-process count. To compare EPT-ML and STD-PL more intuitively, the experiments plot the odd-numbered iteration results of the standard positive training stage of EPT-ML against the odd-numbered iteration results of STD-PL, thereby intercepting half of the positive training iteration process, so that EPT-ML and STD-PL can be compared under the same iteration count and training time.
The CMU PIE face data set consists of face images of 68 volunteers captured under 13 poses, 21 lighting conditions, and 4 expressions. Following the approach of reference [23], a subset of 11,492 32*32 gray-scale face images was selected as the experimental data set (each of the 68 volunteers forms one class; each class contains 13 poses, with 13 frontal-face lighting images selected per pose). From each face class, 100 images were randomly selected, giving a training set of 6,800 images in total; the remaining 4,692 images form the test set.
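The per-class sampling described above (100 training images from each of the 68 identities, the rest held out for testing) can be sketched with a small helper; the function name and interface are ours:

```python
import numpy as np

def per_class_split(labels, n_train, rng):
    """Randomly pick n_train samples from every class for the training set;
    all remaining samples form the test set. Returns (train_idx, test_idx)."""
    labels = np.asarray(labels)
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(idx, size=n_train, replace=False))
    train_mask = np.zeros(len(labels), dtype=bool)
    train_mask[train_idx] = True
    return np.flatnonzero(train_mask), np.flatnonzero(~train_mask)
```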
The experiment parameters are as follows: 200 hidden nodes, network structure 1024-200-68, learning rate 0.2, momentum 0.9, weight penalty 1e-5, training batch size 200, and a total of 3000 learning processes; the EPT-ML method uses 200 mutual learning iterations, i.e., 200 mutual learning iterations are performed first, followed by 2600 standard positive training iterations.
The mean-square-error convergence results of the two mutual learning schemes and the standard positive training method on the CMU PIE data set are shown in Fig. 3, the training classification error rates in Fig. 4, and the test classification error rates in Fig. 5; the final experimental results are given in Table 4.
Table 4: Numerical experiment results on the CMU PIE data set
Analysis of the experimental results shows:
Comparing the un-intercepted EI-ML with standard positive training STD-PL: in the initial iteration stage, the mean-square-error convergence curve and training classification error rate curve of EI-ML fall quickly as the iterations increase, while those of STD-PL fall more slowly. In the later iteration stage, the EI-ML curves gradually saturate, while the STD-PL curves continue to fall and converge to smaller mean-square-error and training-error values. Under the same iteration count, EI-ML retains larger mean-square-error and training-error values even though it spends twice the training time. This shows that, through the mutual learning between data and labels, the mutual learning neural network training method obtains information from both the input and the output, accelerating network training and alleviating the slow convergence of a single-information-source network, so that it converges quickly in the early training stage; but because mutual training must balance the convergence of both the positive and negative networks, it converges to a larger mean-square error, i.e., its extremum-seeking ability is weaker.
Comparing EPT-ML with the intercepted standard positive training Half-PL under equal learning-process counts and training times, the mean-square-error and training-error curves of EPT-ML stay below those of Half-PL throughout the iterations, showing that EPT-ML converges faster and to a better result. This means the early mutual learning phase of EPT-ML finds a high-gradient position in the weight space; because its own extremum-seeking ability is weak, its convergence curves gradually saturate, but after switching to standard positive training, the standard method begins iterating from a fast-converging, high-gradient position, fully exploiting the superior extremum-seeking ability of standard positive training and letting the network converge rapidly to a smaller extremum.
Analysis of Fig. 3 shows that EI-ML converges quickly in the early iterations but has a higher test classification error rate at convergence; STD-PL converges slowly early on but reaches a lower test classification error rate than EI-ML; EPT-ML combines the fast early convergence of the mutual learning method, which provides a good position in the weight space, with the strong extremum-seeking ability of standard positive training, so the network converges rapidly to a minimum. Meanwhile, as Table 4 shows, EPT-ML attains the smallest mean-square error, training classification error rate, and test classification error rate, indicating that the faster convergence and smaller mean-square error do not cause the network to overfit.
3. Image recognition experiment on the ORL face recognition data set
The ORL face recognition data set, created by the AT&T Laboratory at Cambridge University, consists of 400 gray-scale images of size 112*92 of 40 people, taken at different times against a black background, with variations in pose, expression, and facial accessories. The experiment randomly selects 5 images from each class, giving a training set of 200 images; the remaining images form the test set. Because the ORL images are relatively large in dimension and the number of training images is small, the network easily overfits, so the ORL experiments use a smaller learning rate and remove the momentum term to keep convergence stable.
The experiment parameters are as follows: 300 hidden nodes (network structure 10304-300-40), learning rate 0.02, momentum 0, weight penalty 1e-5, training batch size 200, 1000 iterations; the EPT-ML method uses 100 mutual learning iterations, i.e., 100 mutual learning iterations are performed first, followed by 800 standard positive training iterations. The mean-square-error convergence results of the two mutual learning schemes and the standard positive training method on the ORL data set are shown in Fig. 6, the training classification error rates in Fig. 7, and the test classification error rates in Fig. 8; the final experimental results are given in Table 5.
Table 5: Numerical experiment results on the ORL data set
Analysis of the experimental results shows:
Combining the information of Fig. 6, Fig. 7, Fig. 8, and Table 5, on the ORL training task, with little training data and a large network structure, the convergence processes of the three training schemes show the same trends as on the CMU PIE and Mnist data sets. However, EI-ML overfits in the later iterations, while STD-PL and EPT-ML show no overfitting.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make further improvements and refinements without departing from the principles of the invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A novel neural network model simulating biological bidirectional cognitive ability, characterized in that it consists of a positive neural network and a negative neural network; the positive neural network performs the simulation of the forward cognitive process from input to output, and the negative neural network performs the simulation of the reverse cognitive process from output to input; the two network structures are symmetric and share weights.
2. The novel neural network model simulating biological bidirectional cognitive ability as claimed in claim 1, characterized in that the positive neural network is a classical feedforward neural network with one hidden layer, responsible for learning the mapping model from the input space to the output space; specifically,
let the number of input-layer neurons of the positive neural network be r, the input vector be x = [x1, x2, ..., xr]^T ∈ R^{r×1}, the number of output-layer neurons be o, the output vector be ŷ = [ŷ1, ŷ2, ..., ŷo]^T ∈ R^{o×1}, and the number of hidden neurons be p; its network structure is shown by the positive part of the neural network in Fig. 1. The connection weight matrix between the input layer and the hidden layer is Pw1 ∈ R^{p×r} with bias term Pb1 ∈ R^{p×1}; the connection weight matrix between the hidden layer and the output layer is Pw2 ∈ R^{o×p} with bias term Pb2 ∈ R^{o×1}.
3. The novel neural network model simulating biological bidirectional cognitive ability as claimed in claim 1, characterized in that the negative neural network is structurally symmetric to the positive neural network and is responsible for learning the mapping model from the output space to the input space; specifically,
let the number of input-layer neurons of the negative neural network be o, the input vector be y = [y1, y2, ..., yo]^T ∈ R^{o×1}, the number of output-layer neurons be r, the output vector be x̂ = [x̂1, x̂2, ..., x̂r]^T ∈ R^{r×1}, and the number of hidden neurons be p; its network structure is shown by the negative part of the neural network in Fig. 1. The connection weight matrix between the input layer and the hidden layer is Nw1 ∈ R^{p×o} with bias term Nb1 ∈ R^{p×1}; the connection weight matrix between the hidden layer and the output layer is Nw2 ∈ R^{r×p} with bias term Nb2 ∈ R^{r×1}.
4. The novel neural network model simulating biological bidirectional cognitive ability as claimed in claim 1, characterized in that the positive neural network and the negative neural network are structurally symmetric; from the connection structure of the networks it follows that the corresponding connection weight matrices of the positive and negative networks are each other's transposes, as shown below:
Pw1 = (Nw2)^T ∈ R^{p×r}, Pw2 = (Nw1)^T ∈ R^{o×p};
During network training, the positive and negative neural networks share connection weights, so that the originally independent positive and negative networks are combined to train the connection weights jointly.
5. A training method for the novel neural network model simulating biological bidirectional cognitive ability of any one of claims 1-4, characterized in that: let the total number of training samples be m, the input and output of the j-th sample be xj and yj, the outputs computed by the positive and negative neural networks be F(x) and G(y), the sub-batch mean training errors of the positive and negative neural networks be Ep and En, the connection weights of the positive and negative neural networks be Pw and Nw, the learning rate be a, and the iteration count be k; the training method then comprises the following steps:
Step1: Initialize the network structure and randomly initialize the positive neural network weights;
Step2: Randomly shuffle all training samples to rearrange the sample order; with s samples per group (batch), divide the samples evenly into t subgroups (sub-batches), t = ⌈m/s⌉, where ⌈·⌉ denotes rounding up;
Step3: Compute the sub-batch mean error of the positive neural network;
Step4: Update the connection weights of the positive neural network;
Step5: Repeat Steps 3-4 t times;
Step6: Assign the transposed positive connection weights to the negative neural network:
Nw(k+1) = Pw(k+1)^T;
Step7: Compute the sub-batch mean error of the negative neural network;
Step8: Update the connection weights of the negative neural network;
Step9: Repeat Steps 7-8 t times;
Step10: Assign the transposed negative connection weights to the positive neural network:
Pw(k+1) = Nw(k+1)^T;
Step11: The model has completed one iteration; judge from the error result and the iteration count k whether the convergence requirement is met. If so, the network has finished training; otherwise return to Step2 and repeat Steps 2-10.
6. The training method of the novel neural network model simulating biological bidirectional cognitive ability as claimed in claim 5, characterized in that Steps 3-5 constitute the positive learning phase and Steps 7-9 the negative learning phase.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610891903.2A CN106560848B (en) | 2016-10-09 | 2016-10-09 | Novel neural network model for simulating biological bidirectional cognitive ability and training method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106560848A true CN106560848A (en) | 2017-04-12 |
CN106560848B CN106560848B (en) | 2021-05-11 |
Family
ID=58485739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610891903.2A Active CN106560848B (en) | 2016-10-09 | 2016-10-09 | Novel neural network model for simulating biological bidirectional cognitive ability and training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106560848B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102263636A (en) * | 2011-05-24 | 2011-11-30 | 浙江工业大学 | Stream cipher key control method for fusing neural network with chaotic mappings |
WO2015016640A1 (en) * | 2013-08-02 | 2015-02-05 | Ahn Byungik | Neural network computing device, system and method |
CN104615987A (en) * | 2015-02-02 | 2015-05-13 | 北京航空航天大学 | Method and system for intelligently recognizing aircraft wreckage based on error back propagation neural network |
CN105528638A (en) * | 2016-01-22 | 2016-04-27 | 沈阳工业大学 | Method for grey correlation analysis method to determine number of hidden layer characteristic graphs of convolutional neural network |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020522055A (en) * | 2017-05-14 | 2020-07-27 | デジタル リーズニング システムズ インコーポレイテッド | System and method for rapidly building, managing, and sharing machine learning models |
JP7216021B2 (en) | 2017-05-14 | 2023-01-31 | デジタル リーズニング システムズ インコーポレイテッド | Systems and methods for rapidly building, managing, and sharing machine learning models |
CN108596334A (en) * | 2018-03-23 | 2018-09-28 | 大国创新智能科技(东莞)有限公司 | The judgement of data correspondence, generation method and system based on two-way deep learning |
CN108596334B (en) * | 2018-03-23 | 2021-01-01 | 大国创新智能科技(东莞)有限公司 | Data corresponding relation judging and generating method and system based on bidirectional deep learning |
CN110414664A (en) * | 2018-04-28 | 2019-11-05 | 三星电子株式会社 | For training the method and neural metwork training system of neural network |
CN108846477B (en) * | 2018-06-28 | 2022-06-21 | 上海浦东发展银行股份有限公司***中心 | Intelligent brain decision system and decision method based on reflection arcs |
CN108846477A (en) * | 2018-06-28 | 2018-11-20 | 上海浦东发展银行股份有限公司***中心 | A kind of wisdom brain decision system and decision-making technique based on reflex arc |
CN109376615B (en) * | 2018-09-29 | 2020-12-18 | 苏州科达科技股份有限公司 | Method, device and storage medium for improving prediction performance of deep learning network |
CN109376615A (en) * | 2018-09-29 | 2019-02-22 | 苏州科达科技股份有限公司 | For promoting the method, apparatus and storage medium of deep learning neural network forecast performance |
CN111026548A (en) * | 2019-11-28 | 2020-04-17 | 国网甘肃省电力公司电力科学研究院 | Power communication equipment test resource scheduling method for reverse deep reinforcement learning |
CN111358430A (en) * | 2020-02-24 | 2020-07-03 | 深圳先进技术研究院 | Training method and device for magnetic resonance imaging model |
CN111358430B (en) * | 2020-02-24 | 2021-03-09 | 深圳先进技术研究院 | Training method and device for magnetic resonance imaging model |
CN111714118A (en) * | 2020-06-08 | 2020-09-29 | 北京航天自动控制研究所 | Brain cognition model fusion method based on ensemble learning |
CN111772629A (en) * | 2020-06-08 | 2020-10-16 | 北京航天自动控制研究所 | Brain cognitive skill transplantation method |
CN111772629B (en) * | 2020-06-08 | 2023-03-24 | 北京航天自动控制研究所 | Brain cognitive skill transplanting method |
CN111832540B (en) * | 2020-07-28 | 2021-01-15 | 吉林大学 | Identity verification method based on unsteady-state iris video stream bionic neural network |
CN111832540A (en) * | 2020-07-28 | 2020-10-27 | 吉林大学 | Identity verification method based on unsteady-state iris video stream bionic neural network |
CN112905862A (en) * | 2021-02-04 | 2021-06-04 | 深圳市永达电子信息股份有限公司 | Data processing method and device based on table function and computer storage medium |
WO2022193312A1 (en) * | 2021-03-19 | 2022-09-22 | 京东方科技集团股份有限公司 | Electrocardiogram signal identification method and electrocardiogram signal identification apparatus based on multiple leads |
CN113554166A (en) * | 2021-06-16 | 2021-10-26 | 中国人民解放军国防科技大学 | Deep Q network reinforcement learning method and equipment for accelerating cognitive behavior model |
CN113486580A (en) * | 2021-07-01 | 2021-10-08 | 河北工业大学 | High-precision numerical modeling method, server and storage medium for in-service wind turbine generator |
CN113554081A (en) * | 2021-07-15 | 2021-10-26 | 清华大学 | Method and device for constructing neural network architecture simulating dendritic spine change |
CN114024713A (en) * | 2021-09-30 | 2022-02-08 | 广东电网有限责任公司电力调度控制中心 | Anti-intrusion method for low-voltage power line carrier communication system |
CN114024713B (en) * | 2021-09-30 | 2023-08-08 | 广东电网有限责任公司电力调度控制中心 | Anti-intrusion method for power line carrier communication system |
CN116168805A (en) * | 2023-01-20 | 2023-05-26 | 北京瑞帆科技有限公司 | Thinking training device and cognitive training system for cognitive training |
CN117527137A (en) * | 2024-01-06 | 2024-02-06 | 北京领云时代科技有限公司 | System and method for interfering unmanned aerial vehicle communication based on artificial intelligence |
CN117527137B (en) * | 2024-01-06 | 2024-05-31 | 北京领云时代科技有限公司 | System and method for interfering unmanned aerial vehicle communication based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
CN106560848B (en) | 2021-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106560848A (en) | Novel neural network model for simulating biological bidirectional cognition capability, and training method | |
Zhang et al. | Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems | |
US9697462B1 (en) | Synaptic time multiplexing | |
Boussaïd et al. | A survey on optimization metaheuristics | |
Tang et al. | A lévy flight-based shuffled frog-leaping algorithm and its applications for continuous optimization problems | |
Enquist et al. | Neural networks and animal behavior | |
CN110443364A (en) | Multi-task hyperparameter optimization method and device for deep neural networks | |
CN105469145B (en) | Intelligent automatic test-paper generation method based on a genetic particle swarm algorithm | |
Jager et al. | The need for and development of behaviourally realistic agents | |
Powell | Contingency and convergence: Toward a cosmic biology of body and mind | |
Gershenson | Computing networks: A general framework to contrast neural and swarm cognitions | |
Heylighen | Challenge Propagation: Towards a theory of distributed intelligence and the global brain | |
Campbell | Feminism and evolutionary psychology | |
Zhang et al. | RL-GEP: symbolic regression via gene expression programming and reinforcement learning | |
Pais | Emergent collective behavior in multi-agent systems: an evolutionary perspective | |
CN109800850A (en) | Novel swarm-intelligence nomadic algorithm | |
Ha et al. | Social learning spontaneously emerges by searching optimal heuristics with deep reinforcement learning | |
Mustafa et al. | On Analysis and Evaluation of Learning Creativity Quantification via Naturally Neural Networks' Simulation and Realistic Modeling of Swarm Intelligence (Proceedings of the Eminent Association of Researchers in Engineering & Technology (EARET) conference) |
Podgórski | Humberto Maturana’s view on the theory of evolution. From autopoiesis to natural drift metaphor | |
Saravana et al. | A Fuzzy-GA Based controlling System for Wireless sensor networks | |
Liu et al. | Baby search algorithm | |
Forradellas et al. | Advantages of using self-organizing maps to analyse student evaluations of teaching | |
Kalpana et al. | Bio-inspired firefly algorithm a methodical survey–swarm intelligence algorithm | |
Onet et al. | Nature inspired algorithms and Artificial Intelligence | |
Lu et al. | Redesigning Ideological and Political Courses in the Digital Era with Computer Network Technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||