CN106295794A - Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term - Google Patents

Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term

Info

Publication number
CN106295794A
CN106295794A
Authority
CN
China
Prior art keywords
fractional order
neural network
weight matrix
weights
penalty term
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610601738.2A
Other languages
Chinese (zh)
Inventor
王健 (Wang Jian)
温艳青 (Wen Yanqing)
黄炳家 (Huang Bingjia)
桑兆阳 (Sang Zhaoyang)
柳毓松 (Liu Yusong)
陈华 (Chen Hua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum CUP
Original Assignee
China University of Petroleum CUP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum CUP
Priority to CN201610601738.2A
Publication of CN106295794A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a fractional-order neural network modeling method based on a smoothed Group Lasso penalty term, comprising the following steps: select a neural network model and determine the error function; iteratively update the network weights by the fractional-order steepest descent method, i.e., update the weights along the negative fractional-order gradient of the error function with respect to the weights; obtain the network parameters of the neural network model from the fractional-order steepest descent of step 2; and compute the accuracy of the neural network model on test samples. The beneficial effect of the invention is that the network weights are trained by fractional-order steepest descent; since fractional-order models often describe systems more accurately than integer-order models, the method is more accurate than algorithms that compute integer-order gradients.

Description

Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term
Technical field
The present invention relates to the field of neural network modeling, and in particular to a fractional-order neural network modeling method based on a smoothed Group Lasso penalty term.
Background technology
At present, the error back-propagation (BP) neural network is a multilayer feed-forward neural network in which neurons of adjacent layers are fully connected by weights and there are no connections within a layer. The most widely used method for learning its weights is gradient descent: from the error between the ideal output and the actual output, the partial derivative of the objective function with respect to each weight is computed, and the weights are corrected in the direction opposite to the partial derivative, so that the output error decreases steadily. In addition, BP training frequently employs methods such as the conjugate gradient method and the Gauss-Newton method. However, the classical training methods mentioned above are essentially all based on traditional integer-order calculus, i.e., first- or second-order differentiation of the objective function with respect to the weights.
Fractional calculus (Fractional Derivatives and Integrals, or Fractional Calculus, abbreviated FC) refers to differentiation or integration of a function with respect to a variable to a non-integer order. It has gradually been applied to practical complex systems such as electromagnetics, signal processing and quantum systems. Combining fractional calculus theory with neural networks yields fractional-order neural networks, and fractional-order artificial neural networks have become a promising research hotspot.
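For reference, the Caputo fractional derivative used in the works cited below (see document [2]) is conventionally defined, for order 0 < α < 1 and lower terminal c, as

{}_{c}^{C}D_{t}^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{c}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau .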
Document [1] (Y. F. Pu, J. L. Zhou, Y. Zhang, N. Zhang, G. Huang, P. Siarry, Fractional Extreme Value Adaptive Training Method: Fractional Steepest Descent Approach, IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, April 2015) uses fractional calculus to propose the fractional-order steepest descent method. That document studies a fractional-order quadratic energy functional, finds its extreme points by numerical experiments with the fractional-order steepest descent learning algorithm, and analyzes the stability and convergence of the proposed algorithm.
Bringing fractional-order steepest descent into neural networks, document [2] (J. Wang, Y. Q. Wen, Y. D. Gou, Z. Y. Ye, H. Chen, Convergence analysis of fractional-order BP neural networks with Caputo derivative, Neural Networks, submitted) discloses a neural network algorithm based on fractional-order steepest descent: from the error between the ideal output and the actual output, the fractional-order partial derivative of the objective function with respect to each weight is computed, and the weights are corrected in the direction opposite to the fractional-order partial derivative, so that the output error decreases steadily. Numerical experiments show that the fractional-order neural network based on steepest descent is more accurate than the integer-order network, the 7/9-order neural network achieving the highest accuracy (see document [2]). Document [3] studies the dynamic properties of a fractional-order recurrent neural network model; simulation results show that the dynamics of the fractional-order neural network are similar to those of the integer-order neural network, while the convergence speed of the fractional-order algorithm is markedly higher than that of the conventional integer-order algorithm.
Although the BP neural network is widely used, it has three essential shortcomings: slow convergence, poor fault tolerance, and a tendency to fall into local minima without reaching the globally optimal solution. Many effective improvements have been proposed for these defects, including adding a penalty term, adaptively adjusting the learning rate, and introducing a steepness factor. Among them, adding a penalty term to the error function is a typical solution: the penalty term reduces the magnitude of the weights and drives some weights to zero during the iteration, thereby improving the generalization ability of the network and the pruning effect on the network.
Document [2] proposes a neural network model based on fractional-order steepest descent. The algorithm helps improve the convergence accuracy of the network, but the numbers of neurons and connections are redundant, the network structure is complicated, and the network is not sparse.
Summary of the invention
It is an object of the present invention to overcome the above deficiencies of the prior art by providing a fractional-order neural network modeling method based on a smoothed Group Lasso penalty term.
To achieve the above object, the present invention adopts the following technical solution:
The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term comprises the following steps:
Step 1: select a neural network model and determine the error function;
Step 2: iteratively update the network weights by the fractional-order steepest descent method, i.e., update the weights along the negative fractional-order gradient of the error function with respect to the weights;
Step 3: obtain the network parameters of the neural network model from the fractional-order steepest descent of step 2;
Step 4: compute the accuracy of the neural network model on test samples.
Preferably, the neural network model of step 1 is as follows:
Select a three-layer BP neural network comprising an input layer, a hidden layer and an output layer with p, n and 1 nodes respectively. Select a training sample set {x^j, O^j}_{j=1}^{J}, where x^j = (x_1^j, …, x_p^j)^T is the j-th input sample, a p-dimensional vector, and O^j is the ideal output corresponding to the j-th input sample. The weight matrix from the input layer to the hidden layer is V = (v_im)_{n×p}, of dimension n × p; write v_i = (v_i1, …, v_ip)^T, i = 1, …, n. The weight vector from the hidden layer to the output layer is u = (u_1, …, u_n)^T, an n-dimensional vector; write the combined weight vector as W = (u^T, v_1^T, …, v_n^T)^T. The hidden-layer and output-layer activation functions are sigmoid functions (or other functions), denoted g and f respectively, and G(Vx^j) = (g(v_1 · x^j), …, g(v_n · x^j))^T denotes the hidden-layer output vector.
Further preferably, in step 1 the error function is expressed as:

E(W) = \frac{1}{2}\sum_{j=1}^{J}\big(O^j - f(u \cdot G(Vx^j))\big)^2 + \lambda\Big(s(u) + \sum_{i=1}^{n} s(v_i)\Big)

s(b) = \begin{cases}\|b\|, & \|b\| \ge \beta,\\ \dfrac{\|b\|^2}{2\beta} + \dfrac{\beta}{2}, & \|b\| < \beta,\end{cases}

where s(b) denotes the smoothing function, b denotes a finite-dimensional vector, and β ≤ 1 is a fixed constant.
Preferably, step 2 comprises the following sub-steps:
Step S21: initialize all weights of the weight matrix to W^0;
Step S22: optimize the initial weight vector W^0.
Preferably, the optimization of the initial weight vector W^0 in step S22 proceeds as follows:
Step S221: set the initial iteration number k = 0;
Step S222: compute the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer;
Let the fractional order be α, 0 < α < 1. Then the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer is:

\nabla^{\alpha} E(W^k) = \frac{\partial E(W^k)}{\partial W} \cdot \frac{(W^k - c)^{1-\alpha}}{\Gamma(2-\alpha)} \quad \text{(taken elementwise)},

where c is the minimum of the elements of W^k and Γ(·) is the Gamma function;
Step S223: update the weight matrix W^k of step S222 as follows:

W^{k+1} = W^k - \eta\,\nabla^{\alpha} E(W^k),

where η denotes the learning rate;
Step S224: compare the error function value of W^k with the error threshold; if the error function value of W^k is below the error threshold, go to step 4, otherwise continue to the next step; the error function is the smoothed Group Lasso error E(W) defined in step 1;
Step S225: set k = k + 1 and return to step S222.
Further preferably, in step S223 the learning rate η is obtained by line search.
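The patent does not fix a particular line search scheme for η. A minimal backtracking sketch in Python (illustrative only; the names error_fn, eta0 and rho are assumptions, not terms of the patent):

```python
def backtracking_line_search(error_fn, W, direction, eta0=1.0, rho=0.5, max_tries=20):
    """Shrink the trial step size until the error function actually decreases.

    error_fn  -- callable mapping a weight array to the scalar error E(W)
    W         -- current weights (a numpy array)
    direction -- fractional-order gradient at W; the descent step is -direction
    """
    eta = eta0
    e0 = error_fn(W)
    for _ in range(max_tries):
        if error_fn(W - eta * direction) < e0:
            return eta      # first step length that reduces the error
        eta *= rho          # otherwise shrink the step and retry
    return eta
```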
The beneficial effects of the invention are:
1. The present invention trains the network weights by the fractional-order steepest descent method. Since fractional-order models often describe systems more accurately than integer-order models, the present invention is more accurate than algorithms that compute integer-order gradients;
2. By training the model with the algorithm of the present invention, weights that become zero are pruned, and redundant neurons together with their connection weights are deleted; the network structure is simplified and becomes sparser, the neural network fits both training and test data well, and the generalization ability of the network is improved.
Brief description of the drawings
Fig. 1 is a flow chart of the method provided by the present invention;
Fig. 2 is a schematic diagram of the neural network model provided by the present invention.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the fractional-order neural network modeling method based on a smoothed Group Lasso penalty term comprises the following steps:
Step 1: select a neural network model, and approximate the Group Lasso penalty term with a smoothing function to obtain the error function;
Step 2: train the network weights by the fractional-order steepest descent method, i.e., update the weights along the negative fractional-order gradient of the error function with respect to the weights;
Step 3: obtain the network parameters of the neural network model from the fractional-order steepest descent of step 2;
Step 4: compute the accuracy of the neural network model on test samples.
Preferably, the neural network model of step 1 is as follows:
Select a three-layer BP neural network comprising an input layer, a hidden layer and an output layer with p, n and 1 nodes respectively. Select a training sample set {x^j, O^j}_{j=1}^{J}, where x^j = (x_1^j, …, x_p^j)^T is the j-th input sample, a p-dimensional vector, and O^j is the ideal output corresponding to the j-th input sample. The weight matrix from the input layer to the hidden layer is V = (v_im)_{n×p}, of dimension n × p; write v_i = (v_i1, …, v_ip)^T, i = 1, …, n. The weight vector from the hidden layer to the output layer is u = (u_1, …, u_n)^T, an n-dimensional vector; write the combined weight vector as W = (u^T, v_1^T, …, v_n^T)^T. The hidden-layer and output-layer activation functions are sigmoid functions (or other functions), denoted g and f respectively, and G(Vx^j) = (g(v_1 · x^j), …, g(v_n · x^j))^T denotes the hidden-layer output vector.
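As a concrete illustration of this architecture, the forward pass can be sketched in Python as follows (assuming sigmoid activations for both g and f; the function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, V, u, g=sigmoid, f=sigmoid):
    """Three-layer BP network of the patent: scalar output f(u . G(Vx)).

    x -- input sample, shape (p,)
    V -- input-to-hidden weight matrix, shape (n, p)
    u -- hidden-to-output weight vector, shape (n,)
    """
    G = g(V @ x)            # hidden-layer output vector G(Vx), shape (n,)
    return f(np.dot(u, G))  # scalar network output
```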
In the prior art, the error function takes the following form:

E(W) = \frac{1}{2}\sum_{j=1}^{J}\big(O^j - f(u \cdot G(Vx^j))\big)^2 + \lambda\Big(\|u\| + \sum_{i=1}^{n}\|v_i\|\Big)
Since the weight sequence is then prone to numerical oscillation, to overcome this technical problem the present invention introduces a smoothing function and uses it to approximate the Group Lasso penalty term in the traditional error function. The smoothing function is expressed as:

s(b) = \begin{cases}\|b\|, & \|b\| \ge \beta,\\ \dfrac{\|b\|^2}{2\beta} + \dfrac{\beta}{2}, & \|b\| < \beta,\end{cases}

where b denotes a finite-dimensional vector and β ≤ 1 is a fixed constant.
In the gradient descent procedure used by the present invention, ||u|| and ||v_i|| clearly have no partial derivatives at the origin, so gradient descent cannot be applied directly to the traditional error function. To this end, the present invention approximates the non-smooth optimization problem by a smooth one, i.e., the smoothing functions s(u) and s(v_i) are used to approximate ||u|| and ||v_i|| respectively.
Therefore, the error function after introducing the smoothing function is:

E(W) = \frac{1}{2}\sum_{j=1}^{J}\big(O^j - f(u \cdot G(Vx^j))\big)^2 + \lambda\Big(s(u) + \sum_{i=1}^{n} s(v_i)\Big)
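A direct Python transcription of this smoothed error function (a sketch assuming numpy arrays; samples and targets are illustrative names, and forward is the sketch given above):

```python
import numpy as np

def s(b, beta):
    """Smoothing function: equals ||b|| away from the origin, quadratic for ||b|| < beta."""
    nb = np.linalg.norm(b)
    return nb if nb >= beta else nb * nb / (2.0 * beta) + beta / 2.0

def error(V, u, samples, targets, lam, beta):
    """Smoothed Group Lasso error E(W) of the invention."""
    sq = sum((O - forward(x, V, u)) ** 2 for x, O in zip(samples, targets))
    penalty = s(u, beta) + sum(s(V[i], beta) for i in range(V.shape[0]))
    return 0.5 * sq + lam * penalty
```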
Step 2 comprises the following sub-steps:
Step S21: initialize all weights of the weight matrix to W^0;
Step S22: optimize the initial weight vector W^0.
Preferably, the optimization of the initial weight vector W^0 in step S22 proceeds as follows:
Step S221: set the initial iteration number k = 0;
Step S222: compute the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer;
Let the fractional order be α, 0 < α < 1. Then the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer is:

\nabla^{\alpha} E(W^k) = \frac{\partial E(W^k)}{\partial W} \cdot \frac{(W^k - c)^{1-\alpha}}{\Gamma(2-\alpha)} \quad \text{(taken elementwise)},

where c is the minimum of the elements of W^k and Γ(·) is the Gamma function;
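Under the reconstruction above, the fractional-order gradient amounts to an elementwise Caputo-type rescaling of the ordinary gradient; a Python sketch (an assumption consistent with the stated roles of c and Γ, since the surrounding text gives the formula only by reference):

```python
import numpy as np
from scipy.special import gamma

def fractional_gradient(grad, W, alpha):
    """Caputo-type fractional gradient of order alpha, 0 < alpha < 1.

    Rescales the ordinary gradient elementwise by (W - c)^(1 - alpha) / Gamma(2 - alpha),
    where c = min(W) plays the role of the lower terminal of the derivative.
    """
    c = W.min()
    return grad * (W - c) ** (1.0 - alpha) / gamma(2.0 - alpha)
```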
Step S223: update the weight matrix W^k of step S222 as follows:

W^{k+1} = W^k - \eta\,\nabla^{\alpha} E(W^k),

where η denotes the learning rate;
Step S224: compare the error function value of W^k with the error threshold; if the error function value of W^k is below the error threshold, go to step 4, otherwise continue to the next step; the error function is the smoothed Group Lasso error E(W) defined in step 1.
The purpose of building the neural network model is to train the weight matrix and weight vector so that the value of the error function decreases gradually; once the error function reaches the error threshold, the neural network is considered well trained and the optimized weight matrix is obtained. The error threshold is a preset accuracy value; in theoretical calculation the preset accuracy value lies in the interval [0.001, 0.005].
Step S225: set k = k + 1 and return to step S222.
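Putting steps S221 to S225 together, the optimization loop can be sketched as follows (reusing the error and fractional_gradient helpers above; the finite-difference gradient is an illustrative simplification standing in for backpropagation):

```python
import numpy as np

def num_grad(fn, W, eps=1e-6):
    """Central finite-difference gradient of a scalar function fn at W (illustration only)."""
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        old = W[idx]
        W[idx] = old + eps; e_plus = fn(W)
        W[idx] = old - eps; e_minus = fn(W)
        W[idx] = old
        g[idx] = (e_plus - e_minus) / (2.0 * eps)
    return g

def train(V, u, samples, targets, eta, alpha, lam, beta,
          threshold=0.005, max_iter=5000):
    """Steps S221-S225: fractional-order steepest descent on the input weights V."""
    E = lambda Vk: error(Vk, u, samples, targets, lam, beta)
    for k in range(max_iter):
        if E(V) < threshold:       # step S224: preset accuracy reached
            break
        grad = num_grad(E, V)      # ordinary gradient of E with respect to V
        V = V - eta * fractional_gradient(grad, V, alpha)  # step S223 update
    return V
```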
As shown in Fig. 2, in the fractional-order neural network modeling method based on a smoothed Group Lasso penalty term provided by the present invention, the shaded weights in Fig. 2 represent the Group Lasso penalty term. By training the model with the algorithm of the present invention, weights that become zero are pruned, and redundant neurons together with their connection weights are deleted; the network structure is simplified and becomes sparser, the neural network fits both training and test data well, and the generalization ability of the network is improved.
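A sketch of the pruning described here (the tolerance tol is an illustrative choice; the patent does not specify a numerical cut-off):

```python
import numpy as np

def prune(V, u, tol=1e-4):
    """Delete redundant hidden units whose weight groups were driven to (near) zero."""
    keep = [i for i in range(V.shape[0])
            if np.linalg.norm(V[i]) > tol and abs(u[i]) > tol]
    return V[keep], np.asarray(u)[keep]
```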
The present invention optimizes the weight matrix and weight vector by the fractional-order steepest descent method. Fractional-order steepest descent belongs to the class of fractional-order adaptive learning algorithms: the algorithm searches for a minimum point using the fractional-order derivative of the error function with respect to the weights, obtains the direction of steepest descent at the minimum point, and updates the weight matrix and weight vector accordingly. Compared with integer-order optimization algorithms, it achieves a great improvement in convergence speed and in accuracy. The Group Lasso penalty term plays a vital role in improving the generalization ability of the network and the pruning of hidden nodes; while improving the generalization ability of the neural network algorithm and the pruning effect on the network, the present invention simplifies the network structure and increases the network speed.
The present invention uses fractional-order derivatives when optimizing the input weight matrix. Compared with integer-order derivatives, fractional-order derivatives have the advantages of memory and heredity. Moreover, for complex systems, integer-order descriptions have relatively low accuracy and cannot faithfully reflect system performance, and a large body of research shows that for various real systems fractional-order models are often more accurate than integer-order models; the present invention is therefore more accurate than gradient descent algorithms based on integer-order differentiation.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those of ordinary skill in the art should understand that, on the basis of the technical solution of the present invention, various modifications or variations that can be made without creative work still fall within the scope of protection of the present invention.

Claims (6)

1. A fractional-order neural network modeling method based on a smoothed Group Lasso penalty term, characterized by comprising the following steps:
Step 1: select a neural network model and determine the error function;
Step 2: iteratively update the network weights by the fractional-order steepest descent method;
Step 3: obtain the network parameters of the neural network model from the fractional-order steepest descent of step 2;
Step 4: compute the accuracy of the neural network model on test samples.
2. The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term as claimed in claim 1, characterized in that the neural network model of said step 1 is as follows:
Select a three-layer BP neural network comprising an input layer, a hidden layer and an output layer with p, n and 1 nodes respectively. Select a training sample set {x^j, O^j}_{j=1}^{J}, where x^j = (x_1^j, …, x_p^j)^T is the j-th input sample, a p-dimensional vector, and O^j is the ideal output corresponding to the j-th input sample. The weight matrix from the input layer to the hidden layer is V = (v_im)_{n×p}, of dimension n × p; write v_i = (v_i1, …, v_ip)^T, i = 1, …, n. The weight vector from the hidden layer to the output layer is u = (u_1, …, u_n)^T, an n-dimensional vector; write the combined weight vector as W = (u^T, v_1^T, …, v_n^T)^T. The hidden-layer and output-layer activation functions are denoted g and f respectively.
3. The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term as claimed in claim 2, wherein in said step 1 the error function is expressed as:

E(W) = \frac{1}{2}\sum_{j=1}^{J}\big(O^j - f(u \cdot G(Vx^j))\big)^2 + \lambda\Big(s(u) + \sum_{i=1}^{n} s(v_i)\Big)

s(b) = \begin{cases}\|b\|, & \|b\| \ge \beta,\\ \dfrac{\|b\|^2}{2\beta} + \dfrac{\beta}{2}, & \|b\| < \beta,\end{cases}

where s(b) denotes the smoothing function, b denotes a finite-dimensional vector, and β ≤ 1 is a fixed constant.
4. The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term as claimed in claim 1, wherein said step 2 comprises the following sub-steps:
Step S21: initialize all weights of the weight matrix to W^0;
Step S22: optimize the initial weight vector W^0.
5. The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term as claimed in claim 4, wherein the optimization of the initial weight vector W^0 in said step S22 proceeds as follows:
Step S221: set the initial iteration number k = 0;
Step S222: compute the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer;
Let the fractional order be α, 0 < α < 1. Then the fractional-order gradient of the error function with respect to the weight matrix W^k from the input layer to the hidden layer is:

\nabla^{\alpha} E(W^k) = \frac{\partial E(W^k)}{\partial W} \cdot \frac{(W^k - c)^{1-\alpha}}{\Gamma(2-\alpha)} \quad \text{(taken elementwise)},

where c is the minimum of the elements of W^k and Γ(·) is the Gamma function;
Step S223: update the weight matrix W^k of step S222 as follows:

W^{k+1} = W^k - \eta\,\nabla^{\alpha} E(W^k),

where η denotes the learning rate;
Step S224: compare the error function value of W^k with the error threshold; if the error function value of W^k is below the error threshold, go to step 4, otherwise continue to the next step; the error function is the smoothed Group Lasso error E(W) defined in claim 3;
Step S225: set k = k + 1 and return to step S222.
6. The fractional-order neural network modeling method based on a smoothed Group Lasso penalty term as claimed in claim 5, characterized in that in said step S223 the learning rate η is obtained by line search.
CN201610601738.2A 2016-07-27 2016-07-27 Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term Pending CN106295794A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610601738.2A CN106295794A (en) 2016-07-27 2016-07-27 Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610601738.2A CN106295794A (en) 2016-07-27 2016-07-27 Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term

Publications (1)

Publication Number Publication Date
CN106295794A 2017-01-04

Family

ID=57662637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610601738.2A Pending 2016-07-27 2016-07-27 Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term

Country Status (1)

Country Link
CN (1) CN106295794A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169137A (en) * 2017-06-09 2017-09-15 East China Normal University Semi-supervised hashing image retrieval method based on Group Lasso
CN107169137B (en) * 2017-06-09 2019-10-08 East China Normal University Semi-supervised hashing image retrieval method based on Group Lasso
CN111178520A (en) * 2017-06-15 2020-05-19 Beijing Tusen Zhitu Technology Co., Ltd. Data processing method and device for processing equipment with low computing capacity
CN111178520B (en) * 2017-06-15 2024-06-07 Beijing Tusen Zhitu Technology Co., Ltd. Method and device for constructing a neural network
CN109492747A (en) * 2017-09-13 2019-03-19 Hangzhou Hikvision Digital Technology Co., Ltd. Network structure generation method and device for a neural network
CN108646719A (en) * 2018-07-05 2018-10-12 Central South University Weak fault detection method and system
CN108646719B (en) * 2018-07-05 2021-04-06 Central South University Weak fault detection method and system
CN109242098A (en) * 2018-07-25 2019-01-18 Shenzhen Institutes of Advanced Technology Neural network architecture search method under cost constraints and related product
CN109807893A (en) * 2019-02-19 2019-05-28 Ningbo Kaide Technology Service Co., Ltd. Method for smoothing the motion model of a welding robot
CN109807893B (en) * 2019-02-19 2022-03-15 Ningbo Kaide Technology Service Co., Ltd. Method for smoothing the motion model of a welding robot
CN110414565A (en) * 2019-05-06 2019-11-05 Beijing University of Posts and Telecommunications Group Lasso-based neural network pruning method for power amplifiers
CN110414565B (en) * 2019-05-06 2021-06-08 Beijing University of Posts and Telecommunications Group Lasso-based neural network pruning method for power amplifiers
CN111507530A (en) * 2020-04-17 2020-08-07 Jimei University RBF neural network ship traffic flow prediction method based on fractional-order momentum gradient descent
CN111507530B (en) * 2020-04-17 2022-05-31 Jimei University RBF neural network ship traffic flow prediction method based on fractional-order momentum gradient descent
CN111505709A (en) * 2020-04-28 2020-08-07 Xi'an Jiaotong University Attenuation qualitative analysis method based on sparse spectral decomposition
CN111505709B (en) * 2020-04-28 2021-07-13 Xi'an Jiaotong University Attenuation qualitative analysis method based on sparse spectral decomposition
WO2022247049A1 * 2021-05-24 2022-12-01 Soochow University Method for predicting wind speed based on a complex-valued feedforward neural network

Similar Documents

Publication Publication Date Title
CN106295794A Fractional-order neural network modeling method based on a smoothed Group Lasso penalty term
Ata et al. An adaptive neuro-fuzzy inference system approach for prediction of tip speed ratio in wind turbines
CN109784480A Power system state estimation method based on convolutional neural networks
WO2018040803A1 Direct calculation method for ring-network power systems
CN109443364A Path planning method based on the A* algorithm
CN110161682B Method for generating the initial structure of a free-form-surface off-axis reflective system
CN106600050A Ultra-short-term load prediction method based on BP neural network
CN107947761A Adaptive filtering algorithm with variable-threshold-percentage update based on the least-mean-fourth criterion
US20230254187A1 Method for designing complex-valued channel equalizer
CN103279032A Robust convergence control method for heterogeneous multi-agent systems
CN109343554B Heuristic spacecraft task planning method based on state-transition cost values
CN106021880B Jacket platform structural response calculation method based on BP neural network
CN103559538A BP neural network structure optimization method
CN105426962A Method for constructing and training a dynamic neural network with incomplete recurrent support
CN110850893A Spacecraft task planning method based on maximum cost evaluation
CN111144027A Approximation method for full characteristic curve functions based on BP neural network
CN101901483B Binocular stereo vision matching method based on generalized belief propagation
CN103559541A Back-propagation method for out-of-order data streams in big data
CN107330248A Short-term wind power prediction method based on an improved neural network
CN107135155A Opportunistic network routing method based on node social relationships
CN107273970B Reconfigurable convolutional neural network platform supporting online learning and construction method thereof
CN106022482B Method for decoupling circulating fluidized bed boiler bed pressure using an improved fuzzy neural network
CN112364992A Scene-constrained model pruning method for intelligent network search
Wen-Yi Research on optimization and implementation of BP neural network algorithm
CN106611082A Efficient method for simplifying radial basis function support points

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170104