CN105373830A - Prediction method and system for error back propagation neural network and server - Google Patents

Prediction method and system for error back propagation neural network and server

Info

Publication number
CN105373830A
CN105373830A (application CN201510925606.0A)
Authority
CN
China
Prior art keywords
neural network
hidden layer
convergence
neuron node
training
Prior art date
Legal status
Pending
Application number
CN201510925606.0A
Other languages
Chinese (zh)
Inventor
杨庭清
徐俊
姜烨
徐正蓺
田欣
张晓凌
何子卿
张浩
Current Assignee
Shanghai Advanced Research Institute of CAS
Original Assignee
Shanghai Advanced Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Shanghai Advanced Research Institute of CAS
Priority to CN201510925606.0A
Publication of CN105373830A


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G06N3/088 — Non-supervised learning, e.g. competitive learning
    • G06N3/084 — Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a prediction method and system for an error back-propagation neural network. The method comprises: constructing an initial neural network; training the initial neural network with N pre-collected training data samples to obtain a first converged neural network; performing correlation analysis on the output data of the hidden-layer neuron nodes of the first converged network, and merging hidden-layer nodes whose correlation exceeds a preset threshold to generate a second converged neural network retaining m hidden-layer neuron nodes; performing correlation analysis on the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network to obtain an optimized neural network; and training the optimized network again with the N training data samples to obtain a prediction neural network. The method and system determine the number of hidden-layer nodes of a single-hidden-layer neural network in a way that compensates for the shortcomings of using principal component analysis or the correlation coefficient method alone.

Description

Prediction method, system and server for an error back-propagation neural network
Technical field
The invention belongs to the field of safety and emergency-response technology, and relates to a prediction method and system, in particular to a prediction method, system and server for an error back-propagation neural network.
Background technology
With the rapid development of the economy, the chemical industry has also flourished, and hazardous-chemical release and dispersion accidents in chemical industrial parks occur repeatedly, posing a serious threat to the people and the physical environment around the accident scene. When a hazardous-chemical leakage accident occurs, especially a toxic-gas leak, the influence of meteorological factors on the propagation rate and dispersion range cannot be ignored, above all that of wind speed. Wind-speed collection often places high demands on equipment, and the data gathered at monitoring sites cannot always be imported into a gas-dispersion assessment model in real time. Short-term prediction of wind speed therefore provides valuable guidance for model-based assessment, for estimating the propagation rate and range, and for on-scene rescue decisions.
Among traditional short-term wind-speed prediction methods, the persistence method takes the most recent measurement as the prediction for the next step; because it considers only a single previous measurement, both its prediction error and its stability are unsatisfactory. Linear time-series models are confined to linear problems and are helpless against nonlinear ones, so their applicability is very limited. Higher-order function models grow with problem complexity; the form of the model becomes hard to determine, and once the problem reaches a sufficiently high complexity the model loses practical value, so its applicability is also restricted. Kalman filtering, which is based on filtering theory, presupposes that the noise is known, but in practice the form and characteristics of the noise are hard to know or estimate. Given these and other shortcomings, and given the powerful nonlinear-approximation capability of the back-propagation neural network and its ability to forecast complex nonlinear problems, the back-propagation network performs outstandingly in modeling complex nonlinear problems. In neural-network modeling, the structure of the network is decisive for its problem-solving ability: if the network is too small, it cannot process the problem with sufficient accuracy; if it is too large, it may fit the training samples exactly, but its generalization ability declines, and if the training samples contain noise the network learns the noise along with the samples, producing poor results. Choosing and designing a suitable network structure is therefore very important. There is no uniform standard for determining the network structure; it is mostly chosen according to the particular problem or by the traditional method of trial and error. For specific problems and samples, there are also analytical approaches that determine the number of hidden-layer nodes based on correlation analysis.
Among correlation-analysis approaches, one selects an initial single-hidden-layer network with a relatively large structure, trains it to preliminary convergence on the training samples, extracts the output values of the hidden-layer nodes of the converged network, and then performs correlation analysis, namely principal component analysis (PCA), on those outputs, rejecting strongly correlated data to reduce the dimension; for the hidden layer this amounts to merging strongly correlated nodes, reducing the number of neuron nodes. The other approach is based on the correlation between the hidden-layer outputs and the network output: again an initial single-hidden-layer network with a relatively large structure is trained to preliminary convergence, the output values of the hidden-layer nodes of the converged network are extracted along with the output values of the output layer, and the correlation coefficient between each hidden-layer node output and the output-layer node output is computed; when the coefficient falls below a preset threshold, the corresponding hidden-layer node is deleted, thereby determining a suitable number of hidden-layer nodes.
However, the prior art has the following deficiencies:
First, when only the correlation among hidden-layer nodes is considered, and not the correlation between those nodes and the output-layer nodes, the retained hidden-layer nodes may still include nodes with very little relevance to the output-layer nodes.
Second, although hidden-layer nodes with little relevance to the output-layer nodes are removed, the correlation among hidden-layer nodes is not considered, so strongly correlated nodes may remain among the retained hidden-layer nodes.
Therefore, providing a prediction method, system and server for an error back-propagation neural network that resolves these defects of the prior art (retained hidden-layer nodes with little relevance to the output layer when the hidden-to-output correlation is ignored, and retained strongly correlated hidden-layer nodes when the correlation among hidden-layer nodes is ignored) has become a technical problem urgently awaiting a solution from practitioners in the art.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the present invention is to provide a prediction method, system and server for an error back-propagation neural network, to solve the prior-art problems that, when the correlation between hidden-layer and output-layer nodes is not considered, the retained hidden-layer nodes include nodes with little relevance to the output layer, and that, when the correlation among hidden-layer nodes is not considered, strongly correlated nodes remain among those retained.
To achieve the above and other related objects, one aspect of the present invention provides a prediction method for an error back-propagation neural network, comprising the following steps: constructing an initial neural network, the initial network comprising an input layer, a hidden layer and an output layer, the hidden layer comprising n neuron nodes, where n is a positive integer greater than 1; training the initial network with N pre-collected training data samples, using the training function corresponding to the single-hidden-layer network, to obtain a first converged neural network, where N is a positive integer greater than 1; performing correlation analysis on the output data of the hidden-layer neuron nodes of the first converged network, and merging hidden-layer nodes whose correlation exceeds a preset threshold to produce a second converged neural network retaining m hidden-layer neuron nodes, where m is a positive integer greater than 1 and less than n; performing correlation analysis on the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network to obtain an optimized neural network; and training the optimized network again with the N training data samples to obtain a prediction neural network.
In an embodiment of the invention, the step of training the initial neural network with the N pre-collected training data samples further comprises training the initial network by means of a pre-stored error function to obtain the first converged neural network.
In an embodiment of the invention, the step of performing correlation analysis on the output data of the hidden-layer neuron nodes of the first converged network specifically comprises: extracting the data matrix of the hidden-layer output data of the initial neural network, each column of the data matrix being the output vector of one hidden-layer neuron node; and performing principal component analysis on the data matrix.
In an embodiment of the invention, the principal component analysis comprises: obtaining eigenvalues and an intermediate variable from the data matrix; arranging the eigenvalues in descending order and computing the principal-component contribution rate of m hidden-layer neuron nodes; and, if the sum of the contribution rates of the m hidden-layer neuron nodes exceeds a fixed value, extracting the corresponding m-dimensional data matrix to produce the second converged neural network retaining m hidden-layer neuron nodes.
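The eigenvalue-and-contribution-rate selection described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the function name, the 0.85 default cut-off, and the use of an eigen-decomposition of the covariance matrix are all assumptions.

```python
import numpy as np

def select_principal_components(H, threshold=0.85):
    """Pick the smallest m whose cumulative principal-component
    contribution rate exceeds `threshold`.
    H: (N samples, n hidden nodes) hidden-layer output matrix."""
    Hc = H - H.mean(axis=0)                  # center the output matrix
    cov = np.cov(Hc, rowvar=False)           # covariance of node outputs
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigen-decomposition
    order = np.argsort(eigvals)[::-1]        # descending eigenvalue order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    rates = eigvals / eigvals.sum()          # contribution rate per component
    m = int(np.searchsorted(np.cumsum(rates), threshold) + 1)
    return m, Hc @ eigvecs[:, :m]            # m-dimensional reduced matrix
```

With strongly correlated hidden nodes, the retained dimension m drops well below the original node count n, which is exactly the merging effect the embodiment describes.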
In an embodiment of the invention, the step of performing correlation analysis on the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network specifically comprises: computing the correlation coefficient between the hidden-layer node outputs of the second converged network and the output-layer node outputs of the first converged network; converting the computed correlation coefficients into degrees of association; and deleting the hidden-layer neuron nodes whose degree of association is below a preset association threshold to obtain the optimized neural network.
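The compute-convert-delete sequence above can be sketched as below. The patent does not specify how the correlation coefficient is converted into a degree of association; using the absolute value of the Pearson coefficient is an assumption, as are the function name and the 0.3 default threshold.

```python
import numpy as np

def prune_hidden_nodes(H, y, assoc_threshold=0.3):
    """H: (N, m) hidden-layer outputs of the second converged network;
    y: (N,) output-layer outputs of the first converged network.
    Returns the indices of hidden nodes to keep."""
    keep = []
    for j in range(H.shape[1]):
        # Pearson correlation coefficient between node j and the output
        r = np.corrcoef(H[:, j], y)[0, 1]
        assoc = abs(r)                 # assumed degree-of-association rule
        if assoc >= assoc_threshold:   # below threshold: delete the node
            keep.append(j)
    return keep
```

A node whose output is uncorrelated with the network output falls below the threshold and is dropped, which optimizes the structure as the embodiment describes.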
In an embodiment of the invention, the step of training the optimized neural network again with the N training data samples to obtain a prediction neural network specifically comprises: training the optimized network again with the N training data samples, using the training function corresponding to the single-hidden-layer network, to obtain a third converged neural network; and judging whether the third converged network meets the pre-stored error precision: if so, the third converged network is the prediction neural network; if not, returning to the previous step until a third converged network meeting the pre-stored error precision is obtained.
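The retrain-and-judge loop above can be sketched as follows. `train_round` is a hypothetical callable standing in for one pass of the single-hidden-layer training function; the names, the default precision, and the round limit are all assumptions.

```python
def retrain_until_precision(train_round, goal=1e-3, max_rounds=200):
    """Keep retraining the optimized network until the pre-stored
    error precision `goal` is met; the run that meets it yields the
    third converged network, i.e. the prediction network."""
    for _ in range(max_rounds):
        err = train_round()   # one retraining pass, returns its error
        if err <= goal:
            return err        # precision met: stop retraining
    raise RuntimeError("error precision not reached within max_rounds")
```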
Another aspect of the present invention provides a prediction system for an error back-propagation neural network, comprising: a construction module for constructing an initial neural network, the initial network comprising an input layer, a hidden layer and an output layer, the hidden layer comprising n neuron nodes, where n is a positive integer greater than 1; a training module, connected with the construction module, for training the initial network with N pre-collected training data samples, using the training function corresponding to the single-hidden-layer network, to obtain a first converged neural network, where N is a positive integer greater than 1; a first analysis module, connected with the construction module and the training module, for performing correlation analysis on the output data of the hidden-layer neuron nodes of the first converged network and merging hidden-layer nodes whose correlation exceeds a preset threshold to produce a second converged neural network retaining m hidden-layer neuron nodes, where m is a positive integer greater than 1 and less than n; a second analysis module, connected with the training module and the first analysis module, for performing correlation analysis on the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network to obtain an optimized neural network; and a prediction module, connected with the second analysis module, for training the optimized network again with the N training data samples to obtain a prediction neural network.
In an embodiment of the invention, the training module is further configured to train the initial neural network by means of a pre-stored error function to obtain a converged neural network.
In an embodiment of the invention, the first analysis module specifically comprises: an extraction unit for extracting the data matrix of the hidden-layer output data of the initial neural network, each column of the data matrix being the output vector of one hidden-layer neuron node; and a principal component analysis unit, connected with the extraction unit, for performing principal component analysis on the data matrix.
In an embodiment of the invention, the principal component analysis unit is specifically configured to: obtain eigenvalues and an intermediate variable from the data matrix; arrange the eigenvalues in descending order and compute the principal-component contribution rate of m hidden-layer neuron nodes; and, if the sum of the contribution rates of the m hidden-layer neuron nodes exceeds a fixed value, extract the corresponding m-dimensional data matrix to produce the second converged neural network retaining m hidden-layer neuron nodes.
In an embodiment of the invention, the second analysis module specifically comprises: a computing unit for computing the correlation coefficient between the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network; a converting unit, connected with the computing unit, for converting the computed correlation coefficients into degrees of association; and a deletion unit, connected with the converting unit, for deleting the hidden-layer neuron nodes whose degree of association is below a preset association threshold to obtain the optimized neural network.
In an embodiment of the invention, the prediction module comprises: a training unit for training the optimized network again with the N training data samples, using the training function corresponding to the single-hidden-layer network, to obtain a third converged neural network; and a judging unit, connected with the training unit, for judging whether the third converged network meets the pre-stored error precision: if so, the third converged network is the prediction neural network; if not, the training unit is invoked again until a third converged network meeting the pre-stored error precision is obtained.
In an embodiment of the invention, the prediction module further comprises a storage unit for storing the prediction neural network once the third converged network is determined to be the prediction neural network.
Another aspect of the invention further provides a server, the server comprising the above prediction system for an error back-propagation neural network.
In an embodiment of the invention, the server is used for next-time-instant prediction with a back-propagation single-hidden-layer neural network for quantities with nonlinear characteristics.
As described above, the prediction method, system and server for an error back-propagation neural network of the present invention have the following beneficial effects:
The prediction method, system and server of the present invention are an improved BP-neural-network approach studied in practical applications such as wind-speed forecasting for hazardous-chemical leakage accidents in chemical industrial parks. Using wind-speed data collected on site as samples, the "principal component analysis plus correlation coefficient" method determines the number of hidden-layer nodes of a single-hidden-layer network, compensating for the shortcomings of using either method alone. An initially large network is trained to convergence; then, in the vertical direction, principal component analysis is applied to the output data of the hidden-layer neuron nodes and strongly correlated nodes are merged, reducing the node count; then, in the horizontal direction, the remaining hidden-layer node data and the output data of the output-layer nodes undergo correlation analysis, the nodes are ranked by descending correlation coefficient, nodes whose correlation falls below a threshold are removed, and nodes with large coefficients are retained, reducing horizontal node redundancy. The method thus compensates for the shortcomings of using principal component analysis or correlation-coefficient analysis alone, and has a sounder theoretical basis.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the error back-propagation neural network of the present invention.
Fig. 2 is a flow chart of the prediction method for the error back-propagation neural network of the present invention in one embodiment.
Fig. 3 is a flow chart of step S3 of the prediction method.
Fig. 4 is a flow chart of step S4 of the prediction method.
Fig. 5 is a flow chart of step S5 of the prediction method.
Fig. 6 is a schematic structural diagram of the prediction system for the error back-propagation neural network of the present invention in one embodiment.
Fig. 7 is a schematic structural diagram of the prediction module in the prediction system.
Fig. 8 is a schematic structural diagram of the server of the present invention in one embodiment.
Description of element numbers
1 Prediction system of the error back-propagation neural network
11 Construction module
12 Training module
13 First analysis module
14 Second analysis module
15 Prediction module
131 Extraction unit
132 Principal component analysis unit
141 Computing unit
142 Converting unit
143 Deletion unit
151 Training unit
152 Judging unit
153 Storage unit
2 Error back-propagation neural network
21 Input layer
22 Hidden layer
23 Output layer
3 Server
S1–S5 Steps
S31–S32 Steps
S41–S43 Steps
S51–S53 Steps
Detailed description of the embodiments
The embodiments of the present invention are described below by way of specific examples; those skilled in the art can readily understand other advantages and effects of the invention from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways from different viewpoints and for different applications without departing from the spirit of the invention. It should be noted that, where they do not conflict, the features of the following embodiments may be combined with one another.
It should be noted that the drawings provided with the following embodiments only illustrate the basic conception of the invention schematically; they show only the components related to the invention, rather than being drawn according to the number, shape and size of the components in an actual implementation, in which the type, quantity and proportion of each component may vary arbitrarily and the component layout may be more complicated.
The technical principle of the prediction method, system and server for an error back-propagation neural network of the present invention is as follows:
This patent focuses on an improved method for determining the number of hidden-layer nodes of a back-propagation neural network, and therefore makes the following assumptions. First, the emergency scenario in which the neural network serves as a prediction tool is known, for example a hazardous-chemical leakage scene. The research object under this scenario is also known; for example, wind speed strongly affects the propagation rate and dispersion range of a hazardous-chemical leak, so using historical wind-speed data to predict the wind speed at the next time instant is necessary, and the sample data are known. The input dimension of the samples is assumed known, so the number of input-layer neuron nodes is known. Finally, the number of target variables is known, so the number of output-layer nodes is known.
The principle of the back-propagation neural network can be summarized as follows: it is a feed-forward neural network that uses the error back-propagation algorithm to revise the connection weights between neurons and the neuron thresholds. Its characteristic is that the data signal propagates forward from the input layer through the hidden layer to the output layer; if the output error of the output layer does not meet the preset error threshold, the error is propagated backward through the error function, the connection weights and neuron thresholds of the network are adjusted accordingly with the relevant learning algorithm, and this training process is repeated so that the network output approaches the desired output, until the output error meets the preset threshold.
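The forward-then-backward cycle just described can be sketched for a single-hidden-layer network with sigmoid hidden units and a linear scalar output. This is an illustrative stochastic-gradient sketch under stated assumptions, not the patent's training function; the function name, the learning rate, and the omission of thresholds are assumptions.

```python
import numpy as np

def bp_epoch(X, t, W, v, lr=0.1):
    """One pass of error back-propagation: W (hidden weights, shape n x d),
    v (output weights, shape n), sigmoid hidden layer, linear output.
    Updates W and v in place; returns the summed squared error."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    total = 0.0
    for x, target in zip(X, t):
        h = sigmoid(W @ x)                            # forward: hidden outputs
        y = v @ h                                     # forward: linear output
        e = y - target                                # output error
        total += 0.5 * e * e
        grad_v = e * h                                # backward: output layer
        grad_W = np.outer(e * v * h * (1.0 - h), x)   # backward: hidden layer
        v -= lr * grad_v                              # adjust connection weights
        W -= lr * grad_W
    return total
```

Repeating the epoch drives the output error down toward the preset threshold, which is the stopping condition described above.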
Because a single-hidden-layer network can approximate any nonlinear continuous function with arbitrary accuracy by increasing or decreasing the number of hidden-layer nodes, a relatively large single-hidden-layer network is first constructed and trained on the collected training data samples, under a preset convergence-precision condition, until it converges.
Based on the converged initial network from the previous step, the output data matrix of the hidden-layer nodes is extracted according to the transfer function of the nodes, each column representing the output vector of one hidden-layer node. PCA (principal component analysis) is applied to this matrix; the eigenvalues are arranged in descending order and the leading dimensions corresponding to the larger eigenvalues are extracted, achieving the goal of merging and dimensionality reduction.
Based on the hidden-layer nodes retained in the previous step, CC (correlation coefficient) analysis is performed between the data of these nodes and the output data of the output-layer nodes (a multidimensional output can be treated by analogy with a one-dimensional output): the correlation coefficient between each hidden-layer node's output data and the output-layer node's output data is computed, and from it the degree of association between each hidden-layer node and the output node.
The degrees of association are arranged in descending order to obtain an association sequence; a larger degree of association indicates that the node has a greater influence on the output, and vice versa. When the degree of association is below a certain threshold, the hidden-layer node is considered to have a negligible influence on the output and is deleted, thereby optimizing the network structure.
Having obtained a network with a determined number of hidden-layer nodes, the training sample data are used again to retrain the network until it meets the error precision, and the network is verified with validation samples.
Embodiment one
The present embodiment provides a prediction method for an error back-propagation neural network, comprising the following steps:
Constructing an initial neural network, the initial network comprising an input layer, a hidden layer and an output layer, the hidden layer comprising n neuron nodes, where n is a positive integer greater than 1;
Training the initial neural network with N pre-collected training data samples, using the training function corresponding to the single-hidden-layer network, to obtain a first converged neural network, where N is a positive integer greater than 1;
Performing correlation analysis on the output data of the hidden-layer neuron nodes of the first converged network, and merging hidden-layer nodes whose correlation exceeds a preset threshold to produce a second converged neural network retaining m hidden-layer neuron nodes, where m is a positive integer greater than 1 and less than n;
Performing correlation analysis on the output data of the hidden-layer neuron nodes of the second converged network and the output data of the output-layer neuron nodes of the first converged network to obtain an optimized neural network;
Training the optimized neural network again with the N training data samples to obtain a prediction neural network.
The prediction method for the error back-propagation neural network of the present embodiment is described in detail below with reference to the drawings. The error back-propagation neural network is a feed-forward network that uses the error back-propagation algorithm to revise the connection weights between neurons and the neuron thresholds; its characteristic is that data propagate forward from the input layer through the hidden layer to the output layer, and if the output error of the output layer does not meet the preset error threshold, the error is propagated backward through the error function, the connection weights of the network and the thresholds of the neuron nodes are adjusted accordingly with the relevant learning algorithm, and the training process is repeated so that the network output approaches the desired output, until the output error meets the preset threshold. Referring to Fig. 1, a schematic structural diagram of the error back-propagation neural network is shown. As shown in Fig. 1, the error back-propagation neural network 2 comprises an input layer 21, a hidden layer 22 and an output layer 23; the circles in Fig. 1 represent the neuron nodes contained in each layer. w_ij is the connection weight between the i-th hidden-layer neuron node and the j-th input-layer neuron node, i.e. the weight between the two adjacent layers; v_kj is the connection weight between the k-th output-layer neuron node and the j-th hidden-layer neuron node; in_j is the input of the j-th hidden-layer neuron node; out_j is the output of the j-th hidden-layer neuron node; I_j is the input of the j-th output-layer neuron node; y_j is the output of the j-th output-layer neuron node; θ is the neuron threshold; f is the transfer function of the hidden-layer neuron nodes, which is as follows:
f(x) = 1 / (1 + e^(-x))    Formula (1)
and g is the transfer function of the output layer neuron nodes, given by:
g(x) = x    Formula (2)
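As a minimal illustration (the function and variable names are chosen here for the sketch and do not come from the patent), a single forward pass through the network of Fig. 1 with the transfer functions of formulas (1) and (2) could look like:

```python
import math

def forward(x, W, theta_h, V, theta_o):
    """One forward pass through the single-hidden-layer network of Fig. 1.

    x: input vector; W[i] holds the weights w_ij from every input node j
    to hidden node i; V[k] holds the weights v_kj from every hidden node j
    to output node k; theta_h / theta_o are the neuron thresholds.
    """
    f = lambda s: 1.0 / (1.0 + math.exp(-s))   # formula (1), hidden layer
    g = lambda s: s                            # formula (2), output layer
    out = [f(sum(w * xj for w, xj in zip(row, x)) - th)
           for row, th in zip(W, theta_h)]     # hidden outputs out_j
    y = [g(sum(v * oj for v, oj in zip(row, out)) - th)
         for row, th in zip(V, theta_o)]       # network outputs y_k
    return out, y
```

With zero hidden weights and thresholds, each hidden node outputs f(0) = 0.5, which the identity output layer then combines linearly.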
Refer to Fig. 2, which shows a flow diagram of the prediction method of the BP neural network in one embodiment. As shown in Fig. 2, the prediction method of the BP neural network comprises the following steps:
S1: Build an initial neural network. The structure of the initial neural network is the same as that of the BP neural network shown in Fig. 1; it likewise comprises an input layer, a hidden layer and an output layer. In this embodiment, the hidden layer of the initial neural network comprises n neuron nodes and the output layer comprises one neuron node, where n is a positive integer greater than 1.
S2: Using the training function corresponding to the single-hidden-layer neural network, train the initial neural network with N training data samples collected in advance to obtain a first convergence neural network, where N is a positive integer greater than 1. In this embodiment, the initial neural network is trained to the first convergence neural network with the pre-stored error function

E_p = (1/2) Σ_k (t_k - y_k)²    Formula (3)

where E_p is the error, t_k is the actual value of the training data sample, and y_k is the network output for the training data sample.
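The error function of formula (3) is straightforward to state in code (a sketch; the function name is chosen here for illustration):

```python
def output_error(targets, outputs):
    """E_p = 1/2 * sum_k (t_k - y_k)^2, the error function of formula (3)."""
    return 0.5 * sum((t - y) ** 2 for t, y in zip(targets, outputs))
```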
S3: Based on the first convergence neural network obtained in step S2, perform correlation analysis on the output data of the hidden layer neuron nodes of the first convergence neural network, and merge the hidden layer neuron nodes whose correlation exceeds a preset correlation threshold, producing a second convergence neural network that retains m hidden layer neuron nodes and thereby reducing the number of neuron nodes; m is a positive integer greater than 1 and less than n. Refer to Fig. 3, which shows a flow diagram of step S3. As shown in Fig. 3, step S3 comprises the following steps:
S31: According to the transfer functions f and g of the neuron nodes in Fig. 1, i.e. formulas (1) and (2), extract the data matrix of the output data of the hidden layer of the initial neural network, X = [X_1, X_2, X_3, ..., X_n], where each column X_i of X is the output vector of the i-th neuron node of the hidden layer.
S32: Perform principal component analysis on the data matrix X. The principal component analysis comprises: obtaining the eigenvalues λ and an intermediate variable P from X, arranging the eigenvalues in descending order, and computing the principal component contribution rates of the hidden layer neuron nodes. In this embodiment, the contribution rate of the principal component of the i-th neuron node is

G_i = λ_i / (λ_1 + λ_2 + λ_3 + ... + λ_n)    Formula (4)

If the sum of the principal component contribution rates of the first m hidden layer neuron nodes exceeds a fixed value (the preset correlation threshold, set to 85% in this embodiment), the corresponding m-dimensional data matrix Z is extracted, that is, the matrix corresponding to the m largest eigenvalues, comprising the m hidden layer neuron nodes with the largest eigenvalues. This produces the second convergence neural network that retains m hidden layer neuron nodes. In this embodiment, the intermediate variable of the i-th neuron node is P_i = (X_i - μ_i)/δ_i, where μ_i is the mean and δ_i the standard deviation of X_i; and Z_i = e_i P_i, where e_i is the eigenvector of the i-th neuron node of the hidden layer.
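The node-selection rule of step S32 (sort the eigenvalues, accumulate the contribution rates of formula (4), and keep the first m nodes whose cumulative contribution exceeds the 85% threshold) can be sketched as follows. The eigenvalues are assumed to come from an external eigendecomposition of X; the names are illustrative:

```python
def select_principal_nodes(eigenvalues, threshold=0.85):
    """Sort the eigenvalues in descending order, compute the contribution
    rates G_i = lambda_i / sum(lambda) of formula (4), and return the
    smallest m whose cumulative contribution rate exceeds the preset
    threshold (85% in the embodiment), with that cumulative rate."""
    lams = sorted(eigenvalues, reverse=True)
    total = sum(lams)
    cumulative = 0.0
    for m, lam in enumerate(lams, start=1):
        cumulative += lam / total        # add G_i for the next node
        if cumulative > threshold:
            return m, cumulative
    return len(lams), cumulative
```

For eigenvalues [5, 3, 1, 0.5, 0.5], the cumulative rates are 0.5, 0.8, 0.9, ..., so m = 3 nodes already carry more than 85% of the variance.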
S4: Based on the second convergence neural network obtained in step S3, perform correlation analysis between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network to obtain an optimized neural network. Refer to Fig. 4, which shows a detailed flow diagram of step S4. As shown in Fig. 4, step S4 comprises the following steps:
S41, the output data and first calculating the neuron node of hidden layer in described second convergence neural network restrain the correlation coefficient of the output data of the neuron node of output layer in neural network.In the present embodiment, the output data and first utilizing CC (correlation coefficient) analytical to calculate the neuron node of hidden layer in described second convergence neural network restrain the correlation coefficient of the output data of output layer node in neural network.In the present embodiment, the output data Z of neuron node of hidden layer in described second convergence neural network irepresent, the output data Y=(y of output layer node in initial neural network 1, y 2..., y n) represent.Z is calculated according to formula (5) iwith the correlation coefficient of Y.
ζ i ( k ) = min i min k | y k - Z i ( k ) | + ρ max i max k | y k - Z i ( k ) | | y k - Z i ( k ) | + max i max k | y k - Z i ( k ) | Formula (5)
Wherein, ρ represents resolution ratio, ρ ∈ (0 ,+∞), and the less resolution characteristic of ρ value is larger, and general span is [0,1], in the present embodiment, and ρ=0.5.
S42: Convert the computed relational coefficients into degrees of association. Because the relational coefficients of Z_i and Y computed in step S41 are rather dispersed, which is unfavorable for comparison, they need to be converted into degrees of association. In this embodiment, the relational coefficients computed in step S41 are converted into the corresponding degrees of association by formula (6):

γ_i = (1/N) Σ_{k=1}^{N} ζ_i(k)    Formula (6)
S43: Arrange the degrees of association computed in step S42 in descending order to obtain an association sequence, and delete the hidden layer neuron nodes whose degree of association is less than a preset association threshold ε ∈ (0, 1) to obtain the optimized neural network. The larger the degree of association, the greater the influence of the corresponding neuron node on the output, and vice versa. Only p hidden layer neuron nodes remain in the optimized neural network, where p is a positive integer greater than 1 and less than n.
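Steps S41 to S43 can be sketched together as follows. Formula (5) gives the relational coefficient and formula (6) its average over the N samples; the denominator term ρ·max_i max_k |y_k - Z_i(k)| follows the standard form of this coefficient, and the function and variable names are chosen here for illustration:

```python
def degrees_of_association(Y, Z, rho=0.5):
    """Compute gamma_i (formula (6)) from the relational coefficients
    zeta_i(k) (formula (5)) between the network output Y = (y_1..y_N)
    and the output Z[i] of each retained hidden layer neuron node."""
    diffs = [[abs(y - z) for y, z in zip(Y, Zi)] for Zi in Z]  # |y_k - Z_i(k)|
    d_min = min(min(row) for row in diffs)                     # min_i min_k
    d_max = max(max(row) for row in diffs)                     # max_i max_k
    gammas = []
    for row in diffs:
        zetas = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        gammas.append(sum(zetas) / len(zetas))                 # formula (6)
    return gammas

def prune_nodes(gammas, epsilon):
    """Step S43: keep only the nodes whose degree of association is at
    least the preset threshold epsilon in (0, 1)."""
    return [i for i, g in enumerate(gammas) if g >= epsilon]
```

A node whose output matches Y exactly receives the maximal degree of association 1, while a node far from Y receives a smaller value and is pruned.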
S5: Train the optimized neural network again with the N training data samples to obtain the prediction neural network. Refer to Fig. 5, which shows a detailed flow diagram of step S5. As shown in Fig. 5, step S5 comprises:
S51: Using the training function corresponding to the single-hidden-layer neural network, train the optimized neural network again with the N training data samples to obtain a third convergence neural network;
S52: Judge whether the third convergence neural network meets the pre-stored error precision. If so, the third convergence neural network is the prediction neural network, and the method proceeds to step S53; if not, return to step S51 and train the optimized neural network again, until a third convergence neural network meeting the pre-stored error precision is obtained.
S53: After step S52 judges that the third convergence neural network obtained in step S51 is the prediction neural network, store the prediction neural network.
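The retrain-and-check loop of steps S51 to S53 amounts to the following control flow (a sketch; `train_step` and `evaluate_error` stand in for the actual training function and the error function of formula (3), and are not names from the patent):

```python
def retrain_until_precision(train_step, evaluate_error, precision, max_rounds=100):
    """Sketch of step S5: retrain the optimized network until its output
    error meets the pre-stored error precision, then report how many
    rounds were needed (the resulting network would then be stored)."""
    for round_ in range(1, max_rounds + 1):
        train_step()                       # S51: train -> third convergence network
        if evaluate_error() <= precision:  # S52: does it meet the error precision?
            return round_                  # S53: accept as the prediction network
    raise RuntimeError("did not reach the required error precision")
```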
The prediction method of the BP neural network described in this embodiment takes hazardous-chemical leakage accidents in chemical industry parks as its background; it is an improved BP neural network method for practical applications such as wind speed prediction. With wind speed data collected on site as samples, a combined "principal component analysis - correlation coefficient" approach is adopted to determine the number of hidden layer nodes of the single-hidden-layer neural network, making up for the shortcomings of using principal component analysis or correlation coefficient analysis alone. An initially larger network is first trained to convergence. In the vertical direction, principal component analysis is applied to the output data of the hidden layer neuron nodes of the network, and strongly correlated nodes are merged, reducing the number of nodes. Then, in the horizontal direction, correlation analysis is performed between the remaining hidden layer node data and the output data of the output layer node; the nodes are arranged in descending order of relational coefficient, nodes whose correlation falls below a certain threshold are removed, and nodes with large relational coefficients are retained, reducing horizontal node redundancy. This method compensates for the shortcomings of using either principal component analysis or correlation coefficient analysis alone, and has a sounder theoretical basis.
Embodiment two
This embodiment provides a prediction system 1 of an error back propagation neural network. Refer to Fig. 6: the prediction system 1 comprises a construction module 11, a training module 12, a first analysis module 13, a second analysis module 14 and a prediction module 15.
The construction module 11 is used to build an initial neural network. The structure of the initial neural network is the same as that of the BP neural network shown in Fig. 1; it likewise comprises an input layer, a hidden layer and an output layer. In this embodiment, the hidden layer of the initial neural network comprises n neuron nodes and the output layer comprises one neuron node, where n is a positive integer greater than 1.
The training module 12, connected to the construction module 11, is used to train the initial neural network with N training data samples collected in advance, via the training function corresponding to the single-hidden-layer neural network, to obtain a first convergence neural network, where N is a positive integer greater than 1. In this embodiment, the training module 12 trains the initial neural network to the first convergence neural network with the pre-stored error function

E_p = (1/2) Σ_k (t_k - y_k)²    Formula (3)

where E_p is the error, t_k is the actual value of the training data sample, and y_k is the network output for the training data sample.
The first analysis module 13, connected to the construction module 11 and the training module 12, is used to perform, based on the first convergence neural network obtained by the training module 12, correlation analysis on the output data of the hidden layer neuron nodes of the first convergence neural network, and to merge the hidden layer neuron nodes whose correlation exceeds the preset correlation threshold, producing a second convergence neural network that retains m hidden layer neuron nodes and thereby reducing the number of neuron nodes; m is a positive integer greater than 1 and less than n. Continuing with Fig. 6, the first analysis module 13 comprises an extraction unit 131 and a principal component analysis unit 132.
The extraction unit 131 is used to extract, according to the transfer functions f and g of the neuron nodes in Fig. 1, i.e. formulas (1) and (2) given in embodiment one, the data matrix of the output data of the hidden layer of the initial neural network, X = [X_1, X_2, X_3, ..., X_n], where each column X_i of X is the output vector of the i-th neuron node of the hidden layer.
The principal component analysis unit 132, connected to the extraction unit 131, is used to perform principal component analysis on the data matrix X. In this embodiment, the principal component analysis unit 132 performs the following functions: obtain the eigenvalues λ and an intermediate variable P from X, arrange the eigenvalues in descending order, and compute the principal component contribution rates of the hidden layer neuron nodes, the contribution rate of the i-th neuron node being

G_i = λ_i / (λ_1 + λ_2 + λ_3 + ... + λ_n)    Formula (4)

If the sum of the principal component contribution rates of the first m hidden layer neuron nodes exceeds a fixed value (set to 85% in this embodiment), the corresponding m-dimensional data matrix Z is extracted, that is, the matrix corresponding to the m largest eigenvalues, producing the second convergence neural network that retains m hidden layer neuron nodes. In this embodiment, the intermediate variable of the i-th neuron node is P_i = (X_i - μ_i)/δ_i, where μ_i is the mean and δ_i the standard deviation of X_i; and Z_i = e_i P_i, where e_i is the eigenvector of the i-th neuron node of the hidden layer.
The second analysis module 14, connected to the training module 12 and the first analysis module 13, is used to perform, based on the second convergence neural network obtained by the first analysis module 13, correlation analysis between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network, to obtain an optimized neural network with p hidden layer neuron nodes, where p is a positive integer greater than 1 and less than n. Continuing with Fig. 6, the second analysis module 14 comprises a computation unit 141, a conversion unit 142 and a deletion unit 143.
The computation unit 141 is used to compute the relational coefficients between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer node of the first convergence neural network. In this embodiment, the computation unit 141 uses correlation coefficient (CC) analysis for this computation. The output data of the i-th hidden layer neuron node of the second convergence neural network are denoted Z_i, and the output data of the output layer node are denoted Y = (y_1, y_2, ..., y_N). The relational coefficient of Z_i and Y is computed according to formula (5):

ζ_i(k) = [min_i min_k |y_k - Z_i(k)| + ρ · max_i max_k |y_k - Z_i(k)|] / [|y_k - Z_i(k)| + ρ · max_i max_k |y_k - Z_i(k)|]    Formula (5)

where ρ is the resolution coefficient, ρ ∈ (0, +∞); the smaller the value of ρ, the greater the resolving power. Its value is generally taken in [0, 1]; in this embodiment, ρ = 0.5.
The conversion unit 142, connected to the computation unit 141, is used to convert the computed relational coefficients into degrees of association. Because the relational coefficients of Z_i and Y computed by the computation unit 141 are rather dispersed, which is unfavorable for comparison, they need to be converted into degrees of association. In this embodiment, the relational coefficients computed by the computation unit 141 are converted into the corresponding degrees of association by the pre-stored formula (6):

γ_i = (1/N) Σ_{k=1}^{N} ζ_i(k)    Formula (6)
The deletion unit 143, connected to the conversion unit 142, is used to arrange the degrees of association computed by the conversion unit 142 in descending order to obtain an association sequence, and to delete the hidden layer neuron nodes whose degree of association is less than a preset association threshold ε ∈ (0, 1) to obtain the optimized neural network. The larger the degree of association, the greater the influence of the corresponding neuron node on the output, and vice versa. Only p hidden layer neuron nodes remain in the optimized neural network.
The prediction module 15, connected to the second analysis module 14, is used to train the optimized neural network again with the N training data samples to obtain the prediction neural network. Refer to Fig. 7, which shows a schematic diagram of the structure of the prediction module. As shown in Fig. 7, the prediction module 15 comprises a training unit 151, a judgment unit 152 and a storage unit 153.
The training unit 151 trains the optimized neural network again with the N training data samples, via the training function corresponding to the single-hidden-layer neural network, to obtain a third convergence neural network.
The judgment unit 152, connected to the training unit 151, is used to judge whether the third convergence neural network meets the pre-stored error precision. If so, the third convergence neural network is the prediction neural network, and the storage unit 153 is invoked to store it; if not, the training unit 151 is invoked again to retrain the optimized neural network, until a third convergence neural network meeting the pre-stored error precision is obtained.
This embodiment also provides a server 3. Refer to Fig. 8, which shows a schematic diagram of the structure of the server in one embodiment. As shown in Fig. 8, the server 3 comprises the prediction system 1 of the error back propagation neural network described above, and all functions of the prediction system 1 can be realized by a processor. The server 3 is used to perform prediction at the next moment with the single-hidden-layer back propagation neural network having nonlinear characteristics.
In summary, the prediction method, system and server of the error back propagation neural network of the present invention take hazardous-chemical leakage accidents in chemical industry parks as their background; they constitute an improved BP neural network method for practical applications such as wind speed prediction. With wind speed data collected on site as samples, a combined "principal component analysis - correlation coefficient" approach is adopted to determine the number of hidden layer nodes of the single-hidden-layer neural network, making up for the shortcomings of using principal component analysis or correlation coefficient analysis alone: an initially larger network is first trained to convergence; in the vertical direction, principal component analysis is applied to the output data of the hidden layer neuron nodes and strongly correlated nodes are merged, reducing the number of nodes; then, in the horizontal direction, correlation analysis is performed between the remaining hidden layer node data and the output data of the output layer node, the nodes are arranged in descending order of relational coefficient, nodes whose correlation falls below a certain threshold are removed, and nodes with large relational coefficients are retained, reducing horizontal node redundancy. This method compensates for the shortcomings of using either principal component analysis or correlation coefficient analysis alone and has a sounder theoretical basis. The present invention therefore effectively overcomes various shortcomings of the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by those with ordinary knowledge in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (15)

1. A prediction method of an error back propagation neural network, characterized in that the prediction method comprises the following steps:
Building an initial neural network, the initial neural network comprising an input layer, a hidden layer and an output layer, the hidden layer comprising n neuron nodes, where n is a positive integer greater than 1;
Training the initial neural network with N training data samples collected in advance, via the training function corresponding to the single-hidden-layer neural network, to obtain a first convergence neural network, where N is a positive integer greater than 1;
Performing correlation analysis on the output data of the hidden layer neuron nodes of the first convergence neural network, and merging the hidden layer neuron nodes whose correlation exceeds a preset correlation threshold, to produce a second convergence neural network that retains m hidden layer neuron nodes, where m is a positive integer greater than 1 and less than n;
Performing correlation analysis between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network, to obtain an optimized neural network;
Training the optimized neural network again with the N training data samples to obtain a prediction neural network.
2. The prediction method of the error back propagation neural network according to claim 1, characterized in that the step of training the initial neural network with the N training data samples collected in advance further comprises training the initial neural network to the first convergence neural network with a pre-stored error function.
3. The prediction method of the error back propagation neural network according to claim 1, characterized in that the step of performing correlation analysis on the output data of the hidden layer neuron nodes of the first convergence neural network comprises:
Extracting the data matrix of the output data of the hidden layer of the initial neural network, each column of the data matrix being the output vector of one hidden layer neuron node;
Performing principal component analysis on the data matrix.
4. The prediction method of the error back propagation neural network according to claim 3, characterized in that the principal component analysis comprises:
Obtaining eigenvalues and an intermediate variable from the data matrix;
Arranging the eigenvalues in descending order, and computing the principal component contribution rates of the hidden layer neuron nodes;
If the sum of the principal component contribution rates of the first m hidden layer neuron nodes exceeds a fixed value, extracting the corresponding m-dimensional data matrix to produce the second convergence neural network retaining m hidden layer neuron nodes.
5. The prediction method of the error back propagation neural network according to claim 1, characterized in that the step of performing correlation analysis between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network comprises:
Computing the relational coefficients between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer node of the first convergence neural network;
Converting the computed relational coefficients into degrees of association;
Deleting the hidden layer neuron nodes whose degree of association is less than a preset association threshold, to obtain the optimized neural network.
6. The prediction method of the error back propagation neural network according to claim 1, characterized in that the step of training the optimized neural network again with the N training data samples to obtain the prediction neural network comprises:
Training the optimized neural network again with the N training data samples, via the training function corresponding to the single-hidden-layer neural network, to obtain a third convergence neural network;
Judging whether the third convergence neural network meets a pre-stored error precision; if so, the third convergence neural network is the prediction neural network; if not, returning to the previous step until a third convergence neural network meeting the pre-stored error precision is obtained.
7. A prediction system of an error back propagation neural network, characterized in that the prediction system comprises:
A construction module for building an initial neural network, the initial neural network comprising an input layer, a hidden layer and an output layer, the hidden layer comprising n neuron nodes, where n is a positive integer greater than 1;
A training module, connected to the construction module, for training the initial neural network with N training data samples collected in advance, via the training function corresponding to the single-hidden-layer neural network, to obtain a first convergence neural network, where N is a positive integer greater than 1;
A first analysis module, connected to the construction module and the training module, for performing correlation analysis on the output data of the hidden layer neuron nodes of the first convergence neural network and merging the hidden layer neuron nodes whose correlation exceeds a preset correlation threshold, to produce a second convergence neural network that retains m hidden layer neuron nodes, where m is a positive integer greater than 1 and less than n;
A second analysis module, connected to the training module and the first analysis module, for performing correlation analysis between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network, to obtain an optimized neural network;
A prediction module, connected to the second analysis module, for training the optimized neural network again with the N training data samples to obtain a prediction neural network.
8. The prediction system of the error back propagation neural network according to claim 7, characterized in that the training module is further adapted to train the initial neural network to the convergence neural network with a pre-stored error function.
9. The prediction system of the error back propagation neural network according to claim 7, characterized in that the first analysis module comprises:
An extraction unit for extracting the data matrix of the output data of the hidden layer of the initial neural network, each column of the data matrix being the output vector of one hidden layer neuron node;
A principal component analysis unit, connected to the extraction unit, for performing principal component analysis on the data matrix.
10. The prediction system of the error back propagation neural network according to claim 9, characterized in that the principal component analysis unit is adapted to:
Obtain eigenvalues and an intermediate variable from the data matrix;
Arrange the eigenvalues in descending order, and compute the principal component contribution rates of the hidden layer neuron nodes;
If the sum of the principal component contribution rates of the first m hidden layer neuron nodes exceeds a fixed value, extract the corresponding m-dimensional data matrix to produce the second convergence neural network retaining m hidden layer neuron nodes.
11. The prediction system of the error back propagation neural network according to claim 7, characterized in that the second analysis module comprises:
A computation unit for computing the relational coefficients between the output data of the hidden layer neuron nodes of the second convergence neural network and the output data of the output layer neuron node of the first convergence neural network;
A conversion unit, connected to the computation unit, for converting the computed relational coefficients into degrees of association;
A deletion unit, connected to the conversion unit, for deleting the hidden layer neuron nodes whose degree of association is less than a preset association threshold, to obtain the optimized neural network.
12. The prediction system of the error back propagation neural network according to claim 11, characterized in that the prediction module comprises:
A training unit for training the optimized neural network again with the N training data samples, via the training function corresponding to the single-hidden-layer neural network, to obtain a third convergence neural network;
A judgment unit, connected to the training unit, for judging whether the third convergence neural network meets a pre-stored error precision; if so, the third convergence neural network is the prediction neural network; if not, invoking the training unit again until a third convergence neural network meeting the pre-stored error precision is obtained.
13. The prediction system of the error back propagation neural network according to claim 12, characterized in that the prediction module further comprises a storage unit for storing the prediction neural network when the third convergence neural network is the prediction neural network.
14. A server, characterized in that the server comprises:
The prediction system of the error back propagation neural network according to any one of claims 7-13.
15. The server according to claim 14, characterized in that the server is used to perform prediction at the next moment with the single-hidden-layer back propagation neural network having nonlinear characteristics.
CN201510925606.0A 2015-12-11 2015-12-11 Prediction method and system for error back propagation neural network and server Pending CN105373830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510925606.0A CN105373830A (en) 2015-12-11 2015-12-11 Prediction method and system for error back propagation neural network and server


Publications (1)

Publication Number Publication Date
CN105373830A 2016-03-02

Family

ID=55376011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510925606.0A Pending CN105373830A (en) 2015-12-11 2015-12-11 Prediction method and system for error back propagation neural network and server

Country Status (1)

Country Link
CN (1) CN105373830A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1662085A * 2004-02-28 2005-08-31 LG Electronics (China) R&D Center Co., Ltd. Content list displaying method for mobile telephone content server
CN201229601Y * 2008-05-09 2009-04-29 Huangpu Customs District of the People's Republic of China Automatic channel clearance inspection system
CN102594927A * 2012-04-05 2012-07-18 Gao Hanzhong Neural-network-based cloud server structure
CN102663513A * 2012-03-13 2012-09-12 North China Electric Power University Combined forecast modeling method for wind farm power using grey correlation analysis


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516132A * 2016-06-15 2017-12-26 Kneron Inc. Simplification device and simplification method for artificial neural networks
CN106446352A * 2016-08-31 2017-02-22 Zhengzhou University of Aeronautics Multi-response-parameter optimization method for metalized polypropylene film capacitor
WO2018068421A1 * 2016-10-11 2018-04-19 Guangzhou Shiyuan Electronics Co., Ltd. Method and device for optimizing neural network
CN107154258A * 2017-04-10 2017-09-12 Harbin Engineering University Voiceprint recognition method based on negatively correlated incremental learning
WO2019041752A1 * 2017-08-31 2019-03-07 Jiangsu Kanion Pharmaceutical Co., Ltd. Process parameter-based result feedback method and apparatus
CN107590565A * 2017-09-08 2018-01-16 Beijing Shougang Automation Information Technology Co., Ltd. Method and device for building a building energy consumption prediction model
CN107590759A * 2017-09-27 2018-01-16 Wuhan Qinghe Technology Co., Ltd. Big-data-based student behavior assessment system
CN109255432B * 2018-08-22 2024-04-30 China Ping An Life Insurance Co., Ltd. Neural network model construction method and device, storage medium, and electronic device
CN109255432A * 2018-08-22 2019-01-22 China Ping An Life Insurance Co., Ltd. Neural network model construction method and device, storage medium, and electronic device
CN110059858A * 2019-03-15 2019-07-26 Shenzhen OneConnect Smart Technology Co., Ltd. Server resource prediction method and device, computer equipment, and storage medium
WO2020228796A1 * 2019-05-15 2020-11-19 Huawei Technologies Co., Ltd. Systems and methods for wireless signal configuration by a neural network
US11533115B2 2019-05-15 2022-12-20 Huawei Technologies Co., Ltd. Systems and methods for wireless signal configuration by a neural network
US12021572B2 2019-05-15 2024-06-25 Huawei Technologies Co., Ltd. Systems and methods for wireless signal configuration by a neural network
CN110705821A * 2019-08-23 2020-01-17 Shanghai Science and Technology Development Co., Ltd. Hot topic prediction method, device, terminal, and medium based on multiple evaluation dimensions
CN113777648A * 2021-09-09 2021-12-10 Nanjing University of Aeronautics and Astronautics Imaging method based on random coding and a neural network detector, and gamma camera
CN113777648B * 2021-09-09 2024-04-12 Nanjing University of Aeronautics and Astronautics Imaging method based on random coding and a neural network detector, and gamma camera

Similar Documents

Publication Publication Date Title
CN105373830A (en) Prediction method and system for error back propagation neural network and server
CN108900546A Method and apparatus for LSTM-based time-series network anomaly detection
CN106453293A (en) Network security situation prediction method based on improved BPNN (back propagation neural network)
Bao et al. A data-driven framework for error estimation and mesh-model optimization in system-level thermal-hydraulic simulation
CN107786369A Power communication network security situation awareness and prediction method based on IRT hierarchical analysis and LSTM
Shiri et al. Estimation of daily suspended sediment load by using wavelet conjunction models
CN108053052B Intelligent monitoring system for oil and gas leakage rate of tank trucks
CN110445126A Non-intrusive load disaggregation method and system
CN110716012B (en) Oil gas concentration intelligent monitoring system based on field bus network
CN110321361A (en) Test question recommendation and judgment method based on improved LSTM neural network model
Ghorbani et al. A hybrid artificial neural network and genetic algorithm for predicting viscosity of Iranian crude oils
CN109523021A Dynamic network structure prediction method based on long short-term memory networks
Selvaggio et al. Application of long short-term memory recurrent neural networks for localisation of leak source using 3D computational fluid dynamics
CN105046377A (en) Method for screening optimum indexes of reservoir flood control dispatching scheme based on BP neural network
CN106096723A Product property evaluation method for complex industrial processes based on a hybrid neural network algorithm
CN107976934A Intelligent early-warning system for oil and gas leakage rate of tank trucks based on a wireless sensor network
Li et al. Tailings pond risk prediction using long short-term memory networks
CN116451567A (en) Leakage assessment and intelligent disposal method for gas negative pressure extraction pipeline
CN117076887A (en) Pump station unit running state prediction and health assessment method and system
CN114970745B (en) Intelligent security and environment big data system of Internet of things
Rajabi et al. Intelligent prediction of turbulent flow over backward-facing step using direct numerical simulation data
Liu et al. Prediction of dam horizontal displacement based on CNN-LSTM and attention mechanism
Borisov et al. The System of Fuzzy Cognitive Analysis and Modeling of System Dynamics
CN114331019A (en) Urban traffic safety risk real-time assessment method and device based on risk factor
CN103337000A (en) Safety monitoring and prewarning method for oil-gas gathering and transferring system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
Application publication date: 20160302
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication