CN108197809A - Real-time scheduling method of weight-sharing deep network based on dimension-optimal conversion - Google Patents

Real-time scheduling method of weight-sharing deep network based on dimension-optimal conversion Download PDF

Info

Publication number
CN108197809A
CN108197809A
Authority
CN
China
Prior art keywords
data
network
matrix
layer
dimension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711497688.9A
Other languages
Chinese (zh)
Other versions
CN108197809B (en)
Inventor
王万良 (Wang Wanliang)
臧泽林 (Zang Zelin)
李伟琨 (Li Weikun)
王宇乐 (Wang Yule)
赵燕伟 (Zhao Yanwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201711497688.9A priority Critical patent/CN108197809B/en
Publication of CN108197809A publication Critical patent/CN108197809A/en
Application granted granted Critical
Publication of CN108197809B publication Critical patent/CN108197809B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063: Operations research, analysis or management
    • G06Q 10/0631: Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q 10/06312: Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Educational Administration (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

A real-time scheduling method using a weight-sharing deep network based on dimension-optimal conversion, comprising: Step 1. Collect real-time data and scheduling data from an actual scheduling scenario as training data. Step 2. Process the real-time data obtained in Step 1 into the multilayer two-dimensional matrix form required as deep-network input. Step 3. Train the deep network, using the multilayer two-dimensional matrices from Step 2 and the scheduling data from Step 1 as the network's input and output, respectively. Step 4. Deploy the convolutional neural network trained in Step 3 in the actual scheduling environment and perform real network scheduling.

Description

Real-time scheduling method of weight-sharing deep network based on dimension-optimal conversion
The present invention relates to a real-time scheduling method using a weight-sharing deep network.
Background
For production scheduling problems, the current mainstream approach combines mathematical models with heuristic optimization algorithms, which can achieve high solution accuracy. Under the background of big data, however, the explosive growth of manufacturing parameters in the production environment and strict scheduling deadlines place further demands on scheduling methods. Traditional scheduling algorithms currently struggle to meet industrial requirements in two respects: processing massive volumes of scheduling data, and responding quickly to scheduling problems.
Summary of the invention
To overcome the above disadvantages of the prior art, the present invention proposes a real-time big-data scheduling method based on a deep convolutional network.
The present invention combines a deep neural network algorithm with weight-sharing techniques and proposes the "real-time scheduling method of a weight-sharing deep network based on dimension-optimal conversion". A deep network is trained on the large volume of historical scheduling data provided by a big-data system, so that it learns the tacit knowledge in the scheduling scenario; the trained deep network then produces real-time scheduling responses at the scheduling site.
The present invention proposes a quick-response scheduling neural network structure based on big data and a deep-network scheduling method, to solve complex industrial control and production scheduling problems under big-data conditions.
The real-time scheduling method of the weight-sharing deep network based on dimension-optimal conversion comprises the following steps:
Step 1. Collect real-time data and scheduling data from the actual scheduling scenario as training data (corresponding to the data-acquisition part of the flow chart).
Step 2. Process the real-time data obtained in Step 1 into the multilayer two-dimensional matrix form required as deep-network input (corresponding to the data-processing part of the flow chart). "Processing the real-time data" specifically includes:
2.1 Preprocessing of the real-time data: verify whether the sensor readings are plausible, and substitute invalid readings according to the zero-order-hold principle (repeat the last valid value).
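The zero-order-hold substitution of step 2.1 can be sketched as follows; the plausibility bounds `low`/`high` and the function name are illustrative assumptions, not part of the patent:

```python
def zero_order_hold_clean(samples, low, high):
    """Replace implausible sensor readings with the last valid value
    (zero-order hold). `low` and `high` are assumed plausibility bounds."""
    cleaned = list(samples)
    last_valid = None
    for i, value in enumerate(cleaned):
        if low <= value <= high:
            last_valid = value          # reading is plausible: remember it
        elif last_valid is not None:
            cleaned[i] = last_valid     # hold the last valid value
        # a leading invalid value stays as-is until a valid one is seen
    return cleaned
```

For example, `zero_order_hold_clean([1.0, 2.0, 999.0, 3.0], 0, 10)` yields `[1.0, 2.0, 2.0, 3.0]`.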
2.2 To enable weight sharing during deep-network training, the dynamic historical sample data are combined by convolutionization, as follows.
A1. Assume that Step 1 involves k sensors S1, S2, …, Sk, and let dij denote the data acquired by the i-th sensor at the j-th sampling instant.
A2. A sampling time window t_w is set during processing, with the empirical value t_w = 10. Under the time-window limit, the data input matrix D_s is obtained:
D_s = [ d_ij ],  i = 1, …, k;  j = 1, …, t_w
where d_ij is the value of the i-th sensor at local time j; each row holds the sampling parameters transmitted by one scheduling sensor, and under the threshold t_w, j runs up to t_w = 10.
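Step A2 amounts to stacking the most recent t_w samples of each sensor into a k x t_w matrix; a minimal NumPy sketch (function name assumed):

```python
import numpy as np

def build_input_matrix(sensor_streams, t_w=10):
    """Stack the latest t_w samples of each of the k sensor streams into
    the data input matrix D_s of shape (k, t_w), as in step A2."""
    return np.stack([np.asarray(stream)[-t_w:] for stream in sensor_streams])
```

Each row of the result is one sensor's windowed history, matching the row-per-sensor layout described for D_s.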
A3. The following mapping relations are established; the multilayer two-dimensional matrix form is generated in either of two ways.
Method one: the signals acquired by any two sensors are converted into a multilayer two-dimensional matrix M using a Cartesian-product operation; mathematically, each layer is
M^c = S_p^T S_q
where c is the layer index of the two-dimensional matrix, and S_p, S_q are the data vectors acquired by the sensors with serial numbers p and q; owing to the window t_w, only the ten values closest to the present moment are taken.
If no layers of M_c are discarded, the maximum number of layers c satisfies the combination-number formula c_max = C(k, 2).
The result of this step is the multilayer two-dimensional matrix M_c.
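Method one's Cartesian-product conversion can be read as taking, for every unordered sensor pair, the outer product of the two windowed signal vectors; the following sketch assumes that reading (the outer-product interpretation and the function name are assumptions):

```python
import numpy as np
from itertools import combinations

def build_multilayer_matrix(D_s):
    """For each unordered sensor pair (p, q), form the t_w x t_w plane
    given by the outer product of S_p and S_q, and stack the planes.
    The maximum layer count is the combination number C(k, 2)."""
    k, t_w = D_s.shape
    planes = [np.outer(D_s[p], D_s[q]) for p, q in combinations(range(k), 2)]
    return np.stack(planes)   # shape (C(k, 2), t_w, t_w)
```

With k sensors this produces k(k-1)/2 layers, which is why step 2.3 must then contain the combinatorial explosion.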
2.3 Since the maximum layer count c of the multilayer two-dimensional matrix is a combination number, a combinatorial explosion can occur. To contain it, an optimization method is used: an optimal combination chain is established according to the correlations between the practical meanings of the sensor data. For a fixed chain length, the chain with maximum correlation is sought.
B1. The optimization is described mathematically; the objective maximizes the overall relevancy of the arrangement:
max E = Σ R(l)
s.t.: 0 < i < t_w
where E is the overall relevancy of the entire arrangement, expressed as the sum of the R(l); R(l) is the correlation between each pair of adjacent planes in the arrangement, and i is the relative index parameter.
B2. The optimization problem is solved with a genetic algorithm, yielding the optimized multilayer two-dimensional matrix M_c'. The dimensions of M_c' are [n * 10 * 10 * c], where n is the number of training samples.
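The patent does not spell out the genetic algorithm of step B2, so the following is only a toy sketch: chains are scored by the summed correlation of adjacent planes (the overall relevancy E) and improved by swap mutation alone. The function names, the fitness measure over raveled planes, and all GA parameters are assumptions:

```python
import numpy as np

def chain_fitness(order, planes):
    """Overall relevancy E of a chain: sum of absolute correlations R
    between each pair of adjacent planes in the arrangement."""
    flat = [np.asarray(planes[i]).ravel() for i in order]
    return sum(abs(np.corrcoef(flat[i], flat[i + 1])[0, 1])
               for i in range(len(order) - 1))

def optimise_chain(planes, chain_len, generations=50, pop_size=20, seed=0):
    """Toy genetic search (swap mutation only) for a chain of `chain_len`
    distinct planes with maximal E; a stand-in for the GA of step B2."""
    rng = np.random.default_rng(seed)
    n = len(planes)
    pop = [rng.permutation(n)[:chain_len] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: -chain_fitness(o, planes))
        survivors = pop[:pop_size // 2]        # keep the fittest half
        children = []
        for parent in survivors:
            child = parent.copy()
            i, j = rng.integers(0, chain_len, size=2)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda o: chain_fitness(o, planes))
```

A real implementation would also need crossover and a termination criterion, but the fitness function is the part that mirrors the E = Σ R(l) objective of step B1.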
Step 3. Use the multilayer two-dimensional matrices from Step 2 and the scheduling data from Step 1 as the input and output of the deep network, respectively, and train the deep network (corresponding to the network-training part of the flow chart).
3.1 The optimized multilayer two-dimensional matrix M_c' from Step 2 is used as the input matrix, and the control matrix B acquired in Step 1 as the label matrix; both are fed into the convolutional neural network for training. Training can be carried out on the open-source neural network platform Keras.
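Before training, the n optimized matrices M_c' (each c x 10 x 10) must be arranged into the [n, 10, 10, c] layout stated in step B2, channels last, which is the default input layout of Keras convolutional layers, together with the flattened label matrices B. A small sketch with an assumed function name:

```python
import numpy as np

def to_training_tensors(M_list, B_list):
    """Arrange n optimised multilayer matrices M_c' (each c x 10 x 10) into
    the [n, 10, 10, c] channels-last input tensor, and flatten each
    control/label matrix B into one label row."""
    X = np.stack([np.transpose(M, (1, 2, 0)) for M in M_list])
    y = np.stack([np.asarray(B).ravel() for B in B_list])
    return X, y
```

The resulting `X` can be passed directly to a Keras model whose first layer is a 2-D convolution over 10 x 10 inputs with c channels.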
The training procedure of the convolutional neural network is described below:
A convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a limited receptive field; CNNs perform outstandingly on large-scale image processing. A CNN comprises convolutional layers and pooling layers.
Typically, the basic structure of a CNN includes two kinds of layer. The first is the feature-extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, from which the local feature is extracted; once a local feature is extracted, its positional relationship to other features is determined as well. The second is the feature-mapping layer: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in a plane share equal weights. The feature-mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature maps are shift-invariant. Furthermore, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in the CNN is followed by a computational layer for local averaging and secondary extraction; this distinctive two-stage feature-extraction structure reduces the feature resolution.
3.2 The training steps of the CNN are as follows:
Input the training set.
For each sample M_c' in the training set, set the corresponding input-layer activation a^1.
3.2.1 Forward propagation of the input data, which satisfies the formula:
z^l = w^l a^(l-1) + b^l,  a^l = σ(z^l),  with a^1 = M_c'   (3)
where z^l is the information transmitted down one layer, w^l are the weights of the neural network, b^l are the biases, and σ(·) is the nonlinear neuron activation. The convolutional network sets its weights through weight-sharing convolution kernels, which are not discussed in detail in the present invention.
3.2.2 Compute the error generated at the output layer, which satisfies the formula:
δ^L = ∇_a C ⊙ σ'(z^L)   (4)
where δ^L is the error between the final output of the network and the label, ∇_a C is the gradient of the cost with respect to the activations (the gap between the target output and the model's computed output at the output layer), σ'(z^L) is the derivative of the activation, and ⊙ denotes the element-wise product.
3.2.3 Compute the back-propagated error of each layer, which satisfies the formula:
δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ σ'(z^l)   (5)
where δ^l is the per-layer error propagated toward the front, computed from δ^L, and l is the layer index.
3.2.4 Train using gradient descent; the update satisfies the formula:
w^l → w^l - (η/m) Σ_x δ^(x,l) (a^(x,l-1))^T   (6)
where η is the convergence step size, m is the number of samples used, and δ^(x,l) (a^(x,l-1))^T gives the gradient direction obtained at each iteration. This formula describes how the weights change.
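Equations (3) through (6) are the standard backpropagation recurrences, and they can be exercised end to end with a small NumPy sketch. For brevity this uses fully connected layers and the quadratic cost rather than the shared-kernel convolutional layers of the patent, so it illustrates the formulas, not the patented network itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(weights, biases, X, Y, eta=0.5):
    """One full-batch gradient step implementing equations (3)-(6):
    forward pass z^l = w^l a^(l-1) + b^l, output error
    delta^L = grad_a C (.) sigma'(z^L) for the quadratic cost,
    back-propagated error delta^l = ((w^(l+1))^T delta^(l+1)) (.) sigma'(z^l),
    and the averaged update w^l -= (eta/m) sum_x delta^(x,l) (a^(x,l-1))^T.
    X has one sample per column; returns the pre-update batch cost."""
    m = X.shape[1]
    activations, zs = [X], []
    for w, b in zip(weights, biases):            # eq. (3): forward pass
        z = w @ activations[-1] + b
        zs.append(z)
        activations.append(sigmoid(z))
    sp = lambda z: sigmoid(z) * (1 - sigmoid(z))  # sigma'(z)
    delta = (activations[-1] - Y) * sp(zs[-1])    # eq. (4): output error
    deltas = [delta]
    for l in range(len(weights) - 2, -1, -1):     # eq. (5): backpropagate
        delta = (weights[l + 1].T @ deltas[0]) * sp(zs[l])
        deltas.insert(0, delta)
    for l in range(len(weights)):                 # eq. (6): update
        weights[l] -= (eta / m) * deltas[l] @ activations[l].T
        biases[l] -= (eta / m) * deltas[l].sum(axis=1, keepdims=True)
    return 0.5 * np.sum((activations[-1] - Y) ** 2) / m
```

Repeated calls on the same batch drive the quadratic cost down, which is exactly the behaviour equations (3)-(6) guarantee for a small enough step size η.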
Step 4. Deploy the convolutional neural network trained in Step 3 in the actual scheduling environment and perform real network scheduling (corresponding to the real-time-scheduling part of the flow chart).
Traditional algorithms depend on the specific scheduling data: optimization of the schedule can begin only after all the data have been obtained, and solving generally takes a long time, so real-time performance is poor. Moreover, for the ultra-large-scale problems that arise under big data, even heuristic optimization algorithms require a large amount of computation time.
The advantage of the present invention is that the deep-network algorithm determines the deep-network structure during prior training and discovers the implicit knowledge of scheduling, so that in the real application environment a real-time scheduling result is obtained with very little computation, achieving quick response. The response speed remains fast even with big-data inputs.
Description of the drawings
Fig. 1 is the flow chart of the method for the present invention.
Fig. 2 shows the deep neural network structure and its input and output.
Fig. 3 compares the results on the small data set.
Fig. 4 compares the results on the medium data set.
Fig. 5 compares the results on the large data set.
Specific embodiment
The technical solution of the present invention is further illustrated below with reference to Figures 1-5.
Fig. 1 shows the flow chart of the method for the present invention.
Embodiment overview
Consider shop-floor scheduling with n machines and p workpieces, each workpiece having q processing operations. The goal is to assign a processing machine to each operation of each workpiece. Varying n, p and q changes the scale of the problem: the small-scale problem uses n = 3, p = 3, q = 3; the medium-scale problem n = 30, p = 30, q = 30; the large-scale problem n = 300, p = 300, q = 300. The data format is shown in Table 1 below:
Table 1
Workpiece / operation      Time consumed (input)   Sequence number (output)
Workpiece 1, operation 1   5                       5
Workpiece 1, operation 2   12                      6
Workpiece 1, operation 3   3                       2
Workpiece 2, operation 1   4                       1
Workpiece 2, operation 2   8                       4
Workpiece 2, operation 3   6                       8
Workpiece 3, operation 1   9                       3
Workpiece 3, operation 2   41                      9
Workpiece 3, operation 3   4                       7
This is a classical NP-hard problem that is difficult to solve when the data volume is large, but it is also an old problem and has been studied fairly thoroughly. Moreover, it is an offline scheduling problem and cannot handle sudden online events. Now that electronic hardware and big-data systems are relatively advanced, it is necessary to study the online, real-time flow-shop scheduling problem.
The input of the real-time flow-shop scheduling problem contains only the processing times of the parts. In practice the processing time of a part is difficult to obtain exactly before machining, so the time consumed is a model assumption, and the real-time scheduling system designed here predicts the processing of parts.
Embodiment inputs and outputs
The data input to the model are:
1) the estimated processing time of each workpiece;
2) the current machining state of each machine;
3) the estimated remaining processing time of the workpiece currently on each machine.
The output data are:
the control signal that, in the current state, sends a given workpiece to a given machine to complete a given machining operation, as shown in Fig. 2.
Embodiment output results: response-speed comparison
The small, medium and large data sets are each solved with the two methods; the resulting response speeds are compared in Fig. 3.
The results of the traditional optimization method are shown as light histograms and those of the deep-neural-network method as dark histograms. In the modelling phase, different network structures and depths yield different network performance; Fig. 3 lists the time cost of one randomly generated network structure. It can be seen that on the small data set the response time of the optimization method is generally quicker than that of the neural network method. This result is expected, because optimization methods are better suited to small data sets.
The results for the medium data set are shown in Fig. 4. As the data volume grows, the time consumed by the optimization method increases linearly, while the increase in the response time of the neural-network method is not obvious. On the medium data set the response times of the two methods are of the same order of magnitude, with the deep-network method slightly ahead.
As the data dimension increases further, the problem becomes a complex scheduling problem. In this case the response time of the optimization method becomes very large and cannot meet the needs of real-time scheduling, while the deep-network scheduling method, using the pre-trained neural network structure, incurs very little delay at the scheduling site and can complete on-site scheduling tasks, as shown in Fig. 5.
The content described in the embodiments of this specification merely enumerates forms in which the inventive concept may be realized. The protection scope of the present invention should not be construed as limited to the specific forms stated in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive according to the inventive concept.

Claims (1)

1. A real-time scheduling method of a weight-sharing deep network based on dimension-optimal conversion, comprising the following steps:
Step 1. Collect real-time data and scheduling data from the actual scheduling scenario as training data;
Step 2. Process the real-time data obtained in Step 1 into the multilayer two-dimensional matrix form required as deep-network input; processing the real-time data specifically includes:
2.1 preprocessing of the real-time data: verifying whether the sensor readings are plausible, and substituting invalid readings according to the zero-order-hold principle;
2.2 to enable weight sharing during deep-network training, combining the dynamic historical sample data by convolutionization, as follows;
A1. Assume that Step 1 involves k sensors S1, S2, …, Sk, and let dij denote the data acquired by the i-th sensor at the j-th sampling instant;
A2. A sampling time window t_w is set during processing, with the empirical value t_w = 10; under the time-window limit, the data input matrix D_s = [ d_ij ] (i = 1, …, k; j = 1, …, t_w) is obtained,
where d_ij is the value of the i-th sensor at local time j; each row holds the sampling parameters transmitted by one scheduling sensor, and under the threshold t_w, j runs up to t_w = 10;
A3. The following mapping relations are established; the multilayer two-dimensional matrix form is generated in either of two ways;
Method one: the signals acquired by any two sensors are converted into a multilayer two-dimensional matrix M using a Cartesian-product operation, each layer being M^c = S_p^T S_q, where c is the layer index of the two-dimensional matrix and S_p, S_q are the data vectors acquired by the sensors with serial numbers p and q; owing to the window t_w, only the ten values closest to the present moment are taken;
If no layers of M_c are discarded, the maximum number of layers c satisfies the combination-number formula c_max = C(k, 2);
Method two: using the Lie groupoid method, the conversion proceeds as follows;
T1. generate a blank two-dimensional matrix of dimension t_w * t_w;
T2. successively plot the corresponding entries into it to obtain the matrix M_c;
where c is the number of layers of the two-dimensional matrix and S_p, S_q are the data vectors acquired by the sensors with serial numbers p and q; owing to the window t_w, only the ten values closest to the present moment are taken; if no layers of M_c are discarded, the maximum number of layers c satisfies the combination-number formula c_max = C(k, 2);
The result of this step is the multilayer two-dimensional matrix M_c;
2.3 Since the maximum layer count c of the multilayer two-dimensional matrix is a combination number, a combinatorial explosion can likewise occur; to contain it, an optimization method is used: an optimal combination chain is established according to the correlations between the practical meanings of the sensor data; for a fixed chain length, the chain with maximum correlation is sought;
B1. The optimization is described mathematically; the objective maximizes E = Σ R(l) subject to 0 < i < t_w, where E is the overall relevancy of the entire arrangement expressed as the sum of the R(l), R(l) is the correlation between each pair of adjacent planes in the arrangement, and i is the relative index parameter;
B2. The optimization problem is solved with a genetic algorithm, yielding the optimized multilayer two-dimensional matrix M_c'; the dimensions of M_c' are [n * 10 * 10 * c], where n is the number of training samples;
Step 3. Use the multilayer two-dimensional matrices from Step 2 and the scheduling data from Step 1 as the input and output of the deep network, respectively, and train the deep network;
3.1 the optimized multilayer two-dimensional matrix M_c' from Step 2 is used as the input matrix and the control matrix B acquired in Step 1 as the label matrix, and both are fed into the convolutional neural network for training; training can be carried out on the open-source neural network platform Keras;
The training procedure of the convolutional neural network is as follows:
A convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a limited receptive field; CNNs perform outstandingly on large-scale image processing; a CNN comprises convolutional layers and pooling layers;
Typically, the basic structure of a CNN includes two kinds of layer: the first is the feature-extraction layer, in which the input of each neuron is connected to a local receptive field of the previous layer and the local feature is extracted; once a local feature is extracted, its positional relationship to other features is determined as well; the second is the feature-mapping layer: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in a plane share equal weights; the feature-mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature maps are shift-invariant; furthermore, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced; each convolutional layer in the CNN is followed by a computational layer for local averaging and secondary extraction, and this distinctive two-stage feature-extraction structure reduces the feature resolution;
3.2 The training steps of the CNN are as follows:
Input the training set;
For each sample M_c' in the training set, set the corresponding input-layer activation a^1;
3.2.1 Forward propagation of the input data, which satisfies the formula:
z^l = w^l a^(l-1) + b^l,  a^l = σ(z^l),  with a^1 = M_c'   (3)
where z^l is the information transmitted down one layer, w^l are the weights of the neural network, b^l are the biases, and σ(·) is the nonlinear neuron activation; the convolutional network sets its weights through weight-sharing convolution kernels, which are not discussed in detail in the present invention;
3.2.2 Compute the error generated at the output layer, which satisfies the formula:
δ^L = ∇_a C ⊙ σ'(z^L)   (4)
where δ^L is the error between the final output of the network and the label, ∇_a C is the gradient of the cost with respect to the activations (the gap between the target output and the model's computed output at the output layer), and σ'(z^L) is the derivative of the activation;
3.2.3 Compute the back-propagated error of each layer, which satisfies the formula:
δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ σ'(z^l)   (5)
where δ^l is the per-layer error propagated toward the front, computed from δ^L, and l is the layer index;
3.2.4 Train using gradient descent; the update satisfies the formula:
w^l → w^l - (η/m) Σ_x δ^(x,l) (a^(x,l-1))^T   (6)
where η is the convergence step size, m is the number of samples used, and δ^(x,l) (a^(x,l-1))^T gives the gradient direction obtained at each iteration; this formula describes how the weights change;
Step 4. Deploy the convolutional neural network trained in Step 3 in the actual scheduling environment and perform real network scheduling.
CN201711497688.9A 2017-12-28 2017-12-28 Real-time scheduling method of weight sharing deep network based on dimension optimal conversion Active CN108197809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711497688.9A CN108197809B (en) 2017-12-28 2017-12-28 Real-time scheduling method of weight sharing deep network based on dimension optimal conversion


Publications (2)

Publication Number Publication Date
CN108197809A true CN108197809A (en) 2018-06-22
CN108197809B CN108197809B (en) 2021-06-08

Family

ID=62587596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711497688.9A Active CN108197809B (en) 2017-12-28 2017-12-28 Real-time scheduling method of weight sharing deep network based on dimension optimal conversion

Country Status (1)

Country Link
CN (1) CN108197809B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005013537A2 (en) * 2003-07-28 2005-02-10 Cetacean Networks, Inc. Systems and methods for the synchronization of a real-time scheduled packet network using relative timing
CN102075014A (en) * 2011-01-06 2011-05-25 清华大学 Large grid real-time scheduling method for accepting access of wind power
US8762190B1 (en) * 2012-12-21 2014-06-24 PagerDuty, Inc. Realtime schedule management interface
CN105844350A (en) * 2016-03-21 2016-08-10 广西电网有限责任公司电力科学研究院 Short period wind power prediction system based on covariance preferable combination model
CN106210727A (en) * 2016-08-16 2016-12-07 广东中星电子有限公司 Video spatial scalable code stream coded method based on neural network processor array and framework
CN106650982A (en) * 2016-08-30 2017-05-10 华北电力大学 Depth learning power prediction method based on multi-point NWP
CN106940790A (en) * 2017-03-13 2017-07-11 重庆文理学院 A kind of flow congestion's Forecasting Methodology and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Xia, ZHANG Rongrong, ZHAO Ruifeng, YAN Wei, YU Juan: "Dynamic optimal dispatch of AGC units under the CPS standard" (title truncated in source), Transactions of China Electrotechnical Society (《电工技术学报》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599496A (en) * 2019-07-30 2019-12-20 浙江工业大学 Sun shadow displacement positioning method based on deep learning
CN112184056A (en) * 2020-10-19 2021-01-05 中国工商银行股份有限公司 Data feature extraction method and system based on convolutional neural network
CN112184056B (en) * 2020-10-19 2024-02-09 中国工商银行股份有限公司 Data feature extraction method and system based on convolutional neural network

Also Published As

Publication number Publication date
CN108197809B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
Fernando et al. Runoff forecasting using RBF networks with OLS algorithm
CN111192270A (en) Point cloud semantic segmentation method based on point global context reasoning
CN115018021B (en) Machine room abnormity detection method and device based on graph structure and abnormity attention mechanism
CN110222760B (en) Quick image processing method based on winograd algorithm
CN107220734A (en) CNC Lathe Turning process Energy Consumption Prediction System based on decision tree
CN110245709A (en) Based on deep learning and from the 3D point cloud data semantic dividing method of attention
CN107705806A (en) A kind of method for carrying out speech emotion recognition using spectrogram and deep convolutional neural networks
CN110196928B (en) Fully parallelized end-to-end multi-turn dialogue system with domain expansibility and method
Wang et al. Process cost modelling using neural networks
CN110264079A (en) Hot-rolled product qualitative forecasting method based on CNN algorithm and Lasso regression model
CN108197809A (en) Weights based on dimension optimum translation share the real-time scheduling method of depth network
CN107578822B (en) Pretreatment and feature extraction method for medical multi-modal big data
CN105844334A (en) Radial basis function neural network-based temperature interpolation algorithm
CN110289987B (en) Multi-agent system network anti-attack capability assessment method based on characterization learning
CN113534678B (en) Migration method from simulation of operation question-answering task to physical system
CN110532545A (en) A kind of data information abstracting method based on complex neural network modeling
Zuo et al. Domain selection of transfer learning in fuzzy prediction models
CN116030537B (en) Three-dimensional human body posture estimation method based on multi-branch attention-seeking convolution
CN110691319B (en) Method for realizing high-precision indoor positioning of heterogeneous equipment in self-adaption mode in use field
CN109816103A (en) A kind of PSO-BFGS neural network BP training algorithm
CN111880489A (en) Regression scheduling method for complex manufacturing system
CN116403054A (en) Image optimization classification method based on brain-like network model
Zhiyuan et al. Research on the evaluation of enterprise competitiveness based on the wavelet neural network forecasting system
CN112990618A (en) Prediction method based on machine learning method in industrial Internet of things
CN110322037A (en) Method for predicting and device based on inference pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant