CN108229557A - Accelerated training method and system for a neural network with labels - Google Patents
Accelerated training method and system for a neural network with labels
- Publication number
- CN108229557A CN108229557A CN201711482884.9A CN201711482884A CN108229557A CN 108229557 A CN108229557 A CN 108229557A CN 201711482884 A CN201711482884 A CN 201711482884A CN 108229557 A CN108229557 A CN 108229557A
- Authority
- CN
- China
- Prior art keywords
- neural network
- training
- sample
- layer
- test
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
Disclosed are an accelerated training method and system for a neural network with labels. The method includes: S1, extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the sample database having a corresponding label; S2, expanding the number of output-layer nodes and the dimension of the label of every sample in the sample database to at least twice the original; S3, randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network. The beneficial effect of the invention is that the scheme raises the training speed of the network and attains higher accuracy within a shorter number of training iterations.
Description
Technical field
The present invention relates to the field of data processing, and in particular to an accelerated training method and system for a neural network with labels.
Background technology
A neural network is a powerful tool for image classification and recognition and can be applied in many settings, such as handwritten-digit recognition, face recognition, weather prediction from meteorological-satellite cloud images, and intelligent license-plate recognition in traffic systems. The key to building a neural network model and putting it to use for recognition and classification is training the network. Training is normally carried out on a given sample database after the weights have been initialized. Existing training methods, however, converge slowly and do not reach high accuracy.
Invention content
The present invention provides an accelerated training method and system for a neural network with labels, which solve the above technical problems of the prior art.
The technical solution by which the present invention solves the above technical problems is as follows:
An accelerated training method for a neural network with labels, including:
S1, extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the sample database having a corresponding label, the neural network including: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
S2, expanding the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
S3, randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
The beneficial effects of the invention are as follows: by presenting multiple groups of training-data samples jointly, the scheme adjusts the whole network toward the optimum more quickly; expanding the output raises the training speed of the network, so that higher accuracy is obtained within a shorter number of training iterations.
On the basis of the above technical solution, the present invention can be further improved as follows.
Preferably, after the training process is completed, the method further includes:
S4, feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
S5, classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
Preferably, in step S5, all test samples in the test input set are classified by a maxout classifier.
An accelerated training system for a neural network with labels, including:
an extraction module for extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the database having a corresponding label, the neural network including: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
an expansion module for expanding the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
a training module for randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
Preferably, the system further includes:
a test module for feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
a verification module for classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
Preferably, the verification module is specifically configured to classify all test samples in the test input set by a maxout classifier.
Description of the drawings
Fig. 1 is a schematic flow chart of an accelerated training method for a neural network with labels provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a neural network with labels provided by another embodiment of the present invention;
Fig. 3 is a schematic flow chart of an accelerated training method for a neural network with labels provided by another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an accelerated training system for a neural network with labels provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an accelerated training system for a neural network with labels provided by another embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a pattern recognition device provided by another embodiment of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the given examples serve only to explain the present invention and are not intended to limit its scope.
As shown in Fig. 1, an accelerated training method for a neural network with labels includes:
S101, extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the sample database having a corresponding label, the neural network including: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
S102, expanding the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
S103, randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
Label processing method:
The core of the label processing method is to reshape the network's output layer: the original number of output nodes k is expanded to m*k. The original output label of each sample is a k-dimensional vector y, and the new output label is obtained by the following deformation:
Y = [y, y, …, y]  (m copies of y concatenated)
where Y is the new output vector and y holds the original output-layer node label values.
In an exemplary embodiment of the label processing method, applied to the classification of mnist handwritten-digit images, every label in the original training-data sample set has 10 dimensions, corresponding to the digits 0-9. After the network structure is deformed, the number of output nodes is expanded to 10*n, the output dimension of the training database is expanded accordingly, and the new output vector (10*n dimensions) consists of n copies of the original output vector (10 dimensions) arranged one after another. During network training, all 10*n output node values take part in computing the error and back-propagating it. Suppose some entry of the training output set was originally 0100000000; expanded twofold it becomes 01000000000100000000, the output layer then has 20 nodes, and during training it is these 20 values that jointly compute the error for back-propagation. During classification, the new sample label is the label of the n digits 0-9 arranged in sequence; a maxout classifier performs the label comparison, and the maximum value obtained is the "winner" (for example, the 1st, 11th, 21st, ... output labels all correspond to the digit 0); whether the corresponding label dimension agrees is checked to verify classification accuracy. Put simply, the sample set contains many samples; each sample input is a large pile of values, and the output initially has 10. With, say, 500 samples, the output set is a 10x500 table, the 10 entries of each column being the 10 output values of one sample; exactly one of these 10 numbers is 1 and the others are 0. The meaning of the label is, for example: if the first entry is 1, the sample is the digit 0; if the second entry is 1, it is the digit 1; and so on, the labels running exactly through 0-9.
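The expansion rule above can be sketched in a few lines of numpy (a sketch under the assumption, taken from the 0100000000 → 01000000000100000000 example, that the n copies are simply concatenated; the function name is illustrative):

```python
import numpy as np

# Expand each k-dimensional one-hot label to n*k dimensions by concatenating
# n copies, as in the 0100000000 -> 01000000000100000000 example (n = 2).
def expand_labels(labels, n):
    """labels: (num_samples, k) one-hot array -> (num_samples, n*k) array."""
    return np.tile(labels, (1, n))

# One mnist-style sample whose class is the digit 1:
y = np.zeros((1, 10))
y[0, 1] = 1.0
Y = expand_labels(y, 2)  # nodes 1 and 11 are now both set to 1
```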
The neural network has: an input layer, into which the training data are fed; an output layer, from which the output is generated; a first convolutional layer, connected to the input layer through distinct convolution kernels made up of random weights; a first subsampling layer, obtained from the first convolutional layer by mean sampling; a second convolutional layer, connected to the first subsampling layer through distinct convolution kernels made up of random weights; a second subsampling layer, obtained from the second convolutional layer by mean sampling; and a fully connected hidden layer (the fully connected layer), obtained by unrolling the rows and columns of the second subsampling layer, which is connected to the output layer by random weights.
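For concreteness, the dimensions of such a layer stack can be walked through as below (a sketch with assumed LeNet-style sizes — a 28x28 input, 5x5 kernels, 2x2 mean-sampling windows, 12 second-stage feature maps — none of which are fixed by this embodiment):

```python
# Shape walk-through of the stack: input -> conv1 -> subsample1 -> conv2
# -> subsample2 -> fully connected hidden layer -> expanded output layer.
def layer_shapes(inp=28, k=5, pool=2, maps2=12, n_expand=2):
    c1 = inp - k + 1       # first convolutional layer: 24x24 maps
    s1 = c1 // pool        # first subsampling layer:   12x12 maps
    c2 = s1 - k + 1        # second convolutional layer: 8x8 maps
    s2 = c2 // pool        # second subsampling layer:   4x4 maps
    fc = maps2 * s2 * s2   # unrolled fully connected feature vector
    out = 10 * n_expand    # output layer expanded n-fold (20 nodes)
    return c1, s1, c2, s2, fc, out
```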
The training process involves accessing the training-data sample set, each training-data sample in which carries the label assigned to it. Each data sample is then fed into the input layer one by one; once every sample has been fed in, one round of training is deemed complete.
During training, the training-data samples are fed into the neural network in batches, an identical number of distinct samples being extracted per batch; the weights associated with each pair of layers are set via the error backpropagation (BP) procedure, each batch of samples updating the weights so that the output generated from the output layer matches the labels associated with the training-data samples.
The basic idea of the BP algorithm is that the learning process consists of two phases: forward propagation of the signal and backward propagation of the error. During forward propagation, an input sample is fed in at the input layer, processed layer by layer through the hidden layers, and passed on to the output layer. If the actual output of the output layer does not agree with the desired output (the teacher signal), the error back-propagation phase begins. Error back-propagation passes the output error backward, layer by layer, through the hidden layers to the input layer, distributing the error in some form to all units of each layer, thereby obtaining an error signal for each layer's units; this error signal serves as the basis for correcting each unit's weights. This cycle of signal forward propagation and per-layer weight adjustment by error backpropagation is carried out again and again; the process of continually adjusting the weights is precisely the network's learning (training) process. The process runs until the error of the network output falls to an acceptable level or until a preset number of learning iterations has been reached.
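One cycle of this forward/backward/adjust procedure can be sketched for a single fully connected layer (an illustrative sketch, not the embodiment's full network: sigmoid activation, squared error, and the layer sizes and learning rate are all assumptions):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bp_step(x, t, w, b, lr=0.5):
    """One signal-forward / error-backward / weight-adjust cycle."""
    y = sigmoid(x @ w + b)        # forward propagation
    e = y - t                     # actual output vs. teacher signal
    delta = e * y * (1.0 - y)     # sensitivity: delta = e * f'(S)
    w -= lr * np.outer(x, delta)  # weight adjustment from the error signal
    b -= lr * delta
    return y, w, b

# Repeating the cycle drives the output toward the label:
x = np.array([1.0, 0.5])
t = np.array([1.0, 0.0])
w = np.zeros((2, 2))
b = np.zeros(2)
for _ in range(200):
    y, w, b = bp_step(x, t, w, b)
```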
The output layer performs the label comparison with a maxout classifier: one maximum value is chosen from the output generated by the network as the winner, and its corresponding label data are compared to back-propagate the error.
As shown in Fig. 2, the number of outputs of the original neural network is multiplied, thereby setting up multiple branches, and the corresponding training-label data are expanded by the same multiple to match the outputs, yielding a new neural network, i.e. the neural network of this embodiment. When the new neural network is tested, the label corresponding to the maximum output value of one of the maxout branches is selected as the classification result. This raises the training speed of the network, so that higher accuracy is obtained within a shorter number of training iterations.
The neural network of this embodiment is a convolutional neural network. Convolutional neural networks, a kind of artificial neural network, have become a research hotspot in the field of image recognition. They simulate the processing of biological vision and, through weight sharing, reduce the complexity of the network model and the number of weights. A convolutional neural network can take a multi-dimensional image directly as network input, unlike a fully connected network, whose input data must first be reconstructed.
Feedforward network:
The training-data sample set is accessed; each data sample in it has a corresponding label. Each training-data sample is fed into the input layer one by one; one round is complete once all samples have been entered.
For a convolutional layer, the output is obtained by convolving the feature maps of the previous layer with distinct convolution kernels of different random weights and passing the result through an activation function. The feature maps of convolutional layer l are:
x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_ij^l + b_j^l )
where x_j^l is the j-th output feature map of layer l, k_ij^l is the convolution kernel (weights) between the i-th input feature map and the j-th output feature map of layer l, b is the convolutional-layer bias, f is the activation function, and M_j is the set of indices i of the input feature maps connected to the j-th output feature map.
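A direct (loop-based) numpy rendering of this layer formula is sketched below, assuming 'valid' convolution and a sigmoid activation (the embodiment does not fix f; as noted further on, the convolution function here actually performs a cross-correlation):

```python
import numpy as np

def valid_corr(x, k):
    """Sum of element-wise products of k with each k-sized window of x."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

def conv_layer(inputs, kernels, biases, f=lambda s: 1.0 / (1.0 + np.exp(-s))):
    """x_j = f( sum_{i in M_j} x_i * k_ij + b_j ) over all input maps i."""
    outputs = []
    for j, b_j in enumerate(biases):
        s = sum(valid_corr(x_i, kernels[j][i]) for i, x_i in enumerate(inputs))
        outputs.append(f(s + b_j))
    return outputs
```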
Each convolutional layer can be followed by a subsampling layer. The subsampling layers of the neural network use mean sampling: the sampling window amounts to a convolution kernel all of whose weights are equal, and for an n × n sampling window this weight is 1/(n × n):
x_j^l = β · down( x_j^{l-1} )
where β = 1/(n × n) and down(·) denotes the summation of the elements inside the sampling window.
A fully connected hidden layer can be attached after the last subsampling layer; the fully connected layer is the unrolling of all features of the preceding layer, yielding one feature vector as the input of the fully connected network.
The output layer is connected to the fully connected hidden layer through weights and biases:
y_j = f( Σ_i w_ij · x_i + b )
where y_j is the output value of the j-th node of the output layer, x_i is the input value of the i-th node of the fully connected layer, w_ij is the connection weight between the i-th node of the fully connected layer and the j-th node of the output layer, and b is the output-layer bias.
In the back-propagation pass, the error gradient is transmitted by gradient descent; the cost function is the squared error:
E = (1/2) Σ_{n=1}^{N} Σ_{k=1}^{c} (t_k^n − y_k^n)²
where t is the sample-label output, y is the network output, N is the total number of samples, n is the index of the current sample, c is the output dimension, and k is the output-node index.
For a single sample, the back-propagated error is:
E^n = (1/2) Σ_{k=1}^{c} (t_k^n − y_k^n)²
where t^n and y^n are, respectively, the sample-label vector and the network output vector.
Backpropagation:
The back-propagated error can be regarded as the sensitivity of the neurons to changes in the weights and offsets. When the error gradient reaches the fully connected hidden layer, the gradient with respect to a connection weight is
∂E/∂w_ij = x_i · δ_j
where δ is the node sensitivity value, given by
δ_j = e · f′(S_j)  (8)
with e the node's output error and S_j its weighted input sum.
When the error reaches the last subsampling layer, the error gradient is transferred back to the corresponding dimensions by inverting the transformation of the forward pass. If the subsampling layer is connected to a convolutional layer on both sides, its sensitivity is:
δ_j^l = f′(u_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' )
where δ denotes a node-sensitivity matrix, u is the node output matrix, k is the convolution-kernel weight matrix, ∘ denotes element-wise multiplication of matrices, conv2 is the convolution computation, rot180 rotates the convolution kernel by 180 degrees (the convolution function actually carries out a cross-correlation), and 'full' denotes full convolution, with the missing border padded with 0 as in matlab. The error gradient with respect to the bias is:
∂E/∂b_j = Σ_{u,v} (δ_j)_{u,v}
i.e. the sensitivity values at all positions (u, v) are summed; and the gradient with respect to the sampling weight β is
∂E/∂β_j = Σ_{u,v} (δ_j ∘ d_j)_{u,v}, with d_j = down( x_j^{l-1} )
where d denotes the summed (down-sampled) front-layer output feature map.
When the error reaches a convolutional layer, the sensitivity is:
δ_j^l = β · f′(u_j^l) ∘ up( δ_j^{l+1} )
where up(·) is the inverse realization of down(·): it replicates each feature n times in the row and column directions, restoring, from the subsampling layer, the dimension size of the convolutional layer. The error gradient is then:
∂E/∂k_ij = Σ_{u,v} (δ_j^l)_{u,v} (p_i^{l-1})_{u,v}
where p_i^{l-1} denotes the patch of the i-th front-layer feature map multiplied element-wise by k_ij during the forward convolution.
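The up() operation just described — replicating every sensitivity element n times along rows and columns — is exactly a Kronecker product with an all-ones block, e.g.:

```python
import numpy as np

def up(delta, n):
    """Inverse of down(): replicate each element n times in both the row and
    column directions, restoring the convolutional layer's dimensions."""
    return np.kron(delta, np.ones((n, n)))
```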
As shown in Fig. 3, after the training process shown in Fig. 1 is completed, the method further includes:
S104, feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
S105, classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
The test process simply feeds the inputs of the test set into the trained network, obtains the network's outputs, and then applies the maxout classifier; put simply, the maximum among the outputs is chosen. The label under test at this point is the expanded correspondence: one looks at which node carries this maximum output value and compares it with the classification label under test, which tells whether the recognition is correct (for example, if the maximum value sits at the 2nd node, the corresponding digit is 1; the classification label to compare against at this point should be 01234567890123456789). Running through all the thousands of entries in this way once yields a recognition accuracy.
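This decision rule can be sketched as follows (a sketch assuming the expanded layout described above, in which nodes i, i+10, i+20, ... all stand for digit i, so the winning node's index modulo 10 is the prediction; the function names are illustrative):

```python
import numpy as np

def classify(output_vec, k=10):
    """Maxout over the expanded n*k output nodes: the winner's index mod k."""
    return int(np.argmax(output_vec)) % k

def accuracy(outputs, digits, k=10):
    """Fraction of test samples whose winning branch label matches its digit."""
    hits = sum(classify(o, k) == d for o, d in zip(outputs, digits))
    return hits / len(digits)

# Example: a 20-node output whose maximum sits at node 11 stands for digit 1.
o = np.zeros(20)
o[11] = 0.9
```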
As shown in Fig. 4, an accelerated training system for a neural network with labels includes:
an extraction module 401 for extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the database having a corresponding label, the neural network including: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
an expansion module 402 for expanding the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
a training module 403 for randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
As shown in Fig. 5, the system further includes:
a test module 404 for feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
a verification module 405 for classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
Specifically, the verification module is configured to classify all test samples in the test input set by a maxout classifier.
As shown in Fig. 6, a pattern recognition device performs image recognition using the above accelerated training method for a neural network with labels. The device must have sufficient storage capacity and computing power to carry out general-purpose computing operations. Its computing power resides in the processor 601 and the data converter 602, and the acceleration of the computation can be realized with the GPU (graphics processor) 603. These are connected for communication with the device's memory 604; the memory 604 comprises both RAM and ROM, and further external memory can be attached. The processor 601 is configured to implement the embodiment shown in Fig. 1.
In addition, the device shown in Fig. 6 further includes a sensing device 605 and a display device 606. The simplified device of Fig. 6 can perceive a target image from the outside world and then carry out subsequent storage, data conversion, recognition, and so on. The sensing device 605 is, for example, a camera, a webcam, a scanner, or a touch input device; the display device 606 is, for example, an LED lamp, an instrument panel, or an electronic screen.
The simplified device of Fig. 6 may also include a communication interface 607. This communication interface 607 can transfer data to and from external equipment serving as input/output devices, for example an external keyboard, mouse, video input device, camera, scanner, touch input device, USB drive, optical drive, or other external storage equipment.
The data converter 602 converts the image or other information perceived by the sensing device into information that can be processed by the programs and instructions contained in the processor 601, which is then passed through the memory 604 to the processor 601 for processing. After passing through the memory 604, the processed information is converted by the data converter 602 into information displayable by the display device 606 and is then displayed by the display device 606.
Furthermore, the simplified device of Fig. 6 can carry out remote processing of tasks and distributed cloud-computing functions: through the communication interface it connects to the Internet, a local area network, and the like, so that the processing modules can be updated and instructed in the cloud and the output data transferred there.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.
Claims (6)
1. An accelerated training method for a neural network with labels, characterized by comprising:
S1, extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the sample database having a corresponding label, the neural network comprising: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
S2, expanding both the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
S3, randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
2. The accelerated training method for a neural network with labels according to claim 1, characterized by further comprising, after the training process is completed:
S4, feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
S5, classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
3. The accelerated training method for a neural network with labels according to claim 2, characterized in that, in step S5, all test samples in the test input set are classified by a maxout classifier.
4. An accelerated training system for a neural network with labels, characterized by comprising:
an extraction module for extracting, in batches, training inputs of an identical number of distinct samples from the training input set of the sample database used by the neural network, each sample in the database having a corresponding label, the neural network comprising: an input layer, a first convolutional layer, a first subsampling layer, a second convolutional layer, a second subsampling layer, a fully connected hidden layer and an output layer, connected in sequence;
an expansion module for expanding both the number of nodes of the output layer and the dimension of the label of every sample in the sample database to at least twice the original;
a training module for randomly initializing the connection weights between every pair of adjacent layers of the neural network, feeding the training inputs extracted in each batch into the neural network in sequence, and training the neural network by the error backpropagation algorithm to obtain the trained neural network.
5. The accelerated training system for a neural network with labels according to claim 4, characterized by further comprising:
a test module for feeding the test input set of the sample database into the trained neural network to obtain the test output corresponding to the test input of every test sample in the test input set;
a verification module for classifying all test samples in the test input set according to their test outputs to obtain classification results, and checking the classification results against the labels of all samples in the test input set to obtain the classification accuracy of the trained neural network.
6. The accelerated training system for a neural network with labels according to claim 5, characterized in that the verification module is specifically configured to classify all test samples in the test input set by a maxout classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711482884.9A CN108229557A (en) | 2017-12-29 | 2017-12-29 | Accelerated training method and system for a neural network with labels |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711482884.9A CN108229557A (en) | 2017-12-29 | 2017-12-29 | Accelerated training method and system for a neural network with labels |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108229557A true CN108229557A (en) | 2018-06-29 |
Family
ID=62646167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711482884.9A Pending CN108229557A (en) | 2017-12-29 | 2017-12-29 | The acceleration training method and system of a kind of neural network with label |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229557A (en) |
2017-12-29: application CN201711482884.9A filed; published as CN108229557A (en); legal status: active, Pending
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800751A (en) * | 2019-01-25 | 2019-05-24 | 上海深杳智能科技有限公司 | Bank receipt recognition method and terminal based on a constructed deep learning network |
CN109800751B (en) * | 2019-01-25 | 2023-04-28 | 上海深杳智能科技有限公司 | Bill identification method and terminal based on deep learning network construction |
CN109978036A (en) * | 2019-03-11 | 2019-07-05 | 华瑞新智科技(北京)有限公司 | Target detection deep learning model training method and object detection method |
CN110532318A (en) * | 2019-09-02 | 2019-12-03 | 安徽三马信息科技有限公司 | Injection molding machine operating-condition data analysis system based on a multi-hidden-layer neural network |
CN110716792A (en) * | 2019-09-19 | 2020-01-21 | 华中科技大学 | Target detector and construction method and application thereof |
CN110716792B (en) * | 2019-09-19 | 2023-06-06 | 华中科技大学 | Target detector and construction method and application thereof |
CN112231975A (en) * | 2020-10-13 | 2021-01-15 | 中国铁路上海局集团有限公司南京供电段 | Data modeling method and system based on reliability analysis of railway power supply equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229557A (en) | Accelerated training method and system for a labeled neural network | |
CN112766199B (en) | Hyperspectral image classification method based on self-adaptive multi-scale feature extraction model | |
CN108764050B (en) | Method, system and device for skeleton behavior recognition based on angle independence | |
CN110287800B (en) | Remote sensing image scene classification method based on SGSE-GAN | |
CN107132516B (en) | Radar range profile target recognition method based on a deep belief network | |
CN104978580B (en) | Insulator recognition method for UAV inspection of power transmission lines | |
CN107563422A (en) | Polarimetric SAR classification method based on semi-supervised convolutional neural networks | |
CN112070729B (en) | Anchor-free remote sensing image target detection method and system based on scene enhancement | |
CN108647741A (en) | Image classification method and system based on transfer learning | |
CN110097029B (en) | Identity authentication method based on Highway network multi-view gait recognition | |
CN112446388A (en) | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model | |
CN109325547A (en) | Non-motor vehicle image multi-tag classification method, system, equipment and storage medium | |
CN107092870A (en) | High-resolution image semantic information extraction method and system | |
CN109584337A (en) | Image generation method based on a conditional capsule generative adversarial network | |
CN106355151A (en) | Recognition method for three-dimensional SAR images based on a deep belief network | |
CN104732243A (en) | SAR target identification method based on CNN | |
CN104050507B (en) | Hyperspectral image classification method based on multilayer neural network | |
CN107480774A (en) | Dynamic neural network model training method and device based on ensemble learning | |
CN115222946B (en) | Single-stage instance image segmentation method and device and computer equipment | |
CN109817276A (en) | Protein secondary structure prediction method based on a deep neural network | |
CN106203625A (en) | Deep neural network training method based on multiple pre-training | |
CN110245711A (en) | SAR target recognition method based on an angle-rotation generative network | |
CN107657204A (en) | Deep network model construction method, and facial expression recognition method and system | |
CN113705580B (en) | Hyperspectral image classification method based on deep migration learning | |
CN106022287A (en) | Cross-age face verification method based on deep learning and dictionary representation | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |