CN109816002A - Single sparse self-encoder weak and small target detection method based on feature self-migration - Google Patents

Single sparse self-encoder weak and small target detection method based on feature self-migration Download PDF

Info

Publication number
CN109816002A
CN109816002A (application CN201910028640.6A)
Authority
CN
China
Prior art keywords
training
sample
sae
feature
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910028640.6A
Other languages
Chinese (zh)
Other versions
CN109816002B (en)
Inventor
武继刚
孙一飞
张欣鹏
孟敏
孙为军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910028640.6A priority Critical patent/CN109816002B/en
Publication of CN109816002A publication Critical patent/CN109816002A/en
Application granted granted Critical
Publication of CN109816002B publication Critical patent/CN109816002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a weak and small target detection method based on a single sparse autoencoder with feature self-migration. The method is as follows: construct a training sample set, a test sample set, and a raw data set of the weak targets used for training; input the training sample set into an SAE model for training to obtain the sparse features of the samples, i.e. the model parameters $W^{m+1}$, $b^{m+1}$; train a softmax classifier with the sparse features, i.e. with the input features $f(W^{m+1}x + b^{m+1})$; after each round of softmax training, retain the positive samples and randomly select a number of negative samples close to the number of positive samples; use the model parameters as the initial model parameters of the next round of training, so that the parameters of the SAE model are updated; repeat the above steps, and when the value of the loss function of the SAE training is identical to the value of the previous loss function, training ends; input the test sample set into the softmax classifier obtained in the last round of training for testing, and obtain the test result. The present invention can accurately detect weak and small targets in images.

Description

Single sparse self-encoder weak and small target detection method based on feature self-migration
Technical field
The present invention relates to the field of computer vision processing technology, and more particularly to a weak and small target detection method based on a single sparse autoencoder with feature self-migration.
Background technique
Weak and small target detection is a difficult problem in the field of image processing. Detecting weak targets in natural images, and especially in medical images, is very hard: weak targets generally have blurred edges and low contrast in the image, and in most cases noise interference further increases the detection difficulty. At present, both traditional methods and deep learning methods have limitations for this kind of weak target detection. For weak target detection, feature extraction is a very important task; effective feature extraction can greatly improve the accuracy of the subsequent detection.
Summary of the invention
To solve the problem that the prior art cannot detect weak targets with high precision, the present invention provides a weak and small target detection method based on a single sparse autoencoder with feature self-migration, which can accurately detect weak and small targets in an image.
To achieve the above purpose, the adopted technical solution is as follows: a weak and small target detection method based on a single sparse autoencoder with feature self-migration, comprising the following steps:
S1: select image data in a proportion a from an image database as the training sample set, used to construct the positive samples and negative samples in the training sample set; select image data in a proportion 1-a from the database as the test sample set, used to construct the positive samples and negative samples in the test sample set; a positive sample contains a microaneurysm, and a block of 21*21 pixels is constructed centered on the microaneurysm; a negative sample contains no microaneurysm pixels and is also a block of 21*21 pixels; at the same time, the green channel and blue channel of the color image and the contrast enhancement result obtained by Gamma correction are extracted from the positive samples and negative samples as the raw data set;
Wherein: a denotes the proportion of the image database used as the training sample set, 0 < a < 1, and a is set manually;
S2: train on the training sample set: the training sample set is input into the SAE model, and training yields the sparse features of the training sample set, i.e. the SAE model parameters $W^{m+1}$, $b^{m+1}$;
Wherein: $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE model obtained by backpropagation;
S3: train a softmax classifier with the sparse features, i.e. with the input features $f(W^{m+1}x + b^{m+1})$; after each round of training, retain the positive samples and randomly select a number of negative samples close to the number of positive samples;
Wherein: f denotes the sigmoid activation function $f(z) = \frac{1}{1 + e^{-z}}$; m denotes the m-th training round; $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE at the (m+1)-th training round;
S4: use the SAE model parameters $W^{m+1}$, $b^{m+1}$ as the initial model parameters of the next round of training, so that the parameters of the SAE model are updated and the feature self-migration of the SAE model is completed; return to S2; when the value of the loss function of the SAE training is identical to the value of the previous loss function, go to S5;
S5: after the SAE model has been trained, input the test sample set into the final softmax classifier to obtain the test result.
Preferably, a takes the value 0.75, i.e. 75% of the image data are selected from the image database as the training sample set and 25% of the image data are selected from the database as the test sample set.
Preferably, in step S2, the expression of the softmax is as follows:
$$S_i = \frac{e^{V_i}}{\sum_{c=1}^{C} e^{V_c}}$$
Wherein: $V_i$ is the output of the preceding-stage output unit of the classifier; i denotes the class index and the total number of classes is C; $S_i$ is the ratio of the exponential of the feature vector corresponding to the current training sample to the sum of the exponentials over all samples.
Preferably, the parameters of the SAE model in step S4 are updated by the following formulas:
$$W^{m+1} = W^m - \alpha\left(\frac{1}{n}\Delta W^m + \lambda W^m\right), \qquad b^{m+1} = b^m - \frac{\alpha}{n}\Delta b^m$$
Wherein: $W^m$ denotes the weight matrix of the SAE at the m-th training round; α is the learning rate; $s_2$ denotes the number of hidden-layer units; $\Delta W^m$ is the matrix of partial derivatives of the loss function with respect to the weights at the m-th training round; λ is the regularization penalty factor; $b^m$ denotes the bias matrix of the SAE at the m-th training round; $\Delta W^m_{ij}$ is an element of the matrix $\Delta W^m$; $h_{W,b}(x^{(i)})$ is the output corresponding to the input $x^{(i)}$; $\hat\rho$ is the average activity of the hidden layer of the sparse autoencoder; $\hat\rho_j$ denotes the activation of the j-th neuron of the hidden layer; $\Delta b^m$ is the matrix of partial derivatives with respect to the bias b at the m-th training round; n is the number of training samples.
Further, the loss function of S4 is:
$$J_{sparse}(W,b) = J(W,b) + \beta\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right), \qquad \mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right) = \rho\log\frac{\rho}{\hat\rho_j} + (1-\rho)\log\frac{1-\rho}{1-\hat\rho_j}$$
Wherein: β is the sparsity penalty factor; $\mathrm{KL}(\rho\,\|\,\hat\rho_j)$ is called the KL divergence and measures the closeness of two probability distributions; $\hat\rho_j$ is the average activity of the j-th hidden neuron; ρ is the sparsity parameter. J(W, b) is expressed by the following formula:
$$J(W,b) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\left\|h_{W,b}\left(x^{(i)}\right) - x^{(i)}\right\|^2 + \frac{\lambda}{2}\sum_{l}\sum_{i}\sum_{j}\left(W_{ji}^{(l)}\right)^2$$
Wherein: n is the number of samples; $x^{(i)}$ denotes the input of the i-th sample; $W_{ji}^{(l)}$ is the weight from the i-th neuron of layer l to the j-th neuron of the next layer.
The beneficial effects of the present invention are as follows: by constructing a training sample set and a test sample set, the present invention trains the SAE model with the training sample set and repeatedly updates the training sample set and the SAE model; when the value of the loss function of the SAE model is identical to the previous value, training ends; the trained model is then tested with the test sample set to obtain the test result. This method can accurately detect weak targets in images.
Detailed description of the invention
Fig. 1 is the flow chart of the method for the present invention.
Fig. 2 is a schematic diagram of the training process of the present invention.
Fig. 3 is a schematic diagram of the structure of the sparse autoencoder.
Fig. 4 is a comparison chart of the test results of this embodiment.
Specific embodiment
The present invention will be described in detail below with reference to the accompanying drawings and the specific embodiments.
Embodiment 1
As shown in Fig. 1, a weak and small target detection method based on a single sparse autoencoder with feature self-migration comprises the following specific steps:
Step S1: 75% of the image data are selected from the databases Retinopathy Online Challenge, DIARETDB1 and E-ophtha as the training sample set, used to construct the positive samples and negative samples of the training sample set; the remaining 25% of the image data are selected from the databases Retinopathy Online Challenge, DIARETDB1 and E-ophtha as the test sample set, used to construct the positive samples and negative samples of the test sample set; a positive sample contains a microaneurysm, and a block of 21*21 pixels is constructed centered on the microaneurysm; a negative sample contains no microaneurysm and is a block of 21*21 pixels; at the same time, the green channel and blue channel of the color image and the contrast enhancement result obtained by Gamma correction are extracted from the positive and negative samples as the raw data set. The color images are true-color images, in which the color value of each pixel is determined by the three values R, G and B.
When constructing the training sample set, this embodiment also constructs the raw data set, composed of the green channel, the blue channel and the contrast enhancement result obtained by Gamma correction. The databases selected for this embodiment must match the characteristics of weak and small targets, and the samples in Retinopathy Online Challenge, DIARETDB1 and E-ophtha all match the characteristics of weak and small targets.
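A rough sketch of this sample construction is given below in NumPy. It assumes the fundus image is an RGB array and the microaneurysm coordinates are already known; the helper names (`gamma_correct`, `extract_patch`, `build_sample`), the gamma value, and the choice of applying the Gamma enhancement to the green channel are illustrative assumptions, not taken from the patent.

```python
import numpy as np

PATCH = 21                     # block size from the embodiment: 21*21 pixels
HALF = PATCH // 2

def gamma_correct(channel, gamma=1.5):
    """Illustrative contrast enhancement by Gamma correction (gamma value assumed)."""
    return (channel.astype(np.float64) / 255.0) ** (1.0 / gamma)

def extract_patch(image, row, col):
    """Cut a 21*21 block centered on (row, col); return None near the border."""
    r0, c0 = row - HALF, col - HALF
    r1, c1 = row + HALF + 1, col + HALF + 1
    if r0 < 0 or c0 < 0 or r1 > image.shape[0] or c1 > image.shape[1]:
        return None
    return image[r0:r1, c0:c1]

def build_sample(rgb_image, row, col):
    """Stack the green channel, the blue channel and a Gamma-enhanced channel
    (assumed here to be derived from the green channel) into one feature vector."""
    green = rgb_image[:, :, 1].astype(np.float64) / 255.0
    blue = rgb_image[:, :, 2].astype(np.float64) / 255.0
    enhanced = gamma_correct(rgb_image[:, :, 1])        # already scaled to [0, 1]
    patches = [extract_patch(ch, row, col) for ch in (green, blue, enhanced)]
    if any(p is None for p in patches):
        return None
    return np.concatenate([p.ravel() for p in patches])  # 3 * 21 * 21 = 1323 values
```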
Step S2: the training sample set is trained. As shown in Fig. 2, the training sample set is input into the SAE model, and training yields the sparse features of the samples, i.e. the model parameters $W^{m+1}$, $b^{m+1}$;
Wherein: $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE model obtained by backpropagation;
Step S3: a softmax classifier is trained with the sparse features, i.e. with the input features $f(W^{m+1}x + b^{m+1})$; after each round of training, the positive samples are retained and a number of negative samples close to the number of positive samples is randomly selected;
Wherein: f denotes the sigmoid activation function $f(z) = \frac{1}{1 + e^{-z}}$; m denotes the m-th training round; $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE at the (m+1)-th training round.
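A minimal sketch of the per-round resampling described in step S3, assuming binary labels with 1 for positive (microaneurysm) patches and 0 for negatives; the function name `rebalance` is illustrative.

```python
import numpy as np

def rebalance(samples, labels, rng=None):
    """Keep every positive sample and randomly draw roughly as many negatives,
    as described for step S3 after each round of training."""
    if rng is None:
        rng = np.random.default_rng()
    pos_idx = np.flatnonzero(labels == 1)
    neg_idx = np.flatnonzero(labels == 0)
    n_neg = min(len(neg_idx), len(pos_idx))   # a negative count close to the positive count
    keep = np.concatenate([pos_idx, rng.choice(neg_idx, size=n_neg, replace=False)])
    rng.shuffle(keep)
    return samples[keep], labels[keep]
```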
The SAE model described in this embodiment is a deep neural network model composed of multiple layers of sparse autoencoders. The output of each autoencoder layer serves as the input of the next autoencoder layer, and the last layer is a classifier (a logistic classifier or a softmax classifier).
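This layered structure can be sketched as a forward pass in which the output of each encoder feeds the next one. The sketch below is schematic only; `weights` and `biases` are assumed to hold one (W, b) pair per encoder layer.

```python
import numpy as np

def sigmoid(z):
    """The activation f(z) = 1 / (1 + exp(-z)) used by the SAE layers."""
    return 1.0 / (1.0 + np.exp(-z))

def stacked_features(x, weights, biases):
    """Forward pass through the stacked sparse-autoencoder layers: the output of
    each layer, f(Wx + b), is the input of the next one; the final activation is
    the sparse feature vector handed to the softmax classifier."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a
```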
As shown in Fig. 3, the sparse autoencoder is an unsupervised machine learning algorithm. Sparsity means that when the output of a neuron is 1, the neuron is considered activated, and when the output is 0, the neuron is considered inhibited. Keeping the neurons in the inhibited state most of the time is called the sparsity constraint. In the actual training process of this embodiment, the machine is expected to learn important features of the samples by itself; by imposing constraints on the hidden layer together with the sparsity constraint, the machine can learn features that express the samples well even under difficult conditions, and the samples can be effectively reduced in dimensionality. In actual operation, however, it cannot be judged directly which neurons should be activated and which should be inhibited. Therefore the concept of average activity is introduced, denoted by $\hat\rho_j$ and given by the following formula:
$$\hat\rho_j = \frac{1}{n}\sum_{i=1}^{n} a_j\left(x^{(i)}\right), \qquad j = 1, \dots, s_2$$
Wherein: $s_2$ denotes the number of hidden-layer neurons; n is the number of training samples; $a_j(x^{(i)})$ denotes the activation of the j-th hidden unit when the network is given the specific input $x^{(i)}$. A parameter ρ, called the sparsity parameter, is introduced at the same time, and training drives $\hat\rho_j$ as close to ρ as possible.
The softmax described in this embodiment is very widely used in machine learning. Softmax is simple to compute and effective, especially for multi-class classification problems (C > 2), where the final output units of the classifier require the softmax function for numerical processing. The softmax function is defined as follows:
$$S_i = \frac{e^{V_i}}{\sum_{c=1}^{C} e^{V_c}}$$
Wherein: $V_i$ is the output of the preceding-stage output unit of the classifier; i denotes the class index and the total number of classes is C; $S_i$ is the ratio of the exponential of the current element to the sum of the exponentials of all elements.
Softmax converts the multi-class output values into relative probabilities, which makes the values easier to interpret and compare.
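A numerically stable version of this softmax definition, written as a small NumPy function; the shift by the maximum logit is a standard implementation detail, not part of the patent text.

```python
import numpy as np

def softmax(V):
    """S_i = exp(V_i) / sum_c exp(V_c), computed over the last axis of V."""
    shifted = V - np.max(V, axis=-1, keepdims=True)   # subtract the max to avoid overflow
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)

# Three-class example: logits are turned into relative probabilities.
print(softmax(np.array([2.0, 1.0, 0.1])))   # approx. [0.659, 0.242, 0.099]
```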
Step S4: the model parameters $W^{m+1}$, $b^{m+1}$ are used as the initial model parameters of the next round of training, so that the parameters of the SAE model are updated and the feature self-migration of the SAE model is completed; after the feature migration, all parameters of the SAE model have been updated; steps S2 and S3 are repeated until the value of the loss function of the SAE training is identical to the previous value, at which point training ends.
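The self-migration loop of step S4 can be sketched as the higher-order routine below. The callables `train_sae`, `train_softmax`, `encode` and `rebalance` stand for the procedures outlined in the other steps and are assumptions of this sketch; the exact-equality stopping rule follows the patent's wording, whereas in practice a tolerance would usually be used.

```python
def self_migration_training(samples, labels, train_sae, train_softmax, encode, rebalance,
                            max_rounds=50):
    """Repeat S2/S3 with warm-started parameters (the feature self-migration)
    until the SAE training loss equals the loss of the previous round."""
    params, classifier, prev_loss = None, None, None
    for _ in range(max_rounds):
        params, loss = train_sae(samples, params)                    # S2: warm start from the last round
        classifier = train_softmax(encode(samples, params), labels)  # S3: softmax on the sparse features
        samples, labels = rebalance(samples, labels)                 # S3: keep positives, resample negatives
        if prev_loss is not None and loss == prev_loss:              # S4: stop when the loss repeats
            break
        prev_loss = loss
    return params, classifier
```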
This embodiment updates the SAE model parameters by backpropagation. Backpropagation is driven by the propagated error: in the backward pass, the partial derivatives with respect to the weights and biases are computed, and the weights and biases are updated step by step. The parameter update is then obtained by the following formulas:
$$W^{m+1} = W^m - \alpha\left(\frac{1}{n}\Delta W^m + \lambda W^m\right), \qquad b^{m+1} = b^m - \frac{\alpha}{n}\Delta b^m$$
Wherein: $W^m$ denotes the weight matrix of the SAE at the m-th training round; α is the learning rate; $s_2$ denotes the number of hidden-layer units; $\Delta W^m$ is the matrix of partial derivatives of the loss function with respect to the weights at the m-th training round; λ is the regularization penalty factor; $b^m$ denotes the bias matrix of the SAE at the m-th training round; $\Delta W^m_{ij}$ is an element of the matrix $\Delta W^m$; $h_{W,b}(x^{(i)})$ is the output corresponding to the input $x^{(i)}$; $\hat\rho$ is the average activity of the hidden layer of the sparse autoencoder; $\hat\rho_j$ denotes the activation of the j-th neuron of the hidden layer; $\Delta b^m$ is the matrix of partial derivatives with respect to the bias b at the m-th training round; n is the number of training samples.
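Assuming the standard batch-gradient-descent form that these symbol definitions describe, one update step could be written as below; this is a reconstruction under that assumption, not a verbatim copy of the patent's formula images, and the hyperparameter defaults are placeholders.

```python
import numpy as np

def update_parameters(W, b, dW, db, n, alpha=0.1, lam=1e-4):
    """One gradient-descent step:
       W <- W - alpha * (dW / n + lam * W)   (weights, with weight decay lambda)
       b <- b - alpha * (db / n)             (biases)
    where dW, db are the partial derivatives accumulated over the n samples."""
    return W - alpha * (dW / n + lam * W), b - alpha * (db / n)
```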
The value of the loss in this embodiment is computed by the loss function, whose formula is:
$$J_{sparse}(W,b) = J(W,b) + \beta\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right), \qquad \mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right) = \rho\log\frac{\rho}{\hat\rho_j} + (1-\rho)\log\frac{1-\rho}{1-\hat\rho_j}$$
Wherein: β is the sparsity penalty factor; $\mathrm{KL}(\rho\,\|\,\hat\rho_j)$ is called the KL divergence and measures the closeness of two probability distributions; $\hat\rho_j$ is the average activity of the j-th hidden neuron; ρ is the sparsity parameter. J(W, b) is expressed by the following formula:
$$J(W,b) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\left\|h_{W,b}\left(x^{(i)}\right) - x^{(i)}\right\|^2 + \frac{\lambda}{2}\sum_{l}\sum_{i}\sum_{j}\left(W_{ji}^{(l)}\right)^2$$
Wherein: n is the number of samples; $x^{(i)}$ denotes the input of the i-th sample; $W_{ji}^{(l)}$ is the weight from the i-th neuron of layer l to the j-th neuron of the next layer.
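The loss above can be evaluated with a short NumPy routine; the hyperparameter values in the signature are placeholders, and the clipping of rho_hat is an implementation detail added only to keep the logarithms finite.

```python
import numpy as np

def sparse_loss(X, recon, rho_hat, weights, lam=1e-4, beta=3.0, rho=0.05, eps=1e-12):
    """J_sparse(W, b) = reconstruction error + weight decay + beta * sum_j KL(rho || rho_hat_j)."""
    n = X.shape[0]
    recon_term = 0.5 * np.sum((recon - X) ** 2) / n                 # (1/n) sum_i 0.5*||h(x_i) - x_i||^2
    decay_term = 0.5 * lam * sum(np.sum(W ** 2) for W in weights)   # (lambda/2) * sum of squared weights
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))
    return recon_term + decay_term + beta * kl
```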
In this embodiment, the training sample set of every round of training always includes the raw data set; the main role of the raw data set in the training process is to improve the accuracy of the final model.
Step S5: after the SAE model has been trained, the softmax classifier obtained in the last round of training is used for testing; the test sample set is input into the softmax classifier, and the test result is obtained.
The test results of this embodiment are evaluated mainly by two measures, namely sensitivity (Sensitivity) and accuracy (Accuracy). The test results obtained in this embodiment by the weak and small target detection method based on a single sparse autoencoder with feature self-migration are shown in Fig. 4, which gives the accuracy and specificity for each database. The test results show that the weak and small target detection method based on feature self-migration with a sparse autoencoder and softmax can extract the sparse features of the samples well through the sparse autoencoder, improve the classification ability of the softmax through the step-by-step progressive training method, and significantly improve the accuracy and specificity of target detection.
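The reported measures can be computed from the binary predictions as follows; sensitivity and specificity are the standard definitions, and the `max(..., 1)` guards are only there to avoid division by zero.

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return accuracy, sensitivity, specificity
```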
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not a limitation of its embodiments. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the claims of the present invention.

Claims (5)

1. A weak and small target detection method based on a single sparse autoencoder with feature self-migration, characterized in that the method comprises the following steps:
S1: selecting image data in a proportion a from an image database as a training sample set, used to construct the positive samples and negative samples in the training sample set; selecting image data in a proportion 1-a from the database as a test sample set, used to construct the positive samples and negative samples in the test sample set; a positive sample contains a microaneurysm, and a block of 21*21 pixels is constructed centered on the microaneurysm; a negative sample contains no microaneurysm pixels and is also a block of 21*21 pixels; at the same time, the green channel and blue channel of the color image and the contrast enhancement result obtained by Gamma correction are extracted from the positive samples and negative samples as a raw data set;
wherein: a denotes the proportion of the image database used as the training sample set, 0 < a < 1, and a is set manually;
S2: training on the training sample set: the training sample set is input into an SAE model, and training yields the sparse features of the training sample set, i.e. the SAE model parameters $W^{m+1}$, $b^{m+1}$;
wherein: $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE model obtained by backpropagation;
S3: training a softmax classifier with the sparse features, i.e. with the input features $f(W^{m+1}x + b^{m+1})$; after each round of training, retaining the positive samples and randomly selecting a number of negative samples close to the number of positive samples;
wherein: f denotes the sigmoid activation function $f(z) = \frac{1}{1 + e^{-z}}$; m denotes the m-th training round; $W^{m+1}$, $b^{m+1}$ denote the weights and biases of the SAE at the (m+1)-th training round;
S4: using the SAE model parameters $W^{m+1}$, $b^{m+1}$ as the initial model parameters of the next round of training, so that the parameters of the SAE model are updated and the feature self-migration of the SAE model is completed; returning to S2; when the value of the loss function of the SAE training is identical to the value of the previous loss function, going to S5;
S5: after the SAE model has been trained, inputting the test sample set into the final softmax classifier to obtain the test result.
2. The weak and small target detection method based on a single sparse autoencoder with feature self-migration according to claim 1, characterized in that: a takes the value 0.75, i.e. 75% of the image data are selected from the image database as the training sample set and 25% of the image data are selected from the database as the test sample set.
3. The weak and small target detection method based on a single sparse autoencoder with feature self-migration according to claim 1, characterized in that: in step S2, the expression of the softmax is:
$$S_i = \frac{e^{V_i}}{\sum_{c=1}^{C} e^{V_c}}$$
Wherein: $V_i$ is the output of the preceding-stage output unit of the classifier; i denotes the class index and the total number of classes is C; $S_i$ is the ratio of the exponential of the feature vector corresponding to the current training sample to the sum of the exponentials over all samples.
4. The weak and small target detection method based on a single sparse autoencoder with feature self-migration according to claim 1, characterized in that: the parameters of the SAE model in step S4 are updated by the formulas:
$$W^{m+1} = W^m - \alpha\left(\frac{1}{n}\Delta W^m + \lambda W^m\right), \qquad b^{m+1} = b^m - \frac{\alpha}{n}\Delta b^m$$
Wherein: $W^m$ denotes the weight matrix of the SAE at the m-th training round; α is the learning rate; $s_2$ denotes the number of hidden-layer units; $\Delta W^m$ is the matrix of partial derivatives of the loss function with respect to the weights at the m-th training round; λ is the regularization penalty factor; $b^m$ denotes the bias matrix of the SAE at the m-th training round; $\Delta W^m_{ij}$ is an element of the matrix $\Delta W^m$; $h_{W,b}(x^{(i)})$ is the output corresponding to the input $x^{(i)}$; $\hat\rho$ is the average activity of the hidden layer of the sparse autoencoder; $\hat\rho_j$ denotes the activation of the j-th neuron of the hidden layer; $\Delta b^m$ is the matrix of partial derivatives with respect to the bias b at the m-th training round; n is the number of training samples.
5. The weak and small target detection method based on a single sparse autoencoder with feature self-migration according to claim 4, characterized in that: the loss function of S4 is:
$$J_{sparse}(W,b) = J(W,b) + \beta\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right), \qquad \mathrm{KL}\left(\rho\,\|\,\hat\rho_j\right) = \rho\log\frac{\rho}{\hat\rho_j} + (1-\rho)\log\frac{1-\rho}{1-\hat\rho_j}$$
Wherein: β is the sparsity penalty factor; $\mathrm{KL}(\rho\,\|\,\hat\rho_j)$ is called the KL divergence and measures the closeness of two probability distributions; $\hat\rho_j$ is the average activity of the j-th hidden neuron; ρ is the sparsity parameter; J(W, b) is expressed by the following formula:
$$J(W,b) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\left\|h_{W,b}\left(x^{(i)}\right) - x^{(i)}\right\|^2 + \frac{\lambda}{2}\sum_{l}\sum_{i}\sum_{j}\left(W_{ji}^{(l)}\right)^2$$
Wherein: n is the number of samples; $x^{(i)}$ denotes the input of the i-th sample; $W_{ji}^{(l)}$ is the weight from the i-th neuron of layer l to the j-th neuron of the next layer.
CN201910028640.6A 2019-01-11 2019-01-11 Single sparse self-encoder weak and small target detection method based on feature self-migration Active CN109816002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910028640.6A CN109816002B (en) 2019-01-11 2019-01-11 Single sparse self-encoder weak and small target detection method based on feature self-migration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910028640.6A CN109816002B (en) 2019-01-11 2019-01-11 Single sparse self-encoder weak and small target detection method based on feature self-migration

Publications (2)

Publication Number Publication Date
CN109816002A true CN109816002A (en) 2019-05-28
CN109816002B CN109816002B (en) 2022-09-06

Family

ID=66603394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910028640.6A Active CN109816002B (en) 2019-01-11 2019-01-11 Single sparse self-encoder weak and small target detection method based on feature self-migration

Country Status (1)

Country Link
CN (1) CN109816002B (en)


Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070002275A1 (en) * 2005-07-01 2007-01-04 Siemens Corporate Research Inc. Method and System For Local Adaptive Detection Of Microaneurysms In Digital Fundus Images
US20150379708A1 (en) * 2010-12-07 2015-12-31 University Of Iowa Research Foundation Methods and systems for vessel bifurcation detection
AU2014271202A1 (en) * 2013-05-19 2016-01-07 Commonwealth Scientific And Industrial Research Organisation A system and method for remote medical diagnosis
CN104166859A (en) * 2014-08-13 2014-11-26 西安电子科技大学 Polarization SAR image classification based on SSAE and FSALS-SVM
CN104156736A (en) * 2014-09-05 2014-11-19 西安电子科技大学 Polarized SAR image classification method on basis of SAE and IDL
US20160292856A1 (en) * 2015-04-06 2016-10-06 IDx, LLC Systems and methods for feature detection in retinal images
CN105224943A (en) * 2015-09-08 2016-01-06 西安交通大学 Based on the image swift nature method for expressing of multi thread normalization non-negative sparse coding device
CN105320965A (en) * 2015-10-23 2016-02-10 西北工业大学 Hyperspectral image classification method based on spectral-spatial cooperation of deep convolutional neural network
CN105654117A (en) * 2015-12-25 2016-06-08 西北工业大学 Hyperspectral image spectral-spatial cooperative classification method based on SAE depth network
CN105787517A (en) * 2016-03-11 2016-07-20 西安电子科技大学 Polarized SAR image classification method base on wavelet sparse auto encoder
CN106096652A (en) * 2016-06-12 2016-11-09 西安电子科技大学 Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device
CN106651899A (en) * 2016-12-09 2017-05-10 东北大学 Fundus image micro-aneurysm detection system based on Adaboost
CN106815601A (en) * 2017-01-10 2017-06-09 西安电子科技大学 Hyperspectral image classification method based on recurrent neural network
CN107341511A (en) * 2017-07-05 2017-11-10 西安电子科技大学 Classification of Polarimetric SAR Image method based on super-pixel Yu sparse self-encoding encoder
CN107590515A (en) * 2017-09-14 2018-01-16 西安电子科技大学 The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation
CN107798349A (en) * 2017-11-03 2018-03-13 合肥工业大学 A kind of transfer learning method based on the sparse self-editing ink recorder of depth
CN108537233A (en) * 2018-03-15 2018-09-14 南京师范大学 A kind of pathology brain image sorting technique based on the sparse self-encoding encoder of depth stack
CN109033952A (en) * 2018-06-12 2018-12-18 杭州电子科技大学 M-sequence recognition methods based on sparse self-encoding encoder
CN108921233A (en) * 2018-07-31 2018-11-30 武汉大学 A kind of Raman spectrum data classification method based on autoencoder network
CN109102019A (en) * 2018-08-09 2018-12-28 成都信息工程大学 Image classification method based on HP-Net convolutional neural networks
CN109145832A (en) * 2018-08-27 2019-01-04 大连理工大学 Polarimetric SAR image semisupervised classification method based on DSFNN Yu non local decision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shan Juan et al.: "A Deep Learning Method for Microaneurysm Detection in Fundus Images", 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472667A (en) * 2019-07-19 2019-11-19 广东工业大学 Small object classification method based on deconvolution neural network
CN110472667B (en) * 2019-07-19 2024-01-09 广东工业大学 Small target classification method based on deconvolution neural network
CN110930409A (en) * 2019-10-18 2020-03-27 电子科技大学 Salt body semantic segmentation method based on deep learning and semantic segmentation model
CN110930409B (en) * 2019-10-18 2022-10-14 电子科技大学 Salt body semantic segmentation method and semantic segmentation system based on deep learning
CN110972174A (en) * 2019-12-02 2020-04-07 东南大学 Wireless network interruption detection method based on sparse self-encoder
CN110972174B (en) * 2019-12-02 2022-12-30 东南大学 Wireless network interruption detection method based on sparse self-encoder
CN111462817A (en) * 2020-03-25 2020-07-28 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Classification model construction method and device, classification model and classification method
CN111462817B (en) * 2020-03-25 2023-06-20 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Classification model construction method and device, classification model and classification method
CN112465042A (en) * 2020-12-02 2021-03-09 中国联合网络通信集团有限公司 Generation method and device of classification network model
CN112465042B (en) * 2020-12-02 2023-10-24 中国联合网络通信集团有限公司 Method and device for generating classified network model

Also Published As

Publication number Publication date
CN109816002B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN109816002A (en) 2019-05-28 Single sparse self-encoder weak and small target detection method based on feature self-migration
CN107145908B (en) A kind of small target detecting method based on R-FCN
CN105825511B (en) A kind of picture background clarity detection method based on deep learning
Chen et al. Assessing four neural networks on handwritten digit recognition dataset (MNIST)
CN107391703B (en) The method for building up and system of image library, image library and image classification method
CN111858989B (en) Pulse convolution neural network image classification method based on attention mechanism
CN107909101A (en) Semi-supervised transfer learning character identifying method and system based on convolutional neural networks
CN110020682A (en) A kind of attention mechanism relationship comparison net model methodology based on small-sample learning
CN107563999A (en) A kind of chip defect recognition methods based on convolutional neural networks
CN108961245A (en) Picture quality classification method based on binary channels depth parallel-convolution network
CN108052984A (en) Method of counting and device
CN112101328A (en) Method for identifying and processing label noise in deep learning
CN105957086A (en) Remote sensing image change detection method based on optimized neural network model
CN110443367A (en) A kind of method of strength neural network model robust performance
CN107016415A (en) A kind of coloured image Color Semantic sorting technique based on full convolutional network
CN108647718A (en) A kind of different materials metallographic structure is classified the method for grading automatically
CN106203625A (en) A kind of deep-neural-network training method based on multiple pre-training
CN106780546B (en) The personal identification method of motion blur encoded point based on convolutional neural networks
CN106777402B (en) A kind of image retrieval text method based on sparse neural network
CN108596274A (en) Image classification method based on convolutional neural networks
CN111582397A (en) CNN-RNN image emotion analysis method based on attention mechanism
CN109146873A (en) A kind of display screen defect intelligent detecting method and device based on study
CN107506350A (en) A kind of method and apparatus of identification information
CN110263174A (en) - subject categories the analysis method based on focus
CN109740656A (en) A kind of ore method for separating based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant