CN109598336A - Data reduction method based on a stacked denoising autoencoder neural network - Google Patents

Data reduction method based on a stacked denoising autoencoder neural network

Info

Publication number
CN109598336A
CN109598336A
Authority
CN
China
Prior art keywords
layer
dae
reduction
neural network
noise reduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811476554.3A
Other languages
Chinese (zh)
Inventor
肖子洋
邱日轩
付晨
李路明
褚红亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information And Communication Branch Of Jiangxi Electric Power Co Ltd
State Grid Corp of China SGCC
Original Assignee
Information And Communication Branch Of Jiangxi Electric Power Co Ltd
State Grid Corp of China SGCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information And Communication Branch Of Jiangxi Electric Power Co Ltd and State Grid Corp of China SGCC
Priority to CN201811476554.3A
Publication of CN109598336A
Pending legal-status Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention discloses a data reduction method based on a stacked denoising autoencoder (SDAE) neural network. The reduction model of the stacked denoising autoencoder neural network is constructed as follows. Step 1: the output of each DAE is used as the input of the next DAE, so that the data are encoded layer by layer. Step 2: with x_0 denoting the original input sample and x_i denoting the encoding produced by the i-th DAE layer, the encoding of every DAE layer can be obtained. Layer-wise greedy training and fine-tuning are then carried out, the fine-tuning using the BP algorithm to adjust the initial parameters of the cross-entropy function so as to minimize the reconstruction error. The invention applies the stacked denoising autoencoder neural network algorithm, an improved form of the denoising autoencoder network, to reduce the dimensionality of the sample feature set, thereby lowering the complexity of the various models, improving the classification performance of classifiers in machine learning applications and reducing the running cost of the various learning algorithms; the feasibility and efficiency of the proposed reduction method are verified.

Description

Data reduction method based on a stacked denoising autoencoder neural network
Technical field
The present invention relates to the technical field of data processing, and in particular to a data reduction method based on a stacked denoising autoencoder neural network.
Background art
An autoencoder (AE) is a structure proposed by Hinton in 2006; it consists of an input layer, an output layer and a hidden layer. The input layer and the output layer have the same number of neurons, while the hidden layer has fewer. The input layer and the hidden layer form the encoding network, in which the data are compressed.
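By way of illustration (this sketch is not part of the patent; PyTorch and all identifiers here are assumptions of this description), such a structure, with input and output layers of equal width and a narrower hidden layer acting as the encoding network, can be written as:

```python
# A minimal autoencoder sketch (illustrative, not the patent's implementation).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_visible: int, n_hidden: int):
        super().__init__()
        # Encoding network: input layer -> (narrower) hidden layer.
        self.encoder = nn.Sequential(nn.Linear(n_visible, n_hidden), nn.Sigmoid())
        # Decoding network: hidden layer -> output layer as wide as the input.
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_visible), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = Autoencoder(n_visible=219, n_hidden=150)   # hidden layer has fewer neurons
x = torch.rand(8, 219)                          # a dummy batch
print(ae(x).shape)                              # torch.Size([8, 219])
```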
At present, with the rapid development of data acquisition and storage technology, the problem of data redundancy is becoming increasingly severe: it not only wastes a great deal of storage space, but also significantly degrades data-driven modeling.
Large-scale data sets are characterized by high dimensionality, heavy redundancy and strong correlation between indices. To improve data processing capability and data availability, a novel method for preprocessing the raw data is needed.
Summary of the invention
The purpose of the invention is to provide a data reduction method based on a stacked denoising autoencoder neural network. The invention applies the stacked denoising autoencoder neural network algorithm, an improved form of the denoising autoencoder network, to reduce the dimensionality of the sample feature set, thereby lowering the complexity of the various models, improving the classification performance of classifiers in machine learning applications and reducing the running cost of the various learning algorithms; the feasibility and efficiency of the method are verified. This solves the problems raised in the background section above.
To achieve the above object, the invention provides the following technical scheme:
In the data reduction method based on a stacked denoising autoencoder neural network, the original data X is corrupted by the stochastic mapping q_D, the resulting noisy data is used as the input of the autoencoder, and the activation value of each hidden-layer neuron is computed through the encoding function f_θ. The reduction model of the stacked denoising autoencoder neural network is constructed in the following steps:
Step 1: the output of each DAE is used as the input of the next DAE, so that the data are encoded layer by layer;
Step 2: with x_0 denoting the original input sample and x_i denoting the encoding produced by the i-th DAE layer, the encoding of every DAE layer can be expressed as:
x_i = f_θ(x_{i-1}),  i = 1, 2, 3, …
Step 3: layer-wise greedy training and fine-tuning are carried out. In the layer-wise greedy training, the weights are trained by minimizing the difference between the original input data and its reconstruction, which yields the initial parameters; in the fine-tuning, the BP algorithm adjusts the initial parameters of the cross-entropy function so as to minimize the reconstruction error and obtain the best reconstruction quality.
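The following sketch illustrates Steps 1 and 2 under stated assumptions (masking noise for q_D, a sigmoid encoding for f_θ, randomly initialized and untrained weights); it is an illustration of the layer-by-layer encoding, not the patent's implementation:

```python
# Sketch of q_D (masking noise), f_theta, and the layer-by-layer encoding
# x_i = f_theta(x_{i-1}); illustrative only, with hypothetical names.
import numpy as np

rng = np.random.default_rng(0)

def q_D(x, corruption_level=0.3):
    """Corrupt the input by randomly zeroing a fraction of its components."""
    mask = rng.random(x.shape) >= corruption_level
    return x * mask

def f_theta(x, W, b):
    """Sigmoid encoding of one DAE layer."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Layer widths of the patent's reduction model: 219-150-100-50-5.
sizes = [219, 150, 100, 50, 5]
params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.random((1, 219))        # x_0: original input sample
h = q_D(x)                      # noisy input of the first DAE
for W, b in params:             # x_i = f_theta(x_{i-1}), i = 1, 2, 3, 4
    h = f_theta(h, W, b)
print(h.shape)                  # (1, 5): the reduced 5-dimensional encoding
```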
Further, when training an SDAE composed of multiple DAE layers, the layer-wise greedy principle is applied: each DAE layer is trained individually to obtain its initialization parameters, and the parameters are then fine-tuned under the constraint that the reconstruction error is minimized.
Further, the method comprises the following steps: first, the first layer of the SDAE, i.e. the first DAE, is trained with the input sample features and its parameters are obtained by fine-tuning; the hidden-layer output of this DAE is then used as the input of the second DAE, which is trained and fine-tuned to obtain its parameters; proceeding layer by layer in this way yields the reduction model based on the SDAE.
Further, throughout the training process, the parameters of the previous DAE are kept fixed while the next DAE is being trained.
Further, the method comprises a network structure of a spam page discriminant-index reduction model consisting of an input layer, hidden layers and an output layer. The structures of the DAE layers are 219-150, 150-100, 100-50 and 50-5 respectively: the input layer has 219 neurons, the output layer has 5 neurons, and, to achieve reduction, the numbers of neurons in the intermediate layers follow the layer-by-layer decreasing sequence 150, 100, 50.
Further, in the reduction model the hidden-layer output of each DAE serves as the input of the next DAE. Through this layer-by-layer learning, the neurons of each DAE capture the strong correlations among the neurons of the preceding DAE and accurately describe their nonlinear relations, so that the final output encoding fully covers the information of the high-dimensional data.
Compared with the prior art, the beneficial effects of the present invention are as follows. The proposed data reduction method based on a stacked denoising autoencoder neural network first analyzes the data set comprehensively and in detail, quantifying, standardizing and balancing the sample data set. It then applies the stacked denoising autoencoder neural network algorithm, an improved form of the denoising autoencoder network, to reduce the dimensionality of the sample feature set, thereby lowering the complexity of the various models, improving the classification performance of classifiers in machine learning applications and reducing the running cost of the various learning algorithms; the feasibility and efficiency of the proposed reduction method are verified.
Detailed description of the invention
Fig. 1 is a schematic diagram of a DAE according to the invention;
Fig. 2 is the network structure of the first DAE according to the invention;
Fig. 3 is the network structure of the second DAE according to the invention;
Fig. 4 is the reduction model based on the SDAE according to the invention;
Fig. 5 is the network structure of the spam page discriminant-index reduction model according to the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the data reduction method based on a stacked denoising autoencoder neural network, the denoising autoencoder (DAE) retains the characteristics of the AE while learning from noise-contaminated input: a certain amount of noise is added to the input data to improve the robustness of the system. The schematic diagram of the DAE is shown in Fig. 1. The original data X is corrupted by the stochastic mapping q_D, the resulting noisy data is used as the input of the autoencoder, and the activation value of each hidden-layer neuron is computed through f_θ. The reduction model of the stacked denoising autoencoder neural network is constructed in the following steps:
Step 1: in the stacked denoising autoencoder neural network (Stacked Denoising Autoencoder Neural Networks, SDAE) model, the output of each DAE is used as the input of the next DAE, so that the data are encoded layer by layer;
Step 2: with x_0 denoting the original input sample and x_i denoting the encoding produced by the i-th DAE layer, the encoding of every DAE layer can be expressed as:
x_i = f_θ(x_{i-1}),  i = 1, 2, 3, …
Step 3: during the construction of the SDAE reduction model, layer-wise greedy training and fine-tuning are carried out. In the layer-wise greedy training, the weights are trained by minimizing the difference between the original input data and its reconstruction, which yields the initial parameters; in the fine-tuning, the BP algorithm adjusts the initial parameters of the cross-entropy function so as to minimize the reconstruction error and obtain the best reconstruction quality.
When training an SDAE composed of multiple DAE layers, the layer-wise greedy principle is applied: each DAE layer is trained individually to obtain its initialization parameters, and the parameters are then fine-tuned under the constraint that the reconstruction error is minimized. That is, the first layer of the SDAE, i.e. the first DAE, is trained with the input sample features (see Fig. 2) and its parameters are obtained by fine-tuning; the hidden-layer output of this DAE is then used as the input of the second DAE (see Fig. 3), which is trained and fine-tuned to obtain its parameters; proceeding layer by layer in this way yields the reduction model based on the SDAE (see Fig. 4). Throughout the training process, the parameters of the previous DAE are kept fixed while the next DAE is being trained. A sketch of this procedure is given below.
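A minimal sketch of this layer-wise greedy pretraining, assuming PyTorch, masking noise for q_D and a mean-squared reconstruction loss for the layer-wise stage (all identifiers and hyperparameters here are hypothetical; the patent's experiments were run in Matlab), might read:

```python
# Illustrative layer-wise greedy pretraining of an SDAE; not the patent's code.
import torch
import torch.nn as nn

def make_dae(n_in, n_out):
    enc = nn.Sequential(nn.Linear(n_in, n_out), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(n_out, n_in), nn.Sigmoid())
    return enc, dec

def pretrain_sdae(x, sizes, epochs=10, noise=0.3):
    encoders = []
    h = x
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        enc, dec = make_dae(n_in, n_out)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))
        loss_fn = nn.MSELoss()            # reconstruction difference to minimize
        for _ in range(epochs):           # each DAE is trained individually
            noisy = h * (torch.rand_like(h) >= noise)       # corruption q_D
            recon = dec(enc(noisy))
            loss = loss_fn(recon, h)      # reconstruct the *clean* layer input
            opt.zero_grad(); loss.backward(); opt.step()
        h = enc(h).detach()               # previous DAE's parameters stay fixed
        encoders.append(enc)
    return nn.Sequential(*encoders)       # stacked encoder, ready for fine-tuning

x = torch.rand(90, 219)                   # e.g. the 90 training samples
sdae = pretrain_sdae(x, sizes=[219, 150, 100, 50, 5])
print(sdae(x).shape)                      # torch.Size([90, 5])
```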
Since a spam page discriminant-index sample has 219 dimensions, the invention chooses a stacked denoising autoencoder neural network structure with 4 DAE layers, comprising an input layer, hidden layers and an output layer. The structures of the DAE layers are 219-150, 150-100, 100-50 and 50-5 respectively: the input layer has 219 neurons, the output layer has 5 neurons, and, to achieve reduction, the numbers of neurons in the intermediate layers follow the layer-by-layer decreasing sequence 150, 100, 50. In the reduction model, the hidden-layer output of each DAE serves as the input of the next DAE; through this layer-by-layer learning, the neurons of each DAE capture the strong correlations among the neurons of the preceding DAE and accurately describe their nonlinear relations, so that the final output encoding fully covers the information of the high-dimensional data. The network structure of the spam page discriminant-index reduction model is shown in Fig. 5.
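For the fine-tuning stage over this concrete 219-150-100-50-5 structure, a hedged end-to-end sketch (again assuming PyTorch and inputs scaled to [0, 1] so that a cross-entropy reconstruction loss is well defined; not the patent's code) could be:

```python
# Illustrative BP fine-tuning of the full 219-150-100-50-5 stack with a
# cross-entropy reconstruction loss; hypothetical, not the patent's code.
import torch
import torch.nn as nn

sizes = [219, 150, 100, 50, 5]
enc_layers, dec_layers = [], []
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    enc_layers += [nn.Linear(n_in, n_out), nn.Sigmoid()]
for n_in, n_out in zip(sizes[::-1][:-1], sizes[::-1][1:]):
    dec_layers += [nn.Linear(n_in, n_out), nn.Sigmoid()]
encoder, decoder = nn.Sequential(*enc_layers), nn.Sequential(*dec_layers)

x = torch.rand(90, 219)                      # inputs scaled to [0, 1]
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
loss_fn = nn.BCELoss()                       # cross-entropy reconstruction error
for epoch in range(10):                      # 10 iterations, as in the experiments
    recon = decoder(encoder(x))
    loss = loss_fn(recon, x)                 # BP adjusts all (initial) parameters
    opt.zero_grad(); loss.backward(); opt.step()
print(encoder(x).shape)                      # torch.Size([90, 5]) reduced encoding
```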
With the index reduction model established above, index reduction experiments are carried out and analyzed: 120 samples are selected at random, the 120 experimental samples are preprocessed with the preprocessing method, and 3/4 of the experimental samples are used as the training set and 1/4 as the test set.
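A split of this kind could, for instance, be realized as follows (illustrative only; the actual 120 experimental samples are not reproduced here):

```python
# Illustrative random 3/4 vs 1/4 split of 120 preprocessed samples.
import numpy as np

rng = np.random.default_rng(42)
samples = rng.random((120, 219))        # stand-in for the preprocessed samples
idx = rng.permutation(120)
train, test = samples[idx[:90]], samples[idx[90:]]   # 90 training, 30 test
print(train.shape, test.shape)          # (90, 219) (30, 219)
```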
The trained SDAE model is used to reduce the 219-dimensional spam page discriminant-index samples to 5-dimensional characteristic data; a training sample and a test sample are each selected for the reduction experiment, and the 5-dimensional characteristic data obtained after reduction are shown in Table 1.
Table 1. Characteristic data after reduction
To verify that the data after reduction can fully represent the original data, the SDAE model is verified from two aspects: the feasibility of using it for spam page reduction, and the validity of this reduction model for subsequent spam page discrimination.
A feasibility analysis is carried out. Verifying the feasibility of the SDAE for reduction essentially means verifying that the encoding obtained by reduction can represent the information contained in the original indices, i.e. verifying the difference between the reconstructed data and the original data.
(1) Evaluation criterion
In this experiment, the mean squared error (Mean Squared Error, MSE) of the reconstruction during training is chosen as the evaluation criterion. The MSE is defined as:
MSE = (1/N) Σ_{i=1}^{N} (y_data,i - y_recon,i)²
where y_data denotes a training or test sample, y_recon denotes the reconstructed sample, and N denotes the number of training or test samples.
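As a sanity check of this criterion, a direct transcription (illustrative, with made-up numbers):

```python
# MSE between original samples and their reconstructions, as defined above.
import numpy as np

def mse(y_data, y_recon):
    """Mean squared error over N samples (averaged over all entries)."""
    return np.mean((np.asarray(y_data) - np.asarray(y_recon)) ** 2)

y_data  = np.array([[0.2, 0.8], [0.5, 0.1]])
y_recon = np.array([[0.25, 0.75], [0.45, 0.2]])
print(mse(y_data, y_recon))   # 0.004375
```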
(2) Experimental results and analysis
The preprocessed 219-dimensional samples are used as the input of the SDAE and the 5-dimensional encoding as its output. The invention designs a 4-layer SDAE network structure, DAE1 219-150, DAE2 150-100, DAE3 100-50, DAE4 50-5, i.e. the number of neurons in each layer is 219-150-100-50-5. The number of training iterations for each DAE layer and for the SDAE model is set to 10.
During the 10 training iterations of each DAE layer, the parameters are continuously adjusted and the reconstruction-error curve of each DAE layer is obtained. When the number of training iterations reaches 5, the MSE of each DAE layer essentially reaches a stable state, already below 0.009; that is, when the number of iterations is set to 5 or more, the initial parameters of each layer can be obtained. The invention takes the parameters of each DAE layer obtained after 10 training iterations as the initial parameters of the corresponding layer of the SDAE model, and this model is used to verify the feasibility of the stacked denoising autoencoder neural network for reduction.
During the 10 training iterations of the SDAE, the reconstruction-error curve is obtained; after 10 iterations the MSE essentially stabilizes below 0.0024, and according to the Matlab experimental results the minimum reconstruction error reached on the sample data is 0.00226. That is, after training, the 5-dimensional encoding can map the original 219-dimensional data with minimal reconstruction error.
An efficiency analysis is carried out: to demonstrate the efficiency of the SDAE, the invention compares the reduction effect of a classification model based on the SDAE with that of other classification models.
(1) Evaluation criterion
The evaluation criterion used in this experiment is accuracy (Accuracy), i.e. the proportion of correctly classified test samples in the total number of test samples, calculated as follows:
Accuracy = (TN + TP) / C
where TN denotes the number of correctly classified normal (non-spam) pages, TP denotes the number of correctly classified spam pages, and C denotes the total number of test samples.
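A direct transcription of this criterion (illustrative; the 0/1 label convention is an assumption):

```python
# Accuracy = (TN + TP) / C over binary spam / non-spam predictions.
def accuracy(y_true, y_pred):
    """y_true/y_pred: sequences of 0 (normal page) or 1 (spam page)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return (tn + tp) / len(y_true)

print(accuracy([1, 0, 1, 0], [1, 0, 0, 0]))   # 0.75
```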
(2) Experimental results and analysis
The invention compares the influence of SVM classifiers based on SDAE, PCA and EMD on the test results, in order to verify the efficiency of the SDAE for spam page reduction. As the number of experiments increases, the classification accuracy of the SVM classifier based on the SDAE shows an increasing trend and exhibits a higher recognition capability than the other two classifiers.
In summary, the two experiments above verify, by comparing the reconstruction error between the input and output data of the SDAE model, the feasibility of the SDAE for reduction; and, by combining the SDAE with the SVM classification model into a classifier and comparing the final results with SVM classifiers built on the other two reduction methods, the efficiency of the SDAE reduction method.
In conclusion, the proposed data reduction method based on a stacked denoising autoencoder neural network first analyzes the data set comprehensively and in detail, quantifying, standardizing and balancing the sample data set; it then applies the stacked denoising autoencoder neural network algorithm, an improved form of the denoising autoencoder network, to reduce the dimensionality of the sample feature set, thereby lowering the complexity of the various models, improving the classification performance of classifiers in machine learning applications and reducing the running cost of the various learning algorithms; the feasibility and efficiency of the proposed reduction method are verified.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or modification made by any person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the protection scope of the present invention.

Claims (6)

1. A data reduction method based on a stacked denoising autoencoder neural network, characterized in that the original data X is corrupted by the stochastic mapping q_D, the resulting noisy data is used as the input of the autoencoder, and the activation value of each hidden-layer neuron is computed through f_θ; the reduction model of the stacked denoising autoencoder neural network is constructed in the following steps:
Step 1: the output of each DAE is used as the input of the next DAE, so that the data are encoded layer by layer;
Step 2: with x_0 denoting the original input sample and x_i denoting the encoding produced by the i-th DAE layer, the encoding of every DAE layer can be expressed as:
x_i = f_θ(x_{i-1}),  i = 1, 2, 3, …
Step 3: layer-wise greedy training and fine-tuning are carried out, wherein the layer-wise greedy training trains the weights by minimizing the difference between the original input data and its reconstruction to obtain the initial parameters, and the fine-tuning uses the BP algorithm to adjust the initial parameters of the cross-entropy function so as to minimize the reconstruction error and obtain the best reconstruction quality.
2. The data reduction method based on a stacked denoising autoencoder neural network according to claim 1, characterized in that, when training an SDAE composed of multiple DAE layers, the layer-wise greedy principle is applied: each DAE layer is trained individually to obtain its initialization parameters, and the parameters are fine-tuned under the constraint that the reconstruction error is minimized.
3. The data reduction method based on a stacked denoising autoencoder neural network according to claim 2, characterized in that it comprises the following steps: first, the first layer of the SDAE, i.e. the first DAE, is trained with the input sample features and its parameters are obtained by fine-tuning; the hidden-layer output of this DAE is then used as the input of the second DAE, which is trained and fine-tuned to obtain its parameters; proceeding layer by layer in this way yields the reduction model based on the SDAE.
4. The data reduction method based on a stacked denoising autoencoder neural network according to claim 3, characterized in that, throughout the training process, the parameters of the previous DAE are kept fixed while the next DAE is being trained.
5. The data reduction method based on a stacked denoising autoencoder neural network according to claim 1, characterized in that it further comprises a network structure of a spam page discriminant-index reduction model consisting of an input layer, hidden layers and an output layer; the structures of the DAE layers are 219-150, 150-100, 100-50 and 50-5 respectively, the input layer has 219 neurons, the output layer has 5 neurons, and, to achieve reduction, the numbers of neurons in the intermediate layers follow the layer-by-layer decreasing sequence 150, 100, 50.
6. The data reduction method based on a stacked denoising autoencoder neural network according to claim 5, characterized in that, in the reduction model, the hidden-layer output of each DAE serves as the input of the next DAE; through this layer-by-layer learning, the neurons of each DAE capture the strong correlations among the neurons of the preceding DAE and accurately describe their nonlinear relations, so that the final output encoding fully covers the information of the high-dimensional data.
CN201811476554.3A 2018-12-05 2018-12-05 Data reduction method based on a stacked denoising autoencoder neural network Pending CN109598336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811476554.3A CN109598336A (en) 2018-12-05 2018-12-05 Data reduction method based on a stacked denoising autoencoder neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811476554.3A CN109598336A (en) 2018-12-05 2018-12-05 Data reduction method based on a stacked denoising autoencoder neural network

Publications (1)

Publication Number Publication Date
CN109598336A true CN109598336A (en) 2019-04-09

Family

ID=65961118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811476554.3A Pending Data reduction method based on a stacked denoising autoencoder neural network

Country Status (1)

Country Link
CN (1) CN109598336A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750345A (en) * 2012-06-07 2012-10-24 山东师范大学 Method for identifying web spam through web page multi-view data association combination
CN105163121A (en) * 2015-08-24 2015-12-16 西安电子科技大学 Large-compression-ratio satellite remote sensing image compression method based on deep self-encoding network
CN106443447A (en) * 2016-09-26 2017-02-22 南京航空航天大学 An aero-generator fault feature extraction method based on iSDAE
CN106874952A (en) * 2017-02-16 2017-06-20 中国人民解放军国防科学技术大学 Feature fusion based on stack self-encoding encoder
CN107679031A (en) * 2017-09-04 2018-02-09 昆明理工大学 Based on the advertisement blog article recognition methods for stacking the self-editing ink recorder of noise reduction
CN107749757A (en) * 2017-10-18 2018-03-02 广东电网有限责任公司电力科学研究院 A kind of data compression method and device based on stacking-type own coding and PSO algorithms
CN107909105A (en) * 2017-11-13 2018-04-13 上海交通大学 A kind of Market Site Selection method and system
CN108304359A (en) * 2018-02-06 2018-07-20 中国传媒大学 Unsupervised learning uniform characteristics extractor construction method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴德烽: "Computational Intelligence and Its Application in 3D Surface Scanning Robot Systems" (《计算智能及其在三维表面扫描机器人系统中的应用》), Dalian Maritime University Press, 30 November 2012 *
张素智 et al.: "Feature extraction with stacked denoising autoencoders for clustering" (面向聚类的堆叠降噪自动编码器的特征提取研究), Modern Computer (《现代计算机》) *
殷复莲: "A Practical Course in Data Analysis and Data Mining" (《数据分析与数据挖掘实用教程》), Communication University of China Press, 30 September 2017 *
董海鹰: "Intelligent Control Theory and Applications" (《智能控制理论及应用》), China Railway Publishing House, 30 September 2016 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412872A (en) * 2019-07-11 2019-11-05 中国石油大学(北京) Reciprocating compressor fault diagnosis optimization method and device
CN110411566A (en) * 2019-08-01 2019-11-05 四川长虹电器股份有限公司 A kind of Intelligent light spectrum signal denoising method
CN110866604A (en) * 2019-10-28 2020-03-06 国网河北省电力有限公司电力科学研究院 Cleaning method for power transformer state monitoring data
CN111275074B (en) * 2020-01-07 2022-08-05 东北电力大学 Power CPS information attack identification method based on stacked self-coding network model
CN111275074A (en) * 2020-01-07 2020-06-12 东北电力大学 Power CPS information attack identification method based on stack type self-coding network model
CN112541874A (en) * 2020-12-11 2021-03-23 福州大学 Unsupervised denoising feature learning method based on self-encoder
CN113220876A (en) * 2021-04-16 2021-08-06 山东师范大学 Multi-label classification method and system for English text
CN113420794A (en) * 2021-06-04 2021-09-21 中南民族大学 Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN114689700A (en) * 2022-04-14 2022-07-01 电子科技大学 Low-power EMAT signal noise reduction method based on stack-type self-encoder
CN114689700B (en) * 2022-04-14 2023-06-06 电子科技大学 Low-power EMAT signal noise reduction method based on stack-type self-encoder
CN115169499A (en) * 2022-08-03 2022-10-11 中国电子科技集团公司信息科学研究院 Asset data dimension reduction method and device, electronic equipment and computer storage medium
CN115169499B (en) * 2022-08-03 2024-04-05 中国电子科技集团公司信息科学研究院 Asset data dimension reduction method, device, electronic equipment and computer storage medium
CN116108386A (en) * 2023-04-10 2023-05-12 南京信息工程大学 Antique glass classification method and system under improved mixed sampling and noise reduction self-coding

Similar Documents

Publication Publication Date Title
CN109598336A (en) Data reduction method based on a stacked denoising autoencoder neural network
Lin et al. Audio classification and categorization based on wavelets and support vector machine
CN110930976B (en) Voice generation method and device
CN111861945B (en) Text-guided image restoration method and system
CN111652049A (en) Face image processing model training method and device, electronic equipment and storage medium
CN109919364A (en) Multivariate Time Series prediction technique based on adaptive noise reduction and integrated LSTM
CN110175560A (en) A kind of radar signal intra-pulse modulation recognition methods
CN107220594B (en) Face posture reconstruction and recognition method based on similarity-preserving stacked self-encoder
CN110269625A (en) A kind of electrocardio authentication method and system of novel multiple features fusion
CN110726898B (en) Power distribution network fault type identification method
CN113312989B (en) Finger vein feature extraction network based on aggregated descriptors and attention
CN110379491A (en) Identify glioma method, apparatus, equipment and storage medium
CN108805802A (en) A kind of the front face reconstructing system and method for the stacking stepping self-encoding encoder based on constraints
Shankar et al. Non-parallel emotion conversion using a deep-generative hybrid network and an adversarial pair discriminator
CN113886792A (en) Application method and system of print control instrument combining voiceprint recognition and face recognition
CN113035228A (en) Acoustic feature extraction method, device, equipment and storage medium
CN100369047C (en) Image identifying method based on Gabor phase mode
CN112347910A (en) Signal fingerprint identification method based on multi-mode deep learning
CN108831486B (en) Speaker recognition method based on DNN and GMM models
Malik Fighting AI with AI: fake speech detection using deep learning
CN116613740A (en) Intelligent load prediction method based on transform and TCN combined model
CN115984911A (en) Attribute generation countermeasure network and face image continuous transformation method based on same
CN111104868B (en) Cross-quality face recognition method based on convolutional neural network characteristics
CN114283301A (en) Self-adaptive medical image classification method and system based on Transformer
Al-Rawi et al. Feature Extraction of Human Facail Expressions Using Haar Wavelet and Neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190409)