WO2022012144A1 - Parallel intrusion detection method and *** based on a deep belief network for imbalanced data - Google Patents


Info

Publication number
WO2022012144A1
Authority
WO
WIPO (PCT)
Prior art keywords
dbn
wkelm
cluster
model
cnt
Prior art date
Application number
PCT/CN2021/094023
Other languages
English (en)
French (fr)
Inventor
李肯立
唐卓
廖清
刘楚波
周旭
余思洋
杜亮
Original Assignee
湖南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖南大学 filed Critical 湖南大学
Priority to US17/626,684 priority Critical patent/US11977634B2/en
Publication of WO2022012144A1 publication Critical patent/WO2022012144A1/zh

Classifications

    • G06F18/23213 Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24143 Classification based on distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G06F21/554 Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G06F21/566 Dynamic computer malware detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • G06N20/00 Machine learning
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/045 Neural network architectures: combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06N3/126 Genetic models: evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06F2221/034 Test or assess a computer or a system

Definitions

  • The invention belongs to the technical field of intrusion detection and, more particularly, relates to a parallel intrusion detection method and system based on a deep belief network for imbalanced data.
  • Intrusion detection is an effective, active defense against network security problems: it judges whether abnormal intrusion behavior is present by inspecting information such as network traffic. Compared with a firewall, an intrusion detection method offers better security; it requires fewer resources, barely affects the normal operation of the system, and can be adjusted dynamically.
  • The current mainstream intrusion detection methods mainly include: 1. Intrusion detection methods based on imbalanced data.
  • These methods target imbalanced data sets and improve the detection rate of minority classes through data optimization or algorithm optimization.
  • Algorithm optimization works at the algorithm level: it assigns a larger weight to the minority class during classification, so that the error cost increases when the classifier misclassifies a minority-class sample as a majority class, thereby increasing the detection accuracy of the minority class;
  • 2. Intrusion detection methods based on deep-belief-network dimensionality reduction. These methods first use the multi-layer restricted Boltzmann machines in the deep belief network model to extract features from the data, reducing complex, intractable high-dimensional data, and then use the back-propagation neural network in the deep belief network model to perform attack classification of the data;
  • 3. Intrusion detection methods based on extreme learning machine classification. These methods use an extreme learning machine model for classification; compared with a back-propagation neural network model, the extreme learning machine has a simple structure, needs no repeated iterations during training, runs fast, and generalizes well.
  • However, the above existing intrusion detection methods all have deficiencies that cannot be ignored:
  • the mainstream intrusion detection methods for imbalanced data usually rely on data optimization or algorithm optimization alone, and either one by itself cannot effectively solve the technical problem of data imbalance;
  • the classification performance of the deep belief network model is closely tied to its initial parameters. That is, the initial parameters of the deep belief network model are specified by hand and are therefore somewhat random; if the parameters are chosen poorly, the classification accuracy of the model drops and it easily falls into a local optimum. The initial parameters can therefore be optimized with an intelligent optimization algorithm.
  • The present invention provides a parallel intrusion detection method and system based on a deep belief network for imbalanced data. Its purpose is to remedy the lack of pertinence of existing intrusion detection methods toward imbalanced data sets while speeding up the optimization of the deep belief network model's parameters; the inventive method can thereby effectively improve both the detection accuracy and the detection speed of intrusion detection.
  • A parallel intrusion detection method based on a deep belief network for imbalanced data comprises the following steps:
  • step (2): input the imbalanced data obtained in step (1) into the trained deep belief network DBN model to extract features, then input the extracted features into the multiple DBN-WKELM base classifiers of the trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results; the weight of each DBN-WKELM base classifier is calculated by the adaptive weighted voting method, and the final classification result is obtained from the multiple weights and multiple preliminary classification results.
  • step (1) specifically includes the following substeps:
  • step (1-3): obtain the set N_k formed by all samples in the k-nearest-neighbor data D_k obtained in step (1-2) whose class differs from that of the sample point x, and the number num of samples in the set N_k;
  • step (1-4) Determine whether the number of samples num obtained in step (1-3) is greater than or equal to k-1, if so, go to step (1-5), otherwise go to step (1-6);
  • step (1-8) Judge whether i is equal to the total number of sample points in the unbalanced data set DS, if so, enter step (1-14), otherwise enter step (1-9);
  • step (1-12): determine whether the maximum gravitational force g_max is less than the set threshold r; if so, return to step (1-10), otherwise merge the sample point d_i into the cluster C_max, update the centroid μ_max of the cluster C_max after merging d_i, and then go to step (1-13);
  • the gravitational force g between the sample point d i and the cluster C is calculated according to the following formula:
  • C num is the number of sample points in cluster C
  • ⁇ e2 represents the e2-th feature attribute value in the centroid ⁇ of cluster C
  • C maxn is the number of sample points in the cluster C max after combining the sample points d i
  • d_p is the p-th sample point in the cluster C_max after merging the sample point d_i, and p ∈ [1, C_maxn].
  • the DBN model is obtained through the following training steps:
  • step (2-2) Train the DBN model optimized in step (2-1) to obtain a trained DBN model.
  • step (2-1) specifically includes the following substeps:
  • step (2-1-3): determine whether cnt equals the maximum number of iterations T or the global optimal solution x_best has converged; if so, output the global optimal solution and end the process, otherwise go to step (2-1-4);
  • step (2-1-4): determine whether cnt is 1; if so, read the file written in step (2-1-2) from HDFS and divide it into n_pa input shards, each containing a sub-population, then go to step (2-1-5); otherwise read the updated file from HDFS and divide it into n_pa input shards, each containing a sub-population, then go to step (2-1-5);
  • step (2-1-5): for each sub-population obtained in step (2-1-4), take the j-th individual of the cnt-th generation of the sub-population as the number of neurons in each hidden layer of the DBN model, obtain the classification results of n_t classification points from the corresponding DBN model, calculate the classification error CE of the DBN model from those results, and use CE as that individual's fitness value within the sub-population;
  • step (2-1-6): for each sub-population obtained in step (2-1-4), obtain the fitness value set F composed of the fitness values of all individuals of the cnt-th generation of the sub-population, where sn is the total number of fitness values in F; sort all fitness values in ascending order to obtain a new fitness value set;
  • the individual corresponding to the minimum fitness value is regarded as the optimal solution of the sub-population, and the minimum fitness value is regarded as the optimal fitness value of the sub-population,
  • step (2-1-9): generate an adaptive mutant individual from the corresponding target individual in the set I_cnt obtained in step (2-1-8);
  • step (2-1-10): perform a crossover operation between the adaptive mutant individual obtained in step (2-1-9) and the corresponding target individual in the set I_cnt obtained in step (2-1-8) to generate an experimental individual;
  • step (2-1-11): obtain the fitness value of the experimental individual from step (2-1-10) and the fitness value of the target individual from step (2-1-9); keep whichever individual has the smaller fitness value in the set I_cnt, and add the individuals of the set E_cnt obtained in step (2-1-8) to I_cnt, thereby obtaining the updated set I_cnt;
  • the classification error is obtained using the following formula:
  • n t is the number of classification points
  • randn is a random integer randomly generated from ⁇ 1, 2, ..., D ⁇
  • rand is a random real number belonging to a uniform distribution between [0, 1]
  • CR is the crossover factor
  • D is the individual gene dimension, where h ⁇ [1, D].
  • step (2-2) specifically includes the following substeps:
  • step (2-2-2): according to the training set obtained in step (2-2-1), set the initial state of the input layer of the DBN model optimized in step (2-1) to the training samples in the training set, construct the input layer and the first hidden layer as a restricted Boltzmann machine RBM network, and initialize the weight W between the input layer and the first hidden layer in the RBM network, the bias a of the input layer, and the bias b of the first hidden layer;
  • step (2-2-6): iteratively train the RBM network updated in step (2-2-5) until its reconstruction error reaches a minimum, so as to obtain the RBM network after overall iterative training;
  • step (2-2-7): add the (cnt2+1)-th hidden layer of the DBN model optimized in step (2-1) to the RBM network after overall iterative training to form a new RBM network; update the weight W between the input layer of the new RBM network and the (cnt2+1)-th hidden layer to the weight output by the RBM network after overall iterative training, update the bias a of the input layer and the bias b of the (cnt2+1)-th hidden layer to the bias values output by that network, and use the output value of the RBM network after overall iterative training as the input value of the new RBM network;
  • the DBN-WKELM multi-classifier model is trained through the following process: obtain the trained DBN model and open 4 sub-threads; in each sub-thread, set the output value of the trained DBN model as the input value X_in of the WKELM hidden layer and weight X_in to obtain the cost-sensitive matrix W_cs; obtain the output weight β of the WKELM hidden layer from W_cs; and, based on β, obtain a DBN-WKELM base classifier built on DBN feature extraction. The four DBN-WKELM base classifiers based on DBN feature extraction together constitute the trained DBN-WKELM multi-classifier model;
  • the formula of the output weight ⁇ of the hidden layer of WKELM is:
  • C r is the regularization coefficient
  • is the kernel matrix corresponding to the kernel function F k of the WKELM base classifier
  • T l is the data label corresponding to the input value X in;
  • the weight calculation formula of the adaptive weighted voting method is as follows:
  • W_q is the voting weight of the q-th DBN-WKELM base classifier in the DBN-WKELM multi-classifier model; the other two quantities in the formula are the classification accuracy and the classification false-positive rate of the q-th base classifier, and q ∈ [1, m];
  • a parallel intrusion detection system based on unbalanced data depth belief network comprising:
  • the first module is used to obtain an imbalanced data set, under-sample it using the neighborhood cleaning rule algorithm, and cluster the under-sampled imbalanced data set using the gravity-based clustering method, to obtain the clustered imbalanced data set;
  • the second module is used to input the clustered imbalanced data obtained by the first module into the trained deep belief network DBN model to extract features, then input the extracted features into the multiple DBN-WKELM base classifiers of the trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results, calculate the weight of each DBN-WKELM base classifier through the adaptive weighted voting method, and obtain from the multiple weights and multiple preliminary classification results the final classification result and the intrusion behavior category corresponding to it.
  • Since the present invention adopts steps (1-1) to (1-14), it applies an improved undersampling algorithm to the imbalanced data set to reduce the proportion of the majority class; in addition, the weighted kernel extreme learning machine weights each training sample, lowering the weight of the majority class and raising that of the minority class, thereby increasing the detection accuracy of the minority class. It can therefore solve the technical problem that existing intrusion detection methods based on imbalanced data cannot effectively handle data imbalance;
  • Since the present invention adopts steps (2-1-1) to (2-1-12), it uses a parallel improved differential evolution algorithm to optimize the parameters of the deep belief network model, streamlining the iterative process, improving iteration efficiency, and reducing the time the algorithm consumes. It therefore addresses the technical problems that, in existing intrusion detection methods based on deep-belief-network dimensionality reduction, optimizing the model parameters consumes large amounts of computing resources and time, and that iteration is slow and inefficient when processing large volumes of data;
  • Each base classifier runs in parallel, which improves the speed of intrusion detection, and the adaptive weighted voting algorithm increases the classification accuracy of intrusion detection by raising the voting weight of base classifiers with high classification accuracy and low false-positive rates. It can therefore solve the technical problem that existing intrusion detection methods based on extreme learning machine classification use a single classifier, whose classification is biased and has low accuracy.
  • FIG. 1 is a flow chart of the parallel intrusion detection method based on the unbalanced data deep belief network of the present invention.
  • the present invention provides a parallel intrusion detection method based on unbalanced data deep belief network, comprising the following steps:
  • the imbalanced dataset is the KDDCUP99 intrusion detection dataset.
  • This step specifically includes the following sub-steps:
  • the value of the nearest neighbor parameter k is between 5 and 10, preferably 7.
  • the Euclidean distance is used to determine whether two samples are k-nearest neighbors.
  • Assume the sample points belong to the n-dimensional space R^n, where n is any natural number and k1, k2 ∈ [1, total number of sample points in the imbalanced data set D]; x_{k1,e1} denotes the e1-th feature attribute value of the k1-th sample point, where e1 ∈ [1, total number of feature attribute values of the k1-th sample point].
  • the Euclidean distance between two sample points x_k1 and x_k2 is then defined as dist(x_k1, x_k2) = sqrt( Σ_{e1=1}^{n} (x_{k1,e1} - x_{k2,e1})² ).
  • step (1-3): obtain the set N_k formed by all samples in the k-nearest-neighbor data D_k obtained in step (1-2) whose class differs from that of the sample point x, and the number num of samples in the set N_k;
  • the categories of sample points include majority class samples and minority class samples.
  • The majority-class samples refer to normal (Normal) behavior, probing and scanning (Probe) behavior, and denial-of-service (DOS) behavior;
  • the minority-class samples refer to user-to-root (U2R) behavior and remote-to-local (R2L) behavior. All behaviors other than Normal behavior are considered types of intrusion behavior.
  • step (1-4) Determine whether the number of samples num obtained in step (1-3) is greater than or equal to k-1, if so, go to step (1-5), otherwise go to step (1-6);
  • step (1-8) Judge whether i is equal to the total number of sample points in the unbalanced data set DS, if so, enter step (1-14), otherwise enter step (1-9);
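The undersampling pass of steps (1-1) to (1-8) can be sketched as follows. This is a hedged illustration, not the patent's exact algorithm: the text above does not reproduce steps (1-5) and (1-6), so the removal rule used here (drop a boundary majority-class sample; for a boundary minority-class sample, drop its majority-class neighbours) is the standard neighborhood-cleaning behaviour and is an assumption, as are the function name and the label set.

```python
import numpy as np

def ncl_undersample(X, y, k=7, majority_labels=frozenset({"normal", "probe", "dos"})):
    """Neighborhood-cleaning-style undersampling (illustrative sketch).

    For each sample, find its k nearest neighbours by Euclidean distance.
    If at least k-1 of them belong to a different class, the sample sits
    on a noisy class boundary: a majority-class sample is dropped, while
    for a minority-class sample its majority-class neighbours are dropped
    (assumed rule; steps (1-5)/(1-6) are not reproduced in this text).
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    drop = set()
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        neigh = np.argsort(d)[1:k + 1]          # k nearest, excluding i itself
        diff = [j for j in neigh if y[j] != y[i]]
        if len(diff) >= k - 1:                  # the num >= k-1 test of step (1-4)
            if y[i] in majority_labels:
                drop.add(i)
            else:
                drop.update(j for j in diff if y[j] in majority_labels)
    keep = [i for i in range(n) if i not in drop]
    return X[keep], [y[i] for i in keep]
```

For instance, a lone minority sample surrounded by majority samples keeps its place while its crowding majority neighbours are removed, which is what reduces the majority-class proportion.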
  • the gravitational force g between the sample point d i and the cluster C is calculated according to the following formula:
  • C num is the number of sample points in cluster C
  • ⁇ e2 represents the e2-th feature attribute value in the centroid ⁇ of cluster C
  • step (1-12): determine whether the maximum gravitational force g_max is less than the set threshold r; if so, return to step (1-10), otherwise merge the sample point d_i into the cluster C_max, update the centroid μ_max of the cluster C_max after merging d_i, and then go to step (1-13);
  • C maxn is the number of sample points in the cluster C max after combining the sample points d i
  • d_p is the p-th sample point in the cluster C_max after merging the sample point d_i, and p ∈ [1, C_maxn];
  • the value range of the threshold r is 95 to 143, preferably 100.
  • the value range of the sampling rate sr is 0.6 to 0.9, preferably 0.7.
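Steps (1-9) to (1-13) describe gravity-based clustering with threshold r, but the gravitational formula itself is not legible in this text. The sketch below assumes the common form g = |C| / dist(d_i, μ)², i.e. cluster size acting as mass under an inverse-square distance law; that formula and the function name are assumptions.

```python
import numpy as np

def gravity_cluster(points, r=100.0):
    """Gravity-based clustering sketch (steps (1-9) to (1-13)).

    Each point is attracted to the existing cluster C that maximises
    g = |C| / dist(d_i, mu)^2 (assumed form). If the maximum gravity
    g_max falls below the threshold r, the point seeds a new cluster;
    otherwise it merges into C_max and the centroid mu_max is updated.
    """
    clusters = []   # each cluster: {"members": [points], "centroid": np.ndarray}
    for d in np.asarray(points, dtype=float):
        best, g_max = None, -1.0
        for c in clusters:
            dist2 = float(np.sum((d - c["centroid"]) ** 2)) + 1e-12
            g = len(c["members"]) / dist2
            if g > g_max:
                g_max, best = g, c
        if best is None or g_max < r:
            clusters.append({"members": [d], "centroid": d.copy()})
        else:
            best["members"].append(d)
            best["centroid"] = np.mean(best["members"], axis=0)
    return clusters
```

With an inverse-square law, nearby dense groups exert far more pull than distant ones, so well-separated groups naturally end up in separate clusters.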
  • step (2) Input the unbalanced data after clustering processing obtained in step (1) into the trained Deep Belief Network (DBN) model to extract features, and then input the extracted features into the trained DBN - Weighted Kernel Extreme Learning Machine (WKELM for short) multiple DBN-WKELM base classifiers in the multi-classifier model to obtain multiple preliminary classification results, and calculate each The weight of the DBN-WKELM base classifier, and obtain the final classification result and the intrusion behavior category corresponding to the classification result according to multiple weights and multiple preliminary classification results;
  • the DBN model in this step is obtained through the following training steps:
  • the distributed memory computing platform is the Apache Spark platform.
  • This step specifically includes the following sub-steps:
  • The total number of hidden layers in the DBN model is equal to 3; the maximum number of neurons per hidden layer, x_max, ranges from 500 to 1500, preferably 1000, and the minimum number of neurons per hidden layer, x_min, ranges from 1 to 5, preferably 1.
  • the value range of the population size n ps is 1000 to 2000, preferably 1000.
  • step (2-1-3): determine whether cnt equals the maximum number of iterations T or the global optimal solution x_best has converged; if so, output the global optimal solution and end the process, otherwise go to step (2-1-4);
  • the value range of the maximum number of iterations T is 500 to 1000, preferably 500.
  • step (2-1-4): determine whether cnt is 1; if so, read the file written in step (2-1-2) from HDFS and divide it into n_pa input shards, each containing a sub-population, then go to step (2-1-5); otherwise read the updated file from HDFS and divide it into n_pa input shards, each containing a sub-population, then go to step (2-1-5);
  • Dividing the file into n_pa input shards is implemented through the Map stage on the Spark platform.
  • the value of the number of input shards n pa ranges from 2 to 10, preferably 5.
  • step (2-1-5): for each sub-population obtained in step (2-1-4), take the j-th individual of the cnt-th generation of the sub-population as the number of neurons in each hidden layer of the DBN model, obtain the classification results of n_t classification points from the corresponding DBN model, calculate the classification error CE of the DBN model from those results, and use CE as that individual's fitness value within the sub-population;
  • the classification error in this step is computed by the following formula:
  • n t is the number of classification points
  • the value range of the number of classification points nt is 30 to 100, preferably 50.
  • step (2-1-6): for each sub-population obtained in step (2-1-4), obtain the fitness value set F composed of the fitness values of all individuals of the cnt-th generation of the sub-population, where sn is the total number of fitness values in F; sort all fitness values in ascending order to obtain a new fitness value set;
  • the individual corresponding to the minimum fitness value is regarded as the optimal solution of the sub-population, and the minimum fitness value is regarded as the optimal fitness value of the sub-population,
  • step (2-1-9): generate an adaptive mutant individual from the corresponding target individual in the set I_cnt obtained in step (2-1-8);
  • f is the initial mutation factor, which ranges from 0.5 to 0.8, preferably 0.6.
  • step (2-1-10): perform a crossover operation between the adaptive mutant individual obtained in step (2-1-9) and the corresponding target individual in the set I_cnt obtained in step (2-1-8) to generate an experimental individual;
  • randn is a random integer randomly generated from ⁇ 1, 2, ..., D ⁇
  • rand is a random real number belonging to a uniform distribution between [0, 1]
  • CR is a crossover factor
  • D is the individual gene dimension, where h ⁇ [ 1, D];
  • the value range of the cross factor CR is 0.7 to 0.9, preferably 0.8, and the value range of the individual gene dimension D is 1 to 3, preferably 1.
  • step (2-1-11): obtain the fitness value of the experimental individual from step (2-1-10) and the fitness value of the target individual from step (2-1-9); keep whichever individual has the smaller fitness value in the set I_cnt, and add the individuals of the set E_cnt obtained in step (2-1-8) to I_cnt, thereby obtaining the updated set I_cnt;
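The mutation, crossover, and selection of steps (2-1-9) to (2-1-11) follow the differential evolution pattern. Below is a minimal single-generation sketch using the randn/rand/CR binomial crossover rule defined above; the adaptive scaling of the mutation factor is omitted since its rule is not reproduced in this text, so classic DE/rand/1 mutation is assumed.

```python
import random

def de_step(pop, fitness, f=0.6, cr=0.8):
    """One generation of differential evolution (sketch of steps
    (2-1-9) to (2-1-11)): DE/rand/1 mutation, binomial crossover with
    crossover factor cr, and greedy selection on the fitness value
    (smaller is better, as with the classification error CE).
    """
    D = len(pop[0])                                  # individual gene dimension
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        # mutation: v = x_r1 + f * (x_r2 - x_r3)
        mutant = [pop[r1][h] + f * (pop[r2][h] - pop[r3][h]) for h in range(D)]
        # crossover: take the mutant gene when rand < CR or h == randn
        randn = random.randrange(D)
        trial = [mutant[h] if (random.random() < cr or h == randn) else target[h]
                 for h in range(D)]
        # selection: keep whichever of trial/target has the smaller fitness
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Because selection is greedy per individual, the best fitness in the population never worsens from one generation to the next.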
  • step (2-2) train the DBN model optimized in step (2-1) to obtain a trained DBN model
  • This step specifically includes the following sub-steps:
  • step (2-2-2): according to the training set obtained in step (2-2-1), set the initial state of the input layer of the DBN model optimized in step (2-1) to the training samples in the training set, construct the input layer and the first hidden layer as a Restricted Boltzmann Machine (RBM) network, and initialize the weight W between the input layer and the first hidden layer in the RBM network, the bias a of the input layer, and the bias b of the first hidden layer;
  • W is initialized with random values drawn from a normal distribution with a standard deviation of 0.1, and a and b are set to 0;
  • step (2-2-6): iteratively train the RBM network updated in step (2-2-5) until its reconstruction error reaches a minimum, so as to obtain the RBM network after overall iterative training;
  • step (2-2-7): add the (cnt2+1)-th hidden layer of the DBN model optimized in step (2-1) to the RBM network after overall iterative training to form a new RBM network; update the weight W between the input layer of the new RBM network and the (cnt2+1)-th hidden layer to the weight output by the RBM network after overall iterative training, update the bias a of the input layer and the bias b of the (cnt2+1)-th hidden layer to the bias values output by that network, and use the output value of the RBM network after overall iterative training as the input value of the new RBM network;
  • the reconstruction error RE of the RBM network is:
  • n e represents the number of neurons in the input layer of the RBM network
  • I_e represents the value of the e-th input-layer neuron of the RBM network for the training sample after iterative training.
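The greedy layer-wise RBM training of steps (2-2-2) to (2-2-7) rests on contrastive divergence. Below is a minimal CD-1 sketch; the exact RE formula is not reproduced in this text, so the squared-error form Σ_e (I_e − Î_e)² over the n_e input neurons is an assumption, as are the function names.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_cd1_epoch(V, W, a, b, lr=0.1, rng=None):
    """One CD-1 epoch over visible samples V (illustrative sketch).

    Returns the updated (W, a, b) and the mean reconstruction error
    RE = sum_e (v_e - v'_e)^2 averaged over the samples, where v' is
    the reconstruction of the visible layer (assumed RE form).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    re = 0.0
    for v in V:
        ph = sigmoid(v @ W + b)                       # positive-phase hidden probabilities
        h = (rng.random(ph.shape) < ph).astype(float) # sampled hidden states
        pv = sigmoid(h @ W.T + a)                     # visible reconstruction v'
        ph2 = sigmoid(pv @ W + b)                     # negative-phase hidden probabilities
        # contrastive-divergence gradient step
        W += lr * (np.outer(v, ph) - np.outer(pv, ph2))
        a += lr * (v - pv)
        b += lr * (ph - ph2)
        re += float(np.sum((v - pv) ** 2))            # per-sample reconstruction error
    return W, a, b, re / len(V)
```

Training an RBM this way until RE stops improving, then stacking the next hidden layer on its output, is exactly the greedy procedure the steps above describe.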
  • The DBN-WKELM multi-classifier model of the present invention is composed of m DBN-WKELM base classifiers (in this embodiment, m is 4). Each base classifier comprises an input layer, an output layer, three DBN hidden layers, and one WKELM hidden layer; the input and output layers have 122 and 5 nodes respectively, and the three DBN hidden layers have 110, 70, and 30 nodes respectively. The WKELM hidden layers of the four DBN-WKELM base classifiers have 55, 65, 75, and 85 nodes respectively.
  • The DBN-WKELM multi-classifier model of the present invention is trained through the following process: obtain the trained DBN model and open 4 sub-threads; in each sub-thread, set the output value of the trained DBN model as the input value X_in of the WKELM hidden layer and weight X_in to obtain the cost-sensitive matrix W_cs; obtain the output weight β of the WKELM hidden layer from W_cs; and, based on β, obtain a DBN-WKELM base classifier built on DBN feature extraction. The four such base classifiers together constitute the trained DBN-WKELM multi-classifier model.
  • C r is the regularization coefficient
  • is the kernel matrix corresponding to the kernel function F k of the WKELM base classifier (in the present invention, the kernel function may be a polynomial kernel function or a Gaussian kernel function)
  • T_l is the data label corresponding to the input value X_in.
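The computation of β can be sketched as follows. This assumes a Gaussian kernel and the usual weighted kernel ELM solution β = (I/C_r + W_cs·K)⁻¹ · W_cs·T_l with per-class inverse-frequency sample weights; the patent's exact cost-sensitive weighting is not specified in this text, so both the kernel choice and the weighting scheme here are assumptions.

```python
import numpy as np

def wkelm_beta(X, T, y, C_r=1.0, gamma=0.1):
    """Weighted kernel ELM output weights (illustrative sketch).

    beta = (I / C_r + W_cs K)^{-1} W_cs T  (assumed weighted-ELM form),
    where K is a Gaussian kernel matrix and W_cs gives each sample the
    weight 1 / (count of its class), boosting minority classes.
    """
    X = np.asarray(X, dtype=float)
    T = np.asarray(T, dtype=float)
    n = len(X)
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-gamma * sq)
    counts = {c: y.count(c) for c in set(y)}          # per-class sample counts
    W_cs = np.diag([1.0 / counts[c] for c in y])      # cost-sensitive matrix
    beta = np.linalg.solve(np.eye(n) / C_r + W_cs @ K, W_cs @ T)
    return beta, K
```

Because no iterative weight tuning is involved, training reduces to one regularized linear solve, which is what gives the (W)KELM its speed advantage over back-propagation.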
  • the weight calculation formula of the adaptive weighted voting method is as follows:
  • W_q is the voting weight of the q-th DBN-WKELM base classifier in the DBN-WKELM multi-classifier model, Acc_q is the classification accuracy of the q-th DBN-WKELM base classifier, FPR_q is the classification false-alarm rate of the q-th DBN-WKELM base classifier, and q ∈ [1, m];
  • the behavior type corresponding to the element of the preliminary classification result associated with the maximum value is taken from the total classification result as the final behavior type.
  • if the preliminary classification results obtained by the four DBN-WKELM base classifiers are (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 1, 0, 0, 0) and (0, 0, 1, 0, 0), the classification accuracies of the base classifiers are 98.5%, 97.8%, 98.2% and 97.3%, and the classification false-alarm rates are 2.3%, 2.8%, 2.7% and 2.0%,
  • the weights of the base classifiers can be calculated as 0.252, 0.249, 0.250 and 0.249, and the preliminary classification results of the base classifiers are then combined with these weights to obtain the final behavior type.

Abstract

The invention discloses a parallel intrusion detection method based on an unbalanced-data Deep Belief Network. The method reads an unbalanced data set and undersamples it with an improved NCL algorithm, reducing the proportion of majority-class samples so that the data distribution becomes balanced; an improved differential evolution algorithm is run on the distributed in-memory computing platform Spark to optimize the parameters of the Deep Belief Network model and obtain the optimal model parameters; features are extracted from the data set, and a weighted kernel extreme learning machine is then used for parallel intrusion detection. The invention can solve the technical problems that existing intrusion detection methods lack specificity for unbalanced data sets and require excessively long training time, and it speeds up the optimization of the Deep Belief Network model parameters.

Description

Parallel intrusion detection method and system based on unbalanced-data Deep Belief Network
Technical Field
The invention belongs to the technical field of intrusion detection, and more particularly relates to a parallel intrusion detection method and system based on an unbalanced-data Deep Belief Network.
Background Art
With the development of society, network security has drawn increasing attention. Intrusion detection is an effective, proactive defense against network security threats: it judges whether abnormal intrusion behavior exists in a network by inspecting traffic and other information. Compared with a firewall, an intrusion detection method offers better security; it requires fewer resources, barely affects the normal operation of the system, and can be adjusted dynamically.
Current mainstream intrusion detection methods fall into three categories. First, intrusion detection methods based on unbalanced data, which target unbalanced data sets and address the low detection rate of minority classes relative to majority classes through data optimization or algorithm optimization. Data optimization works at the data level, balancing the data by undersampling the majority classes and oversampling the minority classes. Algorithm optimization works at the algorithm level: minority classes are given larger weights during classification, so that misclassifying a minority sample as a majority class carries a higher cost, thereby increasing the detection precision of the minority classes. Second, intrusion detection methods based on Deep-Belief-Network dimensionality reduction, which first use the stacked Restricted Boltzmann Machines of a Deep Belief Network model to extract features and reduce the dimensionality of complex, hard-to-process high-dimensional data, and then use the back-propagation neural network of the model to classify attacks. Third, intrusion detection methods based on extreme-learning-machine classification, which use an extreme learning machine model for classification; compared with a back-propagation neural network, an extreme learning machine has a simple structure, needs no repeated iterations during training, and runs fast with good generalization performance.
However, the above existing intrusion detection methods all have non-negligible defects. First, mainstream methods for unbalanced data usually adopt only one of data optimization and algorithm optimization, and therefore cannot effectively solve the technical problem of data imbalance. Second, for methods based on Deep-Belief-Network dimensionality reduction, the classification performance of the DBN model is closely tied to its initial parameters; these are generally specified manually, with some randomness, and poorly chosen parameters reduce the classification accuracy of the DBN model and easily trap it in local optima. Intelligent optimization algorithms can optimize the initial parameters, but existing optimization algorithms are usually serialized standard algorithms involving massive iterative computation; they consume large amounts of computing resources and time and, when processing large amounts of data, suffer from problems such as excessive run time and low iteration efficiency. Third, methods based on extreme-learning-machine classification use only a single classifier, which always has a certain bias during classification and suffers from problems such as low classification precision.
Summary of the Invention
In view of the above defects or improvement needs of the prior art, the invention provides a parallel intrusion detection method and system based on an unbalanced-data Deep Belief Network, aiming to solve the technical problem that existing intrusion detection methods lack specificity for unbalanced data sets, to speed up the optimization of the Deep Belief Network model parameters, and finally to effectively improve both the detection precision and the detection speed of intrusion detection.
To achieve the above object, according to one aspect of the invention, a parallel intrusion detection method based on an unbalanced-data Deep Belief Network is provided, comprising the following steps:
(1) acquiring an unbalanced data set, undersampling the unbalanced data set with the Neighborhood Cleaning Rule algorithm, and clustering the undersampled unbalanced data set with a gravity-based clustering approach to obtain a clustered unbalanced data set;
(2) inputting the clustered unbalanced data obtained in step (1) into a trained Deep Belief Network DBN model to extract features, then inputting the extracted features into the multiple DBN-WKELM base classifiers of a trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results, computing the weight of each DBN-WKELM base classifier by an adaptive weighted voting method, and obtaining from the multiple weights and the multiple preliminary classification results the final classification result and the intrusion behavior category corresponding to that classification result.
Preferably, step (1) specifically comprises the following substeps:
(1-1) acquiring an unbalanced data set DS;
(1-2) taking a sample point x and its k-nearest-neighbor data D_k from the data set DS obtained in step (1-1), where k denotes the nearest-neighbor parameter;
(1-3) obtaining the set N_k formed by all samples of the k-nearest-neighbor data D_k obtained in step (1-2) whose class differs from that of the sample point x, and the number num of samples in the set N_k;
(1-4) judging whether the number num obtained in step (1-3) is greater than or equal to k-1; if so, going to step (1-5), otherwise going to step (1-6);
(1-5) judging whether the sample point x belongs to a majority class; if so, updating the data set DS to DS = DS - x and going to step (1-6), otherwise updating DS to DS = DS - N_k and going to step (1-6);
(1-6) repeating the above steps (1-2) to (1-5) for the remaining sample points of the data set DS until all sample points of DS have been processed, thereby obtaining the updated data set DS;
(1-7) setting a counter i = 1;
(1-8) judging whether i equals the total number of sample points in the data set DS; if so, going to step (1-14), otherwise going to step (1-9);
(1-9) reading the i-th new sample point d_i = (d_{i,1}, d_{i,2}, …, d_{i,n}) from the data set DS updated in step (1-6), where d_{i,e2} denotes the e2-th feature attribute value of the i-th sample with e2 ∈ [1, n], and judging whether the preset cluster set S is empty; if so, going to step (1-10), otherwise going to step (1-11);
(1-10) initializing the sample point d_i as a new cluster C_new = {d_i}, setting the centroid μ of the cluster C_new to d_i, adding the cluster C_new to the cluster set S, and going to step (1-13);
(1-11) computing the gravity of every cluster of the cluster set S on d_i to obtain the gravity set G = {g_1, g_2, …, g_ng}, and obtaining from G the maximum gravity g_max and its corresponding cluster C_max, where ng denotes the total number of clusters in S;
(1-12) judging whether the maximum gravity g_max is smaller than a preset threshold r; if so, returning to step (1-10), otherwise merging the sample point d_i into the cluster C_max, updating the centroid μ_max of the cluster C_max after the merge, and going to step (1-13);
(1-13) setting the counter i = i + 1 and returning to step (1-8);
(1-14) traversing all clusters of the cluster set S and judging whether all sample points of each cluster belong to majority classes; if so, randomly retaining the majority-class samples of that cluster according to the sampling rate sr and then repeating the traversal for the remaining clusters, otherwise repeating the traversal for the remaining clusters.
Preferably, the gravity g between the sample point d_i and a cluster C is computed with the following formula:
(Figure PCTCN2021094023-appb-000003)
where C_num is the number of sample points in the cluster C and μ_e2 denotes the e2-th feature attribute value of the centroid μ of the cluster C;
the formula for updating the centroid μ_max of the cluster C_max is as follows:
(Figure PCTCN2021094023-appb-000004)
where C_maxn is the number of sample points in the cluster C_max after the sample point d_i has been merged, d_p is the p-th sample point of the cluster C_max after the merge, and p ∈ [1, C_maxn].
Preferably, the DBN model is obtained through the following training steps:
(2-1) acquiring a DBN model and optimizing it on a distributed in-memory computing platform with an improved differential evolution algorithm to obtain the optimized DBN model;
(2-2) training the DBN model optimized in step (2-1) to obtain the trained DBN model.
Preferably, step (2-1) specifically comprises the following substeps:
(2-1-1) acquiring the DBN model W_dbn = {W_1, W_2, …, W_dep}, where dep denotes the total number of hidden layers in the DBN model and W_di denotes the number of neurons in the di-th hidden layer of the DBN model, with di ∈ [1, 3];
(2-1-2) randomly generating an initial population of n_ps structure vectors, randomly selecting one of the structure vectors as the global optimal solution x_best of the initial population, writing the initial population in the form of a file into the Hadoop Distributed File System (HDFS), and setting a counter cnt = 1;
(2-1-3) judging whether cnt equals the maximum iteration number T or the global optimal solution x_best has converged; if so, outputting the global optimal solution and ending the process, otherwise going to step (2-1-4);
(2-1-4) judging whether cnt is 1; if so, reading from HDFS the file written in step (2-1-2), dividing the file into n_pa input splits, each containing one subpopulation, and going to step (2-1-5); otherwise reading the updated file from HDFS, dividing it into n_pa input splits, each containing one subpopulation, and going to step (2-1-5);
(2-1-5) for each subpopulation obtained in step (2-1-4), taking the j-th individual of its cnt-th generation (Figure PCTCN2021094023-appb-000005) as the numbers of neurons of the hidden layers of the DBN model, obtaining the classification results of n_t classification points from the corresponding DBN model, computing the classification error CE of the DBN model from those classification results, and using the classification error CE as the fitness value (Figure PCTCN2021094023-appb-000006) of the j-th individual of the cnt-th generation of the subpopulation, where j ∈ [1, total number of individuals of the cnt-th generation of the subpopulation] and (Figure PCTCN2021094023-appb-000007) denotes the dep-th element of the j-th individual of the cnt-th generation of the subpopulation;
(2-1-6) for each subpopulation obtained in step (2-1-4), obtaining the fitness-value set (Figure PCTCN2021094023-appb-000008)(Figure PCTCN2021094023-appb-000009) formed by the fitness values of all individuals of its cnt-th generation, where sn is the total number of fitness values in the set F, and sorting all fitness values of the set in ascending order to obtain a new fitness-value set (Figure PCTCN2021094023-appb-000010); taking the individual corresponding to the minimum fitness value of the new fitness set (Figure PCTCN2021094023-appb-000011) as the optimal solution of the subpopulation, and taking that minimum fitness value as the best fitness value of the subpopulation;
(2-1-7) selecting the minimum of the best fitness values of all subpopulations as the overall best fitness value, and updating the global optimal solution x_best to the individual corresponding to the overall best fitness value;
(2-1-8) for the fitness-value set F obtained in step (2-1-6), taking the two individuals with the smallest fitness values (Figure PCTCN2021094023-appb-000012)(Figure PCTCN2021094023-appb-000013) to form the set (Figure PCTCN2021094023-appb-000014), the remaining individuals of the set F forming the set (Figure PCTCN2021094023-appb-000015);
(2-1-9) generating an adaptive mutant individual (Figure PCTCN2021094023-appb-000018) from the (Figure PCTCN2021094023-appb-000016)-th target individual (Figure PCTCN2021094023-appb-000017) of the set I_cnt obtained in step (2-1-8), where (Figure PCTCN2021094023-appb-000019);
(2-1-10) performing a crossover operation on the adaptive mutant individual (Figure PCTCN2021094023-appb-000020) obtained in step (2-1-9) and the (Figure PCTCN2021094023-appb-000021)-th target individual (Figure PCTCN2021094023-appb-000022) of the set I_cnt obtained in step (2-1-8) to generate a trial individual (Figure PCTCN2021094023-appb-000023);
(2-1-11) obtaining the fitness value (Figure PCTCN2021094023-appb-000025) corresponding to the trial individual (Figure PCTCN2021094023-appb-000024) obtained in step (2-1-10) and the fitness value (Figure PCTCN2021094023-appb-000027) corresponding to the target individual (Figure PCTCN2021094023-appb-000026) obtained in step (2-1-9), replacing the corresponding individual of the set I_cnt with whichever of the two has the smaller fitness value, and adding the individuals of the set E_cnt obtained in step (2-1-8) into I_cnt, thereby obtaining the updated set I_cnt;
(2-1-12) setting the counter cnt = cnt + 1, saving the set I_cnt updated in step (2-1-11) to HDFS, and returning to step (2-1-3);
Preferably, the classification error is obtained with the following formula:
(Figure PCTCN2021094023-appb-000028)
where (Figure PCTCN2021094023-appb-000029) is the real result, (Figure PCTCN2021094023-appb-000030) is the classification result, and n_t is the number of classification points;
the formula for generating the adaptive mutant individual is as follows:
(Figure PCTCN2021094023-appb-000031)
where (Figure PCTCN2021094023-appb-000032) and (Figure PCTCN2021094023-appb-000033) all lie in [3, sn], the three differ from one another, and none of the three equals (Figure PCTCN2021094023-appb-000034); F_c is the adaptive mutation factor;
the adaptive mutation factor F_c is computed with the following formula:
(Figure PCTCN2021094023-appb-000035)
where f is the initial mutation factor;
the formula for generating the trial individual is as follows:
(Figure PCTCN2021094023-appb-000036)
where randn is a random integer drawn from {1, 2, …, D}, rand is a uniformly distributed random real number in [0, 1], CR is the crossover factor, and D is the gene dimension of an individual, with h ∈ [1, D].
Preferably, step (2-2) specifically comprises the following substeps:
(2-2-1) dividing the unbalanced data set clustered in step (1) into a training set and a test set at a ratio of 6:4;
(2-2-2) setting a counter cnt2 = 1;
(2-2-3) according to the training set obtained in step (2-2-1), setting the initial state of the input layer of the DBN model optimized in step (2-1) to the training samples of the training set, building the input layer and the first hidden layer of the DBN model into a Restricted Boltzmann Machine RBM network, and initializing the weight W between the input layer and the first hidden layer, the bias a of the input layer, and the bias b of the first hidden layer of the RBM network;
(2-2-4) judging whether cnt2 equals 3; if so, ending the process, otherwise going to step (2-2-5);
(2-2-5) updating, with the Contrastive Divergence (CD) algorithm, the input value of the RBM network, the weight W between the input layer and the cnt2-th hidden layer, the bias a of the input layer, and the bias b of the cnt2-th hidden layer, to obtain the updated RBM network;
(2-2-6) iteratively training the RBM network updated in step (2-2-5) until its reconstruction error reaches a minimum, thereby obtaining the RBM network after the overall iterative training; adding the (cnt2+1)-th hidden layer of the DBN model optimized in step (2-1) into the iteratively trained RBM network to form a new RBM network; updating the weight W between the input layer and the (cnt2+1)-th hidden layer of the new RBM network to the weight output by the iteratively trained RBM network; updating the bias a of the input layer and the bias b of the (cnt2+1)-th hidden layer to the bias values output by the iteratively trained RBM network; and using the output value of the iteratively trained RBM network as the input value of the new RBM network;
(2-2-7) setting the counter cnt2 = cnt2 + 1 and returning to step (2-2-4).
Preferably, the DBN-WKELM multi-classifier model is trained through the following process: the trained DBN model is acquired and 4 sub-threads are started; in each sub-thread the output value of the trained DBN model is set as the input value X_in of the WKELM hidden layer; the input value X_in is weighted to obtain the cost-sensitive matrix W_cs; the output weight β of the WKELM hidden layer is obtained from the cost-sensitive matrix W_cs; a DBN-WKELM base classifier based on DBN feature extraction is obtained from the output weight β; and the 4 DBN-WKELM base classifiers based on DBN feature extraction together constitute the trained DBN-WKELM multi-classifier model;
Preferably, the formula for the output weight β of the WKELM hidden layer is:
(Figure PCTCN2021094023-appb-000037)
where C_r is the regularization coefficient, Ω is the kernel matrix corresponding to the kernel function F_k of the WKELM base classifier, and T_l is the data label corresponding to the input value X_in;
the weight of the adaptive weighted voting method is computed with the following formula:
(Figure PCTCN2021094023-appb-000038)
where W_q is the voting weight of the q-th DBN-WKELM base classifier in the DBN-WKELM multi-classifier model, (Figure PCTCN2021094023-appb-000039) is the classification accuracy of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000040) is the classification false-alarm rate of the q-th DBN-WKELM base classifier, and q ∈ [1, m];
the classification accuracy and classification false-alarm rate of the q-th DBN-WKELM base classifier are computed with the following formula:
(Figure PCTCN2021094023-appb-000041)
where (Figure PCTCN2021094023-appb-000042) is the number of correctly classified samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000043) is the total number of samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000044) is the number of normal samples wrongly treated as intrusions by the q-th DBN-WKELM base classifier, and (Figure PCTCN2021094023-appb-000045) is the total number of normal samples of the q-th DBN-WKELM base classifier.
According to another aspect of the invention, a parallel intrusion detection system based on an unbalanced-data Deep Belief Network is provided, comprising:
a first module for acquiring an unbalanced data set, undersampling the unbalanced data set with the Neighborhood Cleaning Rule algorithm, and clustering the undersampled unbalanced data set with a gravity-based clustering approach to obtain a clustered unbalanced data set;
a second module for inputting the clustered unbalanced data obtained by the first module into a trained Deep Belief Network DBN model to extract features, inputting the extracted features into the multiple DBN-WKELM base classifiers of a trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results, computing the weight of each DBN-WKELM base classifier by an adaptive weighted voting method, and obtaining from the multiple weights and the multiple preliminary classification results the final classification result and the intrusion behavior category corresponding to that classification result.
In general, compared with the prior art, the above technical solutions conceived by the invention can achieve the following beneficial effects:
(1) Because the invention adopts steps (1-1) to (1-14), applying an improved undersampling algorithm to the unbalanced data set to lower the proportion of the majority classes while also adopting a weighted kernel extreme learning machine, it can solve the technical problem of existing unbalanced-data intrusion detection methods that data imbalance cannot be resolved effectively;
(2) Because the invention adopts steps (2-1-1) to (2-1-12), using a parallel improved differential evolution algorithm to optimize the parameters of the Deep Belief Network model, it streamlines the iteration process of the algorithm, improves iteration efficiency and reduces the time consumed by the algorithm; it can therefore solve the technical problems of existing intrusion detection methods based on Deep-Belief-Network dimensionality reduction that optimizing the model parameters consumes large amounts of computing resources and time and that processing large amounts of data suffers from excessive run time and low iteration efficiency;
(3) Because the invention composes a DBN-WKELM multi-classifier from multiple DBN-WKELM base classifiers with different structures, the base classifiers execute in parallel, which increases the speed of intrusion detection; meanwhile the multi-classifier adopts an adaptive weighted voting algorithm that increases the voting weight of base classifiers with high classification accuracy and low false-alarm rate, thereby increasing the classification accuracy of intrusion detection; it can therefore solve the technical problems of existing intrusion detection methods based on extreme-learning-machine classification that only a single classifier is used for classification and that a single classifier is biased and has low classification precision.
Brief Description of the Drawings
Figure 1 is a flowchart of the parallel intrusion detection method based on an unbalanced-data Deep Belief Network of the invention.
Detailed Description
To make the object, technical solution and advantages of the invention clearer, the invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely explain the invention and do not limit it. In addition, the technical features involved in the various embodiments of the invention described below can be combined with one another as long as they do not conflict.
As shown in Figure 1, the invention provides a parallel intrusion detection method based on an unbalanced-data Deep Belief Network, comprising the following steps:
(1) An unbalanced data set is acquired, undersampled with the Neighborhood Cleaning Rule (NCL) algorithm, and clustered with the Gravity-based Clustering Approach (GCA) algorithm to obtain the clustered unbalanced data set;
In this embodiment, the unbalanced data set is the KDDCUP99 intrusion detection data set.
This step specifically comprises the following substeps:
(1-1) An unbalanced data set DS is acquired;
(1-2) A sample point x and its k-nearest-neighbor data D_k are taken from the data set DS obtained in step (1-1).
Specifically, the nearest-neighbor parameter k takes a value between 5 and 10, preferably 7.
Generally, the Euclidean distance is used to judge whether two samples are k-nearest neighbors. Suppose the sample points x_k1 and x_k2 belong to the n-dimensional space R^n, where n is any natural number, k1 and k2 ∈ [1, total number of sample points in the unbalanced data set D], and x_{k1,e1} denotes the e1-th feature attribute value of the k1-th sample point, with e1 ∈ [1, total number of feature attribute values of the k1-th sample point]. The Euclidean distance between the two sample points x_k1 and x_k2 is then defined as:
dist(x_k1, x_k2) = sqrt( Σ_{e1=1}^{n} (x_{k1,e1} - x_{k2,e1})² )
(1-3) The set N_k formed by all samples of the k-nearest-neighbor data D_k obtained in step (1-2) whose class differs from that of the sample point x, and the number num of samples in N_k, are obtained;
Specifically, the sample classes comprise majority-class samples and minority-class samples. For the KDDCUP99 intrusion detection data set, the majority classes are Normal behavior, probing and scanning (Probe) behavior and Denial of Service (DOS) behavior, while the minority classes are User-to-Root (U2R) behavior and Remote-to-Local (R2L) behavior; every behavior other than Normal is regarded as an intrusion type.
(1-4) Whether the number num obtained in step (1-3) is greater than or equal to k-1 is judged; if so, the process goes to step (1-5), otherwise to step (1-6);
(1-5) Whether the sample point x belongs to a majority class is judged; if so, the data set DS is updated to DS = DS - x and the process goes to step (1-6), otherwise DS is updated to DS = DS - N_k and the process goes to step (1-6);
(1-6) Steps (1-2) to (1-5) are repeated for the remaining sample points of the data set DS until all sample points of DS have been processed, thereby obtaining the updated data set DS;
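Substeps (1-1) to (1-6) above can be sketched in code. The following is a simplified single-pass variant of the NCL rule (the patent updates DS sample by sample, whereas removals are collected and applied once here); the toy data set, the value k = 3 and the helper names `knn` and `ncl_undersample` are illustrative assumptions, not part of the patent:

```python
import math

def knn(idx, points, k):
    """Indices of the k nearest neighbours of points[idx] (Euclidean distance)."""
    dists = sorted(
        (math.dist(points[idx][0], points[j][0]), j)
        for j in range(len(points)) if j != idx
    )
    return [j for _, j in dists[:k]]

def ncl_undersample(points, k=3, majority_label=0):
    """Single-pass NCL sketch: points is a list of ((x, y), label) pairs.

    For each sample: if at least k-1 of its k nearest neighbours carry a
    different label, remove the sample itself when it is majority-class,
    otherwise remove those differently-labelled neighbours (the set N_k)."""
    to_remove = set()
    for i, (_, label) in enumerate(points):
        nk = [j for j in knn(i, points, k) if points[j][1] != label]
        if len(nk) >= k - 1:
            if label == majority_label:
                to_remove.add(i)       # majority sample deep in a minority region
            else:
                to_remove.update(nk)   # noisy majority neighbours of a minority sample
    return [p for i, p in enumerate(points) if i not in to_remove]

# Majority cluster near the origin plus one majority point inside a minority region.
data = [((0.0, 0.0), 0), ((0.1, 0.0), 0), ((0.0, 0.1), 0), ((0.1, 0.1), 0),
        ((5.0, 5.0), 0),                                   # isolated majority point
        ((5.1, 5.0), 1), ((5.0, 5.1), 1), ((4.9, 5.0), 1)]
cleaned = ncl_undersample(data, k=3)
```

Note that under this rule minority samples are never removed directly, which matches the patent's intent of lowering only the proportion of the majority classes.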
(1-7) A counter i = 1 is set;
(1-8) Whether i equals the total number of sample points in the data set DS is judged; if so, the process goes to step (1-14), otherwise to step (1-9);
(1-9) The i-th new sample point d_i = (d_{i,1}, d_{i,2}, …, d_{i,n}) is read from the data set DS updated in step (1-6), where d_{i,e2} denotes the e2-th feature attribute value of the i-th sample with e2 ∈ [1, n], and whether the preset cluster set S is empty is judged; if so, the process goes to step (1-10), otherwise to step (1-11);
(1-10) The sample point d_i is initialized as a new cluster C_new = {d_i}, the centroid μ of the cluster C_new is set to d_i, the cluster C_new is added to the cluster set S, and the process goes to step (1-13);
(1-11) The gravity of every cluster of the cluster set S on d_i is computed to obtain the gravity set G = {g_1, g_2, …, g_ng}, and the maximum gravity g_max and its corresponding cluster C_max are obtained from G, where ng denotes the total number of clusters in S;
The gravity g between the sample point d_i and a cluster C is computed with the following formula:
(Figure PCTCN2021094023-appb-000051)
where C_num is the number of sample points in the cluster C and μ_e2 denotes the e2-th feature attribute value of the centroid μ of the cluster C;
(1-12) Whether the maximum gravity g_max is smaller than the preset threshold r is judged; if so, the process returns to step (1-10), otherwise the sample point d_i is merged into the cluster C_max, the centroid μ_max of the cluster C_max after the merge is updated, and the process goes to step (1-13);
The formula for updating the centroid μ_max of the cluster C_max is as follows:
(Figure PCTCN2021094023-appb-000052)
where C_maxn is the number of sample points in the cluster C_max after the sample point d_i has been merged, d_p is the p-th sample point of the cluster C_max after the merge, and p ∈ [1, C_maxn];
Specifically, for the KDDCUP99 intrusion detection data set, the threshold r ranges from 95 to 143, preferably 100.
(1-13) The counter is set to i = i + 1, and the process returns to step (1-8);
(1-14) All clusters of the cluster set S are traversed, and whether all sample points of each cluster belong to majority classes is judged; if so, the majority-class samples of that cluster are randomly retained according to the sampling rate sr and the traversal is repeated for the remaining clusters, otherwise the traversal is simply repeated for the remaining clusters.
Specifically, the sampling rate sr ranges from 0.6 to 0.9, preferably 0.7.
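Substeps (1-7) to (1-14) describe the gravity-based clustering loop. A minimal sketch follows; since the publication gives the gravity formula only as an image, g = C_num / (squared Euclidean distance to the centroid) is used here as an assumed stand-in, and the sample points and the threshold r = 1.0 are illustrative (the final majority-class resampling of step (1-14) is omitted):

```python
def gravity(point, cluster):
    """Assumed gravity: cluster size divided by squared distance to the centroid."""
    mu = cluster["centroid"]
    d2 = sum((a - b) ** 2 for a, b in zip(point, mu))
    return len(cluster["members"]) / max(d2, 1e-12)

def gca(points, r):
    """Merge each point into the cluster pulling hardest on it, or open a
    new cluster when the maximum gravity falls below the threshold r."""
    clusters = []  # each cluster: {"members": [...], "centroid": (...)}
    for p in points:
        best_g, best_c = -1.0, None
        for c in clusters:
            g = gravity(p, c)
            if g > best_g:
                best_g, best_c = g, c
        if best_c is None or best_g < r:
            clusters.append({"members": [p], "centroid": p})   # step (1-10)
        else:
            best_c["members"].append(p)                        # step (1-12)
            m = best_c["members"]
            best_c["centroid"] = tuple(sum(x[d] for x in m) / len(m)
                                       for d in range(len(p)))
    return clusters

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (10.0, 10.0), (10.1, 10.0), (10.0, 10.1)]
clusters = gca(pts, r=1.0)
```

With two well-separated groups, the cross-group gravity stays below r and the sketch yields exactly two clusters of three points each.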
(2) The clustered unbalanced data obtained in step (1) is input into a trained Deep Belief Network (DBN) model to extract features; the extracted features are then input into the multiple DBN-WKELM base classifiers of a trained DBN-Weighted Kernel Extreme Learning Machine (WKELM) multi-classifier model to obtain multiple preliminary classification results; the weight of each DBN-WKELM base classifier is computed by the adaptive weighted voting method; and the final classification result and the corresponding intrusion behavior category are obtained from the multiple weights and the multiple preliminary classification results;
Specifically, the DBN model in this step is obtained through the following training steps:
(2-1) A Deep Belief Network (DBN) model is acquired and optimized on a distributed in-memory computing platform with the improved differential evolution algorithm to obtain the optimized DBN model;
In this embodiment, the distributed in-memory computing platform is the Apache Spark platform.
This step specifically comprises the following substeps:
(2-1-1) The DBN model W_dbn = {W_1, W_2, …, W_dep} is acquired, where dep denotes the total number of hidden layers in the DBN model and W_di denotes the number of neurons in the di-th hidden layer, with di ∈ [1, 3];
Specifically, the total number of hidden layers in the DBN model equals 3; the maximum number x_max of neurons in each hidden layer ranges from 500 to 1500, preferably 1000, and the minimum number x_min of hidden-layer neurons ranges from 1 to 5, preferably 1.
(2-1-2) An initial population of n_ps structure vectors is randomly generated, one of the structure vectors is randomly selected as the global optimal solution x_best of the initial population, the initial population is written in the form of a file into the Hadoop Distributed File System (HDFS), and a counter cnt = 1 is set;
Specifically, the population size n_ps ranges from 1000 to 2000, preferably 1000.
(2-1-3) Whether cnt equals the maximum iteration number T or the global optimal solution x_best has converged is judged; if so, the global optimal solution is output and the process ends, otherwise the process goes to step (2-1-4);
Specifically, the maximum iteration number T ranges from 500 to 1000, preferably 500.
(2-1-4) Whether cnt is 1 is judged; if so, the file written in step (2-1-2) is read from HDFS and divided into n_pa input splits, each containing one subpopulation, and the process goes to step (2-1-5); otherwise the updated file is read from HDFS and divided into n_pa input splits, each containing one subpopulation, and the process goes to step (2-1-5);
Specifically, dividing the file into n_pa input splits is implemented on the Spark platform through the Map stage; the number of input splits n_pa ranges from 2 to 10, preferably 5.
(2-1-5) For each subpopulation obtained in step (2-1-4), the j-th individual of its cnt-th generation (Figure PCTCN2021094023-appb-000053) is taken as the numbers of neurons of the hidden layers of the DBN model; the classification results of n_t classification points are obtained from the corresponding DBN model; the classification error CE of the DBN model is computed from those classification results and used as the fitness value (Figure PCTCN2021094023-appb-000054) of the j-th individual of the cnt-th generation of the subpopulation, where j ∈ [1, total number of individuals of the cnt-th generation of the subpopulation] and (Figure PCTCN2021094023-appb-000055) denotes the dep-th element of the j-th individual of the cnt-th generation of the subpopulation;
Specifically, the classification error in this step uses the following formula:
(Figure PCTCN2021094023-appb-000056)
where (Figure PCTCN2021094023-appb-000057) is the real result, (Figure PCTCN2021094023-appb-000058) is the classification result, and n_t is the number of classification points;
Specifically, the number of classification points n_t ranges from 30 to 100, preferably 50.
(2-1-6) For each subpopulation obtained in step (2-1-4), the fitness-value set (Figure PCTCN2021094023-appb-000059)(Figure PCTCN2021094023-appb-000060) formed by the fitness values of all individuals of its cnt-th generation is obtained, where sn is the total number of fitness values in the set F; all fitness values of the set are sorted in ascending order to obtain a new fitness-value set (Figure PCTCN2021094023-appb-000061); the individual corresponding to the minimum fitness value of the new fitness set (Figure PCTCN2021094023-appb-000062) is taken as the optimal solution of the subpopulation, and that minimum fitness value is taken as the best fitness value of the subpopulation;
(2-1-7) The minimum of the best fitness values of all subpopulations is selected as the overall best fitness value, and the global optimal solution x_best is updated to the individual corresponding to the overall best fitness value;
(2-1-8) For the fitness-value set F obtained in step (2-1-6), the two individuals with the smallest fitness values (Figure PCTCN2021094023-appb-000063)(Figure PCTCN2021094023-appb-000064) form the set (Figure PCTCN2021094023-appb-000065), and the remaining individuals of the set F form the set (Figure PCTCN2021094023-appb-000066);
(2-1-9) From the (Figure PCTCN2021094023-appb-000067)-th target individual (Figure PCTCN2021094023-appb-000068) of the set I_cnt obtained in step (2-1-8), an adaptive mutant individual (Figure PCTCN2021094023-appb-000069) is generated, where (Figure PCTCN2021094023-appb-000070);
Specifically, the adaptive mutant individual is generated with the following formula:
(Figure PCTCN2021094023-appb-000071)
where (Figure PCTCN2021094023-appb-000072) and (Figure PCTCN2021094023-appb-000073) all lie in [3, sn], the three differ from one another, and none of the three equals (Figure PCTCN2021094023-appb-000074); F_c is the adaptive mutation factor.
The adaptive mutation factor F_c is computed with the following formula:
(Figure PCTCN2021094023-appb-000075)
where f is the initial mutation factor, ranging from 0.5 to 0.8, preferably 0.6.
(2-1-10) A crossover operation is performed on the adaptive mutant individual (Figure PCTCN2021094023-appb-000076) obtained in step (2-1-9) and the (Figure PCTCN2021094023-appb-000077)-th target individual (Figure PCTCN2021094023-appb-000078) of the set I_cnt obtained in step (2-1-8) to generate a trial individual (Figure PCTCN2021094023-appb-000079);
Specifically, the trial individual is generated with the following formula:
(Figure PCTCN2021094023-appb-000080)
where randn is a random integer drawn from {1, 2, …, D}, rand is a uniformly distributed random real number in [0, 1], CR is the crossover factor, and D is the gene dimension of an individual, with h ∈ [1, D];
Specifically, the crossover factor CR ranges from 0.7 to 0.9, preferably 0.8, and the individual gene dimension D ranges from 1 to 3, preferably 1.
(2-1-11) The fitness value (Figure PCTCN2021094023-appb-000082) corresponding to the trial individual (Figure PCTCN2021094023-appb-000081) obtained in step (2-1-10) and the fitness value (Figure PCTCN2021094023-appb-000084) corresponding to the target individual (Figure PCTCN2021094023-appb-000083) obtained in step (2-1-9) are obtained; the corresponding individual of the set I_cnt is replaced by whichever of the two has the smaller fitness value, and the individuals of the set E_cnt obtained in step (2-1-8) are added into I_cnt, thereby obtaining the updated set I_cnt;
(2-1-12) The counter is set to cnt = cnt + 1, the set I_cnt updated in step (2-1-11) is saved to HDFS, and the process returns to step (2-1-3);
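The improved differential evolution of substeps (2-1-1) to (2-1-12) can be sketched serially (the Spark Map stage, the HDFS round trips and the DBN fitness evaluation are omitted). The sphere function stands in for the classification error CE; the elitist split of step (2-1-8), in which the two fittest individuals (E_cnt) are exempted from mutation each generation, is retained; f = 0.6 and CR = 0.8 are the preferred values from the text, while the population size, generation count and bounds are illustrative (the adaptive schedule of F_c is shown only as an image in the publication and is replaced by a constant factor here):

```python
import random

random.seed(7)

def fitness(x):
    """Stand-in for the DBN classification error CE (sphere function)."""
    return sum(v * v for v in x)

def improved_de(dim=3, pop_size=20, gens=100, f=0.6, cr=0.8, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite, rest = pop[:2], pop[2:]      # E_cnt kept unchanged, I_cnt evolved
        new_rest = []
        for i, target in enumerate(rest):
            # DE/rand/1 mutation over the evolving set only
            r1, r2, r3 = random.sample([j for j in range(len(rest)) if j != i], 3)
            mutant = [rest[r1][h] + f * (rest[r2][h] - rest[r3][h])
                      for h in range(dim)]
            # binomial crossover with guaranteed index jrand
            jrand = random.randrange(dim)
            trial = [mutant[h] if (random.random() < cr or h == jrand) else target[h]
                     for h in range(dim)]
            # greedy selection: keep whichever has the smaller fitness
            new_rest.append(min(trial, target, key=fitness))
        pop = elite + new_rest
    return min(pop, key=fitness)

best = improved_de()
```

The greedy selection mirrors step (2-1-11): the trial individual replaces the target only when its fitness (classification error) is smaller.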
(2-2) The DBN model optimized in step (2-1) is trained to obtain the trained DBN model;
This step specifically comprises the following substeps:
(2-2-1) The unbalanced data set clustered in step (1) is divided into a training set and a test set at a ratio of 6:4;
(2-2-2) A counter cnt2 = 1 is set;
(2-2-3) According to the training set obtained in step (2-2-1), the initial state of the input layer of the DBN model optimized in step (2-1) is set to the training samples of the training set; the input layer and the first hidden layer of the DBN model are built into a Restricted Boltzmann Machine (RBM) network; and the weight W between the input layer and the first hidden layer, the bias a of the input layer, and the bias b of the first hidden layer of the RBM network are initialized;
Specifically, W is a random value drawn from a normal distribution with a standard deviation of 0.1, and a and b are set to 0;
(2-2-4) Whether cnt2 equals 3 is judged; if so, the process ends, otherwise it goes to step (2-2-5);
(2-2-5) The Contrastive Divergence (CD) algorithm is used to update the input value of the RBM network, the weight W between the input layer and the cnt2-th hidden layer, the bias a of the input layer, and the bias b of the cnt2-th hidden layer, to obtain the updated RBM network;
(2-2-6) The RBM network updated in step (2-2-5) is iteratively trained until its reconstruction error reaches a minimum, yielding the RBM network after the overall iterative training; the (cnt2+1)-th hidden layer of the DBN model optimized in step (2-1) is added into the iteratively trained RBM network to form a new RBM network; the weight W between the input layer and the (cnt2+1)-th hidden layer of the new RBM network is updated to the weight output by the iteratively trained RBM network; the bias a of the input layer and the bias b of the (cnt2+1)-th hidden layer are updated to the bias values output by the iteratively trained RBM network; and the output value of the iteratively trained RBM network is used as the input value of the new RBM network;
The reconstruction error RE of the RBM network is:
(Figure PCTCN2021094023-appb-000085)
where n_e denotes the number of neurons in the input layer of the RBM network, (Figure PCTCN2021094023-appb-000086) denotes the training-sample value in the i_e-th neuron of the input layer of the RBM network before the iterative training, and (Figure PCTCN2021094023-appb-000087) denotes the training-sample value in the i_e-th neuron of the input layer of the RBM network after the iterative training.
(2-2-7) The counter is set to cnt2 = cnt2 + 1, and the process returns to step (2-2-4);
Through the above steps (2-2-1) to (2-2-7), the training process of the DBN model is accomplished.
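The greedy layer-wise training of steps (2-2-1) to (2-2-7) rests on one Contrastive Divergence update per pass. A minimal CD-1 step for a single Bernoulli-Bernoulli RBM is sketched below; the layer sizes, learning rate and toy data are illustrative assumptions, while W, a and b are initialized as stated in the text (normal distribution with standard deviation 0.1, biases zero), and the returned value tracks a per-sample reconstruction error in the spirit of RE:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1):
    """One Contrastive Divergence (CD-1) update for a Bernoulli-Bernoulli RBM."""
    ph0 = sigmoid(v0 @ W + b)                        # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden state
    pv1 = sigmoid(h0 @ W.T + a)                      # reconstruction P(v = 1 | h0)
    ph1 = sigmoid(pv1 @ W + b)
    # positive phase minus negative phase, averaged over the batch
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    a += lr * (v0 - pv1).mean(axis=0)                # input-layer bias a
    b += lr * (ph0 - ph1).mean(axis=0)               # hidden-layer bias b
    return np.abs(v0 - pv1).sum(axis=1).mean()       # per-sample reconstruction error

n_vis, n_hid = 6, 4
W = rng.normal(0.0, 0.1, (n_vis, n_hid))  # std-0.1 normal init, as in the text
a = np.zeros(n_vis)                       # bias of the input layer
b = np.zeros(n_hid)                       # bias of the hidden layer
base = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = np.repeat(base, 10, axis=0)        # 20 samples drawn from two prototypes
errors = [cd1_step(data, W, a, b) for _ in range(50)]
```

In the full procedure this update would run until the reconstruction error reaches its minimum, after which the next hidden layer is stacked on top as described in step (2-2-6).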
The DBN-WKELM multi-classifier model of the invention consists of m DBN-WKELM base classifiers (in this embodiment m is 4). Each DBN-WKELM base classifier comprises an input layer, an output layer, 3 DBN hidden layers and 1 WKELM hidden layer; the input and output layers have 122 and 5 nodes respectively, the DBN hidden layers have 110, 70 and 30 nodes respectively, and the WKELM hidden layers of the 4 DBN-WKELM base classifiers have 55, 65, 75 and 85 nodes respectively.
The DBN-WKELM multi-classifier model of the invention is trained through the following process:
The trained DBN model is acquired and 4 sub-threads are started. In each sub-thread, the output value of the trained DBN model is set as the input value X_in of the WKELM hidden layer; the input value X_in is weighted to obtain the cost-sensitive matrix W_cs; the output weight β of the WKELM hidden layer is obtained from the cost-sensitive matrix W_cs; and a DBN-WKELM base classifier based on DBN feature extraction is obtained from the output weight β. The 4 DBN-WKELM base classifiers based on DBN feature extraction together constitute the trained DBN-WKELM multi-classifier model.
In this step, weighting the input value X_in to obtain the cost-sensitive matrix W_cs proceeds as follows:
The i_x-th sample point of X_in is given a weight (Figure PCTCN2021094023-appb-000088) to obtain the i_x-th main-diagonal element (Figure PCTCN2021094023-appb-000089) of the cost-sensitive matrix W_cs, where i_x ∈ [1, total number of sample points in X_in] and W_cs is a diagonal matrix, the weight (Figure PCTCN2021094023-appb-000090) being equal to:
(Figure PCTCN2021094023-appb-000091)
where (Figure PCTCN2021094023-appb-000092) is the number, in the training set, of samples of the class to which the i_x-th sample point belongs.
The formula for the output weight β of the WKELM hidden layer is:
(Figure PCTCN2021094023-appb-000093)
where C_r is the regularization coefficient, Ω is the kernel matrix corresponding to the kernel function F_k of the WKELM base classifier (in the invention, the kernel function may be a polynomial kernel function or a Gaussian kernel function), and T_l is the data label corresponding to the input value X_in.
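The closed-form solution for β referenced above appears only as an image in the publication. One standard weighted kernel-ELM solution consistent with the symbols named here is β = (I/C_r + W_cs Ω)⁻¹ W_cs T_l, taken from the weighted-ELM literature as a hedged assumption rather than the patent's confirmed formula; the inverse-class-count weighting and the toy data are likewise illustrative:

```python
import numpy as np

def gaussian_kernel(X, gamma=0.5):
    """Kernel matrix Omega for a Gaussian (RBF) kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def wkelm_beta(X, T, C_r=100.0):
    """Assumed weighted-KELM solve: beta = (I/C_r + W_cs @ Omega)^-1 @ W_cs @ T."""
    counts = T.sum(axis=0)               # samples per class (one-hot labels)
    w = 1.0 / counts[T.argmax(axis=1)]   # cost-sensitive weight: 1 / class count
    W_cs = np.diag(w)
    Omega = gaussian_kernel(X)
    n = X.shape[0]
    return np.linalg.solve(np.eye(n) / C_r + W_cs @ Omega, W_cs @ T)

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
T = np.eye(2)[np.array([0, 0, 0, 0, 0, 1, 1, 1])]  # unbalanced 5-vs-3 labels
beta = wkelm_beta(X, T)
```

The inverse-class-count weighting raises the cost of misclassifying the minority class, which is the stated purpose of the cost-sensitive matrix W_cs.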
The weight of the adaptive weighted voting method is computed with the following formula:
(Figure PCTCN2021094023-appb-000094)
where W_q is the voting weight of the q-th DBN-WKELM base classifier in the DBN-WKELM multi-classifier model, (Figure PCTCN2021094023-appb-000095) is the classification accuracy of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000096) is the classification false-alarm rate of the q-th DBN-WKELM base classifier, and q ∈ [1, m];
The classification accuracy and classification false-alarm rate of the q-th DBN-WKELM base classifier are computed with the following formula:
(Figure PCTCN2021094023-appb-000097)
where (Figure PCTCN2021094023-appb-000098) is the number of correctly classified samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000099) is the total number of samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-000100) is the number of normal samples wrongly treated as intrusions by the q-th DBN-WKELM base classifier, and (Figure PCTCN2021094023-appb-000101) is the total number of normal samples of the q-th DBN-WKELM base classifier.
In step (2), the preliminary classification result V = (v_1, v_2, v_3, v_4, v_5) of each base classifier is obtained, corresponding to the five behavior types Normal, Probe, Dos, U2R and R2L; the weight of each base classifier is then computed by the adaptive weighted voting method, and the total classification result of the DBN-WKELM multi-classifier model (Figure PCTCN2021094023-appb-000102)(Figure PCTCN2021094023-appb-000103) is finally obtained from the preliminary classification results V and the weights of the base classifiers; the behavior type of the element of the preliminary classification result corresponding to the maximum value of the total classification result is taken as the final behavior type.
Suppose that, for one record of the test set, the preliminary classification results obtained by the 4 DBN-WKELM base classifiers are (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 1, 0, 0, 0) and (0, 0, 1, 0, 0), the classification accuracies of the base classifiers are 98.5%, 97.8%, 98.2% and 97.3%, and the classification false-alarm rates are 2.3%, 2.8%, 2.7% and 2.0%. According to the above formula, the weights of the base classifiers can be calculated as 0.252, 0.249, 0.250 and 0.249. Then v_1 of the first base classifier (i.e. 0) × 0.252 + v_1 of the second base classifier (i.e. 0) × 0.249 + v_1 of the third base classifier (i.e. 0) × 0.250 + v_1 of the fourth base classifier (i.e. 0) × 0.249 = 0; v_2 of the first base classifier (i.e. 1) × 0.252 + v_2 of the second base classifier (i.e. 0) × 0.249 + v_2 of the third base classifier (i.e. 1) × 0.250 + v_2 of the fourth base classifier (i.e. 0) × 0.249 = 0.502, and so on. The five total classification results are finally (0, 0.502, 0.498, 0, 0); the maximum among them (0.502) corresponds to the element v_2 of the preliminary classification result, so the corresponding behavior type (the Probe behavior type) is taken as the final behavior type.
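The worked example above can be checked numerically. The publication shows the voting-weight formula only as an image; W_q = Acc_q(1 − FPR_q) / Σ_p Acc_p(1 − FPR_p) is one assumed form that reproduces the quoted weights 0.252, 0.249, 0.250 and 0.249 exactly (normalizing Acc_q/(1 + FPR_q) reproduces them too, so this is an illustration, not the confirmed formula):

```python
votes = [(0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0)]
acc = [0.985, 0.978, 0.982, 0.973]  # classification accuracy of each base classifier
fpr = [0.023, 0.028, 0.027, 0.020]  # classification false-alarm rate

# Assumed un-normalized weight: high accuracy and low false-alarm rate score higher.
raw = [a * (1.0 - f) for a, f in zip(acc, fpr)]
weights = [r / sum(raw) for r in raw]

# Weighted sum of the preliminary results over the 5 behavior types.
totals = [sum(w * v[t] for w, v in zip(weights, votes)) for t in range(5)]
labels = ["Normal", "Probe", "Dos", "U2R", "R2L"]
final = labels[max(range(5), key=lambda t: totals[t])]
```

Running this yields the totals (0, 0.502, 0.498, 0, 0) from the example, so the Probe type wins the vote.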
It will be readily understood by those skilled in the art that the above are merely preferred embodiments of the invention and are not intended to limit it; any modification, equivalent substitution and improvement made within the spirit and principles of the invention shall be included within its scope of protection.

Claims (10)

  1. A parallel intrusion detection method based on an unbalanced-data Deep Belief Network, characterized by comprising the following steps:
    (1) acquiring an unbalanced data set, undersampling the unbalanced data set with the Neighborhood Cleaning Rule algorithm, and clustering the undersampled unbalanced data set with a gravity-based clustering approach to obtain a clustered unbalanced data set;
    (2) inputting the clustered unbalanced data obtained in step (1) into a trained Deep Belief Network DBN model to extract features, then inputting the extracted features into the multiple DBN-WKELM base classifiers of a trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results, computing the weight of each DBN-WKELM base classifier by an adaptive weighted voting method, and obtaining from the multiple weights and the multiple preliminary classification results the final classification result and the intrusion behavior category corresponding to that classification result.
  2. The parallel intrusion detection method according to claim 1, characterized in that step (1) specifically comprises the following substeps:
    (1-1) acquiring an unbalanced data set DS;
    (1-2) taking a sample point x and its k-nearest-neighbor data D_k from the data set DS obtained in step (1-1), where k denotes the nearest-neighbor parameter;
    (1-3) obtaining the set N_k formed by all samples of the k-nearest-neighbor data D_k obtained in step (1-2) whose class differs from that of the sample point x, and the number num of samples in the set N_k;
    (1-4) judging whether the number num obtained in step (1-3) is greater than or equal to k-1; if so, going to step (1-5), otherwise going to step (1-6);
    (1-5) judging whether the sample point x belongs to a majority class; if so, updating the data set DS to DS = DS - x and going to step (1-6), otherwise updating DS to DS = DS - N_k and going to step (1-6);
    (1-6) repeating the above steps (1-2) to (1-5) for the remaining sample points of the data set DS until all sample points of DS have been processed, thereby obtaining the updated data set DS;
    (1-7) setting a counter i = 1;
    (1-8) judging whether i equals the total number of sample points in the data set DS; if so, going to step (1-14), otherwise going to step (1-9);
    (1-9) reading the i-th new sample point (Figure PCTCN2021094023-appb-100001) from the data set DS updated in step (1-6), where (Figure PCTCN2021094023-appb-100002) denotes the e2-th feature attribute value of the i-th sample with e2 ∈ [1, n], and judging whether the preset cluster set S is empty; if so, going to step (1-10), otherwise going to step (1-11);
    (1-10) initializing the sample point d_i as a new cluster C_new = {d_i}, setting the centroid μ of the cluster C_new to d_i, adding the cluster C_new to the cluster set S, and going to step (1-13);
    (1-11) computing the gravity of every cluster of the cluster set S on d_i to obtain the gravity set G = {g_1, g_2, …, g_ng}, and obtaining from G the maximum gravity g_max and its corresponding cluster C_max, where ng denotes the total number of clusters in S;
    (1-12) judging whether the maximum gravity g_max is smaller than a preset threshold r; if so, returning to step (1-10), otherwise merging the sample point d_i into the cluster C_max, updating the centroid μ_max of the cluster C_max after the merge, and going to step (1-13);
    (1-13) setting the counter i = i + 1 and returning to step (1-8);
    (1-14) traversing all clusters of the cluster set S and judging whether all sample points of each cluster belong to majority classes; if so, randomly retaining the majority-class samples of that cluster according to the sampling rate sr and then repeating the traversal for the remaining clusters, otherwise repeating the traversal for the remaining clusters.
  3. The parallel intrusion detection method according to claim 1 or 2, characterized in that
    the gravity g between the sample point d_i and a cluster C is computed with the following formula:
    (Figure PCTCN2021094023-appb-100003)
    where C_num is the number of sample points in the cluster C and μ_e2 denotes the e2-th feature attribute value of the centroid μ of the cluster C;
    the formula for updating the centroid μ_max of the cluster C_max is as follows:
    (Figure PCTCN2021094023-appb-100004)
    where C_maxn is the number of sample points in the cluster C_max after the sample point d_i has been merged, d_p is the p-th sample point of the cluster C_max after the merge, and p ∈ [1, C_maxn].
  4. The parallel intrusion detection method according to claim 1, characterized in that the DBN model is obtained through the following training steps:
    (2-1) acquiring a DBN model and optimizing it on a distributed in-memory computing platform with an improved differential evolution algorithm to obtain the optimized DBN model;
    (2-2) training the DBN model optimized in step (2-1) to obtain the trained DBN model.
  5. The parallel intrusion detection method according to claim 4, characterized in that step (2-1) specifically comprises the following substeps:
    (2-1-1) acquiring the DBN model W_dbn = {W_1, W_2, …, W_dep}, where dep denotes the total number of hidden layers in the DBN model and W_di denotes the number of neurons in the di-th hidden layer of the DBN model, with di ∈ [1, 3];
    (2-1-2) randomly generating an initial population of n_ps structure vectors, randomly selecting one of the structure vectors as the global optimal solution x_best of the initial population, writing the initial population in the form of a file into the Hadoop Distributed File System (HDFS), and setting a counter cnt = 1;
    (2-1-3) judging whether cnt equals the maximum iteration number T or the global optimal solution x_best has converged; if so, outputting the global optimal solution and ending the process, otherwise going to step (2-1-4);
    (2-1-4) judging whether cnt is 1; if so, reading from HDFS the file written in step (2-1-2), dividing the file into n_pa input splits, each containing one subpopulation, and going to step (2-1-5); otherwise reading the updated file from HDFS, dividing it into n_pa input splits, each containing one subpopulation, and going to step (2-1-5);
    (2-1-5) for each subpopulation obtained in step (2-1-4), taking the j-th individual of its cnt-th generation (Figure PCTCN2021094023-appb-100005) as the numbers of neurons of the hidden layers of the DBN model, obtaining the classification results of n_t classification points from the corresponding DBN model, computing the classification error CE of the DBN model from those classification results, and using the classification error CE as the fitness value (Figure PCTCN2021094023-appb-100006) of the j-th individual of the cnt-th generation of the subpopulation, where j ∈ [1, total number of individuals of the cnt-th generation of the subpopulation] and (Figure PCTCN2021094023-appb-100007) denotes the dep-th element of the j-th individual of the cnt-th generation of the subpopulation;
    (2-1-6) for each subpopulation obtained in step (2-1-4), obtaining the fitness-value set (Figure PCTCN2021094023-appb-100008)(Figure PCTCN2021094023-appb-100009) formed by the fitness values of all individuals of its cnt-th generation, where sn is the total number of fitness values in the set F, and sorting all fitness values of the set in ascending order to obtain a new fitness-value set (Figure PCTCN2021094023-appb-100010); taking the individual corresponding to the minimum fitness value of the new fitness set (Figure PCTCN2021094023-appb-100011) as the optimal solution of the subpopulation, and taking that minimum fitness value as the best fitness value of the subpopulation;
    (2-1-7) selecting the minimum of the best fitness values of all subpopulations as the overall best fitness value, and updating the global optimal solution x_best to the individual corresponding to the overall best fitness value;
    (2-1-8) for the fitness-value set F obtained in step (2-1-6), taking the two individuals with the smallest fitness values (Figure PCTCN2021094023-appb-100012)(Figure PCTCN2021094023-appb-100013) to form the set (Figure PCTCN2021094023-appb-100014), the remaining individuals of the set F forming the set (Figure PCTCN2021094023-appb-100015);
    (2-1-9) generating an adaptive mutant individual (Figure PCTCN2021094023-appb-100018) from the (Figure PCTCN2021094023-appb-100016)-th target individual (Figure PCTCN2021094023-appb-100017) of the set I_cnt obtained in step (2-1-8), where (Figure PCTCN2021094023-appb-100019);
    (2-1-10) performing a crossover operation on the adaptive mutant individual (Figure PCTCN2021094023-appb-100020) obtained in step (2-1-9) and the (Figure PCTCN2021094023-appb-100021)-th target individual (Figure PCTCN2021094023-appb-100022) of the set I_cnt obtained in step (2-1-8) to generate a trial individual (Figure PCTCN2021094023-appb-100023);
    (2-1-11) obtaining the fitness value (Figure PCTCN2021094023-appb-100025) corresponding to the trial individual (Figure PCTCN2021094023-appb-100024) obtained in step (2-1-10) and the fitness value (Figure PCTCN2021094023-appb-100027) corresponding to the target individual (Figure PCTCN2021094023-appb-100026) obtained in step (2-1-9), replacing the corresponding individual of the set I_cnt with whichever of the two has the smaller fitness value, and adding the individuals of the set E_cnt obtained in step (2-1-8) into I_cnt, thereby obtaining the updated set I_cnt;
    (2-1-12) setting the counter cnt = cnt + 1, saving the set I_cnt updated in step (2-1-11) to HDFS, and returning to step (2-1-3).
  6. The parallel intrusion detection method according to claim 5, characterized in that
    the classification error is obtained with the following formula:
    (Figure PCTCN2021094023-appb-100028)
    where (Figure PCTCN2021094023-appb-100029) is the real result, (Figure PCTCN2021094023-appb-100030) is the classification result, and n_t is the number of classification points;
    the formula for generating the adaptive mutant individual is as follows:
    (Figure PCTCN2021094023-appb-100031)
    where (Figure PCTCN2021094023-appb-100032) and (Figure PCTCN2021094023-appb-100033) all lie in [3, sn], the three differ from one another, and none of the three equals (Figure PCTCN2021094023-appb-100034); F_c is the adaptive mutation factor;
    the adaptive mutation factor F_c is computed with the following formula:
    (Figure PCTCN2021094023-appb-100035)
    where f is the initial mutation factor;
    the formula for generating the trial individual is as follows:
    (Figure PCTCN2021094023-appb-100036)
    where randn is a random integer drawn from {1, 2, …, D}, rand is a uniformly distributed random real number in [0, 1], CR is the crossover factor, and D is the gene dimension of an individual, with h ∈ [1, D].
  7. The parallel intrusion detection method according to claim 4, characterized in that step (2-2) specifically comprises the following substeps:
    (2-2-1) dividing the unbalanced data set clustered in step (1) into a training set and a test set at a ratio of 6:4;
    (2-2-2) setting a counter cnt2 = 1;
    (2-2-3) according to the training set obtained in step (2-2-1), setting the initial state of the input layer of the DBN model optimized in step (2-1) to the training samples of the training set, building the input layer and the first hidden layer of the DBN model into a Restricted Boltzmann Machine RBM network, and initializing the weight W between the input layer and the first hidden layer, the bias a of the input layer, and the bias b of the first hidden layer of the RBM network;
    (2-2-4) judging whether cnt2 equals 3; if so, ending the process, otherwise going to step (2-2-5);
    (2-2-5) updating, with the Contrastive Divergence (CD) algorithm, the input value of the RBM network, the weight W between the input layer and the cnt2-th hidden layer, the bias a of the input layer, and the bias b of the cnt2-th hidden layer, to obtain the updated RBM network;
    (2-2-6) iteratively training the RBM network updated in step (2-2-5) until its reconstruction error reaches a minimum, thereby obtaining the RBM network after the overall iterative training; adding the (cnt2+1)-th hidden layer of the DBN model optimized in step (2-1) into the iteratively trained RBM network to form a new RBM network; updating the weight W between the input layer and the (cnt2+1)-th hidden layer of the new RBM network to the weight output by the iteratively trained RBM network; updating the bias a of the input layer and the bias b of the (cnt2+1)-th hidden layer to the bias values output by the iteratively trained RBM network; and using the output value of the iteratively trained RBM network as the input value of the new RBM network;
    (2-2-7) setting the counter cnt2 = cnt2 + 1 and returning to step (2-2-4).
  8. The parallel intrusion detection method according to claim 1, characterized in that the DBN-WKELM multi-classifier model is trained through the following process: the trained DBN model is acquired and 4 sub-threads are started; in each sub-thread the output value of the trained DBN model is set as the input value X_in of the WKELM hidden layer; the input value X_in is weighted to obtain the cost-sensitive matrix W_cs; the output weight β of the WKELM hidden layer is obtained from the cost-sensitive matrix W_cs; a DBN-WKELM base classifier based on DBN feature extraction is obtained from the output weight β; and the 4 DBN-WKELM base classifiers based on DBN feature extraction together constitute the trained DBN-WKELM multi-classifier model.
  9. The parallel intrusion detection method according to claim 8, characterized in that the formula for the output weight β of the WKELM hidden layer is:
    (Figure PCTCN2021094023-appb-100037)
    where C_r is the regularization coefficient, Ω is the kernel matrix corresponding to the kernel function F_k of the WKELM base classifier, and T_l is the data label corresponding to the input value X_in;
    the weight of the adaptive weighted voting method is computed with the following formula:
    (Figure PCTCN2021094023-appb-100038)
    where W_q is the voting weight of the q-th DBN-WKELM base classifier in the DBN-WKELM multi-classifier model, (Figure PCTCN2021094023-appb-100039) is the classification accuracy of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-100040) is the classification false-alarm rate of the q-th DBN-WKELM base classifier, and q ∈ [1, m];
    the classification accuracy and classification false-alarm rate of the q-th DBN-WKELM base classifier are computed with the following formula:
    (Figure PCTCN2021094023-appb-100041)
    where (Figure PCTCN2021094023-appb-100042) is the number of correctly classified samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-100043) is the total number of samples of the q-th DBN-WKELM base classifier, (Figure PCTCN2021094023-appb-100044) is the number of normal samples wrongly treated as intrusions by the q-th DBN-WKELM base classifier, and (Figure PCTCN2021094023-appb-100045) is the total number of normal samples of the q-th DBN-WKELM base classifier.
  10. A parallel intrusion detection system based on an unbalanced-data Deep Belief Network, characterized by comprising:
    a first module for acquiring an unbalanced data set, undersampling the unbalanced data set with the Neighborhood Cleaning Rule algorithm, and clustering the undersampled unbalanced data set with a gravity-based clustering approach to obtain a clustered unbalanced data set;
    a second module for inputting the clustered unbalanced data obtained by the first module into a trained Deep Belief Network DBN model to extract features, inputting the extracted features into the multiple DBN-WKELM base classifiers of a trained DBN-WKELM multi-classifier model to obtain multiple preliminary classification results, computing the weight of each DBN-WKELM base classifier by an adaptive weighted voting method, and obtaining from the multiple weights and the multiple preliminary classification results the final classification result and the intrusion behavior category corresponding to that classification result.
PCT/CN2021/094023 2020-07-17 2021-05-17 Parallel intrusion detection method and system based on unbalanced-data Deep Belief Network WO2022012144A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/626,684 US11977634B2 (en) 2020-07-17 2021-05-17 Method and system for detecting intrusion in parallel based on unbalanced data Deep Belief Network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010689950.5 2020-07-17
CN202010689950.5A CN111860638B (zh) 2020-07-17 2020-07-17 基于不平衡数据深度信念网络的并行入侵检测方法和***

Publications (1)

Publication Number Publication Date
WO2022012144A1 true WO2022012144A1 (zh) 2022-01-20

Family

ID=72984602

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094023 WO2022012144A1 (zh) 2020-07-17 2021-05-17 基于不平衡数据深度信念网络的并行入侵检测方法和***

Country Status (3)

Country Link
US (1) US11977634B2 (zh)
CN (1) CN111860638B (zh)
WO (1) WO2022012144A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638555A (zh) * 2022-05-18 2022-06-17 State Grid Jiangxi Integrated Energy Service Co., Ltd. Electricity consumption behavior detection method and system based on multilayer regularized extreme learning machine
CN115086070A (zh) * 2022-07-20 2022-09-20 Shandong Computer Science Center (National Supercomputing Center in Jinan) Industrial Internet intrusion detection method and system
CN115102909A (zh) * 2022-06-15 2022-09-23 Dalian University Network traffic classification method based on the IHHO-FCM algorithm
CN115204324A (zh) * 2022-09-16 2022-10-18 Xi'an Thermal Power Research Institute Co., Ltd. Method and device for detecting abnormal equipment power consumption based on IFOA-DBN-ELM
CN115277151A (zh) * 2022-07-21 2022-11-01 Information and Communication Branch of State Grid Shanxi Electric Power Company Network intrusion detection method based on a whale boosting algorithm
CN115473672A (zh) * 2022-08-03 2022-12-13 Electric Power Research Institute of Guangxi Power Grid Co., Ltd. Anti-vulnerability detection method based on online interactive web dynamic defense
CN116668085A (zh) * 2023-05-05 2023-08-29 Shandong Computer Science Center (National Supercomputing Center in Jinan) LightGBM-based multi-process traffic intrusion detection method and system
CN117892102A (zh) * 2024-03-14 2024-04-16 Shandong Computer Science Center (National Supercomputing Center in Jinan) Intrusion behavior detection method, system, device and medium based on active learning

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860638B (zh) 2020-07-17 2022-06-28 Hunan University Parallel intrusion detection method and system based on unbalanced-data Deep Belief Network
CN112364942B (zh) * 2020-12-09 2021-05-28 Runlian Software System (Shenzhen) Co., Ltd. Credit data sample balancing method, apparatus, computer device and storage medium
CN113141357B (zh) * 2021-04-19 2022-02-18 Hunan University Feature selection method and system for optimizing network intrusion detection performance
CN113139598B (zh) * 2021-04-22 2022-04-22 Hunan University Intrusion detection method and system based on an improved intelligent optimization algorithm
CN114037091B (zh) * 2021-11-11 2024-05-28 Harbin Institute of Technology Network security information sharing system, method, electronic device and storage medium based on joint expert evaluation
US11741252B1 (en) * 2022-07-07 2023-08-29 Sas Institute, Inc. Parallel and incremental processing techniques for data protection
CN115545111B (zh) * 2022-10-13 2023-05-30 Chongqing Technology and Business University Network intrusion detection method and system based on cluster-adaptive hybrid sampling
CN116015932B (zh) * 2022-12-30 2024-06-14 Hunan University Intrusion detection network model generation method and data traffic intrusion detection method
CN116846688B (zh) * 2023-08-30 2023-11-21 Nanjing University of Science and Technology CNN-based interpretable traffic intrusion detection method
CN117459250A (zh) * 2023-09-22 2024-01-26 Guangzhou University Network intrusion detection method, system, medium and device based on a seagull-optimized extreme learning machine
CN117272116B (zh) * 2023-10-13 2024-05-17 Xi'an Polytechnic University Transformer fault diagnosis method based on a LoRAS-balanced data set
CN117579397B (zh) * 2024-01-16 2024-03-26 Hangzhou Hikvision Digital Technology Co., Ltd. IoT privacy leakage detection method and device based on small-sample ensemble learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716204A (zh) * 2013-12-20 2014-04-09 Institute of Information Engineering, Chinese Academy of Sciences Ensemble learning method and device for anomaly intrusion detection based on a Wiener process
CN103927874A (zh) * 2014-04-29 2014-07-16 Southeast University Automatic traffic incident detection method for unbalanced data sets based on undersampling
US9798960B2 (en) * 2014-12-17 2017-10-24 Amazon Technologies, Inc. Identification of item attributes using artificial intelligence
CN107895171A (zh) * 2017-10-31 2018-04-10 Tianjin University Intrusion detection method based on k-means and Deep Belief Network
CN108234500A (zh) * 2018-01-08 2018-06-29 Chongqing University of Posts and Telecommunications Wireless sensor network intrusion detection method based on deep learning
CN111860638A (zh) * 2020-07-17 2020-10-30 Hunan University Parallel intrusion detection method and system based on unbalanced-data Deep Belief Network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050289089A1 (en) * 2004-06-28 2005-12-29 Naoki Abe Methods for multi-class cost-sensitive learning
US10650508B2 (en) * 2014-12-03 2020-05-12 Kla-Tencor Corporation Automatic defect classification without sampling and feature selection
CN106650806B (zh) * 2016-12-16 2019-07-26 Peking University Shenzhen Graduate School Collaborative deep network model method for pedestrian detection
EP3552013A4 (en) * 2017-10-09 2019-12-04 BL Technologies, Inc. INTELLIGENT SYSTEMS AND METHODS FOR DIAGNOSIS OF THE HEALTH STATUS OF PROCESSES AND ASSETS, DETECTION AND CONTROL OF ANOMALIES IN WASTEWATER OR DRINKING WATER SYSTEMS
US11444957B2 (en) * 2018-07-31 2022-09-13 Fortinet, Inc. Automated feature extraction and artificial intelligence (AI) based detection and classification of malware
DE102018129871A1 * 2018-11-27 2020-05-28 Valeo Schalter Und Sensoren Gmbh Training a deep convolutional neural network to process sensor data for use in a driving assistance system
US11816562B2 (en) * 2019-04-04 2023-11-14 Adobe Inc. Digital experience enhancement using an ensemble deep learning model
CN110300095A (zh) * 2019-05-13 2019-10-01 Jiangsu University Deep learning network intrusion detection method based on an improved learning rate
US11893111B2 (en) * 2019-11-26 2024-02-06 Harman International Industries, Incorporated Defending machine learning systems from adversarial attacks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716204A (zh) * 2013-12-20 2014-04-09 Institute of Information Engineering, Chinese Academy of Sciences Ensemble learning method and apparatus for anomaly intrusion detection based on a Wiener process
CN103927874A (zh) * 2014-04-29 2014-07-16 Southeast University Automatic traffic incident detection method for imbalanced data sets based on under-sampling
US9798960B2 (en) * 2014-12-17 2017-10-24 Amazon Technologies, Inc. Identification of item attributes using artificial intelligence
CN107895171A (zh) * 2017-10-31 2018-04-10 Tianjin University Intrusion detection method based on k-means and deep belief networks
CN108234500A (zh) * 2018-01-08 2018-06-29 Chongqing University of Posts and Telecommunications Wireless sensor network intrusion detection method based on deep learning
CN111860638A (zh) * 2020-07-17 2020-10-30 Hunan University Parallel intrusion detection method and system based on a deep belief network for imbalanced data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG BIAO, XIUWEI SHANG: "Weighted random forest algorithm", INFORMATION AND NETWORK SECURITY, vol. 35, no. 3, 31 March 2016 (2016-03-31), pages 28 - 30, XP055887529, ISSN: 2096-5133, DOI: 10.19358/j.issn.1674-7720.2016.03.010 *
YANG WANG, ZHONGDONG WU, JING ZHU: "Intrusion detection algorithm based on depth sequence weighted kernel extreme learning", APPLICATION RESEARCH OF COMPUTERS, CHENGDU, CN, vol. 37, no. 3, 31 March 2020 (2020-03-31), CN , pages 829 - 832, XP055887532, ISSN: 1001-3695, DOI: 10.19734/j.issn.1001-3695.2018.08.0653 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638555A (zh) * 2022-05-18 2022-06-17 State Grid Jiangxi Integrated Energy Service Co., Ltd. Electricity consumption behavior detection method and system based on a multi-layer regularized extreme learning machine
CN115102909A (zh) * 2022-06-15 2022-09-23 Dalian University Network traffic classification method based on the IHHO-FCM algorithm
CN115102909B (zh) * 2022-06-15 2023-06-27 Dalian University Network traffic classification method based on the IHHO-FCM algorithm
CN115086070A (zh) * 2022-07-20 2022-09-20 Shandong Computer Science Center (National Supercomputer Center in Jinan) Industrial internet intrusion detection method and system
CN115086070B (zh) * 2022-07-20 2022-11-15 Shandong Computer Science Center (National Supercomputer Center in Jinan) Industrial internet intrusion detection method and system
CN115277151A (zh) * 2022-07-21 2022-11-01 Information and Communication Branch, State Grid Shanxi Electric Power Company Network intrusion detection method based on a whale boosting algorithm
CN115473672B (zh) * 2022-08-03 2024-03-29 Electric Power Research Institute of Guangxi Power Grid Co., Ltd. Vulnerability-probing prevention method based on online interactive web dynamic defense
CN115473672A (zh) * 2022-08-03 2022-12-13 Electric Power Research Institute of Guangxi Power Grid Co., Ltd. Vulnerability-probing prevention method based on online interactive web dynamic defense
CN115204324A (zh) * 2022-09-16 2022-10-18 Xi'an Thermal Power Research Institute Co., Ltd. Method and apparatus for detecting abnormal equipment power consumption based on IFOA-DBN-ELM
CN116668085B (zh) * 2023-05-05 2024-02-27 Shandong Computer Science Center (National Supercomputer Center in Jinan) Multi-process traffic intrusion detection method and system based on LightGBM
CN116668085A (zh) * 2023-05-05 2023-08-29 Shandong Computer Science Center (National Supercomputer Center in Jinan) Multi-process traffic intrusion detection method and system based on LightGBM
CN117892102A (zh) * 2024-03-14 2024-04-16 Shandong Computer Science Center (National Supercomputer Center in Jinan) Active-learning-based intrusion behavior detection method, system, device and medium
CN117892102B (zh) * 2024-03-14 2024-05-24 Shandong Computer Science Center (National Supercomputer Center in Jinan) Active-learning-based intrusion behavior detection method, system, device and medium

Also Published As

Publication number Publication date
US11977634B2 (en) 2024-05-07
CN111860638A (zh) 2020-10-30
US20220382864A1 (en) 2022-12-01
CN111860638B (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
WO2022012144A1 (zh) Parallel intrusion detection method and system based on a deep belief network for imbalanced data
Zhao et al. A weighted hybrid ensemble method for classifying imbalanced data
Fernández-Navarro et al. A dynamic over-sampling procedure based on sensitivity for multi-class problems
CN112087447B (zh) Network intrusion detection method for rare attacks
CN112613552B (zh) Convolutional neural network affective image classification method incorporating emotion-category attention loss
CN111988329B (zh) Network intrusion detection method based on deep learning
CN115048988B (zh) Classification fusion method for imbalanced data sets based on a Gaussian mixture model
CN110460605A (zh) Abnormal network traffic detection method based on autoencoding
CN105512675B (zh) Feature selection method based on multi-point crossover gravitational search with memory
Luengo et al. Domains of competence of fuzzy rule based classification systems with data complexity measures: A case of study using a fuzzy hybrid genetic based machine learning method
Yuan et al. Recent Advances in Concept Drift Adaptation Methods for Deep Learning.
CN115422995A (zh) Intrusion detection method with improved social network and neural network
CN114549897A (zh) Training method and apparatus for a classification model, and storage medium
CN117076871B (zh) Battery fault classification method based on an imbalanced semi-supervised adversarial training framework
Bhowmik et al. Dbnex: Deep belief network and explainable ai based financial fraud detection
CN111178897B (zh) Cost-sensitive dynamic clustering method for fast feature learning on imbalanced data
Zhang et al. Ensemble classification for skewed data streams based on neural network
CN114265954B (zh) Graph representation learning method based on position and structure information
CN115879030A (zh) Network attack classification method and system for power distribution networks
CN111556018B (zh) CNN-based network intrusion detection method and electronic device
CN113283530A (zh) Image classification system based on cascaded feature blocks
CN113609480A (zh) Multi-path learning intrusion detection method based on large-scale network flows
Chen et al. Improving neural network classification using further division of recognition space
Cao et al. Detection and fine-grained classification of malicious code using convolutional neural networks and swarm intelligence algorithms
Cuicui et al. Data mining algorithm based on particle swarm optimized K-means

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21842841

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21842841

Country of ref document: EP

Kind code of ref document: A1