CN110213788B - WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics - Google Patents


Info

Publication number
CN110213788B
CN110213788B (application CN201910518513.4A)
Authority
CN
China
Prior art keywords
state
time
space
model
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910518513.4A
Other languages
Chinese (zh)
Other versions
CN110213788A (en)
Inventor
邬群勇
邓丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN201910518513.4A
Publication of CN110213788A
Application granted
Publication of CN110213788B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 - Supervisory, monitoring or testing arrangements
    • H04W24/06 - Testing, supervising or monitoring using simulated traffic
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 - Supervisory, monitoring or testing arrangements
    • H04W24/08 - Testing, supervising or monitoring using real traffic
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 - Network topologies
    • H04W84/18 - Self-organising networks, e.g. ad-hoc networks or sensor networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)

Abstract

The invention relates to a WSN anomaly detection and type identification method based on the spatio-temporal characteristics of data streams, comprising the following steps. Step S1: acquire the real-time data streams of a target node of the sensor network to be detected and of its neighbor nodes using a space-time sliding window. Step S2: map the acquired real-time data streams to the corresponding state space, construct them as Markov chains, and extract the spatio-temporal features of the real-time data streams by computing the state transition probability matrix and the cross state transition probability matrix from the Markov chains. Step S3: construct a multi-classification convolutional neural network model and train it to obtain a trained model. Step S4: input the spatio-temporal features of the real-time data stream into the trained multi-classification convolutional neural network model and compute the output result through forward propagation. Step S5: judge from the model output whether the data stream is anomalous, and distinguish fault anomalies from event anomalies. The invention enables real-time anomaly monitoring and anomaly type identification for wireless sensor networks.

Description

WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics
Technical Field
The invention relates to a WSN anomaly detection and type identification method based on data flow space-time characteristics.
Background
A wireless sensor network (WSN) is affected by factors such as severe weather, natural events, and instrument failures, which cause anomalies in sensor data. Because anomalous events and instrument faults can produce similar abnormal values, detecting sensor anomalies promptly and accurately while also distinguishing the anomaly type is of great significance for controlling data quality and identifying the source of an anomaly.
WSN anomaly detection has been a research focus in China and abroad in recent years. Anomaly detection based on the spatio-temporal features of the data and anomaly detection based on classification are the classical algorithm families in this field. Methods based on spatio-temporal features can judge the data type to some extent, but they do not combine the temporal and spatial features of the data and rely excessively on an assumed data distribution, which cannot reflect the data's real distribution. Classification-based methods are not limited by assumptions on data distribution or range, but because labeled data are limited, the classifier cannot capture all features of the anomalous data, making it difficult to further distinguish anomaly types.
Disclosure of Invention
In view of this, the present invention provides a method for detecting and identifying WSN anomalies based on data stream space-time characteristics, so as to implement real-time anomaly monitoring and anomaly type identification of a wireless sensor network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a WSN anomaly detection and type identification method based on data flow space-time characteristics comprises the following steps:
step S1, acquiring real-time data streams of a target node and neighbor nodes of the wireless sensor network by adopting a time-space sliding window;
step S2: mapping the acquired real-time data stream to a corresponding state space, constructing a Markov chain form, and extracting the space-time characteristics of the real-time data stream based on a Markov chain calculation state transition probability matrix and a cross state transition probability matrix;
step S3: constructing a multi-classification convolutional neural network model, and extracting a sample training multi-classification convolutional neural network model from historical data of a sensor to be tested to obtain a trained multi-classification convolutional neural network model;
step S4: inputting the space-time characteristics of the real-time data stream into a trained multi-classification convolutional neural network model, and calculating an output result through forward propagation;
step S5: and judging whether the data flow is abnormal or not according to the output result of the model, and distinguishing fault abnormality and event abnormality.
Furthermore, the space-time sliding window consists of the detection data of the target node and its neighbor nodes over the most recent W time instants, and the real-time data streams of the target node and the neighbor nodes are obtained through the space-time sliding window model.
Further, the mapping the acquired real-time data stream to a corresponding state space specifically includes:
let the sensor data sequence be { u }1,u2,...,ut},utMeasured by the sensor at time t, utIs characterized by Δ ut=ut-ut-1(ii) a Firstly, difference characteristics of each data in the sequence are calculated, and the original sequence is converted into a difference sequence { delta u1,Δu2,...,ΔutMapping the sensor data sequence to a corresponding state space according to the difference characteristic of the sensor data sequence according to a 3 sigma criterion; the state space S contains 9 states, a, b, c, d, e, f, g, h, k, for any value u in the sensor data sequencetIf u istCharacteristic of the difference Δ utIf the corresponding condition in the mapping function is satisfied, u will betMapping to a corresponding state in a state space S ═ { a, b, c, d, e, f, g, h, k }, where a specific mapping function is:
Figure BDA0002095819480000031
wherein σ is a difference sequence { Δ u }1,Δu2,...,ΔutStandard deviation of, sensor data sequence u1,u2,...,utThe state mapping is carried out to convert the state mapping into a state sequence s1,s2,...,st}。
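The mapping above can be sketched in a few lines: compute the difference sequence, then bin each Δu_t into one of the 9 states. Since the patent gives the mapping function only as an image, the σ-multiple thresholds and the function name below are assumptions for illustration only.

```python
import numpy as np

STATES = list("abcdefghk")  # the 9-state space S from the patent

def to_state_sequence(u):
    """Map a sensor data sequence {u_1..u_t} to a state sequence via Δu_t."""
    u = np.asarray(u, dtype=float)
    du = np.diff(u)                  # difference features Δu_t = u_t - u_{t-1}
    sigma = du.std() or 1.0          # σ of the difference sequence
    # Assumed bin edges (multiples of σ); the real thresholds are in the
    # patent's image-only mapping function.
    edges = sigma * np.array([-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0])
    return [STATES[i] for i in np.digitize(du, edges)]
```

Note that the state sequence has one fewer element than the data sequence, since the first reading has no predecessor to difference against.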
Further, the temporal features of the data stream are extracted by computing the state transition probability matrix from the Markov chain, and the spatial features by computing the cross state transition probability matrix, as follows:
(1) the calculation method of the state transition probability matrix is as follows:
A. The target node state sequence {s_1, s_2, ..., s_t} obtained from the state mapping is constructed as a first-order Markov chain model X = {X_1, X_2, ..., X_t}, in which the state at time t depends only on the state at time t-1. The state space of the model X is S = {a, b, c, d, e, f, g, h, k};
B. The state transition probability p_ij of the model X built from the target node state sequence is computed: p_ij is the probability that X is in state s_j at time t given that it was in state s_i at time t-1, where s_i, s_j ∈ S. It is computed as

p_ij = N(s_i → s_j) / N(s_i)

where N(·) counts the total number of occurrences of a state (or of a consecutive state pair) in the sequence;
C. All possible state transition probabilities p_ij of the target node state sequence over the state space S = {a, b, c, d, e, f, g, h, k} form the state transition probability matrix P = (p_ij), of size 9 × 9, where p_ij ≥ 0 and Σ_j p_ij = 1 for every row i.
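A minimal sketch of this count-based estimate, assuming rows with no observed transitions are left as all zeros (the patent does not say how empty rows are handled):

```python
import numpy as np

STATES = list("abcdefghk")

def transition_matrix(states):
    """9x9 matrix P with p_ij = N(s_i -> s_j) / N(s_i)."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((9, 9))
    for prev, cur in zip(states[:-1], states[1:]):
        counts[idx[prev], idx[cur]] += 1          # N(s_i -> s_j)
    rows = counts.sum(axis=1, keepdims=True)      # N(s_i)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

Each nonzero row of the result sums to 1, matching the constraint Σ_j p_ij = 1 above.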
(2) the calculation method of the cross state transition probability matrix is as follows:
A. The target node state sequence and the neighbor node state sequence obtained from the state mapping are each constructed as a Markov chain model. For a target node A and a neighbor node B of A, the state sequences of A and B are constructed as the Markov chain models X^A = {X^A_1, X^A_2, ..., X^A_t} and X^B = {X^B_1, X^B_2, ..., X^B_t}, whose state spaces are S_A = {a, b, c, d, e, f, g, h, k} and S_B = {a, b, c, d, e, f, g, h, k}, respectively.
B. The cross state transition probability p^AB_ij of the target node state sequence and the neighbor node state sequence is then computed: p^AB_ij is the probability that model X^B is in state s_j at time t given that model X^A was in state s_i at time t-1, where s_i ∈ S_A and s_j ∈ S_B. It is computed as

p^AB_ij = N(s^A_i → s^B_j) / N(s^A_i)

where N(·) counts the total number of occurrences of a state (or of a cross-node state pair);
C. Finally, all possible cross state transition probabilities p^AB_ij of the target node and neighbor node state sequences over the state spaces S_A = {a, b, c, d, e, f, g, h, k} and S_B = {a, b, c, d, e, f, g, h, k} form the cross state transition probability matrix P^AB = (p^AB_ij), of size 9 × 9, where p^AB_ij ≥ 0 and Σ_j p^AB_ij = 1 for every row i.
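The cross matrix can be sketched the same way as the ordinary transition matrix; the only change is that the conditioning state comes from node A's sequence at time t-1 while the target state comes from node B's sequence at time t. This is a hypothetical helper with the same zero-row convention as before:

```python
import numpy as np

STATES = list("abcdefghk")

def cross_transition_matrix(states_a, states_b):
    """9x9 matrix P^AB: entry (i, j) estimates P(B in s_j at t | A in s_i at t-1)."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((9, 9))
    for a_prev, b_cur in zip(states_a[:-1], states_b[1:]):
        counts[idx[a_prev], idx[b_cur]] += 1      # N(s^A_i -> s^B_j)
    rows = counts.sum(axis=1, keepdims=True)      # N(s^A_i)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```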
Further, the step S3 is specifically:
s31, constructing a multi-classification convolutional neural network model, wherein the model consists of an input layer, a convolutional layer C1, a pooling layer S1, a convolutional layer C2, a pooling layer S2, a full-connection layer FC1, a full-connection layer FC2 and an output layer, the total number of the layers is 8, and an activation function in the model is a ReLU function;
the role and specific parameter settings of each layer in the multi-class convolutional neural network model are as follows: an input layer: the input data is the space-time characteristics, space-time, of the space-time sliding window data extracted based on Markov
Characterized in that a space-time characteristic matrix set pi of the target site is composed of n 9 multiplied by 9 matrixes and comprises a state transition probability matrix pi of the target site11And a cross state transition probability matrix pi of the target station and the neighbor station12,Π13,...,Π1n
Alternating convolutional and pooling layers: the alternating convolutional and pooling layers, namely convolutional layer C1, pooling layer S1, convolutional layer C2, and pooling layer S2, extract feature maps of the spatio-temporal feature matrix set Π over different local regions. Convolutional layer C1 convolves the matrix set Π with 64 convolution kernels of size 3 × 3 and stride 1 × 1, and obtains the output feature maps through the ReLU activation function. Pooling layer S1 compresses the extracted feature maps through a pooling window of size 2 × 2 with stride 1 × 1, reducing redundant features. Convolutional layer C2 continues to extract higher-level features with 128 convolution kernels of size 3 × 3. Pooling layer S2 compresses the feature maps through pooling windows of size 2 × 2 with stride 1 × 1 to obtain the final features.
Fully connected layers: the model uses 2 fully connected layers, in the manner of a multilayer artificial neural network, to integrate and reduce the dimensionality of the feature maps produced by the convolutional and pooling layers. The numbers of neurons in the fully connected layers are 128 and 64, respectively; after passing through the 2 fully connected layers, the feature maps are converted into a 64-dimensional vector.
Output layer: the 64-dimensional vector output by the fully connected layers is connected to the output layer through a Softmax classifier, which computes the output probability of each class; the class with the highest output probability is the predicted class of the sample. The output layer contains 4 neurons, representing the types of target node data streams: normal, event anomaly, fixed fault anomaly, and drift fault anomaly. Whether the data stream is anomalous is judged from the output layer's result, and fault anomalies are distinguished from event anomalies.
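The padding mode of the convolutions is not stated in the text; the quick dimension walk below assumes "same" padding for the 3 × 3 convolutions and shows how a 9 × 9 input shrinks only at the stride-1 pooling layers before flattening into the fully connected layers:

```python
def out_size(size, kernel, stride=1, padding=0):
    """Standard output-size formula for one axis of a conv/pool layer."""
    return (size + 2 * padding - kernel) // stride + 1

s = 9                            # input: one 9x9 probability matrix
s = out_size(s, 3, padding=1)    # C1: 64 kernels, 3x3, stride 1, "same" -> 9
s = out_size(s, 2)               # S1: 2x2 pooling, stride 1             -> 8
s = out_size(s, 3, padding=1)    # C2: 128 kernels, 3x3, stride 1        -> 8
s = out_size(s, 2)               # S2: 2x2 pooling, stride 1             -> 7
flat = s * s * 128               # features entering FC1 (128) then FC2 (64)
```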
Step S32: normal, abnormal event, drift fault and fixed value fault samples are extracted from historical data of the sensor to form a training sample set and a test sample set;
step S33: mapping the training samples to corresponding state spaces to construct a Markov chain form, and extracting the space-time characteristics of the samples based on the Markov chain calculation state transition probability matrix and the cross state transition probability matrix;
step S34: and inputting the calculated space-time characteristics of the training samples into a multi-classification convolutional neural network model for training until the model is converged, and storing model structure information and model parameter information.
Furthermore, the training process of the multi-classification convolutional neural network model comprises forward propagation and backward propagation. In forward propagation, the convolutional and pooling layers extract different features of the input layer data, the fully connected layers integrate the features, the classification result is obtained through Softmax, and the cross-entropy loss is computed. In backward propagation, gradient values are computed according to the chain rule and the weights of all layers are updated by stochastic gradient descent.
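The forward/backward cycle can be illustrated with a single softmax layer standing in for the whole network. This is a deliberate simplification, not the patent's model; only the learning rate of 0.001 and the 4-class, 64-feature output stage match the embodiment.

```python
import numpy as np

def sgd_step(W, x, y, lr=0.001):
    """One forward pass (softmax + cross-entropy) and one SGD weight update."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()                               # softmax output probabilities
    loss = -np.log(p[y])                       # cross-entropy against true class y
    grad = np.outer(p - np.eye(len(p))[y], x)  # dL/dW by the chain rule
    return W - lr * grad, loss                 # gradient-descent update

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 64)) * 0.01            # 4 classes, 64-dim FC2 features
x = rng.normal(size=64)
W2, loss_before = sgd_step(W, x, y=2)
_, loss_after = sgd_step(W2, x, y=2)
```

Repeating the step on the same sample drives the loss down, which is the convergence criterion of the training loop in miniature.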
Compared with the prior art, the invention has the following beneficial effects:
1. The invention adopts a space-time sliding window model to acquire the real-time data streams of the sensors, enabling real-time anomaly monitoring and anomaly type identification of the wireless sensor network.
2. The method fully preserves the temporal and spatial features of the sensor data streams; compared with classical anomaly detection methods based on spatio-temporal features, it does not depend on an assumed distribution of the sensor data stream during anomaly detection, which better matches real situations.
3. The invention designs an 8-layer multi-classification convolutional neural network model to classify the spatio-temporal feature matrices of the data streams; compared with classical classification models, it can extract more classification-relevant information from limited sample data, achieves higher classification accuracy, and can effectively detect anomalies and distinguish anomaly types.
Drawings
FIG. 1 is a schematic flow chart of a method of an embodiment of the present invention;
FIG. 2 is a diagram of the space-time sliding window model in an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a WSN anomaly detection and type identification method based on the spatio-temporal characteristics of data streams, implemented on the SensorScope wireless sensor network and the labeled LWSNDR wireless sensor network. The specific implementation is as follows:
Step S1: construct and train the multi-classification convolutional neural network model through the following steps:
Step S11: first, normal, event-anomaly, drift-fault, and fixed-value-fault samples are extracted from the historical data of the sensors; the composition of the data sets is shown in Table 1. Then, 80% of the samples in each sample set are randomly selected as training samples and 20% as test samples.
(1) In the implementation case of the SensorScope wireless sensor network, sensors No. 2, No. 7 and No. 9 are selected as target nodes, the number of neighbor nodes is 4, temperature data collected by the nodes are used as detection variables, normal, environmental abnormality, drift fault and fixed value fault samples are selected from historical data, and 3 sample sets are collected in total. The dimension of the samples of the SensorScope sample set is 30 × 5, the rows of the samples represent different time instants, and the columns represent the target node and its neighbor nodes.
(2) In the marked LWSNDR wireless sensor network implementation case, sensor nodes of indoor No. 1 and outdoor No. 4 nodes interfered by a heating kettle in a single-hop wireless sensor network are selected as target nodes, and indoor No. 2 and outdoor No. 3 nodes not interfered are selected as respective neighbor nodes, event abnormal samples and normal samples are selected from collected humidity (H) and temperature (T) data, and 4 sample sets are collected. The LWSNDR sample set has a sample dimension of 34 × 2, rows of samples represent different time instants, and columns represent a target node and its neighbor nodes.
Table 1 sample set case
[Table 1 is given in the original only as an image and is not reproduced here.]
Step S12: and mapping the training samples to corresponding state spaces to construct a Markov chain form, and then calculating a state transition probability matrix and a cross state transition probability matrix based on the Markov chain to extract the space-time characteristics of the training samples.
(1) The state transition probability matrix and the cross state transition probability matrices of each sample in the SensorScope-2, SensorScope-7, and SensorScope-9 training sample sets are computed. For each sample, the state transition probability matrix is obtained from the target node state sequence, with size 9 × 9; the cross state transition probability matrices are obtained from the target node state sequence and the neighbor node state sequences, with overall size 4 × 9 × 9. Together these matrices form the spatio-temporal feature matrix set Π, whose data dimension is 5 × 9 × 9, and Π is the spatio-temporal feature of the sample.
(2) The state transition probability matrix and the cross state transition probability matrix of each sample in the LWSNDR-1-H, LWSNDR-1-T, LWSNDR-2-H, and LWSNDR-2-T training sample sets are computed. For each sample, the state transition probability matrix is obtained from the target node state sequence, with size 9 × 9; the cross state transition probability matrix is obtained from the target node state sequence and the neighbor node state sequence, with size 9 × 9. The two matrices form the spatio-temporal feature matrix set Π, whose data dimension is 2 × 9 × 9, and Π is the spatio-temporal feature of the sample.
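Assembling the set Π from the matrices computed above is a one-line stack; the sketch assumes the target node's transition matrix comes first, followed by one cross matrix per neighbor (giving 5 × 9 × 9 for the SensorScope case with 4 neighbors and 2 × 9 × 9 for LWSNDR with 1):

```python
import numpy as np

def feature_set(p_target, cross_list):
    """Stack the 9x9 transition matrix and cross matrices into Π (n x 9 x 9)."""
    return np.stack([p_target, *cross_list])
```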
Step S13: and constructing an 8-layer multi-classification convolution neural network model consisting of an input layer, alternate convolution and pooling layers, a full connection layer and an output layer, wherein the input of the model is the space-time characteristics of the sample, and the output is the abnormal type corresponding to the sample.
The model hyperparameter settings are: the learning rate (learning_rate) is 0.001, the learning rate decay rate (decay_rate) is 0.9, the number of iterations (epoch) is 200, and the dropout rate (dropout) is 0.2. Since the SensorScope and LWSNDR sample sets contain different numbers of samples, in this embodiment the batch size (batch_size) is set to 100 for the SensorScope sample sets and to 60 for the LWSNDR sample sets.
Step S14: and (3) sequentially inputting the space-time characteristics of the training samples into the multi-classification convolutional neural network model for training to obtain 7 trained models.
(1) In the embodiment of the SensorScope wireless sensor network, a SensorScope-2, SensorScope-7 and SensorScope-9 sample set is input into a multi-class convolutional neural network model for training, and well-trained S2, S7 and S9 models are obtained.
(2) In the implementation case of the LWSNDR wireless sensor network, LWSNDR-1-H, LWSNDR-1-T, LWSNDR-2-H and LWSNDR-2-T sample sets are input into a multi-class convolutional neural network model for training to obtain well-trained models L1-H, L1-T, L2-H and L2-T.
The model training comprises the following contents:
(1) Forward propagation. The layers of the model propagate forward in order; the output of layer L is the input of layer L+1. At the end of forward propagation, a loss function is defined to measure the difference between the classification result output by the network and the sample's true label; this embodiment uses the cross-entropy loss function.
(2) Backward propagation. The weight gradients are first computed from back to front using the chain rule, and the corresponding weights of each layer are then updated by stochastic gradient descent, combining the learning rate and the weight gradients.
(3) Iterative training of the network. The multi-classification convolutional neural network model is trained over multiple iterations, each comprising one forward propagation and one backward propagation. Iteration continues until the model converges, after which the model structure information and parameter information are saved.
Step S2: and acquiring real-time data streams of a target node and neighbor nodes of the wireless sensor network by adopting a time-space sliding window.
(1) In the implementation case of the SensorScope wireless sensor network, sensor data which does not participate in model training is taken as real-time data, real-time temperature data streams of No. 2, No. 7 and No. 9 target nodes and neighbor nodes thereof are obtained by adopting a space-time sliding window, 3 real-time data streams are obtained, and the dimensionality of each data stream is 30 multiplied by 5.
(2) In the implementation case of the LWSNDR wireless sensor network, sensor data which does not participate in model training is regarded as real-time data, and a time-space sliding window is adopted to obtain real-time temperature data streams and real-time humidity data streams of indoor number 1 and outdoor number 4 target nodes and neighbor nodes thereof, so that 4 real-time data streams are obtained, and the dimensionality of each data stream is 34 × 2.
The constructed spatio-temporal sliding window model is as follows:
The space-time sliding window consists of the detection data of the target node and its neighbor nodes over the most recent W time instants. The constructed window is shown in Fig. 2; its size is n × W. Rows represent different nodes, where S_1 denotes the target node and {S_2, ..., S_n} denotes the set of neighbor nodes within the target node's communication range. Columns represent the sensed values of the sensor nodes at different times, with {u_i1, u_i2, ..., u_iW} denoting the detection sequence of node S_i over the most recent W time instants. When new data are generated, the whole window slides forward by one position: the data at the tail of the window are deleted and the new data are appended, thereby updating the space-time sliding window.
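The update rule just described maps naturally onto fixed-length deques, one per node. The class below is an illustrative sketch (the names are assumed, not from the patent), where pushing a new column of readings automatically evicts the oldest one:

```python
from collections import deque

class SpatioTemporalWindow:
    """n x W sliding window: one row per node (target node first), W readings each."""

    def __init__(self, n_nodes, width):
        self.rows = [deque(maxlen=width) for _ in range(n_nodes)]

    def push(self, readings):
        """Append one new reading per node; deque(maxlen=W) drops the oldest."""
        for row, value in zip(self.rows, readings):
            row.append(value)

    def snapshot(self):
        """Current n x W window contents, oldest reading first in each row."""
        return [list(row) for row in self.rows]
```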
Step S3: mapping the acquired real-time data stream to a corresponding state space to construct a Markov chain form, and calculating a state transition probability matrix and a cross state transition probability matrix based on the Markov chain to extract the space-time characteristics of the real-time data stream;
(1) In the SensorScope wireless sensor network implementation case, the computed spatio-temporal features of a real-time data stream form a spatio-temporal feature matrix set Π composed of the state transition probability matrix and the cross state transition probability matrices; the data dimension of Π is 5 × 9 × 9.
(2) In an implementation case of the LWSNDR wireless sensor network, the calculated time-space characteristics of the real-time data stream are a time-space characteristic matrix set pi formed by a state transition probability matrix and a cross state transition probability matrix, and the data dimension of pi is 2 × 9 × 9.
Step S4: and loading the stored model and the trained model parameters, respectively inputting the acquired space-time characteristic matrix of the real-time data stream into the corresponding trained classification convolution neural network model, and calculating an output result through forward propagation.
(1) In the implementation case of the SensorScope wireless sensor network, the spatio-temporal characteristics of the real-time data streams of the target node No. 2, the target node No. 7 and the target node No. 9 and the neighbor nodes are respectively input into the models S2, S7 and S9, and the output result is calculated.
(2) In the LWSNDR wireless sensor network implementation case, real-time temperature data streams and real-time humidity data streams of an indoor No. 1 target node, an outdoor No. 4 target node and neighbor nodes thereof are respectively input into L1-H, L1-T, L2-H and L2-T models, and output results are calculated.
Step S5: and judging whether the real-time data flow is abnormal according to the model result, and distinguishing fault abnormality and event abnormality. If the real-time data stream has fault abnormality or event abnormality, the target node corresponding to the real-time data stream has fault or abnormal events exist near the target node, so that the abnormality detection and type identification of the wireless sensor network are realized.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (5)

1. A WSN anomaly detection and type identification method based on data flow space-time characteristics is characterized by comprising the following steps:
step S1, acquiring real-time data streams of a target node and neighbor nodes of the wireless sensor network by adopting a time-space sliding window;
step S2: mapping the acquired real-time data stream to a corresponding state space, constructing a Markov chain form, extracting the time characteristic of the real-time data stream based on a Markov chain calculation state transition probability matrix, and extracting the space characteristic of the real-time data stream based on a Markov chain calculation cross state transition probability matrix;
step S3: constructing a multi-classification convolutional neural network model, extracting a sample from historical data of the wireless sensor to train the multi-classification convolutional neural network model, and obtaining a trained multi-classification convolutional neural network model;
the step S3 specifically includes:
s31, constructing a multi-classification convolutional neural network model, wherein the model consists of an input layer, a convolutional layer C1, a pooling layer S1, a convolutional layer C2, a pooling layer S2, a full-connection layer FC1, a full-connection layer FC2 and an output layer, the total number of the layers is 8, and an activation function in the model is a ReLU function;
step S32: normal, abnormal event, drift fault and fixed value fault samples are extracted from historical data of the sensor to form a training sample set and a test sample set;
step S33: mapping the training samples to corresponding state spaces to construct a Markov chain form, and extracting the space-time characteristics of the samples based on the Markov chain calculation state transition probability matrix and the cross state transition probability matrix;
step S34: inputting the calculated space-time characteristics of the training samples into a multi-classification convolutional neural network model for training until the model is converged, and storing model structure information and model parameter information;
step S4: inputting the space-time characteristics of the real-time data stream into a trained multi-classification convolutional neural network model, and calculating an output result through forward propagation;
step S5: and judging whether the data flow is abnormal or not according to the output result of the model, and distinguishing fault abnormality and event abnormality.
2. The method for WSN anomaly detection and type identification based on data stream spatio-temporal features according to claim 1, wherein: the space-time sliding window consists of the detection data of the target node and its neighbor nodes at the W most recent instants, and the real-time data streams of the target node and the neighbor nodes are obtained through the space-time sliding window model.
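The space-time sliding window of claim 2 can be sketched as a fixed-length buffer per node; node identifiers and the window length W below are illustrative assumptions:

```python
# Sketch of the space-time sliding window: it retains the W most recent
# readings of the target node and of each neighbor node.
from collections import deque

class SpatioTemporalWindow:
    def __init__(self, node_ids, W):
        self.W = W
        self.buf = {nid: deque(maxlen=W) for nid in node_ids}

    def push(self, readings):
        """readings: dict node_id -> measurement at the current instant."""
        for nid, value in readings.items():
            self.buf[nid].append(value)

    def full(self):
        return all(len(d) == self.W for d in self.buf.values())

    def series(self, nid):
        """Return the node's windowed real-time data stream, oldest first."""
        return list(self.buf[nid])

# Hypothetical usage: a target node and one neighbor, W = 3.
win = SpatioTemporalWindow(["target", "n1"], W=3)
for t in range(5):
    win.push({"target": 20.0 + t, "n1": 19.5 + t})
```

After five pushes the window keeps only the three most recent readings per node, which is exactly the data the feature-extraction step consumes.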
3. The method for WSN anomaly detection and type identification based on data stream spatio-temporal features according to claim 1, wherein: the mapping of the acquired real-time data stream to the corresponding state space specifically includes:
let the sensor data sequence be {u_1, u_2, ..., u_t}, where u_t is the value measured by the sensor at time t and the difference feature of u_t is Δu_t = u_t − u_{t−1}; first the difference feature of each datum in the sequence is calculated, converting the original sequence into the difference sequence {Δu_1, Δu_2, ..., Δu_t}, and the sensor data sequence is then mapped to the corresponding state space according to its difference features by the 3σ criterion; the state space S contains 9 states in total, namely a, b, c, d, e, f, g, h and k; for any value u_t in the sensor data sequence, if its difference feature Δu_t satisfies the corresponding condition in the mapping function, u_t is mapped to the corresponding state in S = {a, b, c, d, e, f, g, h, k}; the mapping function assigns the state according to which of the nine 3σ-criterion intervals, delimited by multiples of σ, the difference feature Δu_t falls into,
where σ is the standard deviation of the difference sequence {Δu_1, Δu_2, ..., Δu_t}; the state mapping converts the sensor data sequence {u_1, u_2, ..., u_t} into the state sequence {s_1, s_2, ..., s_t}.
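The state mapping of claim 3 can be sketched as follows. The exact interval boundaries appear only in the patent's original figure; the partition below (Δu = 0 as the middle state e, then ±σ, ±2σ and ±3σ bands) is an assumed reading, not the patent's literal mapping function:

```python
# Sketch of the 3-sigma state mapping: measurements -> difference
# sequence -> states over S = {a, b, c, d, e, f, g, h, k}.
# The nine-interval partition used here is an assumption.
import statistics

def to_states(u):
    """Map a measurement sequence u to a state sequence of length len(u)-1."""
    diffs = [u[i] - u[i - 1] for i in range(1, len(u))]
    sigma = statistics.pstdev(diffs)   # sigma of the difference sequence
    seq = []
    for d in diffs:
        if d == 0:                     # unchanged reading -> middle state
            seq.append("e")
            continue
        m, neg = abs(d), d < 0
        if m < sigma:
            seq.append("d" if neg else "f")
        elif m < 2 * sigma:
            seq.append("c" if neg else "g")
        elif m < 3 * sigma:
            seq.append("b" if neg else "h")
        else:
            seq.append("a" if neg else "k")
    return seq
```

One convenient property of this assumed partition: a constant-value (fixed-value) fault produces a run of state "e", which the downstream transition matrices can distinguish cleanly.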
4. The WSN anomaly detection and type identification method based on data stream spatio-temporal features according to claim 3, wherein: the extraction of the temporal and spatial features of the data stream by calculating the state transition probability matrix and the cross-state transition probability matrix of the Markov chains comprises the following steps:
(1) the state transition probability matrix is calculated as follows:
A. the target node state sequence {s_1, s_2, ..., s_t} obtained by the state mapping is constructed as a first-order Markov chain model X = {X_1, X_2, ..., X_t}, in which the state at time t depends only on the state at time t−1; the state space of model X is S = {a, b, c, d, e, f, g, h, k};
B. the state transition probability p_ij of the model X constructed from the target node state sequence is calculated; p_ij is the probability that model X is in state s_j at time t given that it was in state s_i at time t−1, where s_i, s_j ∈ S; p_ij is calculated as
p_ij = N(s_i, s_j) / N(s_i),
where N(·) is used to calculate the total number of occurrences of a state, N(s_i, s_j) being the number of transitions from s_i to s_j;
C. all possible state transition probabilities p_ij of the target node state sequence over the state space S = {a, b, c, d, e, f, g, h, k} are calculated to form the state transition probability matrix P; the matrix P has size 9 × 9:
P = [ p_aa p_ab ... p_ak ; p_ba p_bb ... p_bk ; ... ; p_ka p_kb ... p_kk ],
where p_ij ≥ 0 and Σ_j p_ij = 1 for every row i;
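The state-transition computation in (1) can be sketched directly from counts; leaving never-visited rows at zero is a sketch choice (the row-sum condition in the claim holds once every state has been observed):

```python
# Sketch of the 9x9 state transition probability matrix:
# p_ij = N(s_i -> s_j) / N(s_i), estimated from one node's state sequence.
S = "abcdefghk"

def transition_matrix(states):
    idx = {s: i for i, s in enumerate(S)}
    counts = [[0] * 9 for _ in range(9)]
    for prev, cur in zip(states, states[1:]):
        counts[idx[prev]][idx[cur]] += 1      # count transitions prev -> cur
    P = []
    for row in counts:
        total = sum(row)                      # N(s_i): departures from s_i
        P.append([c / total if total else 0.0 for c in row])
    return P

# Tiny illustrative sequence: e -> e -> e -> f -> e
P = transition_matrix(list("eeefe"))
# Row for state e (index 4): two e->e transitions, one e->f transition.
```

Each row of P is then an empirical conditional distribution over the next state, which is the temporal feature fed to the CNN.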
(2) the cross-state transition probability matrix is calculated as follows:
A. the state-mapped target node state sequence and neighbor node state sequence are each constructed as a Markov chain model; for a target node A and a neighbor node B of node A, the state sequences of nodes A and B are constructed as the Markov chain models X^A = {X^A_1, X^A_2, ..., X^A_t} and X^B = {X^B_1, X^B_2, ..., X^B_t}, whose state spaces are S_A = {a, b, c, d, e, f, g, h, k} and S_B = {a, b, c, d, e, f, g, h, k}, respectively;
B. then, the cross-state transition probability p^AB_ij of the target node state sequence and the neighbor node state sequence is calculated; p^AB_ij is the probability that model X^B is in state s^B_j at time t given that model X^A was in state s^A_i at time t−1, where s^A_i ∈ S_A and s^B_j ∈ S_B; p^AB_ij is calculated as
p^AB_ij = N(s^A_i, s^B_j) / N(s^A_i),
where N(·) is used to calculate the total number of occurrences of a state;
C. finally, all possible cross-state transition probabilities p^AB_ij of the target node and neighbor node state sequences over the state spaces S_A = {a, b, c, d, e, f, g, h, k} and S_B = {a, b, c, d, e, f, g, h, k} are calculated to form the cross-state transition probability matrix P_AB; the matrix P_AB has size 9 × 9:
P_AB = [ p^AB_aa p^AB_ab ... p^AB_ak ; p^AB_ba p^AB_bb ... p^AB_bk ; ... ; p^AB_ka p^AB_kb ... p^AB_kk ],
where p^AB_ij ≥ 0 and Σ_j p^AB_ij = 1 for every row i.
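The cross-state computation in (2) differs from (1) only in pairing node A's state at time t−1 with node B's state at time t; a sketch under the same zero-row convention:

```python
# Sketch of the 9x9 cross-state transition probability matrix:
# p^AB_ij = N(A in s_i at t-1, B in s_j at t) / N(A in s_i).
S = "abcdefghk"

def cross_transition_matrix(states_a, states_b):
    idx = {s: i for i, s in enumerate(S)}
    counts = [[0] * 9 for _ in range(9)]
    # Pair A's state at t-1 with B's state at t.
    for prev_a, cur_b in zip(states_a, states_b[1:]):
        counts[idx[prev_a]][idx[cur_b]] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.0 for c in row])
    return P

# Tiny illustrative pair of sequences for nodes A and B.
Pab = cross_transition_matrix(list("eef"), list("efe"))
```

The row for A's state e here splits evenly between B next being in e or f, capturing the spatial correlation between the target node and its neighbor.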
5. The method for WSN anomaly detection and type identification based on data stream spatio-temporal features according to claim 1, wherein: the training process of the multi-classification convolutional neural network model comprises forward propagation and backward propagation; in the forward propagation process, different features of the input-layer data are extracted by the convolutional and pooling layers and integrated in the fully-connected layers, the classification result is obtained through Softmax, and the cross-entropy loss is calculated; in the backward propagation process, gradient values are calculated according to the chain rule, and the weights of all layers are updated by stochastic gradient descent.
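The training step of claim 5 (forward pass, softmax, cross-entropy, chain-rule gradients, SGD update) can be sketched in NumPy. The convolution/pooling stack is omitted; a single linear layer on the flattened features (two 9×9 matrices, four classes) stands in for the network, so all shapes and hyperparameters are illustrative assumptions:

```python
# Minimal sketch of one training step: forward -> softmax -> cross-entropy
# -> chain-rule gradient -> SGD update. Not the patent's full 8-layer CNN.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 2 * 9 * 9, 4      # P and P_AB flattened; 4 classes
W = rng.normal(scale=0.01, size=(n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(x, y, lr=0.01):
    """One SGD step on a mini-batch; x: (batch, 162), y: int labels."""
    global W, b
    probs = softmax(x @ W + b)            # forward propagation
    loss = -np.log(probs[np.arange(len(y)), y]).mean()  # cross-entropy
    d = probs.copy()
    d[np.arange(len(y)), y] -= 1.0        # dL/dlogits for softmax + CE
    d /= len(y)
    W = W - lr * (x.T @ d)                # chain rule through the layer
    b = b - lr * d.sum(axis=0)
    return loss

# Overfit one tiny synthetic batch to show the loss decreasing.
x = rng.normal(size=(8, n_features))
y = rng.integers(0, n_classes, size=8)
losses = [sgd_step(x, y) for _ in range(100)]
```

Repeating the step on a fixed batch drives the cross-entropy loss down from roughly ln 4 (uniform predictions), which is the convergence behavior step S34 relies on.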
CN201910518513.4A 2019-06-15 2019-06-15 WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics Active CN110213788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910518513.4A CN110213788B (en) 2019-06-15 2019-06-15 WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics

Publications (2)

Publication Number Publication Date
CN110213788A CN110213788A (en) 2019-09-06
CN110213788B true CN110213788B (en) 2021-07-13

Family

ID=67792875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910518513.4A Active CN110213788B (en) 2019-06-15 2019-06-15 WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics

Country Status (1)

Country Link
CN (1) CN110213788B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716843B (en) * 2019-09-09 2022-11-22 深圳壹账通智能科技有限公司 System fault analysis processing method and device, storage medium and electronic equipment
CN111356108B (en) * 2020-03-06 2022-04-19 山东交通学院 Neural network-based underwater wireless sensor network anomaly diagnosis method
CN111669373B (en) * 2020-05-25 2022-04-01 山东理工大学 Network anomaly detection method and system based on space-time convolutional network and topology perception
CN113946758B (en) * 2020-06-30 2023-09-19 腾讯科技(深圳)有限公司 Data identification method, device, equipment and readable storage medium
CN111880998B (en) * 2020-07-30 2022-09-02 平安科技(深圳)有限公司 Service system anomaly detection method and device, computer equipment and storage medium
CN112784896A (en) * 2021-01-20 2021-05-11 齐鲁工业大学 Time series flow data anomaly detection method based on Markov process
CN113469228A (en) * 2021-06-18 2021-10-01 国网山东省电力公司淄博供电公司 Power load abnormal value identification method based on data flow space-time characteristics
CN113590654B (en) * 2021-06-22 2022-09-09 中国人民解放军国防科技大学 Spacecraft attitude system anomaly detection method and device based on space-time mode network
CN113899809B (en) * 2021-08-20 2024-02-27 中海石油技术检测有限公司 In-pipeline detector positioning method based on CNN classification and RNN prediction
CN114338853B (en) * 2021-12-31 2022-09-20 西南民族大学 Block chain flow monitoring and detecting method under industrial internet
US20230244946A1 (en) * 2022-01-28 2023-08-03 International Business Machines Corporation Unsupervised anomaly detection of industrial dynamic systems with contrastive latent density learning
CN114781441B (en) * 2022-04-06 2024-01-26 电子科技大学 EEG motor imagery classification method and multi-space convolution neural network model

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106782504A (en) * 2016-12-29 2017-05-31 百度在线网络技术(北京)有限公司 Audio recognition method and device
CN109640335A (en) * 2019-02-28 2019-04-16 福建师范大学 Wireless sensor fault diagnosis algorithm based on convolutional neural networks

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
CN101442807B (en) * 2008-12-30 2012-09-05 北京邮电大学 Method and system for distribution of communication system resource
US8838520B2 (en) * 2010-04-06 2014-09-16 University Of Notre Dame Du Lac Sequence detection methods, devices, and systems for spectrum sensing in dynamic spectrum access networks
CN102323049B (en) * 2011-07-18 2013-07-03 福州大学 Structural abnormality detection method based on consistent data replacement under incomplete data
CN102612065B (en) * 2012-03-19 2014-05-28 中国地质大学(武汉) Quick fault-tolerance detection method for monitoring abnormal event by wireless sensor network
CN102655685B (en) * 2012-05-29 2014-12-03 福州大学 Task fault-tolerance allocation method for wireless sensor networks
CN103856966B (en) * 2012-11-28 2017-06-09 华为技术有限公司 The localization method and device of remote wireless network failure
WO2015154089A1 (en) * 2014-04-04 2015-10-08 Parkervision, Inc. An optimization of thermodynamic efficiency vs. capacity for communications systems
US10037025B2 (en) * 2015-10-07 2018-07-31 Business Objects Software Ltd. Detecting anomalies in an internet of things network
CN105205475B (en) * 2015-10-20 2019-02-05 北京工业大学 A kind of dynamic gesture identification method
CN105491614A (en) * 2016-01-22 2016-04-13 中国地质大学(武汉) Wireless sensor network abnormal event detection method and system based on secondary mixed compression
CN105760529B (en) * 2016-03-03 2018-12-25 福州大学 A kind of spatial index of mobile terminal vector data and caching construction method
US20180018970A1 (en) * 2016-07-15 2018-01-18 Google Inc. Neural network for recognition of signals in multiple sensory domains
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN106658590B (en) * 2016-12-28 2023-08-01 南京航空航天大学 Design and implementation of multi-person indoor environment state monitoring system based on WiFi channel state information
CN106960457B (en) * 2017-03-02 2020-06-26 华侨大学 Color painting creation method based on image semantic extraction and doodling
CN109447263B (en) * 2018-11-07 2021-07-30 任元 Space abnormal event detection method based on generation of countermeasure network

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN106782504A (en) * 2016-12-29 2017-05-31 百度在线网络技术(北京)有限公司 Audio recognition method and device
CN109640335A (en) * 2019-02-28 2019-04-16 福建师范大学 Wireless sensor fault diagnosis algorithm based on convolutional neural networks

Similar Documents

Publication Publication Date Title
CN110213788B (en) WSN (Wireless sensor network) anomaly detection and type identification method based on data flow space-time characteristics
CN108334936B (en) Fault prediction method based on migration convolutional neural network
CN110334765B (en) Remote sensing image classification method based on attention mechanism multi-scale deep learning
CN109141847B (en) Aircraft system fault diagnosis method based on MSCNN deep learning
CN110213244A (en) A kind of network inbreak detection method based on space-time characteristic fusion
CN111967486A (en) Complex equipment fault diagnosis method based on multi-sensor fusion
Yin et al. Wasserstein generative adversarial network and convolutional neural network (WG-CNN) for bearing fault diagnosis
CN106991666B (en) A kind of disease geo-radar image recognition methods suitable for more size pictorial informations
CN108958217A (en) A kind of CAN bus message method for detecting abnormality based on deep learning
CN111046961B (en) Fault classification method based on bidirectional long-time and short-time memory unit and capsule network
CN116757534A (en) Intelligent refrigerator reliability analysis method based on neural training network
CN112132430B (en) Reliability evaluation method and system for distributed state sensor of power distribution main equipment
CN112200121A (en) Hyperspectral unknown target detection method based on EVM and deep learning
CN110837865A (en) Domain adaptation method based on representation learning and transfer learning
CN115659174A (en) Multi-sensor fault diagnosis method, medium and equipment based on graph regularization CNN-BilSTM
CN117272196A (en) Industrial time sequence data anomaly detection method based on time-space diagram attention network
CN114818579A (en) Analog circuit fault diagnosis method based on one-dimensional convolution long-short term memory network
WO2022188425A1 (en) Deep learning fault diagnosis method integrating prior knowledge
CN109583456B (en) Infrared surface target detection method based on feature fusion and dense connection
CN111079348A (en) Method and device for detecting slowly-varying signal
CN115659258B (en) Power distribution network fault detection method based on multi-scale graph roll-up twin network
CN112699782A (en) Radar HRRP target identification method based on N2N and Bert
CN109389313B (en) Fault classification diagnosis method based on weighted neighbor decision
CN108898157B (en) Classification method for radar chart representation of numerical data based on convolutional neural network
CN116400168A (en) Power grid fault diagnosis method and system based on depth feature clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wu Qunyong

Inventor after: Deng Li

Inventor before: Wu Qunyong

Inventor before: Deng Liping

GR01 Patent grant