CN114511007B - Non-invasive electrical fingerprint identification method based on multi-scale feature perception - Google Patents

Non-invasive electrical fingerprint identification method based on multi-scale feature perception

Info

Publication number
CN114511007B
Authority
CN
China
Prior art keywords
layer
data
network
input
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210049795.XA
Other languages
Chinese (zh)
Other versions
CN114511007A (en)
Inventor
张珊珊 (Zhang Shanshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mengxiang Intelligent Technology Co ltd
Original Assignee
Shanghai Mengxiang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mengxiang Intelligent Technology Co ltd filed Critical Shanghai Mengxiang Intelligent Technology Co ltd
Priority to CN202210049795.XA priority Critical patent/CN114511007B/en
Publication of CN114511007A publication Critical patent/CN114511007A/en
Application granted granted Critical
Publication of CN114511007B publication Critical patent/CN114511007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Strategic Management (AREA)
  • Computational Linguistics (AREA)
  • Human Resources & Organizations (AREA)
  • Biophysics (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Biology (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of electrical fingerprint identification, and particularly relates to a non-invasive electrical fingerprint identification method based on multi-scale feature perception. The method comprises the following steps: acquiring electrical data of the target electrical appliances, blocking the data into segments of a specific length, labelling them, and dividing them into a training set, a validation set and a test set; preprocessing the acquired data; constructing a multi-scale perception convolutional neural network and determining its input and output formats; training the network and selecting suitable hyperparameters to obtain the best training effect; and connecting the trained network model to an electrical fingerprint identification task, performing real-time electrical fingerprint monitoring with the aggregate current and voltage as input. The invention detects the working states of the appliances on the corresponding circuit by constructing and training a multi-scale feature perception convolutional neural network. It realizes end-to-end integrated operation, reduces the error accumulation caused by the cooperative work of separate parts after task decomposition, and achieves a better electrical fingerprint identification effect.

Description

Non-invasive electrical fingerprint identification method based on multi-scale feature perception
Technical Field
The invention belongs to the technical field of electrical fingerprint identification, and particularly relates to a non-invasive electrical fingerprint identification method.
Background
Just as every person has a fingerprint that serves as a unique "identification card", every electrical appliance has its own electrical "signature". Electrical fingerprinting identifies the working state of a given device from the various voltage and current characteristics it exhibits while drawing power, combined with deep learning methods, and can even warn of dangerous electricity use (illegal charging of electric bicycles, prohibited appliances in dormitories, and the like), thereby ensuring electrical safety. The multi-scale perception convolutional neural network proposed in the invention performs convolution at multiple scales on the raw current and voltage data to obtain multi-scale, multi-dimensional time-series features, and then uses the high-dimensional feature extraction capability of the convolutional neural network to profile these features, so that the feature information belonging to specific devices within a data segment is recognized and the corresponding appliance identity is identified. When applied to real-time circuit monitoring, the method yields the detailed working states of the devices in the circuit, enabling detailed monitoring of circuit power consumption or warnings about dangerous electricity use.
Disclosure of Invention
The invention aims to provide a non-invasive electrical fingerprint identification method that identifies the working states of electrical equipment with high accuracy.
The non-invasive electrical fingerprint identification method provided by the invention is based on a multi-scale feature perception neural network model: the current and voltage data of a circuit are acquired in real time through a non-invasive sensor, and the working states of the appliances in that circuit are identified with high accuracy.
The neural network model provided by the invention improves on existing neural networks for non-intrusive load monitoring (NILM) tasks. By extracting features at different scales from the preprocessed raw data, it gains a stronger ability to distinguish and recognize the characteristics peculiar to different appliances and different working states. The working states of the electrical equipment in the circuit can thus be identified with higher accuracy, the circuit can be monitored, and dangerous electricity use can be flagged.
The invention provides a non-invasive electrical fingerprint identification method based on multi-scale feature perception, which comprises the following specific steps:
step 1: acquiring electrical data of a target electrical appliance, carrying out block processing according to a specific length, marking, and dividing a training set, a verification set and a test set;
and 2, step: preprocessing the data obtained in the previous step;
and 3, step 3: constructing a multi-scale perception convolutional neural network, and determining the input and output formats of the network;
and 4, step 4: training the network, selecting proper super parameters to obtain the best training effect;
and 5: and (3) accessing the trained network model into an electrical fingerprint identification task, and performing real-time electrical fingerprint monitoring by taking the aggregation current and voltage as input.
The following is a further detailed description of the various steps:
step 1: and (3) carrying out electrical data acquisition on the target electrical appliance, carrying out block processing according to a specific length, marking, and dividing a training set, a verification set and a test set.
The existing public data sets in the non-intrusive load monitoring field are not many, the problems of lack of unified standardization, less maximum concurrent load and the like exist, and therefore the data set is automatically acquired by taking the own requirements as targets according to the network characteristics of the invention. The acquisition mode is as follows: firstly, defining specific quantity and types of electric appliances to be identified, fixing sampling frequency by taking the electric appliances as targets, and firstly, respectively acquiring current and voltage data of single equipment in each working state; and then all the devices are arranged and combined, and data acquisition is carried out on the multiple devices which are not started simultaneously but are superposed in working states. Then, the acquired data are segmented: dividing the acquired continuous time sequence into a plurality of data segments in a sliding window mode, wherein the window size is fixed (W), the step is fixed (S), and the number of the obtained segmented data blocks is as follows under the assumption that the length of the time sequence data is L: (L-W)/S. For the divided data, labeling is performed according to the electric appliance state contained in the divided data, for example: no load, no load + blower start + blower first gear, etc. And then dividing the data set according to a specific proportion to obtain a training set, a verification set and a test set.
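The sliding-window blocking described above can be illustrated with a short Python sketch; the function and variable names are illustrative only and do not appear in the original disclosure.

import numpy as np

def segment_series(series, window, stride):
    # Split a (channels, L) time series into (N, channels, window) blocks using
    # a sliding window of fixed size `window` advanced by `stride` samples,
    # giving on the order of (L - window) / stride blocks as stated above.
    length = series.shape[-1]
    starts = range(0, length - window + 1, stride)
    return np.stack([series[..., s:s + window] for s in starts])

# Example: a two-channel (current, voltage) recording cut into 900-sample blocks.
recording = np.random.randn(2, 20_000)
blocks = segment_series(recording, window=900, stride=3)
print(blocks.shape)  # (6367, 2, 900)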
Step 2: preprocess the data obtained in the previous step.
The dataset obtained in step 1 contains only the raw continuous current and voltage values and the label of each data block. To mine the data features more deeply, the data are preprocessed from multiple angles and at multiple levels, specifically: computing the active power, performing spectral decomposition (Fourier transform) and time-series decomposition (STL seasonal-trend decomposition), and so on. The preprocessed results together with the raw data are then used as the network's input for training and prediction.
The data preprocessing specifically comprises the following steps:
(1) computing the corresponding active power from the real-time current and voltage;
(2) taking one data segment as the unit, applying the Fourier transform to the current and voltage data;
(3) taking one data segment as the unit, extracting frequency-domain information and performing time-series decomposition.
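A minimal sketch of steps (1) to (3) follows, assuming a 50 Hz mains supply sampled at 1 kHz (an STL period of 20 samples). The exact channel composition that yields the (8, 900) input format used in the embodiment is not fully specified, so the eight-channel stack below (raw signals, active power, spectra, and the STL components of the current) is only one plausible reading.

import numpy as np
from statsmodels.tsa.seasonal import STL

def preprocess_block(current, voltage, period=20):
    # Build a multi-channel feature block from one current/voltage segment:
    # raw signals, instantaneous active power, FFT magnitudes, and the STL
    # seasonal/trend/residual components of the current.
    power = current * voltage                       # instantaneous active power
    spec_i = np.abs(np.fft.fft(current))            # current spectrum (same length as input)
    spec_v = np.abs(np.fft.fft(voltage))            # voltage spectrum
    stl = STL(current, period=period).fit()         # seasonal-trend decomposition
    return np.stack([current, voltage, power, spec_i, spec_v,
                     stl.seasonal, stl.trend, stl.resid])  # (8, window)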
Step 3: construct the multi-scale perception convolutional neural network and determine the input and output formats of the network.
The network model constructed in the invention comprises three parts: the first part is a multi-scale feature perception network module, used to realize multi-scale feature perception; the second part is a global feature perception network module, used to realize global feature perception; the third part is a multi-label classification network, used to complete the classification task; specifically, it maps the features extracted by the previous parts and finally yields the prediction of the appliance identities.
The specific structure and input/output format of each part of the network are as follows:
(I) Multi-scale feature perception network module
The module has a two-layer structure:
First layer: composed of 3 one-dimensional convolution blocks whose kernel sizes differ, being 1, 5 and 9 respectively, with corresponding padding lengths of 0, 2 and 4; each block has a convolution stride of 1, 6 input channels and 8 output channels;
Second layer: a two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 24 input channels and 32 output channels, followed by a ReLU layer.
(II) Global feature perception network module
The module has nine layers:
First layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 32 input channels and 64 output channels, followed by a ReLU layer;
Second layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 64 input channels and 64 output channels, followed by a ReLU layer;
Third layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer;
Fourth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer;
Fifth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 128 output channels, followed by a global max pooling layer and then a ReLU layer;
Sixth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 128 input channels and 64 output channels, followed by a ReLU layer;
Seventh layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 32 output channels, followed by a ReLU layer;
Eighth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 32 input channels and 16 output channels, followed by a ReLU layer;
Ninth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 16 input channels and 4 output channels, followed by a ReLU layer.
(III) Multi-label classification network
This network maps the features extracted by the previous parts and finally yields the prediction of the appliance identities; it comprises two layers:
First layer: a fully connected layer with 900 (4 × 15 × 15) input neuron nodes and 256 output neuron nodes, followed by a ReLU activation function;
Second layer: a fully connected layer with 256 input neuron nodes and as many output neuron nodes as the length of the one-hot result array, followed by a Sigmoid activation function.
Step 4: train the network and select suitable hyperparameters to obtain the best training effect.
For training, the stochastic gradient descent (SGD) algorithm is selected as the network optimizer, a cosine annealing schedule dynamically adjusts the learning rate, and a cross-entropy loss function is used to compute the loss on the network's predictions and back-propagate it:
Loss = CrossEntropyLoss(ŷ, y)
where ŷ is the prediction result of the model, y is the true label corresponding to the sample, and CrossEntropyLoss is the built-in cross-entropy loss function in PyTorch's nn.functional module.
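A hedged training-loop sketch is shown below. The optimizer and scheduler follow the text (SGD with cosine annealing); because the network ends in a Sigmoid over multi-hot labels, binary cross-entropy is used here as the closest drop-in for the CrossEntropyLoss named above. The default values (150 epochs, initial learning rate 0.1) are taken from the embodiment, the batch size of 64 would be set on the DataLoader, and the loader is assumed to yield (8, 900) feature blocks with multi-hot label vectors.

import torch
import torch.nn.functional as F
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

def train(model, train_loader, epochs=150, lr=0.1, device="cpu"):
    # SGD optimizer, cosine-annealed learning rate, cross-entropy-style loss
    # with backpropagation, as described in step 4.
    model.to(device)
    optimizer = SGD(model.parameters(), lr=lr)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:              # x: (batch, 8, 900), y: multi-hot labels
            x, y = x.to(device), y.to(device).float()
            optimizer.zero_grad()
            y_hat = model(x)                   # sigmoid outputs in [0, 1]
            loss = F.binary_cross_entropy(y_hat, y)
            loss.backward()                    # back-propagate the loss
            optimizer.step()
        scheduler.step()                       # cosine annealing of the learning rate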
Step 5: connect the trained network model to the electrical fingerprint identification task and perform real-time electrical fingerprint monitoring with the aggregate current and voltage as input.
As an example, take the identification of the working states of the electrical appliances of a building as the target. Non-invasive current and voltage sensing equipment is installed on the building's main supply line; the sensors acquire the current and voltage data in the circuit at a specific frequency and transmit the acquired data in chronological order, in real time, to a data buffering module, which maintains the most recent two seconds of time-series data and forwards them to the model's data processing module at a specific interval. After receiving data of the specified length, the data processing module divides the data without overlap using the same window size as the training set, preprocesses the divided data, and sends them to the model's prediction module. The prediction results are sent in real time to a visualization module, which converts the predicted device-state labels into the corresponding device states and displays them, thereby realizing real-time monitoring of the states of the working appliances.
The invention detects the working states of the appliances on the corresponding circuit by constructing and training a multi-scale feature perception convolutional neural network. It realizes end-to-end integrated operation (from the raw sensor data to the corresponding appliance identities), reduces the error accumulation caused by the cooperative work of separate parts after task decomposition, and achieves a better electrical fingerprint identification effect.
Drawings
Fig. 1 is a flowchart of the non-invasive electrical fingerprint identification method based on multi-scale feature perception according to the present invention.
Fig. 2 is a diagram of the network model architecture of the present invention.
Detailed Description
The technical solution of the invention is explained in detail below with reference to the drawings and an embodiment.
Examples
The invention provides a method that extracts electrical fingerprint features using, as input, the aggregate current and voltage data obtained through non-invasive load monitoring; the specific steps are as follows:
Step 1: acquire electrical data of the target electrical appliances, block the data into segments of a specific length, label them, and divide them into a training set, a validation set and a test set;
Step 2: preprocess the data obtained in the previous step in the specified manner;
Step 3: construct the multi-scale perception convolutional neural network and determine the input and output formats of the network;
Step 4: train the network and select suitable hyperparameters to obtain the best training effect;
Step 5: connect the trained network model to the electrical fingerprint identification task and perform real-time electrical fingerprint monitoring with the aggregate current and voltage as input.
The following is a further detailed description of the various steps:
step 1: and (3) carrying out electrical data acquisition on the target electrical appliance, carrying out block processing according to a specific length, marking, and dividing a training set, a verification set and a test set.
Firstly, defining specific quantity and types of electric appliances to be identified, specifically comprising: the electric hair drier, the electric iron, the dust collector, the small electric cooker, the warm air blower and the blower are used as targets, firstly, current and voltage data of single equipment in each working state are respectively collected, the collection frequency is 1KHz, the equipment to be collected is started from a closed state, then works in a stable state, and is then closed, and the whole process is about 20s to 40 s; and then all the devices are arranged and combined, and data acquisition with different starting of the devices and overlapping of working states is carried out, wherein the time duration is different from 40s to 140 s. Then, the acquired data are segmented: the collected continuous time sequence is divided into a plurality of data segments in a sliding window mode, the window size is fixed to 900, and the pace is fixed to 3. For the divided data, labeling is performed according to the electric appliance state contained in the divided data, for example: no load ([ 1, 0, 0, 0, 0, 0, 0 ]), no load + blower activation ([ 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]), no load + blower activation + blower first gear, etc ([ 1, 1, 0, 0, 0, 0, 0, 1, 1, 0 ]). And then dividing the data set according to the ratio of 6: 1: 3 to obtain a training set, a verification set and a test set.
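The multi-hot labelling can be sketched as follows; the slot list below is hypothetical, and the exact label layout (the vectors quoted above) is defined by the original dataset rather than by this sketch.

import numpy as np

# Hypothetical state slots; the patent's own label vectors enumerate the states
# of the six appliances listed above together with the no-load condition.
STATE_SLOTS = ["no_load", "hair_dryer_on", "iron_on", "vacuum_on",
               "cooker_on", "warm_air_blower_on", "blower_on"]

def encode_label(active_states):
    # Return a multi-hot vector with a 1 in every slot whose state is active.
    label = np.zeros(len(STATE_SLOTS), dtype=np.float32)
    for state in active_states:
        label[STATE_SLOTS.index(state)] = 1.0
    return label

print(encode_label(["no_load", "blower_on"]))  # [1. 0. 0. 0. 0. 0. 1.]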
Step 2: and preprocessing the data obtained in the last step in a specific mode.
The content of the data set obtained in the step one only comprises original continuous current and voltage values and a label corresponding to the data block, and in order to carry out deeper mining on data characteristics, multi-angle and multi-level preprocessing is carried out on data. The method specifically comprises the following steps: calculating corresponding active power according to the data segments to obtain an active power data sequence; respectively carrying out Fourier transform on the current data and the voltage data to obtain corresponding frequency spectrum information; and respectively carrying out STL decomposition on the current data and the voltage data to obtain corresponding season, trend and residual error information. And training and predicting by taking the preprocessed results and the original data as input data of the network, wherein the final format of the data is as follows: (8, 900).
Step 3: construct the multi-scale perception convolutional neural network and determine the input and output formats of the network.
The network model constructed in the invention comprises three parts: the first part is a one-dimensional convolution part that realizes multi-scale feature perception, the second part is a two-dimensional convolutional neural network that realizes global feature perception, and the third part is a fully connected network that completes the classification task. The network input has length 900 and 8 channels; the specific implementation and the input and output formats of each part are as follows:
(I) Multi-scale feature perception network module
The module has a two-layer structure:
First layer: composed of 3 one-dimensional convolution blocks whose kernel sizes differ, being 1, 5 and 9 respectively, with corresponding padding lengths of 0, 2 and 4; each block has a convolution stride of 1, 6 input channels and 8 output channels (with a layer input of (8, 900), each block outputs (8, 900); the outputs of the three blocks are concatenated to give (24, 900) and then reshaped into a 24-channel feature matrix of size 30 × 30, so the final output is (24, 30, 30));
Second layer: a two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 24 input channels and 32 output channels, followed by a ReLU layer (final output (32, 30, 30)).
(II) Global feature perception network module
The module has nine layers:
First layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 32 input channels and 64 output channels, followed by a ReLU layer (final output (64, 30, 30));
Second layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 64 input channels and 64 output channels, followed by a ReLU layer (final output (64, 30, 30));
Third layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer (final output (64, 30, 30));
Fourth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer (final output (64, 30, 30));
Fifth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 128 output channels, followed by a global max pooling layer and then a ReLU layer (final output (128, 15, 15));
Sixth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 128 input channels and 64 output channels, followed by a ReLU layer (final output (64, 15, 15));
Seventh layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 32 output channels, followed by a ReLU layer (final output (32, 15, 15));
Eighth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 32 input channels and 16 output channels, followed by a ReLU layer (final output (16, 15, 15));
Ninth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 16 input channels and 4 output channels, followed by a ReLU layer (final output (4, 15, 15)).
(III) Multi-label classification network
This network maps the features extracted by the previous parts and finally yields the prediction of the appliance identities; it comprises two layers:
First layer: a fully connected layer with 900 (4 × 15 × 15) input neuron nodes and 256 output neuron nodes, followed by a ReLU activation function;
Second layer: a fully connected layer with 256 input neuron nodes and as many output neuron nodes as the length of the one-hot result array, followed by a Sigmoid activation function.
Step 4: train the network and select suitable hyperparameters to obtain the best training effect.
For training, the stochastic gradient descent (SGD) algorithm is selected as the network optimizer, a cosine annealing schedule dynamically adjusts the learning rate, and a cross-entropy loss function is used to compute the loss on the network's predictions and back-propagate it:
Loss = CrossEntropyLoss(ŷ, y)
where ŷ is the prediction result of the model, y is the true label corresponding to the sample, and CrossEntropyLoss is the built-in cross-entropy loss function in PyTorch's nn.functional module. In experiments, the proposed model converges gradually at around 150 epochs, with the initial learning rate set to 0.1 and a batch size of 64.
Step 5: connect the trained network model to the electrical fingerprint identification task and perform real-time electrical fingerprint monitoring with the aggregate current and voltage as input.
In this embodiment the power supply of a room is taken as the detection target: non-invasive current and voltage sensing equipment is installed on the room's main supply line, and the sensors acquire the current and voltage data in the circuit at 1 kHz. The acquired data are transmitted in chronological order, in real time, to a data buffering module, which maintains the most recent two seconds of time-series data and, at 1 s intervals, passes 1100 time-series samples (100 of them repeated) to the model's data processing module. After receiving data of this length, the data processing module divides the data with a window size of 900 and a stride of 11, preprocesses the divided data, and sends them to the model's prediction module. A vote is taken over the resulting predictions to select the final prediction result, which is converted into the corresponding device states by the visualization module.
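One monitoring step can be sketched as below; preprocess stands for a function like the preprocessing sketch given earlier, and the voting rule (mean sigmoid output per state, thresholded at 0.5) is an illustrative assumption, since the text only states that a vote selects the final result.

import numpy as np
import torch

@torch.no_grad()
def monitor_step(model, preprocess, current, voltage,
                 window=900, stride=11, threshold=0.5):
    # Slice the latest ~1100-sample buffer into overlapping 900-sample windows,
    # preprocess each window into the network's input format, predict, and vote.
    model.eval()
    starts = range(0, len(current) - window + 1, stride)
    blocks = np.stack([preprocess(current[s:s + window], voltage[s:s + window])
                       for s in starts])                          # (N, C, window)
    probs = model(torch.from_numpy(blocks).float())               # (N, num_states)
    votes = probs.mean(dim=0)                                     # average over windows
    return (votes > threshold).int().tolist()                     # multi-hot decision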
In the invention, the accuracy of the device-state detection results over the data in the detection period is used as the evaluation index. Table 1 gives the model's detection accuracy for each selected device on the validation set, the test set and in the actual experiment.
Table 1. Detection accuracy (%) for each device

Claims (1)

1. A non-invasive electrical fingerprint identification method based on multi-scale feature perception is characterized by comprising the following specific steps:
step 1: acquiring electrical data of a target electrical appliance, carrying out block processing according to a certain length, marking, and dividing a training set, a verification set and a test set;
and 2, step: pre-processing the collected electrical data, comprising:
calculating active power, and performing frequency spectrum decomposition and time sequence decomposition; using the preprocessed results and the original data as input data of the network together for training and predicting;
and step 3: constructing a multi-scale perception convolutional neural network and determining the input and output formats of the network
The constructed multi-scale perception convolutional neural network model comprises three parts: the multi-scale feature perception network module is used for realizing multi-scale feature perception; the global feature perception network module is used for realizing global feature perception; the multi-label classification network is used for completing classification tasks, and specifically maps the extracted features of the previous part to finally obtain the prediction of the identity of the electric appliance;
and 4, step 4: training network, selecting proper super parameter to obtain best training effect
Selecting a random gradient descent algorithm SGD as an optimizer of the network, dynamically adjusting the learning rate of the network by using a cosine annealing algorithm, and performing loss calculation and back propagation on a prediction result of the network by using a cross entropy loss function;
Loss = CrossEntropyLoss(ŷ, y)
where ŷ is the prediction result of the model, y is the true label corresponding to the sample, and CrossEntropyLoss is the built-in cross-entropy loss function in PyTorch's nn.functional module;
Step 5: connecting the trained network model to the electrical fingerprint identification task and performing real-time electrical fingerprint monitoring with the aggregate current and voltage as input;
the process of step 1 is as follows:
data are collected in the following manner: first, the specific number and types of electrical appliances to be identified are defined and, targeting these appliances, the sampling frequency is fixed at 1 kHz; current and voltage data of each single device in each working state are collected first; then all devices are arranged and combined, and data are collected for multiple devices that are not switched on simultaneously but whose working states overlap;
the collected data are then segmented: the continuous time series is divided into data segments with a sliding window of fixed size W and fixed stride S, so that for time-series data of length L the number of segmented data blocks is (L-W)/S; each segment is labelled according to the appliance states it contains;
the dataset is then divided in a specific ratio into a training set, a validation set and a test set;
the data preprocessing in step 2 comprises: calculating the active power, performing spectral decomposition, namely the Fourier transform, and time-series decomposition, namely STL seasonal-trend decomposition; specifically:
(1) calculating the corresponding active power from the real-time current and voltage;
(2) taking one data segment as the unit, applying the Fourier transform to the current and voltage data;
(3) taking one data segment as the unit, extracting frequency-domain information and performing time-series decomposition;
the specific structure of the multi-scale perception convolutional neural network and the input and output formats of the network in step 3 are as follows:
(I) the multi-scale feature perception network module has a two-layer structure:
first layer: composed of 3 one-dimensional convolution blocks whose kernel sizes are 1, 5 and 9 respectively, with corresponding padding lengths of 0, 2 and 4; each block has a convolution stride of 1, 6 input channels and 8 output channels;
second layer: a two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 24 input channels and 32 output channels, followed by a ReLU layer;
(II) the global feature perception network module comprises nine layers:
first layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 32 input channels and 64 output channels, followed by a ReLU layer;
second layer: two-dimensional convolution with a 3×3 kernel, stride 1, padding 1×1, 64 input channels and 64 output channels, followed by a ReLU layer;
third layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer;
fourth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 64 output channels, followed by a ReLU layer;
fifth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 128 output channels, followed by a global max pooling layer and then a ReLU layer;
sixth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 128 input channels and 64 output channels, followed by a ReLU layer;
seventh layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 64 input channels and 32 output channels, followed by a ReLU layer;
eighth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 32 input channels and 16 output channels, followed by a ReLU layer;
ninth layer: two-dimensional convolution with a 5×5 kernel, stride 1, padding 2×2, 16 input channels and 4 output channels, followed by a ReLU layer;
(III) the multi-label classification network comprises two layers:
first layer: a fully connected layer with 900 input neuron nodes and 256 output neuron nodes, followed by a ReLU activation function;
second layer: a fully connected layer with 256 input neuron nodes and as many output neuron nodes as the length of the one-hot result array, followed by a Sigmoid activation function.
CN202210049795.XA 2022-01-17 2022-01-17 Non-invasive electrical fingerprint identification method based on multi-scale feature perception Active CN114511007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210049795.XA CN114511007B (en) 2022-01-17 2022-01-17 Non-invasive electrical fingerprint identification method based on multi-scale feature perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210049795.XA CN114511007B (en) 2022-01-17 2022-01-17 Non-invasive electrical fingerprint identification method based on multi-scale feature perception

Publications (2)

Publication Number Publication Date
CN114511007A CN114511007A (en) 2022-05-17
CN114511007B true CN114511007B (en) 2022-12-09

Family

ID=81550560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210049795.XA Active CN114511007B (en) 2022-01-17 2022-01-17 Non-invasive electrical fingerprint identification method based on multi-scale feature perception

Country Status (1)

Country Link
CN (1) CN114511007B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field
CN112435142A (en) * 2020-12-16 2021-03-02 北京航空航天大学 Power load identification method and load power utilization facility knowledge base construction method thereof
CN113807225A (en) * 2021-09-07 2021-12-17 中国海洋大学 Load identification method based on feature fusion

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU1330501A (en) * 1999-10-07 2001-05-10 Veridicom, Inc. Spoof detection for biometric sensing systems
US9104189B2 (en) * 2009-07-01 2015-08-11 Mario E. Berges Gonzalez Methods and apparatuses for monitoring energy consumption and related operations
CN106097353B (en) * 2016-06-15 2018-06-22 北京市商汤科技开发有限公司 Method for segmenting objects and device, computing device based on the fusion of multi-level regional area
CN106199347B (en) * 2016-06-23 2019-02-22 福州大学 Fault arc detection method and detection device based on interference fingerprint identification
CN109376753B (en) * 2018-08-31 2022-06-28 南京理工大学 Probability calculation method for three-dimensional spatial spectrum space dimension pixel generic
CN109815339B (en) * 2019-01-02 2022-02-08 平安科技(深圳)有限公司 Knowledge extraction method and device based on TextCNN, computer equipment and storage medium
CN111582007A (en) * 2019-02-19 2020-08-25 富士通株式会社 Object identification method, device and network
CN113033633B (en) * 2021-03-12 2022-12-09 贵州电网有限责任公司 Equipment type identification method combining power fingerprint knowledge and neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field
CN112435142A (en) * 2020-12-16 2021-03-02 北京航空航天大学 Power load identification method and load power utilization facility knowledge base construction method thereof
CN113807225A (en) * 2021-09-07 2021-12-17 中国海洋大学 Load identification method based on feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于数据增强和深度学习的非侵入式负荷分解方法";刘睿迪;《中国优秀高级论文全文数据库工程科技Ⅱ辑》;20210815(第08期);1-64页 *
"基于机器学习理论的智能电网数据分析及算法研究";杨延东;《中国博士学位论文全文数据库工程科技Ⅱ辑》;20210115(第01期);1-14、30-81页 *

Also Published As

Publication number Publication date
CN114511007A (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN111612650B (en) DTW distance-based power consumer grouping method and system
CN108681747B (en) Rotary machine fault diagnosis and state monitoring system and method based on deep learning
CN107229602B (en) Method for identifying electricity consumption behavior of intelligent building microgrid
CN109685314B (en) Non-intrusive load decomposition method and system based on long-term and short-term memory network
CN110188920A (en) A kind of lithium battery method for predicting residual useful life
CN111368904B (en) Electrical equipment identification method based on electric power fingerprint
CN115170000B (en) Remote monitoring method and system based on electric energy meter communication module
CN111950596A (en) Training method for neural network and related equipment
CN112396087B (en) Method and device for analyzing power consumption data of solitary old people based on intelligent ammeter
CN104346503A (en) Human face image based emotional health monitoring method and mobile phone
CN103268519A (en) Electric power system short-term load forecast method and device based on improved Lyapunov exponent
CN112308124B (en) Intelligent electricity larceny prevention method for electricity consumption information acquisition system
CN111242276B (en) One-dimensional convolutional neural network construction method for load current signal identification
CN114201989A (en) Alternating current motor bearing fault diagnosis method adopting convolutional neural network and bidirectional long-time and short-time memory network
CN116502175A (en) Method, device and storage medium for diagnosing fault of graph neural network
CN112396098A (en) Non-embedded apartment electrical appliance load identification and analysis method, system and application
CN114693624A (en) Image detection method, device and equipment and readable storage medium
CN114970841A (en) Training method of battery state prediction model and related device
CN113554361A (en) Comprehensive energy system data processing and calculating method and processing system
CN114511007B (en) Non-invasive electrical fingerprint identification method based on multi-scale feature perception
CN116523681A (en) Load decomposition method and device for electric automobile, electronic equipment and storage medium
CN117055726A (en) Micro-motion control method for brain-computer interaction
CN113780457B (en) Abnormality detection method, device, equipment and medium for traditional Chinese medicine resource consumption
CN115169405A (en) Hotel guest room equipment fault diagnosis method and system based on support vector machine
CN110033082B (en) Method for identifying deep learning model in AI (Artificial intelligence) equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant