CN112528880B - WiFi CSI-based small sample adversarial learning action recognition method, system and terminal - Google Patents

WiFi CSI-based small sample adversarial learning action recognition method, system and terminal

Info

Publication number
CN112528880B
CN112528880B (application CN202011481268.3A)
Authority
CN
China
Prior art keywords
data
model
small sample
action
target position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011481268.3A
Other languages
Chinese (zh)
Other versions
CN112528880A (en)
Inventor
尹君豪 (Yin Junhao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202011481268.3A priority Critical patent/CN112528880B/en
Publication of CN112528880A publication Critical patent/CN112528880A/en
Application granted granted Critical
Publication of CN112528880B publication Critical patent/CN112528880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a WiFi CSI-based small sample adversarial learning action recognition method, system and terminal. The method comprises the following steps: preprocessing received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified; and inputting the small sample input data into a small sample adversarial learning action model so as to identify the target position action. The invention solves the prior-art problem that, when a large amount of data cannot be collected at a target position, in particular when the WiFi CSI is disturbed by noise and only a very small amount of data is available, a method is needed that can obtain a well-performing algorithm model for target position action recognition using only small sample data at the target recognition position. With the invention, different human actions can be identified accurately in smart spaces and similar scenarios, even when other objects introduce noise on the WiFi CSI data propagation path and only a small amount of data has been collected, and the convenience and privacy protection of capturing action features from WiFi CSI are improved.

Description

WiFi CSI-based small sample adversarial learning action recognition method, system and terminal
Technical Field
The invention relates to the field of data processing, and in particular to a small sample adversarial learning action recognition method, system and terminal based on WiFi CSI.
Background
In recent years, the Internet and smart devices have developed rapidly, greatly facilitating people's lives, and smart devices are now widely used in daily activities. People routinely perform various actions in daily life, and through these actions they can interact with smart devices or carry out everyday tasks.
Action recognition technology based on smart devices can recognize human actions from the action-related data collected by the devices, and the recognized actions can then be analyzed or monitored. WiFi is a wireless network transmission technology, and CSI is the channel state information of a WiFi transmission, describing how strongly the WiFi signal is attenuated on each propagation path. WiFi devices are common smart devices capable of sending and receiving WiFi CSI signals. When WiFi CSI signals propagate, human actions in the environment affect them and change their propagation paths. Actions performed by a person can therefore be analyzed from the changes in the WiFi CSI signals collected at the receiving end: amplitude and phase difference features are extracted from the CSI signals and fed into an algorithm model that identifies the actions.
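As an illustration of this feature extraction step, the following Python sketch derives amplitude and phase-difference sequences from a stream of complex CSI samples. The array layout (3 receive antennas by 30 subcarriers, giving 90-dimensional features) and the antenna pairing used for the phase difference are assumptions made for the example; they are not fixed at this point in the text.

import numpy as np

def csi_features(csi):
    """Derive amplitude and phase-difference features from complex CSI.

    csi: complex array of shape (T, 3, 30) -- T packets, 3 receive
    antennas, 30 subcarriers (an assumed layout giving 90 dimensions).
    Returns a (T, 90) amplitude array and a (T, 90) phase-difference array.
    """
    T = csi.shape[0]
    # Amplitude of every antenna/subcarrier pair, flattened to 90 dimensions.
    amplitude = np.abs(csi).reshape(T, -1)

    # Phase difference between antenna pairs (0,1), (1,2), (2,0);
    # multiplying by the conjugate cancels the common phase offset.
    pairs = [(0, 1), (1, 2), (2, 0)]
    phase_diff = np.concatenate(
        [np.angle(csi[:, a, :] * np.conj(csi[:, b, :])) for a, b in pairs],
        axis=1)                      # shape (T, 90)
    return amplitude, phase_diff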
When a person performs actions at different positions in the environment, the influence on the WiFi propagation paths differs, and so does the change pattern in the collected WiFi CSI data. Consequently, an algorithm model usually has to be built for every position at which action recognition is required, and a model with good performance often requires a large amount of data. When a large amount of data cannot be collected at a target position, in particular when the WiFi CSI is disturbed by noise and only a very small amount of data is available at that position, a method is needed that can obtain a well-performing algorithm model at the target recognition position using only small sample data.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a small sample adversarial learning action recognition method, system and terminal based on WiFi CSI, which solve the prior-art problem that, when a large amount of data cannot be collected at a target position, in particular when the WiFi CSI is disturbed by noise and only a very small amount of data is available at that position, a method is needed that can obtain a well-performing algorithm model for target position action recognition using only small sample data at the target recognition position.
To achieve the above and other related objects, the present invention provides a small sample adversarial learning action recognition method based on WiFi CSI, comprising: preprocessing received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified; and inputting the small sample input data into a small sample adversarial learning action model so as to identify the target position action.
In one embodiment of the present invention, the small sample adversarial learning action model comprises: an encoder model for encoding the small sample input data, wherein the small sample input data comprise source position data and target position data; a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities; a target position decoder model for decoding the encoded target position data to obtain target position action recognition class probabilities; and a discriminator model for decoding the encoded combined data, spliced from the source position data and the target position data, to obtain combined recognition class probabilities.
In an embodiment of the present invention, the training process of the small sample adversarial learning action model comprises: training only parameters of the encoder model and the source position decoder model using the source position dataset; freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset; and alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively; the combined dataset is obtained by splicing source position data and target position data.
In one embodiment of the present invention, alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively, comprises: training the parameters of the encoder model and the source position decoder model, and the parameters of the encoder model and the target position decoder model, using the source position dataset and the target position dataset, respectively; and, with the parameters of the encoder model frozen, training the parameters of the discriminator model using the combined dataset.
In an embodiment of the present invention, preprocessing the received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified comprises: extracting amplitude and phase difference data from the WiFi CSI data to be identified; segmenting, from the extracted amplitude and phase difference data, the amplitude and phase difference data corresponding to each segmented action; and padding and normalizing the amplitude and phase difference data of each segmented action to obtain identification input data containing the padded and normalized amplitude and phase difference data of each segmented action.
In an embodiment of the present invention, segmenting the amplitude and phase difference data corresponding to each segmented action from the extracted amplitude and phase difference data comprises: performing action detection on the extracted amplitude and phase difference data based on a sliding window method to obtain the start and stop times of each segmented action; and segmenting, based on the start and stop times of each segmented action, the amplitude and phase difference data corresponding to that action from the extracted amplitude and phase difference data.
In one embodiment of the present invention, the encoder model comprises: 3 convolution structures and 1 inception structure; and/or the structure of the source position decoder model, the target position decoder model and the discriminator model comprises: 3 fully connected layers.
To achieve the above and other related objects, the present invention provides a small sample adversarial learning action recognition system based on WiFi CSI, the system comprising: a preprocessing module for preprocessing the received WiFi CSI data to be identified so as to obtain small sample input data corresponding to the WiFi CSI data to be identified; and a recognition module, connected with the preprocessing module, for inputting the small sample input data into a small sample adversarial learning action model so as to recognize the target position action; wherein the small sample adversarial learning action model comprises: an encoder model for encoding the small sample input data, the small sample input data comprising source position data and target position data; a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities; a target position decoder model for decoding the encoded target position data to obtain target position action recognition class probabilities; and a discriminator model for decoding the encoded combined data, spliced from the source position data and the target position data, to obtain combined recognition class probabilities.
In an embodiment of the present invention, the training process of the small sample adversarial learning action model comprises: training only parameters of the encoder model and the source position decoder model using the source position dataset; freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset; and alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively; the combined dataset is obtained by splicing source position data and target position data.
To achieve the above and other related objects, the present invention provides a small sample adversarial learning action recognition terminal based on WiFi CSI, comprising: a memory for storing a computer program; and a processor for executing the above WiFi CSI-based small sample adversarial learning action recognition method.
As described above, the WiFi CSI-based small sample adversarial learning action recognition method, system and terminal of the invention have the following beneficial effects: the received WiFi CSI data to be identified are preprocessed to obtain small sample input data corresponding to the WiFi CSI data to be identified; the small sample input data are input into a small sample adversarial learning action model so as to identify the target position action. With the invention, different human actions can be identified accurately in smart spaces and similar scenarios, even when other objects introduce noise on the WiFi CSI data propagation path and only a small amount of data has been collected, and the convenience and privacy protection of capturing action features from WiFi CSI are improved.
Drawings
Fig. 1 is a flowchart illustrating a small sample adversarial learning action recognition method based on WiFi CSI according to an embodiment of the invention.
Fig. 2 is a flowchart illustrating a small sample adversarial learning action recognition method based on WiFi CSI according to an embodiment of the invention.
Fig. 3 is a schematic diagram of an encoder model according to an embodiment of the present invention.
Fig. 4 is a schematic diagram showing the structure of a source location decoder model and a target location decoder model according to an embodiment of the invention.
Fig. 5 is a schematic diagram showing the structure of a discriminator model in an embodiment of the invention.
Fig. 6 is a schematic diagram of a small sample adversarial learning action recognition system based on WiFi CSI according to an embodiment of the invention.
Fig. 7 is a schematic diagram of a small sample adversarial learning action recognition terminal based on WiFi CSI according to an embodiment of the invention.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure of this specification, which describes the embodiments of the invention by way of specific examples. The invention may also be implemented or applied through other, different specific embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit of the invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other as long as they do not conflict.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Spatially relative terms, such as "upper", "lower", "left", "right", "below", "above" and the like, may be used herein to describe the relationship of one element or feature to another element or feature as illustrated in the figures.
Throughout the specification, when a portion is said to be "connected" to another portion, this includes not only the case of "direct connection" but also the case of "indirect connection" with other elements interposed therebetween. In addition, when a certain component is said to be "included" in a certain section, unless otherwise stated, other components are not excluded, but it is meant that other components may be included.
The first, second, and third terms are used herein to describe various portions, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one portion, component, region, layer or section from another portion, component, region, layer or section. Thus, a first portion, component, region, layer or section discussed below could be termed a second portion, component, region, layer or section without departing from the scope of the present invention.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including" specify the presence of stated features, operations, elements, components, items, categories and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
The invention provides a WiFi CSI-based small sample adversarial learning action recognition method, which solves the prior-art problem that, when a large amount of data cannot be collected at a target position, in particular when the WiFi CSI is disturbed by noise and only a very small amount of data is available at that position, a method is needed that can obtain a well-performing algorithm model for target position action recognition using only small sample data at the target recognition position. Received WiFi CSI data to be identified are preprocessed to obtain small sample input data corresponding to the WiFi CSI data to be identified; the small sample input data are input into a small sample adversarial learning action model so as to identify the target position action. With the invention, different human actions can be identified accurately in smart spaces and similar scenarios, even when other objects introduce noise on the WiFi CSI data propagation path and only a small amount of data has been collected, and the convenience and privacy protection of capturing action features from WiFi CSI are improved.
In the invention, a set of WiFi transmitting and receiving devices is deployed to transmit and receive CSI data. When a user acts in the environment, the propagation paths of the WiFi CSI signal are disturbed, so the WiFi CSI signal changes. The WiFi CSI signal sequence collected at the receiving end therefore contains action information. By analyzing the change pattern of the WiFi CSI signal sequence, the pattern of each action can be captured, and action recognition can be achieved.
The embodiments of the present invention will be described in detail below with reference to the attached drawings so that those skilled in the art to which the present invention pertains can easily implement the present invention. This invention may be embodied in many different forms and is not limited to the embodiments described herein.
Fig. 1 shows a flowchart of a small sample adversarial learning action recognition method based on WiFi CSI in an embodiment of the invention.
The method comprises the following steps:
step S11: and preprocessing the received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified.
Optionally, preprocessing the received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified comprises: extracting amplitude and phase difference data from the WiFi CSI data to be identified; segmenting, from the extracted amplitude and phase difference data, the amplitude and phase difference data corresponding to each segmented action; and padding and normalizing the amplitude and phase difference data of each segmented action to obtain identification input data containing the padded and normalized amplitude and phase difference data of each segmented action, which serve as the input of the small sample adversarial learning action model.
Optionally, segmenting the amplitude and phase difference data corresponding to each segmented action from the extracted amplitude and phase difference data comprises: performing action detection on the extracted amplitude and phase difference data based on a sliding window method to obtain the start and stop times of each segmented action; and segmenting, based on the start and stop times of each segmented action, the amplitude and phase difference data corresponding to that action from the extracted amplitude and phase difference data.
For example, in a scenario in which the WiFi transmitting and receiving devices are deployed in an experimental room, action acquisition experiments are performed at several different positions, and 400 experimental samples (40 per action class) are collected at each position. The WiFi CSI stream received at the WiFi receiving end is a time series of 90-dimensional complex vectors containing several actions. It is first preprocessed into a 90-dimensional amplitude sequence and a 90-dimensional phase-difference sequence. Action detection is then performed on the data sequence with a sliding window method to obtain the start and stop times of each action. Each action is then segmented from the data sequence using its start and stop times, all actions are padded to the same length, and each dimension of the data is normalized, finally yielding the amplitude and phase-difference sequence of each action, which serves as the input of the small sample adversarial learning action model.
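A minimal Python sketch of this preprocessing pipeline (sliding-window action detection, segmentation, padding to a fixed length, and per-dimension normalization) is given below. The window size, variance threshold and padded length are illustrative assumptions rather than values taken from the text, and detect_actions and segment_pad_normalize are hypothetical helper names.

import numpy as np

def detect_actions(features, win=100, step=10, threshold=0.05):
    """Sliding-window action detection on a (T, D) feature sequence.

    A window is marked active when the mean per-dimension variance of the
    features inside it exceeds the threshold (the threshold value is
    illustrative and would be tuned to the data); runs of active windows
    are merged into (start, stop) sample index pairs.
    """
    T = features.shape[0]
    active = []
    for s in range(0, T - win + 1, step):
        segment = features[s:s + win]
        active.append(segment.var(axis=0).mean() > threshold)
    actions, start = [], None
    for k, flag in enumerate(active):
        if flag and start is None:
            start = k * step
        elif not flag and start is not None:
            actions.append((start, k * step + win))
            start = None
    if start is not None:
        actions.append((start, T))
    return actions

def segment_pad_normalize(features, actions, length=500):
    """Cut out each action, pad or truncate it to a fixed length and min-max
    normalize every dimension, returning an array of shape (N, length, D)."""
    samples = []
    for start, stop in actions:
        seg = features[start:stop]
        if seg.shape[0] < length:                       # zero-pad to length
            pad = np.zeros((length - seg.shape[0], seg.shape[1]))
            seg = np.vstack([seg, pad])
        seg = seg[:length]
        lo, hi = seg.min(axis=0), seg.max(axis=0)
        seg = (seg - lo) / (hi - lo + 1e-8)             # per-dimension min-max
        samples.append(seg)
    return np.stack(samples)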
Step S12: inputting the small sample input data into the small sample adversarial learning action model so as to identify the target position action.
Optionally, the small sample adversarial learning action model comprises: an encoder model for encoding the small sample input data, wherein the small sample input data comprise source position data and target position data; a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities; a target position decoder model for decoding the encoded target position data to obtain target position action recognition class probabilities; and a discriminator model for decoding the encoded combined data, spliced from the source position data and the target position data, to obtain combined recognition class probabilities.
Specifically, inputting the small sample input data into the small sample adversarial learning action model to identify the target position action proceeds as follows: as shown in fig. 2, after the small sample input data are encoded by the encoder model, the encoded source position data enter the source position decoder model to obtain the source position action recognition class probabilities, and the encoded target position data enter the target position decoder model to obtain the target position action class probabilities. The combined data, spliced from source position data and target position data and encoded by the encoder, are input into the discriminator model to obtain the combined recognition class probabilities.
It should be noted that the combined data are formed by splicing all or part of the target position data in the small sample data with randomly drawn data (all or part) from the source position data.
Optionally, the training process of the small sample adversarial learning action model comprises: training only parameters of the encoder model and the source position decoder model using the source position dataset; freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset; and alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively; the combined dataset is obtained by splicing source position data and target position data.
In the small sample adversarial learning action model, the goal of the encoder-decoder part is to make the predicted action class accurate, while the goal of the discriminator model is to make the predicted combination class accurate. The combination class carries information both about whether the actions come from the source position or the target position and about whether they belong to the same class, whereas the action class only carries information about which class an action belongs to. By setting a proper loss function, the objectives of the encoder-decoder and of the discriminator conflict, so that the encoder-decoder and the discriminator oppose each other during training, and the recognition performance improves through this opposition.
Optionally, the training of only parameters of the encoder model and the source position decoder model using the source position data set comprises:
training only parameters of the encoder model and the source position decoder model with a source position dataset based on a first loss function; wherein the first loss function comprises:
L_1 = -\frac{1}{m_s} \sum_{i=1}^{m_s} \sum_{c=1}^{n} y_s^{(i,c)} \log y_s'^{(i,c)}

wherein m_s is the number of samples in the source position dataset, n is the number of action categories, y_s^{(i,c)} is the actual class probability of the source position action for the i-th data sample and the c-th class, and y_s'^{(i,c)} is the corresponding source position action recognition class probability output by the source position decoder model.
Optionally, a certain proportion of the source position dataset is randomly sampled to form a source position training set, and the rest forms a source position test set. Preferably, 80% of the source position dataset is randomly sampled into the source position training set and the remaining 20% into the source position test set.
Optionally, the parameters of the encoder model are frozen and only the parameters of the discriminator model are trained using the combined dataset, based on a second loss function; wherein the second loss function comprises:
L_2 = -\frac{1}{m_g} \sum_{i=1}^{m_g} \sum_{c=1}^{4} y_g^{(i,c)} \log y_g'^{(i,c)}

wherein m_g is the number of samples in the combined dataset, y_g^{(i,c)} is the actual combination class probability of the i-th combined sample for the c-th combination class, and y_g'^{(i,c)} is the corresponding combined recognition class probability output by the discriminator model.
Optionally, one or more samples of each action class are randomly drawn from the target position dataset to form a target position training set, and the rest forms a target position test set. For example, with 10 action classes in total, 2 samples per action class are randomly drawn from the target position dataset into the target position training set, and the rest goes into the target position test set.
Optionally, alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively, comprises: training the parameters of the encoder model and the source position decoder model, and the parameters of the encoder model and the target position decoder model, using the source position dataset and the target position dataset, respectively; and, with the parameters of the encoder model frozen, training the parameters of the discriminator model using the combined dataset.
Optionally, training the parameters of the encoder model and the source position decoder model and the parameters of the encoder model and the target position decoder model using the source position dataset and the target position dataset, respectively, is performed as follows:
based on the third loss function, the encoder model is trained together with the source position decoder model and the target position decoder model, and the parameters of the target position decoder model are initialized with the parameters of the source position decoder model. The encoder and the source position decoder are trained with source position data, and the encoder and the target position decoder are trained with target position data. The third loss function is as follows:
L_3 = -\frac{1}{m_t} \sum_{i=1}^{m_t} \sum_{c=1}^{n} y_t^{(i,c)} \log y_t'^{(i,c)} + \frac{1}{m_g} \sum_{i=1}^{m_g} \sum_{j=1}^{4} y_g^{(i,j)} \log y_g'^{(i,j)}

wherein m_t is the number of samples of the target position data, y_t^{(i,c)} is the actual class probability of the target position action for the i-th data sample and the c-th class, y_t'^{(i,c)} is the corresponding predicted action class probability output by the target position decoder, y_g^{(i,j)} is the actual probability that the i-th combination belongs to the j-th combination class, and y_g'^{(i,j)} is the probability, predicted by the discriminator, that the i-th combination belongs to the j-th combination class. The second term is the opposite of the discriminator objective, so that minimizing the third loss function trains the encoder and the decoders both to classify actions correctly and to confuse the discriminator, realizing the adversarial training described above.
Optionally, with the parameters of the encoder model frozen, training the parameters of the discriminator model using the combined dataset comprises: training only the parameters of the discriminator model using the combined dataset based on the second loss function.
Optionally, by training the small sample adversarial learning action model over multiple iterations, action recognition can be achieved at a target position with a very small amount of data. The training process of the small sample adversarial learning action model comprises: step one, training only parameters of the encoder model and the source position decoder model using the source position dataset; step two, freezing parameters of the encoder model and training only parameters of the discriminator model using the combined dataset; step three, alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively. A maximum number of iterations and a batch size are set separately for step one, step two and step three in order to train the small sample adversarial learning action model.
For example, the maximum number of iterations in step one is 10000 with a batch size of 32, the maximum number of iterations in step two is 5000 with a batch size of 10, and the maximum number of iterations in step three is 20000 with a batch size of 10. As shown in Table 1, lines 1-7 correspond to step one, in which the parameters of the encoder model and the source position decoder model are updated; lines 8-14 correspond to step two, in which the parameters of the discriminator model are updated; and lines 15-27 correspond to step three, whose first part (lines 17-22) updates the parameters of the encoder model, the source position decoder model and the target position decoder model, and whose second part (lines 23-26) updates the parameters of the discriminator model.
Table 1: training method for small sample countermeasure learning action model
[Table 1, provided as an image in the original publication, lists the pseudocode of the three training steps described above.]
After the three training steps, the learning of the small sample adversarial learning action model is complete. The model can then be applied to action recognition tasks and can recognize actions at the target position using only a small amount of data.
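The following Python (PyTorch) sketch illustrates one possible implementation of this three-step training schedule. It assumes the encoder, the two position decoders and the discriminator are supplied as torch.nn modules that output class probabilities (softmax outputs), as in the structure sketches further below; the sample_batch helper, the separate Adam optimizers, the learning rate and the exact composition of the step-three encoder loss (whose minus sign on the discriminator term follows the reconstruction of the third loss function above) are assumptions made for illustration.

import torch
import torch.nn.functional as F

def prob_ce(probs, labels):
    # cross-entropy for models that already output softmax probabilities
    return F.nll_loss(torch.log(probs + 1e-8), labels)

def train(encoder, src_dec, tgt_dec, disc, sample_batch,
          iters=(10000, 5000, 20000), lr=1e-3):
    """Three-step schedule; sample_batch(name) is assumed to return a batch
    from the 'source', 'target' or 'combined' dataset (combined batches are
    pairs of inputs with one of four combination labels)."""
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=lr)
    opt_src = torch.optim.Adam(src_dec.parameters(), lr=lr)
    opt_tgt = torch.optim.Adam(tgt_dec.parameters(), lr=lr)
    opt_dis = torch.optim.Adam(disc.parameters(), lr=lr)

    # Step one: encoder + source decoder on source position data.
    for _ in range(iters[0]):
        x, y = sample_batch('source')
        loss = prob_ce(src_dec(encoder(x)), y)
        for o in (opt_enc, opt_src):
            o.zero_grad()
        loss.backward()
        for o in (opt_enc, opt_src):
            o.step()

    # Step two: discriminator only, encoder frozen (features detached).
    for _ in range(iters[1]):
        (xa, xb), yg = sample_batch('combined')
        feat = torch.cat([encoder(xa).detach(), encoder(xb).detach()], dim=1)
        loss = prob_ce(disc(feat), yg)
        opt_dis.zero_grad()
        loss.backward()
        opt_dis.step()

    # Step three: alternate encoder/decoders (adversarial) and discriminator.
    tgt_dec.load_state_dict(src_dec.state_dict())   # initialize target decoder
    for _ in range(iters[2]):
        xs, ys = sample_batch('source')
        xt, yt = sample_batch('target')
        (xa, xb), yg = sample_batch('combined')
        feat = torch.cat([encoder(xa), encoder(xb)], dim=1)
        loss_ed = (prob_ce(src_dec(encoder(xs)), ys)
                   + prob_ce(tgt_dec(encoder(xt)), yt)
                   - prob_ce(disc(feat), yg))        # oppose the discriminator
        for o in (opt_enc, opt_src, opt_tgt):
            o.zero_grad()
        loss_ed.backward()
        for o in (opt_enc, opt_src, opt_tgt):
            o.step()
        # discriminator update with encoder features detached
        feat = torch.cat([encoder(xa).detach(), encoder(xb).detach()], dim=1)
        loss_d = prob_ce(disc(feat), yg)
        opt_dis.zero_grad()
        loss_d.backward()
        opt_dis.step()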
Optionally, the structure of the encoder model includes: 3 convolution structures and 1 inception structure; and/or the structure of the source position decoder model, the target position decoder model and the discriminator model comprises: 3 fully connected layers.
Optionally, as shown in fig. 3, the structure of the encoder model comprises: 3 convolution structures and 1 inception structure. The input of the encoder model is the small sample input data (such as the phase difference and amplitude data) obtained by preprocessing the WiFi CSI data to be identified, and its output serves as the input of the decoder models. The encoder uses the 3 convolution structures and the 1 inception structure to perform multidimensional feature extraction on the input amplitude and phase difference data and outputs the extracted features.
Each convolution structure comprises a convolution layer, a batch normalization layer, a relu activation layer and a pooling layer. The convolution layer extracts local features of the data by convolution, the batch normalization layer normalizes the features to adjust their distribution, the relu activation layer turns linear features into nonlinear features using the relu activation function, and the pooling layer extracts global features by max pooling and reduces the feature dimensionality. The inception structure processes its input with a parallel convolution and pooling structure: the first path applies a 1x1 convolution kernel followed by a 3x3 convolution kernel, the second path applies a 1x1 convolution kernel followed by two consecutive 3x3 convolution kernels, and the third path applies max pooling; the outputs of the three paths are concatenated at the output. In each path, every convolution kernel is followed by a relu activation layer.
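A minimal PyTorch sketch of an encoder with this structure is shown below. Treating each sample as a 1-D sequence with 180 input channels (90 amplitude plus 90 phase-difference dimensions) and the specific channel counts, kernel sizes and pooling sizes are illustrative assumptions; only the overall arrangement of three convolution structures followed by one inception structure follows the description above.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # convolution structure: convolution -> batch norm -> relu -> max pooling
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm1d(c_out),
        nn.ReLU(),
        nn.MaxPool1d(2))

class InceptionBlock(nn.Module):
    # three parallel paths (1x1+3x3, 1x1+3x3+3x3, max pooling), concatenated
    def __init__(self, c_in, c_mid):
        super().__init__()
        self.path1 = nn.Sequential(
            nn.Conv1d(c_in, c_mid, 1), nn.ReLU(),
            nn.Conv1d(c_mid, c_mid, 3, padding=1), nn.ReLU())
        self.path2 = nn.Sequential(
            nn.Conv1d(c_in, c_mid, 1), nn.ReLU(),
            nn.Conv1d(c_mid, c_mid, 3, padding=1), nn.ReLU(),
            nn.Conv1d(c_mid, c_mid, 3, padding=1), nn.ReLU())
        self.path3 = nn.MaxPool1d(3, stride=1, padding=1)

    def forward(self, x):
        return torch.cat([self.path1(x), self.path2(x), self.path3(x)], dim=1)

class CsiEncoder(nn.Module):
    # 3 convolution structures followed by 1 inception structure
    def __init__(self, c_in=180):
        super().__init__()
        self.convs = nn.Sequential(
            conv_block(c_in, 64), conv_block(64, 128), conv_block(128, 128))
        self.inception = InceptionBlock(128, 64)

    def forward(self, x):                    # x: (batch, 180, time)
        feat = self.inception(self.convs(x))
        return feat.flatten(start_dim=1)     # flattened feature vector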
Optionally, the structure of the source position decoder model, the target position decoder model and the discriminator model includes: 3 fully connected layers.
As shown in fig. 4, the source position decoder model and the target position decoder model have the same structure. The source position decoder model takes the source position data encoded by the encoder as input and outputs the predicted action class, and the target position decoder takes the target position data encoded by the encoder as input and outputs the predicted action class. Both decoder models apply 3 fully connected structures to the features extracted by the encoder to extract higher-order features and output the predicted action class probabilities.
Each fully connected structure comprises a fully connected layer and an activation layer. The fully connected layer is a globally connected neural network layer used to extract global features, and the activation layer uses an activation function to introduce nonlinearity into the network. In the three-layer fully connected structure of the decoders and of the discriminator, the first two activation layers are relu activation layers and the last one is a softmax activation layer. The softmax activation layer uses the softmax function to convert the features output by the last fully connected layer into the predicted classification probability output.
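A possible PyTorch sketch of such a three-layer fully connected head, usable both as a position decoder and as the discriminator, is given below; the hidden layer widths are assumptions for illustration.

import torch.nn as nn

class FcHead(nn.Module):
    """Three fully connected structures: relu, relu, then softmax output."""
    def __init__(self, in_dim, n_classes, hidden=(256, 128)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], n_classes), nn.Softmax(dim=1))

    def forward(self, x):
        return self.net(x)          # class probabilities

With the encoder sketched above, a source or target position decoder could be instantiated as FcHead(feature_dim, n_action_classes), and the discriminator as FcHead(2 * feature_dim, 4) for the spliced pairs.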
As shown in fig. 5, the input of the discriminator is the combined data obtained by encoding with the encoder and then splicing, and its output is the predicted combination class. Each item of the combined data is a tuple containing two input data samples and is classified by the origin of the two samples and by whether they belong to the same action class, giving four classes in total. In the first class, both samples come from the source position and belong to the same action class; in the second class, one sample comes from the source position and the other from the target position, and they belong to the same action class; in the third class, both samples come from the source position but belong to different action classes; and in the fourth class, one sample comes from the source position and the other from the target position, and they belong to different action classes. Like the decoder models, the discriminator model uses 3 fully connected structures for higher-order feature extraction, but its input is the spliced combined data output by the encoder model and its output is the combined recognition class probability.
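The sketch below shows one way the combined dataset could be assembled from labelled source position and target position samples according to these four combination classes. Pairing every sample with randomly drawn source position partners follows the description above, while build_combined_dataset, the number of partners per sample and the random seed are assumptions for illustration.

import numpy as np

def build_combined_dataset(x_src, y_src, x_tgt, y_tgt,
                           partners_per_sample=4, seed=0):
    """Assemble (sample_a, sample_b, combination_label) triples.

    Combination classes:
      0: both samples from the source position, same action class
      1: one source and one target sample, same action class
      2: both samples from the source position, different action classes
      3: one source and one target sample, different action classes
    """
    rng = np.random.default_rng(seed)
    pairs = []
    n_src, n_tgt = len(x_src), len(x_tgt)
    # source-source pairs (combination classes 0 and 2)
    for i in range(n_src):
        for j in rng.choice(n_src, size=partners_per_sample, replace=False):
            pairs.append((x_src[i], x_src[j], 0 if y_src[i] == y_src[j] else 2))
    # source-target pairs (combination classes 1 and 3)
    for i in range(n_tgt):
        for j in rng.choice(n_src, size=partners_per_sample, replace=False):
            pairs.append((x_tgt[i], x_src[j], 1 if y_tgt[i] == y_src[j] else 3))
    return pairs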
Based on the same principles as the embodiments described above, the present invention further provides a small sample adversarial learning action recognition system based on WiFi CSI.
Specific embodiments are provided below with reference to the accompanying drawings:
Fig. 6 shows a schematic structural diagram of a small sample adversarial learning action recognition system based on WiFi CSI in an embodiment of the invention.
The system comprises:
the preprocessing module 61 is configured to preprocess the received WiFi CSI data to be identified, so as to obtain small sample input data corresponding to the WiFi CSI data to be identified;
an identification module 62, connected to the preprocessing module 61, for inputting the small sample input data into a small sample adversarial learning action model so as to identify the target position action;
wherein the small sample adversarial learning action model comprises: an encoder model for encoding the small sample input data, wherein the small sample input data comprise source position data and target position data; a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities; a target position decoder model for decoding the encoded target position data to obtain target position action recognition class probabilities; and a discriminator model for decoding the encoded combined data, spliced from the source position data and the target position data, to obtain combined recognition class probabilities.
Optionally, the preprocessing module 61 is configured to extract amplitude and phase difference data from the WiFi CSI data to be identified; segment, from the extracted amplitude and phase difference data, the amplitude and phase difference data corresponding to each segmented action; and pad and normalize the amplitude and phase difference data of each segmented action to obtain identification input data containing the padded and normalized amplitude and phase difference data of each segmented action, which serve as the input of the small sample adversarial learning action model.
Optionally, the preprocessing module 61 is configured to perform action detection on the extracted amplitude and phase difference data based on a sliding window method to obtain the start and stop times of each segmented action, and to segment, based on the start and stop times of each segmented action, the amplitude and phase difference data corresponding to that action from the extracted amplitude and phase difference data.
Optionally, the training process of the small sample adversarial learning action model comprises: training only parameters of the encoder model and the source position decoder model using the source position dataset; freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset; and alternately training parameters of the encoder model and source position decoder model, of the encoder model and target position decoder model, and of the discriminator model using the source position dataset, the target position dataset, and the combined dataset, respectively; the combined dataset is obtained by splicing source position data and target position data.
Optionally, the alternate training comprises: training the parameters of the encoder model and the source position decoder model, and the parameters of the encoder model and the target position decoder model, using the source position dataset and the target position dataset, respectively; and, with the parameters of the encoder model frozen, training the parameters of the discriminator model using the combined dataset.
Optionally, by training the small sample adversarial learning action model over multiple iterations, action recognition can be achieved at a target position with a very small amount of data.
Optionally, the structure of the encoder model includes: 3 convolution structures and 1 inception structure; and/or the structure of the source position decoder model, the target position decoder model and the discriminator model comprises: 3 fully connected layers.
It should be noted that the process by which the identification module 62 inputs the small sample input data into the small sample adversarial learning action model to identify the target position action, and the training process of the small sample adversarial learning action model, are similar to those described above for the WiFi CSI based small sample adversarial learning action recognition method and are therefore not repeated here.
Fig. 7 shows a schematic diagram of a small sample adversarial learning action recognition terminal 70 based on WiFi CSI in an embodiment of the invention.
The WiFi CSI based small sample adversarial learning action recognition terminal 70 comprises a memory 71 and a processor 72; the memory 71 is used for storing a computer program, and the processor 72 runs the computer program to implement the WiFi CSI based small sample adversarial learning action recognition method described with reference to fig. 1.
Optionally, the number of memories 71 may be one or more and the number of processors 72 may be one or more; fig. 7 takes one of each as an example.
Optionally, the processor 72 in the WiFi CSI based small sample adversarial learning action recognition terminal 70 loads one or more instructions corresponding to the processes of the application program into the memory 71 according to the steps shown in fig. 1, and runs the application program stored in the memory 71, thereby implementing the various functions of the WiFi CSI based small sample adversarial learning action recognition method shown in fig. 1.
Optionally, the memory 71 may include, but is not limited to, high speed random access memory, nonvolatile memory. Such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices; the processor 72 may include, but is not limited to, a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
Alternatively, the processor 72 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The invention also provides a computer readable storage medium storing a computer program which, when run, implements the small sample anti-learning action recognition method based on WiFi CSI as shown in FIG. 1. The computer-readable storage medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (compact disk-read only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read only memories), EEPROMs (electrically erasable programmable read only memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. The computer readable storage medium may be an article of manufacture that is not accessed by a computer device or may be a component used by an accessed computer device.
In summary, the WiFi CSI based small sample adversarial learning action recognition method, system and terminal of the invention solve the prior-art problem that, when a large amount of data cannot be collected at a target position, in particular when the WiFi CSI is disturbed by noise and only a very small amount of data is available, a method is needed that can obtain a well-performing algorithm model for target position action recognition using only small sample data at the target recognition position. Received WiFi CSI data to be identified are preprocessed to obtain small sample input data corresponding to the WiFi CSI data to be identified; the small sample input data are input into a small sample adversarial learning action model so as to identify the target position action. With the invention, different human actions can be identified accurately in smart spaces and similar scenarios, even when other objects introduce noise on the WiFi CSI data propagation path and only a small amount of data has been collected, and the convenience and privacy protection of capturing action features from WiFi CSI are improved. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes made by those of ordinary skill in the art without departing from the spirit and technical idea disclosed by the invention shall still be covered by the claims of the present invention.

Claims (7)

1. A small sample adversarial learning action recognition method based on WiFi CSI, the method comprising:
preprocessing the received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified;
inputting the small sample input data into a small sample adversarial learning action model so as to identify a target position action;
wherein the small sample adversarial learning action model comprises:
an encoder model for encoding the small sample input data; wherein the small sample input data comprises: source location data and target location data;
a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities;
the target position decoder model is used for decoding the encoded target position data to obtain target position action recognition category probability;
the discriminator model is used for decoding the encoded combined data spliced by the source position data and the target position data to obtain combined recognition category probability;
the training process of the small sample adversarial learning action model comprises the following steps: training only parameters of the encoder model and the source position decoder model using the source position dataset; freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset; alternately training parameters of the encoder model and source position decoder model, the encoder model and target position decoder model, and a discriminator model using a source position dataset, a target position dataset, and a combined dataset, respectively; the combined dataset is obtained by splicing source position data and target position data.
2. The WiFi CSI based small sample adversarial learning action recognition method according to claim 1, wherein the alternately training parameters of the encoder model and source position decoder model, the encoder model and target position decoder model, and discriminator model with a source position dataset, a target position dataset, and a combined dataset, respectively, comprises:
training parameters of the encoder model and the source position decoder model and parameters of the encoder model and the target position decoder model, respectively, using the source position dataset and the target position dataset;
with the parameters of the encoder model frozen, parameters of the discriminator model are trained using the combined dataset.
3. The WiFi CSI based small sample adversarial learning action recognition method according to claim 1, wherein the preprocessing of the received WiFi CSI data to be identified to obtain small sample input data corresponding to the WiFi CSI data to be identified comprises:
extracting amplitude and phase difference data in the WiFi CSI data to be identified;
segmenting the amplitude and phase difference data corresponding to each segmented action from the extracted amplitude and phase difference data;
and respectively padding and normalizing the amplitude and phase difference data of each segmented action to obtain identification input data containing the padded and normalized amplitude and phase difference data of each segmented action.
4. The WiFi CSI based small sample adversarial learning action recognition method according to claim 3, wherein the segmenting of the amplitude and phase difference data corresponding to each segmented action from the extracted amplitude and phase difference data comprises:
performing action detection on the extracted amplitude and phase difference data based on a sliding window method to obtain the start and stop times of each segmented action;
segmenting the amplitude and phase difference data corresponding to each segmented action from the extracted amplitude and phase difference data based on the start and stop times of each segmented action.
5. The WiFi CSI based small sample adversarial learning action recognition method according to claim 1, wherein the encoder model structure comprises: 3 convolution structures and 1 inception structure; and/or the structure of the source position decoder model, the target position decoder model and the discriminator model comprises: 3 fully connected layers.
6. A small sample adversarial learning action recognition system based on WiFi CSI, the system comprising:
The preprocessing module is used for preprocessing the received WiFi CSI data to be identified so as to obtain small sample input data corresponding to the WiFi CSI data to be identified;
the recognition module is connected with the preprocessing module and is used for inputting the small sample input data into a small sample adversarial learning action model so as to recognize the target position action;
wherein the small sample adversarial learning action model comprises:
an encoder model for encoding the small sample input data; wherein the small sample input data comprises: source location data and target location data;
a source position decoder model for decoding the encoded source position data to obtain source position action recognition class probabilities;
the target position decoder model is used for decoding the encoded target position data to obtain target position action recognition category probability;
the discriminator model is used for decoding the encoded combined data spliced by the source position data and the target position data to obtain combined recognition category probability;
wherein the training process of the small sample adversarial learning action model comprises the following steps:
training only parameters of the encoder model and the source position decoder model using the source position dataset;
freezing parameters of the encoder model and training only parameters of the discriminator model using a combined dataset;
alternately training parameters of the encoder model and source position decoder model, the encoder model and target position decoder model, and a discriminator model using a source position dataset, a target position dataset, and a combined dataset, respectively;
the combined data set is obtained by splicing source position data and target position data.
7. A small sample adversarial learning action recognition terminal based on WiFi CSI, comprising:
a memory for storing a computer program;
a processor configured to perform the WiFi CSI-based small sample adversarial learning action recognition method of any of claims 1 to 5.
CN202011481268.3A 2020-12-15 2020-12-15 WiFi CSI-based small sample countermeasure learning action recognition method, system and terminal Active CN112528880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011481268.3A CN112528880B (en) 2020-12-15 2020-12-15 WiFi CSI-based small sample countermeasure learning action recognition method, system and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011481268.3A CN112528880B (en) 2020-12-15 2020-12-15 WiFi CSI-based small sample countermeasure learning action recognition method, system and terminal

Publications (2)

Publication Number Publication Date
CN112528880A CN112528880A (en) 2021-03-19
CN112528880B true CN112528880B (en) 2023-07-07

Family

ID=75000376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011481268.3A Active CN112528880B (en) 2020-12-15 2020-12-15 WiFi CSI-based small sample countermeasure learning action recognition method, system and terminal

Country Status (1)

Country Link
CN (1) CN112528880B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414600A (en) * 2019-07-27 2019-11-05 西安电子科技大学 A kind of extraterrestrial target small sample recognition methods based on transfer learning
CN110598585A (en) * 2019-08-27 2019-12-20 南京理工大学 Sit-up action recognition method based on convolutional neural network
WO2020037313A1 (en) * 2018-08-17 2020-02-20 The Regents Of The University Of California Device-free-human identification and device-free gesture recognition
CN110929242A (en) * 2019-11-20 2020-03-27 上海交通大学 Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
CN112036433A (en) * 2020-07-10 2020-12-04 天津城建大学 CNN-based Wi-Move behavior sensing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020037313A1 (en) * 2018-08-17 2020-02-20 The Regents Of The University Of California Device-free-human identification and device-free gesture recognition
CN110414600A (en) * 2019-07-27 2019-11-05 西安电子科技大学 A kind of extraterrestrial target small sample recognition methods based on transfer learning
CN110598585A (en) * 2019-08-27 2019-12-20 南京理工大学 Sit-up action recognition method based on convolutional neural network
CN110929242A (en) * 2019-11-20 2020-03-27 上海交通大学 Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
CN112036433A (en) * 2020-07-10 2020-12-04 天津城建大学 CNN-based Wi-Move behavior sensing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CsiGAN: Robust Channel State Information-Based Activity Recognition With GANs; Chunjing Xiao et al.; IEEE Internet of Things Journal; 2019-08-21; 10191-10204 *
Harmony: Exploiting coarse-grained received signal strength from IoT devices for human activity recognition; Zicheng Chi et al.; 2016 IEEE 24th International Conference on Network Protocols (ICNP); 2016-12-15; 1-10 *
Using GAN to Enhance the Accuracy of Indoor Human Activity Recognition; Parisa Fard Moshiri; arXiv; 2020-04-23; 1-5 *

Also Published As

Publication number Publication date
CN112528880A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
US20200242424A1 (en) Target detection method and apparatus
CN111027487B (en) Behavior recognition system, method, medium and equipment based on multi-convolution kernel residual error network
CN108846835B (en) Image change detection method based on depth separable convolutional network
Zhang et al. Deep neural networks for wireless localization in indoor and outdoor environments
AU2018363299A1 (en) Time invariant classification
CN113095370B (en) Image recognition method, device, electronic equipment and storage medium
CN109446804B (en) Intrusion detection method based on multi-scale feature connection convolutional neural network
CN104700100A (en) Feature extraction method for high spatial resolution remote sensing big data
CN114330522A (en) Training method, device and equipment of image recognition model and storage medium
Kotenko et al. An approach for intelligent evaluation of the state of complex autonomous objects based on the wavelet analysis
CN116343261A (en) Gesture recognition method and system based on multi-modal feature fusion and small sample learning
CN112101114A (en) Video target detection method, device, equipment and storage medium
CN112528880B (en) WiFi CSI-based small sample countermeasure learning action recognition method, system and terminal
CN117221816A (en) Multi-building floor positioning method based on Wavelet-CNN
EP4270250A1 (en) Methods and systems for time-series classification using reservoir-based spiking neural network
Bourjandi et al. Predicting user's movement path in indoor environments using the stacked deep learning method and the fuzzy soft‐max classifier
CN109326324B (en) Antigen epitope detection method, system and terminal equipment
CN111191475A (en) Passive behavior identification method based on UHF RFID
Aubrun et al. Unsupervised learning of robust representations for change detection on sentinel-2 earth observation images
El Zein et al. Intelligent Real-time Human Activity Recognition Using Wi-Fi Signals
CN113989560A (en) Online semi-supervised learning classifier for radar gesture recognition and classification method thereof
CN113221709A (en) Method and device for recognizing user movement and water heater
CN111797783A (en) Intelligent pulsar screening system based on two-channel convolutional neural network
Sagduyu et al. Joint Sensing and Task-Oriented Communications with Image and Wireless Data Modalities for Dynamic Spectrum Access
Wan et al. Object‐based method for optical and SAR images change detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant