CN111329474A - Electroencephalogram identity recognition method and system based on deep learning, and information updating method

Electroencephalogram identity recognition method and system based on deep learning, and information updating method

Info

Publication number
CN111329474A
CN111329474A (application CN202010143355.1A)
Authority
CN
China
Prior art keywords
electroencephalogram
data
deep learning
layer
convolutional neural
Prior art date
Legal status
Granted
Application number
CN202010143355.1A
Other languages
Chinese (zh)
Other versions
CN111329474B (en)
Inventor
赵恒
汪旭震
董明皓
陈博武
吕倩茹
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202010143355.1A priority Critical patent/CN111329474B/en
Publication of CN111329474A publication Critical patent/CN111329474A/en
Application granted granted Critical
Publication of CN111329474B publication Critical patent/CN111329474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 - Modalities, i.e. specific diagnostic methods
    • A61B5/369 - Electroencephalography [EEG]
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 - Identification of persons
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 - Modalities, i.e. specific diagnostic methods
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 - Modalities, i.e. specific diagnostic methods
    • A61B5/369 - Electroencephalography [EEG]
    • A61B5/377 - Electroencephalography [EEG] using evoked responses
    • A61B5/378 - Visual stimuli
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 - Details of waveform analysis
    • A61B5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an electroencephalogram identity recognition method and system based on deep learning, and an information updating method. The recognition method comprises the following steps: stimulating a person to be enrolled in the system with a steady-state visual evoked paradigm and collecting the electroencephalogram signals generated by the stimulation; preprocessing the electroencephalogram data by band-pass filtering combined with independent component analysis, and cropping each preprocessed recording to expand the electroencephalogram data set; training a deep learning multi-classification network model on the cropped time-series electroencephalogram signals; computing the outputs of several cropped samples at once, modifying the weight-parameter optimization function by adding a penalty function, and extracting the features common to the electroencephalogram signals of the cropped samples; and performing identity recognition with the trained network model, rejecting intruders by setting a threshold. The invention improves the signal-to-noise ratio of the electroencephalogram signal, enhances its time-domain characteristics, accelerates system operation through data cropping combined with cropped training and function transformation, and improves recognition efficiency.

Description

Electroencephalogram identity recognition method and system based on deep learning and information updating method
Technical Field
The invention belongs to the technical field of identity information recognition, and relates to an electroencephalogram identity recognition method and system based on deep learning, and to an information updating method.
Background
At present, electroencephalogram (EEG) signals have been shown to have unique characteristics and can be used for biometric identification. In recent years, EEG-based identity recognition has again attracted the attention of researchers because of its advantages of confidentiality and high security. There are many ways to perform identification and authentication with EEG signals, which can be roughly divided into: identification based on resting-state potentials, identification based on visual evoked potentials, identification based on motor imagery, and identification based on event-related potentials. Motor imagery EEG is the brain activity pattern produced when a subject imagines moving a certain limb, and identification with motor imagery data has certain limitations: the subject must cooperate closely during acquisition, the type of motor imagery assigned to the subject is very important, and different imagery types strongly affect recognition performance. An event-related potential (ERP) is a special evoked potential: the brain potential recorded from the scalp by averaged superposition while a person performs cognitive processing of an object (such as attention, memory or thinking), reflecting the neuro-electrophysiological changes of the brain during cognition. ERP-based EEG recognition achieves high accuracy in current experimental research, but the subject must cooperatively perform an additional cognitive task during data acquisition, so it is unsuitable for subjects with cognitive dysfunction. Visual evoked potentials (VEPs) are specific visual evoked responses generated when the nervous system receives visual stimuli (such as graphic or flash stimuli) occurring at specific times and locations; they are relatively easy to detect and are suitable for brain-computer interfaces. The demands on the subject are low: as long as visual function is normal, the visual evoked potential can be used, and the subject needs no training or only a small amount of training. Steady-state visual evoked EEG is obtained by stimulating the subject's vision with a flicker or graphic of fixed frequency and collecting the EEG signal after a certain time. In the spectrum of the EEG signal, the amplitudes at the stimulation frequency and at its integer multiples are markedly elevated; the waves generated at the stimulation frequency and its multiples are called the fundamental (first harmonic), the second harmonic (twice the frequency), the third harmonic (three times the frequency), and so on.
Identity recognition based on steady-state visual evoked EEG rests on the observation that, after multiple trials, the EEG signals generated on several electrodes by different subjects under stimulation at the same frequency differ in the amplitudes of the fundamental and harmonics of the spectrum, whereas for the same subject the amplitudes of the fundamental and harmonics change little across repeated trials at the same frequency. In the prior art, classification with a support vector machine and with linear discriminant analysis achieved recognition accuracies of 75% and 91%, respectively; wavelet packet decomposition with an artificial neural network as the identity classifier achieved an average classification accuracy of 94.4%.
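As an illustration of the spectral features described above, the sketch below estimates the amplitude of the fundamental and of the first few harmonics of one EEG channel with a discrete Fourier transform. It is not part of the original disclosure; the use of NumPy, the 12 Hz stimulation frequency and the synthetic test signal are assumptions made here for illustration.

```python
import numpy as np

def ssvep_harmonic_amplitudes(signal, fs=256.0, stim_freq=12.0, n_harmonics=3):
    """Amplitude spectrum of one EEG channel at the stimulation
    frequency and its harmonics (fundamental, 2nd, 3rd, ...)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n      # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amplitudes = {}
    for k in range(1, n_harmonics + 1):
        target = k * stim_freq
        idx = np.argmin(np.abs(freqs - target))     # nearest frequency bin
        amplitudes[target] = spectrum[idx]
    return amplitudes

# Example with a synthetic 12 Hz SSVEP-like signal (4 s at 256 Hz)
fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 12 * t) + 0.3 * np.sin(2 * np.pi * 24 * t) + 0.1 * np.random.randn(t.size)
print(ssvep_harmonic_amplitudes(x, fs=fs, stim_freq=12.0))
```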
However, EEG-based identity recognition still faces limitations in practical applications. For example, EEG signals are very sensitive to endogenous and exogenous noise during acquisition, which often produces artifacts in the recorded data, so correct feature extraction and classification are difficult in an EEG-based biometric system. For this reason, machine learning techniques based on models such as neural networks, hidden Markov models and support vector machines have been proposed for recognizing different types of brain signals. Deep neural networks trained by back-propagation, such as convolutional neural networks (CNNs), provide another effective means for EEG-based biometric identification. In fact, one of the most useful properties of a CNN is that the network can extract features and at the same time train the neuron weights used for classification.
The first consideration in applying deep learning to EEG signals is how to organize the input data. Bashivan et al. (2016) converted EEG activity into a time series of topologically ordered multi-channel images, i.e. a single image represents the voltage distribution over the scalp surface at a certain point in time. Nunez and Srinivasan (2006) note that the EEG signal can be assumed to approximate a linear superposition of spatially global voltages caused by multiple dipole current sources in the brain; therefore, in many successful cases this global information is processed, usually with multiple spatial filters, which are then combined over all electrodes before subsequent operations [Ang, 2008; Blankertz, 2008; Rivet, 2009].
Because of these physical characteristics of non-invasive EEG, all of the collected signals are global in nature, so there is no obvious spatial hierarchy. In contrast, there is considerable evidence that EEG is organized across multiple time scales, for example nested oscillations that carry information in both local and global time dimensions [Canolty, 2006; Monto, 2008; Schack, 2002; Vanhatalo, 2004].
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
(1) The signal-to-noise ratio of the EEG signal is low, so the evoking trial must be modified at the signal-evoking stage to improve it; designing an evoking trial that yields a higher signal-to-noise ratio while keeping the subject comfortable is difficult, and no reliable method currently exists.
(2) The rich information contained in EEG signals comes mainly from the time dimension; how to better enhance this temporal information and extract it fully and effectively is an urgent problem.
(3) The amount of EEG data is small, so the system must be trained on small samples while still achieving good performance.
(4) The common features of an individual's EEG signals must be extracted so that the learned features are stable enough for identity recognition.
Disclosure of Invention
In order to solve the above problems, the invention provides an electroencephalogram identity recognition method based on deep learning, which improves the signal-to-noise ratio of the electroencephalogram signal, enhances its time-domain characteristics, crops the electroencephalogram signal, accelerates system operation by combining cropped training with function transformation, and improves recognition efficiency. Electroencephalogram acquisition can be carried out across multiple time intervals, and persons not enrolled in the system can be rejected while enrolled persons are recognized correctly, ensuring the security and stability of the identity recognition system and solving the problems in the prior art.
The invention also aims to provide an information updating method of the electroencephalogram identity recognition system based on deep learning.
The invention also aims to provide an electroencephalogram identity recognition system based on deep learning.
The invention adopts the following technical scheme: an electroencephalogram identity recognition method based on deep learning comprises the following steps:
S101: stimulating a person to be enrolled in the system with a steady-state visual evoked paradigm, and collecting the electroencephalogram signals generated by the stimulation;
S102: preprocessing the electroencephalogram data by band-pass filtering combined with independent component analysis, and cropping the preprocessed electroencephalogram data to expand the electroencephalogram data set;
S103: training a deep learning multi-classification network model on the cropped time-series electroencephalogram signals; the deep learning multi-classification network model comprises two mutually independent convolutional neural networks, DNet and SNet, both of which compress the electroencephalogram information in their first two layers to enhance the time-domain information; in the last layer of both DNet and SNet the fully connected layer is replaced by a convolutional layer; the required back-propagation gradients are computed, back-propagation is performed, the weight parameters are optimized by mini-batch stochastic gradient descent, and the deep learning multi-classification network model is trained;
S104: computing the outputs of several cropped samples simultaneously by a cropped-training method, grouping several adjacent sub-samples together and storing the intermediate convolution results by means of dilated convolution; modifying the weight-parameter optimization function by adding a penalty function, and extracting the features common to the electroencephalogram signals of the cropped samples;
S105: identity recognition; the electroencephalogram data to be recognized are input into the trained deep learning multi-classification network model; when the output value O ≥ O_threshold, the label corresponding to that value is output and the person is reported as successfully recognized; when O < O_threshold, the person is regarded as not yet enrolled and a suspected intruder, and the system reports a recognition failure and asks for another input; when the input fails several times in succession, the system raises an alarm indicating that an intruder is present; O_threshold is a preset threshold.
Further, in step S101, the subject gazes at a first display screen 1 m away. At second 0 the first display screen shows a cross to prompt the subject to get ready; the cross is shown from 0 to 1.2 seconds. From second 1.2 the first display screen randomly presents one of three colors (red, yellow or blue), each of which can flicker at either 10 Hz or 12 Hz, giving six possible stimulation states; the flicker stimulation lasts from 1.2 to 5.2 seconds, and the subject rests from 5.2 to 10.2 seconds after each trial. The above constitutes one flicker stimulation trial.
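The trial timing can be organized as a simple schedule before it is rendered by the stimulation program. The sketch below is an illustrative assumption, not the stimulation software of the embodiment; it builds one trial from the timing and the six color/frequency states described above.

```python
import random

COLORS = ("red", "yellow", "blue")
FREQUENCIES_HZ = (10.0, 12.0)      # each color may flicker at either frequency -> 6 states

def make_trial_schedule():
    """One flicker-stimulation trial: cross 0-1.2 s, flicker 1.2-5.2 s, rest 5.2-10.2 s."""
    color = random.choice(COLORS)
    freq = random.choice(FREQUENCIES_HZ)
    return [
        {"phase": "cross",   "start_s": 0.0, "end_s": 1.2},
        {"phase": "flicker", "start_s": 1.2, "end_s": 5.2, "color": color, "freq_hz": freq},
        {"phase": "rest",    "start_s": 5.2, "end_s": 10.2},
    ]

print(make_trial_schedule())
```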
Furthermore, the subject wears a 32-lead electroencephalogram acquisition device in which 1 electrode serves as the reference electrode and the remaining 31 electrodes acquire data; the subject's electroencephalogram data set is recorded by the software of the electroencephalogram acquisition device.
Further, in step S102, expanding the electroencephalogram data set specifically includes:
Marking the electroencephalogram data set: the preprocessed electroencephalogram data are represented as single-channel two-dimensional sequences, with the number of electrodes as the height and the number of time sampling points of any electrode as the width. The preprocessed electroencephalogram data of the k-th acquisition test are recorded as

$$\{X_j^k\}_{j=1}^{N},$$

where N denotes the total number of electroencephalogram data collected in the k-th acquisition test, and any single electroencephalogram datum of the k-th test is recorded as

$$X_j^k \in R^{E \times W}, \qquad 1 \le j \le N,$$

where R^{E×W} denotes the set of two-dimensional data matrices, E is the number of electrodes used to acquire the data, and W is the number of sampling points within the recording time. Each electroencephalogram datum X_j^k is assigned a corresponding identity label y_j^k, where y_j^k corresponds to an element of the G-class set L; the set L represents the identity labels of the persons enrolled in the system, and G is the total number of subjects.
Dense cropping: a sliding window is applied to every electroencephalogram sample in the data set. The window width parameter is denoted W', and the total input width parameter of an electroencephalogram signal is W = sampling rate × recording time of a single electroencephalogram datum. Sliding the window densely over any electroencephalogram datum X_j^k generates the cropped data

$$\{\, \tilde{X}_{j,i}^k \mid i = 1, \dots, W - W' + 1 \,\}, \qquad \tilde{X}_{j,i}^k \in R^{E \times W'},$$

where \tilde{X}_{j,i}^k denotes the segment of X_j^k covered by the window at its i-th position; i.e. cropping any electroencephalogram datum X_j^k generates W − W' + 1 data, and every cropped datum carries the same label y_j^k as the datum from which it was cut.
Further, in step S103, neither the convolutional neural network DNet nor the convolutional neural network SNet contains padding layers, and in their first two layers a filter of size n × m is decomposed into filters of sizes n × 1 and 1 × m to reduce the amount of computation.
The convolutional neural network DNet has 11 layers and uses small filters with small strides to extract features common to the electroencephalogram signals. After the data are compressed to the time dimension by its first two layers, ELUs are used as the activation function; the expression of the activation function is given in formula (1):

$$f(x) = \begin{cases} x, & x > 0 \\ \alpha\,(e^{x} - 1), & x \le 0 \end{cases} \tag{1}$$

where α is set to 1 and x denotes the output of any neuron. After the ELU function a max pooling layer continues the processing, followed by repeated blocks consisting of a convolutional layer, an ELU activation and a pooling layer. Batch normalization is used after each convolutional layer so that the output of the layer is close to a normal distribution when it serves as input to the next layer, and Dropout of 0.5 is set before each convolutional layer except the first two, which reduces the overfitting caused by the small number of electroencephalogram data samples.
The convolutional neural network SNet has 5 layers and uses large filters with large strides to extract the well-known power-spectrum features of the electroencephalogram signals; the data processed by its first two layers are passed through a square nonlinearity layer, an average pooling layer and a logarithmic activation function layer for feature extraction.
In the dense layer, the convolutional neural networks SNet and DNet convolve the obtained features with filters of size 30 × 1 and 2 × 1, respectively; replacing the fully connected layer with a convolutional layer further convolves the extracted features into a 1 × 1 × G feature matrix, where G denotes the number of persons enrolled in the system.
Further, training the deep learning multi-classification network model specifically includes the following. The classifier uses the softmax function shown in formula (2) to convert the features Φ(X_j^k; θ_Ψ) extracted from an input electroencephalogram datum X_j^k into a vector of G values, each smaller than 1, in which each value represents the conditional probability of the label l_g given the features extracted from the input electroencephalogram datum; θ_Ψ denotes the weight parameters of the feature-extraction part of the network:

$$p\!\left(l_g \mid \Phi(X_j^k;\theta_\Psi)\right) = \frac{\exp(z_g)}{\sum_{g'=1}^{G}\exp(z_{g'})}, \qquad g = 1,\dots,G, \tag{2}$$

where z_g denotes the g-th output of the classification layer applied to Φ(X_j^k; θ_Ψ), and p(l_g | Φ(X_j^k; θ_Ψ)) denotes the conditional probability of obtaining the label l_g given Φ(X_j^k; θ_Ψ).
During training, back-propagation adjusts the parameters by minimizing the sum of the losses between the output that each datum of an input batch obtains through the network and the label corresponding to that datum, and the new weight parameter θ' is updated as in formula (3):

$$\theta' = \theta - \eta\,\frac{\partial}{\partial\theta}\left[-\frac{1}{B}\sum_{b=1}^{B}\sum_{g=1}^{G}\delta\!\left(y_b = l_g\right)\log p\!\left(l_g \mid \Phi(X_b;\theta_\Psi)\right)\right], \tag{3}$$

where B denotes the number of electroencephalogram data input in each training batch, 1 ≤ B ≤ N, N denotes the total amount of electroencephalogram data of all persons collected in the acquisition tests, 1 ≤ b ≤ B, G denotes the number of persons in the system, i.e. the dimension of the softmax output vector, g indexes the elements of that vector, η is the learning rate, and θ denotes a weight parameter of the deep network; the function δ(y_b = l_g) equals 1 when y_b = l_g and 0 otherwise.
Further, in step S104, a penalty function is added to modify the weight-parameter optimization, as in formula (4):

$$\theta' = \theta - \eta\,\frac{\partial}{\partial\theta}\left[-\frac{1}{B}\sum_{b=1}^{B}\sum_{g=1}^{G}\Big(\delta\!\left(y_b = l_g\right)\log p\!\left(l_g \mid \Phi(X_b;\theta_\Psi)\right) + p\!\left(l_g \mid \Phi(X_{b+1};\theta_\Psi)\right)\log p\!\left(l_g \mid \Phi(X_b;\theta_\Psi)\right)\Big)\right]. \tag{4}$$

The objective can be divided into two terms: one is the cross-entropy obtained by the sample, and the other is the negative logarithm of the sample's output multiplied by the conditional probability p(l_g | Φ(X_{b+1}; θ_Ψ)) of the next sample adjacent to it, which is the quantity that distinguishes formula (4) from formula (3).
Further, in step S102, the electroencephalogram data are preprocessed as follows: the acquired electroencephalogram signals are down-sampled to 256 Hz and band-pass filtered at 0.5-40 Hz, and ICA decomposition is applied to the signals to remove ocular and muscular artifacts, improving the signal-to-noise ratio of the electroencephalogram signals.
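A minimal preprocessing sketch is shown below. It assumes the recordings can be read with the MNE-Python library and that artifact components are identified after inspection; the file path, the number of ICA components and the excluded component indices are illustrative assumptions, not values from the patent.

```python
import mne

def preprocess(raw_path: str) -> mne.io.BaseRaw:
    """Down-sample to 256 Hz, band-pass 0.5-40 Hz, and remove
    ocular/muscular artifact components with ICA."""
    raw = mne.io.read_raw_brainvision(raw_path, preload=True)  # BrainVision Recorder output
    raw.resample(256.0)
    raw.filter(l_freq=0.5, h_freq=40.0)

    ica = mne.preprocessing.ICA(n_components=20, random_state=0)
    ica.fit(raw)
    # Components judged to be eye/muscle artifacts are excluded before reconstruction.
    ica.exclude = [0, 1]          # illustrative indices; chosen after visual inspection in practice
    ica.apply(raw)
    return raw
```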
An information updating method of an electroencephalogram identity recognition system based on deep learning comprises the following steps:
Adding a person to the system: after steps S101-S102 have been completed, the expanded electroencephalogram data set is input into the deep learning multi-classification network model; in the training stage the last layer of the network model is modified so that the number of classes increases by 1, i.e. the final output size becomes 1 × 1 × (G + 1), and the modified network model is then trained with the data of the person to be added. Using the deep learning framework, the parameters of every layer except the last are saved, a new deep network model is created whose last layer is enlarged, the original G sets of parameters are copied in, and the (G + 1)-th set of parameters is trained.
Deleting a person's information: the output size of the last layer of the network model is changed to 1 × 1 × (G - 1) and the corresponding label is deleted.
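A minimal sketch of this update step is given below; it assumes a PyTorch implementation in which the classifier is the final convolutional layer, and the attribute name `conv_classifier` is an illustrative assumption rather than an identifier from the original disclosure.

```python
import torch
import torch.nn as nn

def add_person(model: nn.Module) -> nn.Module:
    """Grow the final convolutional classifier from G to G + 1 output
    channels while keeping the trained filters of the original G persons;
    only the (G + 1)-th filter then needs to be trained."""
    old = model.conv_classifier                      # assumed attribute name of the last conv layer
    new = nn.Conv2d(old.in_channels, old.out_channels + 1,
                    kernel_size=old.kernel_size, stride=old.stride)
    with torch.no_grad():
        new.weight[:old.out_channels] = old.weight   # copy the G existing filters
        new.bias[:old.out_channels] = old.bias
    model.conv_classifier = new
    return model

def remove_person(model: nn.Module, person_index: int) -> nn.Module:
    """Shrink the classifier to G - 1 outputs by dropping one person's filter."""
    old = model.conv_classifier
    keep = [i for i in range(old.out_channels) if i != person_index]
    new = nn.Conv2d(old.in_channels, old.out_channels - 1,
                    kernel_size=old.kernel_size, stride=old.stride)
    with torch.no_grad():
        new.weight.copy_(old.weight[keep])
        new.bias.copy_(old.bias[keep])
    model.conv_classifier = new
    return model
```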
An electroencephalogram identity recognition system based on deep learning comprises a first display screen, a first computer host, a 32-lead electroencephalogram acquisition device, a second computer host and a second display screen; the 32-lead electroencephalogram acquisition device comprises an electrode cap and an electroencephalogram acquisition instrument, the electrode cap is connected to the electroencephalogram acquisition instrument and is in contact with the subject's head, and the first computer host is connected to the second computer host;
the first computer host is used to generate the steady-state visual stimulation and to play the evoking flicker segments on the first display screen so as to evoke the subject's electroencephalogram signals;
the electroencephalogram acquisition instrument is used to acquire the electroencephalogram data collected through the electrode cap and to send them to the second computer host;
the second computer host is used to collect the electroencephalogram signals, to mark the acquired electroencephalogram data with the evoking segments played on the first display screen, and to recognize the electroencephalogram signals with the deep learning multi-classification network model;
and the second display screen is used to feed back the recognition result of the second computer host, displaying that identity recognition succeeded or warning that someone is intruding into the system.
The invention has the beneficial effects that:
1. In the electroencephalogram data acquisition stage, a steady-state visual evoked paradigm is used to stimulate the user to be enrolled in the system, and the evoking trial is improved by adding color and two frequencies to the steady-state visual stimulation, which increases the signal-to-noise ratio of the electroencephalogram. The different colors make the stimulation more comfortable for the subject than a single color during acquisition, and the two frequencies make the evoked signal more stable, so better electroencephalogram data are collected while individual comfort is improved.
2. The invention constructs two convolutional neural networks that first compress the electroencephalogram data and enhance the time-domain information in their first two layers, and finally replace the fully connected layer with a convolutional layer in the last layer, so that the input width is not restricted, nonlinearity is enhanced, and the information contained in the electroencephalogram signal is fully extracted.
3. To address the small amount of electroencephalogram data, the data are cropped so as to enlarge the data set; cropped training then accelerates system operation, and the weight-parameter optimization function is modified by adding a penalty function so that features common to the cropped samples are extracted, ensuring the stability of the features learned during training.
4. Finally, an appropriate threshold is added in the recognition stage: when the output value is greater than or equal to the threshold, the label corresponding to that value is output, and recognition can be refused otherwise, ensuring the security and stability of the identity recognition system. As more persons are enrolled, the training data increase, and owing to the characteristics of the deep network the recognition accuracy of the system tends to become stable and high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of an electroencephalogram identification system based on deep learning in an embodiment of the present invention.
FIG. 2 is a flowchart of an electroencephalogram identification method based on deep learning according to an embodiment of the present invention.
FIG. 3 is a flowchart of an electroencephalogram identification operation based on deep learning according to an embodiment of the present invention.
FIG. 4 shows the trial acquisition procedure in an embodiment of the present invention.
Fig. 5 is a diagram of a SNet network architecture in an embodiment of the present invention.
FIG. 6 is a diagram of a DNet network architecture in an embodiment of the present invention.
In the figure, 1, a first display screen; 2. a first computer host; 3. an electrode cap; 4. an electroencephalogram acquisition instrument; 5. a second computer host; 6. and a second display screen.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them; all other embodiments obtained by those skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
Electroencephalogram signals have many natural advantages for biometric identification. Because the visual responses are evoked by monochromatic light of different colors, the neural networks activated in the visual cortex are basically, but not completely, the same; the electrical activity of these neural networks varies in intensity, which in turn makes the SSVEP energy vary. In the invention, the two frequencies of 10 Hz and 12 Hz are combined with three colors, so that stimulation produces six different steady-state visual evoked brain signals. When the 10 Hz stimulation was examined, red gave the strongest response, blue the second strongest and yellow the weakest, whereas the effects differed when the frequency was increased to 12 Hz, where blue gave the best response; after analysis of the trials, the 12 Hz blue stimulation, which yields the highest signal-to-noise ratio, is selected to produce the electroencephalogram data samples used for training and testing. The method exploits the capability of CNNs: electroencephalogram signals are collected by generating steady-state visual evoked potentials (SSVEP), and these signals are then used to identify individuals.
As shown in FIG. 1, the electroencephalogram identity recognition system based on deep learning in the embodiment of the invention comprises a first display screen 1, a first computer host 2, a BrainProducts 32-lead electroencephalogram acquisition device, a second computer host 5 and a second display screen 6; the acquisition device comprises an electrode cap 3 and an electroencephalogram acquisition instrument 4, the electrode cap 3 is connected to the electroencephalogram acquisition instrument 4 and is in contact with the subject's head; the back end consists of the second computer host 5 and the second display screen 6, and the first computer host 2 is connected to the second computer host 5;
the first computer host 2 is used to generate the steady-state visual stimulation and to play the evoking flicker segments on the first display screen 1 so as to evoke the subject's electroencephalogram signals;
the electroencephalogram acquisition instrument 4 is used to acquire the electroencephalogram data collected through the electrode cap 3 and to send them to the second computer host 5;
the second computer host 5 is used to collect the electroencephalogram signals, to mark the acquired electroencephalogram data with the evoking flicker segments played on the first display screen 1, and to recognize the electroencephalogram signals with the deep learning multi-classification network model;
and the second display screen 6 is used to feed back the recognition result of the second computer host 5, displaying that identity recognition succeeded or warning that someone is intruding into the system.
The electroencephalogram identity recognition method based on deep learning in the embodiment of the invention, as shown in FIGS. 2-3, comprises the following steps:
S101: electroencephalogram signal acquisition; a person to be enrolled in the system is stimulated with a steady-state visual evoked paradigm, and the electroencephalogram signals generated by the stimulation are collected.
The person to be enrolled in the system wears the electrode cap 3, sits on a chair and gazes at the first display screen 1 placed 1 m away; the person's electroencephalogram signals are collected by the electroencephalogram acquisition instrument 4 and sent to the second computer host 5. At second 0 the first display screen 1 shows a cross to prompt the subject to get ready, and the cross is shown from 0 to 1.2 seconds; from second 1.2 the screen randomly presents one of three colors (red, yellow or blue), each of which can flicker at either 10 Hz or 12 Hz, so the stimulation takes one of six states; the flicker lasts from 1.2 to 5.2 seconds, and the subject rests from 5.2 to 10.2 seconds after each trial, which constitutes one flicker stimulation trial. In this embodiment the stimulation duration is 4 seconds with a 5-second rest after each trial, as shown in FIG. 4. The subject wears the BrainProducts 32-lead electrode cap, in which 1 electrode is the reference electrode and the remaining 31 acquire data; during acquisition the electroencephalogram data sets are recorded with the BrainVision Recorder software of the acquisition equipment, and the sampling frequency is set to 1000 Hz. The acquisition test is performed 3 times, and the same 16 subjects are recorded in each test; for any subject and any state, 60 electroencephalogram recordings are acquired per test; the second test takes place one week after the first, and the third test two weeks after the first.
S102: the electroencephalogram data are preprocessed by band-pass filtering combined with independent component analysis, and the preprocessed electroencephalogram data are cropped to expand the electroencephalogram data set.
Electroencephalogram data preprocessing: the collected electroencephalogram signals are down-sampled to 256 Hz and band-pass filtered at 0.5-40 Hz, and ICA decomposition is applied to remove ocular and muscular artifacts, improving the signal-to-noise ratio of the electroencephalogram signals.
The electroencephalogram data set is expanded by cropping each electroencephalogram recording into many segments, which increases the amount of electroencephalogram data; this specifically comprises marking the electroencephalogram data set and dense cropping.
Marking the electroencephalogram data set: the preprocessed electroencephalogram data are represented as single-channel two-dimensional sequences, with the number of electrodes as the height and the number of time sampling points of any electrode as the width. The preprocessed electroencephalogram data of the k-th acquisition test are recorded as

$$\{X_j^k\}_{j=1}^{N},$$

where N denotes the total number of electroencephalogram data collected in the k-th acquisition test, and any single electroencephalogram datum of the k-th test is recorded as

$$X_j^k \in R^{E \times W}, \qquad 1 \le j \le N,$$

where R^{E×W} denotes the set of two-dimensional data matrices, E is the number of electrodes used to acquire the data, and W is the number of sampling points within the recording time. Each electroencephalogram datum X_j^k is assigned a corresponding identity label y_j^k, where y_j^k corresponds to an element of the G-class set L; the set L represents the labels of the persons enrolled in the system, and G is the total number of subjects. For example, the label of any electroencephalogram datum satisfies

$$y_j^k \in L = \{\, l_1, l_2, \dots, l_G \,\},$$

where G is the total number of subjects and the set L represents the labels of the persons in the system, one label corresponding to the name of each subject.
Dense cropping: a sliding window is applied to every electroencephalogram sample in the data set, which generates a large number of samples. The window width parameter is denoted W' and is set to 264, i.e. W' = 264, which is the parameter value recommended in the invention. For each electroencephalogram datum X_j^k the number of electrodes is E, and the total input width parameter is W = 256 × T (sampling rate × recording time of a single electroencephalogram datum), i.e. W equals the number of sampling points of each electrode, where T is the recording time of a single electroencephalogram datum. Sliding the window of width W' = 264 densely over any electroencephalogram datum X_j^k yields the cropped data

$$\{\, \tilde{X}_{j,i}^k \mid i = 1, \dots, W - W' + 1 \,\}, \qquad \tilde{X}_{j,i}^k \in R^{E \times W'},$$

i.e. cropping any X_j^k generates W − W' + 1 = 761 cropped data, all of which carry the same label y_j^k as the datum before cropping.
Each raw electroencephalogram datum has size E × (256 × T) × 1 (number of electrodes × number of sampling points per electrode × depth). In the acquisition stage the number of data electrodes is 31 (32 electrodes in total, including 1 reference electrode), so E = 31, and the stimulation time is 4 s, so T = 4 s; each raw electroencephalogram datum therefore has size 31 × 1024 × 1. In each experiment every subject is recorded 60 times in each of the six states, so 60 data are generated per state per subject; the experiment acquires data from 16 persons, so 960 data are generated per state, and the raw data set for one state is 960 × 31 × 1024 × 1 (number of samples × height × width × channels). After preprocessing and dense cropping, each raw datum is expanded into 761 cropped samples, so the amount of training data is multiplied many times over.
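The dense cropping can be written in a few lines of array code. The sketch below is a NumPy illustration under the dimensions given above; the function and variable names are chosen here and are not taken from the original disclosure.

```python
import numpy as np

def dense_crop(x: np.ndarray, label: int, crop_width: int = 264):
    """x has shape (E, W) = (electrodes, sampling points).
    Returns (W - crop_width + 1) crops of shape (E, crop_width) plus their labels."""
    n_electrodes, total_width = x.shape
    n_crops = total_width - crop_width + 1
    crops = np.stack([x[:, i:i + crop_width] for i in range(n_crops)])
    labels = np.full(n_crops, label)
    return crops, labels

# One raw datum: 31 electrodes x 1024 sampling points (256 Hz x 4 s)
raw = np.random.randn(31, 1024)
crops, labels = dense_crop(raw, label=3)
print(crops.shape, labels.shape)   # (761, 31, 264) (761,)
```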
The invention represents the electroencephalogram signal as a single-channel two-dimensional sequence, with the number of electrodes as the height and the number of time sampling points of any electrode as the width, which provides a direction for applying deep learning to steady-state visual evoked signals for identity recognition.
S103: training the deep learning multi-classification network model on the cropped time-series electroencephalogram signals.
A deep network model M is constructed that maps the input data to the correct labels, i.e.

$$M(X_j^k;\theta) = y_j^k,$$

where θ denotes the parameters of the network. Conventional machine learning usually splits this mapping f into two steps: 1. feature extraction, in which features Φ(X_j^k; θ_Ψ) are first learned from the input X_j^k with the parameters θ_Ψ; 2. classifier design, in which a classifier c with parameters θ_c is trained on the learned features. Through these two steps the final network model is obtained:

$$f(X_j^k;\theta) = c\!\left(\Phi(X_j^k;\theta_\Psi);\,\theta_c\right).$$

In deep learning, however, the two steps can be trained and optimized together.
The deep learning multi-classification network model comprises two mutually independent convolutional neural networks, DNet and SNet, both of which compress the electroencephalogram information in their first two layers to enhance the time-domain information; in the last layer of both DNet and SNet, the fully connected layer is replaced by a convolutional layer, so that the input width is not restricted and the information contained in the electroencephalogram signal is fully extracted. The 11-layer convolutional neural network DNet has relatively many layers and uses small filters with small strides; its main function is to extract features common to the brain signals (these features are not equivalent to any previously known type). The 5-layer convolutional neural network SNet is shallow and uses large filters with large strides; its function is to extract the well-known power-spectrum features of the electroencephalogram signal. SNet is suitable when few persons are enrolled in the system, while DNet performs better when more persons are enrolled.
The structures of the SNet and DNet networks are shown in FIG. 5 and FIG. 6, and the detailed parameters of the two network configurations are listed in Table 1 and Table 2, respectively.
Table 1. SNet network architecture parameters

Layer  Type             Size / stride    Description          Output feature size
0      Input            -                EEG input            559×31×1
1      Conv_time        50×1 / (1,1)     Convolution          510×31×40
2      Conv_spat        1×31 / (1,1)     Convolution          510×1×40
-      Nonline          -                Square nonlinearity  510×1×40
3      Avg Pool         75×1 / (15,1)    Average pooling      30×1×40
-      Nonline          -                Log nonlinearity     30×1×40
4      Conv_classifier  30×1 / (1,1)     Convolutional layer  1×1×G
5      Softmax          -                Output layer         1×1×G
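A sketch of SNet following Table 1 is given below. It is a PyTorch rendering written for illustration; the class and attribute names are chosen here, and details such as the exact placement of batch normalization and dropout are assumptions not fixed by the table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SNet(nn.Module):
    """Shallow network of Table 1: temporal conv, spatial conv,
    square nonlinearity, average pooling, log nonlinearity, conv classifier."""
    def __init__(self, n_classes: int, n_electrodes: int = 31):
        super().__init__()
        self.conv_time = nn.Conv2d(1, 40, kernel_size=(50, 1))              # 559x31x1 -> 510x31x40
        self.conv_spat = nn.Conv2d(40, 40, kernel_size=(1, n_electrodes))   # -> 510x1x40
        self.bn = nn.BatchNorm2d(40)
        self.drop = nn.Dropout(0.5)
        self.conv_classifier = nn.Conv2d(40, n_classes, kernel_size=(30, 1))  # -> 1x1xG

    def forward(self, x):                         # x: (batch, 1, time, electrodes)
        x = self.conv_spat(self.conv_time(x))
        x = self.bn(x)
        x = x * x                                 # square nonlinearity
        x = F.avg_pool2d(x, kernel_size=(75, 1), stride=(15, 1))
        x = torch.log(torch.clamp(x, min=1e-6))   # log nonlinearity
        x = self.conv_classifier(self.drop(x))
        return x.flatten(start_dim=1)             # (batch, G) logits for softmax

# For an input crop of 559 time samples and 31 electrodes:
logits = SNet(n_classes=16)(torch.randn(2, 1, 559, 31))
print(logits.shape)    # torch.Size([2, 16])
```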
Table 2. DNet network architecture parameters

Layer  Type             Size / stride    Description          Output feature size
0      Input            -                EEG input            524×31×1
1      Conv_time        12×1 / (1,1)     Convolution          513×31×25
2      Conv_spat        1×31 / (1,1)     Convolution          513×1×25
3      MP1              3×1 / (3,1)      Max pooling          171×1×25
4      Conv_2           10×1 / (1,1)     Convolution          162×1×50
5      MP2              3×1 / (3,1)      Max pooling          54×1×50
6      Conv_3           10×1 / (1,1)     Convolution          45×1×100
7      MP3              3×1 / (3,1)      Max pooling          15×1×100
8      Conv_4           10×1 / (1,1)     Convolution          6×1×200
9      MP4              3×1 / (3,1)      Max pooling          2×1×200
10     Conv_classifier  2×1 / (1,1)      Convolutional layer  1×1×G
11     Softmax          -                Output layer         1×1×G
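Correspondingly, a sketch of DNet following Table 2 is shown below; again a PyTorch illustration, in which the class name, the placement of batch normalization and dropout within each block, and the ELU parameter are assumptions consistent with the description rather than details fixed by the patent.

```python
import torch
import torch.nn as nn

def block(in_ch: int, out_ch: int):
    """Dropout -> conv -> batch norm -> ELU -> max pool."""
    return nn.Sequential(
        nn.Dropout(0.5),
        nn.Conv2d(in_ch, out_ch, kernel_size=(10, 1)),
        nn.BatchNorm2d(out_ch),
        nn.ELU(alpha=1.0),
        nn.MaxPool2d(kernel_size=(3, 1), stride=(3, 1)))

class DNet(nn.Module):
    """Deep network of Table 2: temporal/spatial convs followed by
    conv-ELU-maxpool blocks and a convolutional classifier."""
    def __init__(self, n_classes: int, n_electrodes: int = 31):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(12, 1)),             # 524x31x1 -> 513x31x25
            nn.Conv2d(25, 25, kernel_size=(1, n_electrodes)),  # -> 513x1x25
            nn.BatchNorm2d(25), nn.ELU(alpha=1.0),
            nn.MaxPool2d(kernel_size=(3, 1), stride=(3, 1)))   # -> 171x1x25
        self.blocks = nn.Sequential(block(25, 50), block(50, 100), block(100, 200))
        self.conv_classifier = nn.Conv2d(200, n_classes, kernel_size=(2, 1))  # -> 1x1xG

    def forward(self, x):                        # x: (batch, 1, time, electrodes)
        x = self.blocks(self.front(x))
        return self.conv_classifier(x).flatten(start_dim=1)

logits = DNet(n_classes=16)(torch.randn(2, 1, 524, 31))
print(logits.shape)    # torch.Size([2, 16])
```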
In the first convolutional layer, DNet and SNet use filters of different sizes, (12 × 1) and (50 × 1) respectively, so that a temporal convolution computes low-level features separately for each electrode; in the next layer a filter of size (1 × 31) spatially filters across all electrodes and compresses the height to 1, which enhances the time-domain characteristics of the electroencephalogram signal. Because no activation function is used between these two layers, together they are equivalent to a single layer with filters of size (12 × 31) or (50 × 31); in other words, a filter of size (n × m) is decomposed into filters of sizes (n × 1) and (1 × m) in order to reduce the amount of computation.
The data processed by the first two layers of the convolutional neural network SNet are then passed through a square nonlinearity layer, an average pooling layer and a logarithmic activation function layer for feature extraction. In the method, batch normalization is used after each convolutional layer so that the output of the layer is close to a normal distribution when it serves as input to the next layer, and Dropout of 0.5 is set before each convolutional layer except the first two, which reduces the overfitting caused by the small number of electroencephalogram data samples.
After the convolutional neural network DNet compresses the data to the time dimension through its first two layers, ELUs (exponential linear units) are used as the activation function; the expression of the function is given in formula (1):

$$f(x) = \begin{cases} x, & x > 0 \\ \alpha\,(e^{x} - 1), & x \le 0 \end{cases} \tag{1}$$

where α is set to 1 and x denotes the output of any neuron in the network.
The function has the following characteristics:
(1) it combines the characteristics of the sigmoid and ReLU activation functions, with soft saturation in the left region and no saturation in the right region;
(2) its linear right region allows the ELU to alleviate vanishing gradients, and its soft saturation on the left makes the ELU more robust to input variations and noise;
(3) the mean of the ELU outputs is close to zero, so convergence is relatively faster.
After the ELU function, a max pooling layer continues the processing, followed by repeated standard blocks consisting of a convolutional layer, an ELU activation function and a pooling layer.
In the dense layer, the convolutional neural networks SNet and DNet convolve the obtained features with filters of size (30 × 1) and (2 × 1), respectively; the operation of replacing the fully connected layer with a convolutional layer further convolves the extracted features into a 1 × 1 × G feature matrix, where G denotes the number of persons enrolled in the system. This operation has the following advantages: (1) compared with a fully connected layer, the input width is not limited; only the number of electrodes of the input data has to be fixed, while the width may remain unknown; (2) the convolutional outputs can be fully reused in cropped training, which improves efficiency; (3) the convolution adds nonlinearity, so the information contained in the electroencephalogram signal is extracted more fully. In practice, the filter size is set by computing the feature size produced by the preceding layer; for example, in DNet the output of layer 9 has size 2 × 1 × 200, so the classifier filter is set to 2 × 1 and the output of the last convolutional layer becomes 1 × 1 × G.
Calculating the size of the needed back propagation gradient, performing back propagation, optimizing weight parameters by a small batch random gradient descent method, and training a deep learning multi-classification network model:
in order to enter people into the system, the deep learning multi-classification network model needs to be trained, and all parameters are trained, wherein weights and biases are included. In general, in supervised learning, each input data can be classified by the network model, that is:
Figure BDA0002399869010000131
for any input data
Figure BDA0002399869010000132
An output result is obtained that is present in the tag L of class G. The main purpose of the system according to the invention is to achieve multi-classification, i.e. to identify a number of persons who have entered the system, a classifier
Figure BDA0002399869010000133
Wherein theta isΨ、θcRespectively extracting partial weight values and classifier weight values for the features in the deep network, wherein the classifier adopts a softmax function shown in formula (2) to input data
Figure BDA0002399869010000134
Is extracted from
Figure BDA0002399869010000135
Converting into a matrix containing G values with probability less than 1, wherein each value in the matrix represents the extracted features of the input data and each label lgThe degree of similarity between them;
Figure BDA0002399869010000141
wherein the content of the first and second substances,
Figure BDA0002399869010000142
to represent
Figure BDA0002399869010000143
Is extracted from
Figure BDA0002399869010000144
And a corresponding label lgThe degree of similarity between them; each value corresponds to a label, and multiple categories have labels for people in the system, thus indicating the possibility that the input signal belongs to each label.
In training, the parameters are adjusted by back propagation through the sum of losses between the output of each data in a minimized batch of data and the label to which the data belongs, and the updating formula of the parameters is shown as formula (3).
Figure BDA0002399869010000145
B represents the total number of data input in each training batch, B is more than or equal to 1 and less than or equal to N, N represents the total amount of electroencephalogram data of all people collected in a collection test, B is more than or equal to 1 and less than or equal to B, and G represents the number of people in the system, namely the size of a vector output by softmax; g represents any value in the vector; theta represents any weight in the depth network; function delta when
Figure BDA0002399869010000146
The number is 1, and the rest is 0; the method calculates the size of the required back propagation gradient and performs back propagation, and optimizes the parameters by a small-batch random gradient descent method.
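A compact training-loop sketch corresponding to formulas (2) and (3) is given below (PyTorch, written for illustration). The learning rate, batch size, momentum and the use of `CrossEntropyLoss`, which combines the softmax of formula (2) with the negative log-likelihood of formula (3), are assumptions rather than values stated in the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, crops: torch.Tensor, labels: torch.Tensor,
          epochs: int = 20, batch_size: int = 64, lr: float = 1e-3):
    """Mini-batch SGD on the cropped EEG samples; crops: (n, 1, time, electrodes)."""
    loader = DataLoader(TensorDataset(crops, labels), batch_size=batch_size, shuffle=True)
    criterion = nn.CrossEntropyLoss()                 # softmax + negative log-likelihood
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)             # loss summed over the mini-batch
            loss.backward()                           # back-propagate the gradients
            optimizer.step()                          # theta' = theta - lr * gradient
    return model
```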
S104: simultaneously calculating the output of a plurality of cut samples by a cutting training method, putting a plurality of adjacent subsamples together, and storing the result of the intermediate convolution in an expansion convolution mode; modifying a weight parameter optimization function by adding a penalty function, and extracting common characteristics of electroencephalogram signals of the cut samples;
if the data are directly cut and then all the data are input into the network model in batches, although the sample size is increased by multiple times, the problems that a large amount of redundancy exists in the increased samples, the calculation amount of the deep convolutional network is increased and the like are inevitable. The invention puts a plurality of adjacent subsamples together, and saves the result of the intermediate convolution by means of the expansion convolution, thereby solving the problems of data redundancy and calculated amount.
The dense cropped training of the embodiment of the invention introduces a new parameter W'', which denotes the length of the data fed into the network; from W'' and the crop size, the number of samples entered into the network at one time is W'' − W' + 1. The larger W'' is, within the limits of the available memory, the more crops are processed at once and the faster the system runs; in other words, the method trades memory consumption for speed, and W'' is usually suggested to be set to twice W'. The output for each input electroencephalogram datum is obtained through the network while the intermediate-layer results are reused, so the outputs corresponding to several inputs are obtained at once and efficiency is improved.
To make the network obtained by cropped training more effective, the invention proposes the new parameter-adjustment function shown in formula (4):

$$\theta' = \theta - \eta\,\frac{\partial}{\partial\theta}\left[-\frac{1}{B}\sum_{b=1}^{B}\sum_{g=1}^{G}\Big(\delta\!\left(y_b = l_g\right)\log p\!\left(l_g \mid \Phi(X_b;\theta_\Psi)\right) + p\!\left(l_g \mid \Phi(X_{b+1};\theta_\Psi)\right)\log p\!\left(l_g \mid \Phi(X_b;\theta_\Psi)\right)\Big)\right]. \tag{4}$$

The objective can be divided into two terms: one is the cross-entropy of the sample, and the other is the negative logarithm of the sample's output multiplied by the conditional probability of the next sample adjacent to it, so the new parameter update also depends on the conditional probability p(l_g | Φ(X_{b+1}; θ_Ψ)) of the sample immediately following; compared with formula (3), the loss in formula (4) is a new loss function.
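The penalty of formula (4) can be sketched as a loss that couples the prediction of each crop with that of the next adjacent crop. The implementation below is an illustration of that idea under the assumption that the per-crop outputs of one recording are available in temporal order; the function name, the detaching of the neighbouring prediction and the averaging over crops are choices made here, not prescribed by the patent.

```python
import torch
import torch.nn.functional as F

def cropped_penalty_loss(logits: torch.Tensor, target_class: int) -> torch.Tensor:
    """logits: (n_crops, G) network outputs for the ordered crops of one recording;
    target_class: index of the label shared by all crops of that recording.
    Loss = per-crop cross-entropy plus a term coupling each crop with the next one,
    mirroring the two terms of formula (4)."""
    log_p = F.log_softmax(logits, dim=1)                    # log p(l_g | crop)
    targets = torch.full((logits.size(0),), target_class, dtype=torch.long)
    ce = F.nll_loss(log_p, targets)                         # first term: cross-entropy
    p_next = F.softmax(logits[1:], dim=1).detach()          # p(l_g | next adjacent crop)
    penalty = -(p_next * log_p[:-1]).sum(dim=1).mean()      # second term: adjacent-crop coupling
    return ce + penalty

# Example: 761 crops of one recording, 16 enrolled persons, true label index 3
print(cropped_penalty_loss(torch.randn(761, 16), 3).item())
```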
In summary, for the cropped electroencephalogram data, the signals could be fed into the convolutional neural network one by one in the ordinary way; instead, the cropped-training method computes the outputs of several cropped samples at once, groups several adjacent sub-samples together, stores the intermediate convolution results by means of dilated convolution and computes the redundant information only once, which improves the efficiency of the electroencephalogram identity recognition system. By adding the penalty function of formula (4) to modify the weight-parameter optimization, the loss in cropped training is related both to a sample and to the sample following it, so during training the network pays more attention to the stable features shared by adjacent samples and penalizes the differences between them, extracting the features common to the cropped electroencephalogram samples and thereby improving accuracy.
It can be clearly seen that the deep learning used in the invention has the following advantages: the extracted features and the classifier are combined into one network model and optimized jointly. The approach is most useful when the data volume is large, since with more data the convolutional neural network is more likely to extract useful features automatically, which mitigates overfitting. For electroencephalogram data, this kind of feature extraction is particularly meaningful, because the deep learning method can extract features that traditional methods cannot.
It is worth noting that the predictions obtained by the above cutting training are identical to those obtained by inputting the samples one at a time. Two conditions must hold for this to be true: (1) the network must not contain any padding layer; (2) the loss function must be chosen so that training on the multiple samples adjacent to one input produces the same gradients as training on the samples individually. The first condition is satisfied because neither of the two deep network models in the invention contains a padding layer; the second is handled by using a log-likelihood function as the loss function.
Electroencephalogram data sets are comparatively small, whereas deep learning methods need large amounts of training data to obtain good results; the cutting training therefore enlarges the electroencephalogram data set and safeguards the accuracy of the system. Cutting training and the characteristics of the electroencephalogram signal thus complement each other.
S105: performing identity recognition with the trained model, with an added description of intruder recognition. The person to be authenticated or identified wears the electrode cap (3), and the electroencephalogram data to be identified are input into the trained deep learning multi-classification network model. When the output value O ≥ O_threshold, the label corresponding to that value is output and the person is reported as successfully identified; when O < O_threshold, the attempt fails, and when the number of consecutive failed attempts exceeds three, the system raises an alarm indicating the presence of an intruder. O_threshold is a set threshold.
Current research on electroencephalogram identity recognition is essentially carried out on closed sets, whereas an identity recognition system has two requirements: 1. persons enrolled in the system must be identified accurately; 2. persons not enrolled in the system must be rejected. A plain multi-classification method can only meet the first requirement: any person, even one not enrolled in the system, will be matched by the recognition step to some enrolled identity, so the second requirement cannot be met.
To address the second requirement, the trained network is modified at the recognition stage. The value output by softmax is denoted O. A detailed analysis of the electroencephalogram identity recognition system shows that for persons enrolled in the system the final output value is large, close to about 0.9, whereas for persons not enrolled in the system it is below 0.05; the threshold O_threshold is therefore set to 0.9, which can also be understood as the similarity between the person being identified and the persons already enrolled in the system. The recognition result is accordingly redefined as follows:
(1) Correct recognition: for any electroencephalogram data of the g-th person, O ≥ O_threshold and the output value corresponds to the label of g;
(2) Erroneous recognition: for any electroencephalogram data of the g-th person, O ≥ O_threshold but the output value corresponds to a label in the label set other than that of g, or a person not enrolled in the system is accepted under some label g of the system;
(3) Rejection: O < O_threshold.
Adding the threshold ensures the security of the electroencephalogram system: a person who is not enrolled in the system is no longer matched to an arbitrary enrolled identity but is rejected, while correct identification of enrolled persons is preserved.
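A minimal sketch of this thresholded decision rule follows; the function and variable names are assumptions for illustration, not part of the patent. The softmax maximum O is compared with O_threshold = 0.9, and a label is returned only when the threshold is reached.

```python
import torch
import torch.nn.functional as F

O_THRESHOLD = 0.9            # similarity threshold suggested in the description

def identify(model: torch.nn.Module, eeg: torch.Tensor):
    """eeg: one preprocessed crop of shape (1, E, W'); returns an enrolled label or None."""
    with torch.no_grad():
        probs = F.softmax(model(eeg).flatten(), dim=0)   # G conditional probabilities
    O, label = probs.max(dim=0)
    if O >= O_THRESHOLD:
        return int(label)    # accepted: enrolled person recognised
    return None              # rejected: suspected intruder
```

The surrounding system would count consecutive rejections and raise the intruder alarm after the third failure, as described in S105.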
The invention builds the two convolutional neural networks SNet and DNet so as to handle the extensibility problem of electroencephalogram identity recognition in a targeted way. Since a multi-class model cannot by itself add or delete persons, the invention modifies the system information as follows:
If a person needs to be added to the system, steps S101-S102 are first repeated and the expanded electroencephalogram data set is input into the deep learning multi-classification network model. Two ways can be used in the training stage. Way 1: retrain the network on the electroencephalogram data of the newly added person together with that of the previously enrolled persons to obtain a new deep convolutional neural network. Way 2: modify the last layer of the network model in the training stage so that the number of classes increases by 1, i.e. the final output size becomes 1 × 1 × (G + 1); train the modified model on the data of the person to be added; save the parameters of every layer except the last with PyTorch (a deep learning framework); then create a new deep network model with the enlarged last layer, keep the parameters of the earlier layers unchanged, place the original G parameters into the last layer, and train only the (G + 1)-th parameter. To delete the information of a person from the system, the output size of the last layer of the network model is changed to 1 × 1 × (G − 1) and the corresponding label is deleted; each time an identification is performed the system supplies the O_threshold value, and the network is trained and the system updated in real time. Way 1 requires retraining the whole network, which is cumbersome every time a person is added to or deleted from the system; way 2 avoids this and achieves addition and deletion by updating only a small number of parameters, so it is the more practical choice.
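A sketch of way 2 follows, under the assumption (consistent with claim 5) that the classifier head is the final convolutional layer with one output channel per enrolled person; the helper name add_person and the exact layer type are illustrative, not the patent's code.

```python
import torch
import torch.nn as nn

def add_person(old_head: nn.Conv1d) -> nn.Conv1d:
    """Grow the final conv layer from G to G + 1 output channels, keeping the
    stored parameters of the original G persons unchanged."""
    G = old_head.out_channels
    new_head = nn.Conv1d(old_head.in_channels, G + 1,
                         kernel_size=old_head.kernel_size[0])
    with torch.no_grad():
        new_head.weight[:G] = old_head.weight   # original G persons' filters
        new_head.bias[:G] = old_head.bias
    return new_head

# All earlier layers keep the parameters saved with PyTorch before the modification;
# the optimiser is then given only the new head (or only its (G+1)-th filter),
# so enrolling one person updates only a small number of parameters.
```

Deleting a person is the mirror image: build a head with G − 1 output channels and copy every row except the deleted one, again leaving the earlier layers untouched.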
The technical effects of the present invention will be described in detail with reference to simulations.
Identity recognition:
In the experimental paradigm, under stimulation at 10 Hz some subjects perceived the red stimulus as strongest, blue second and yellow weakest, while at 12 Hz the ordering changed and blue gave the best response; 12 Hz blue light is therefore recommended as the electroencephalogram data sample for training and testing. With 16 subjects used for training and testing, the method yields an EER (Equal Error Rate) of 0.42% for SNet and 0.88% for DNet, with corresponding Accuracy of 100% and 97.88% respectively. On data collected after a period of time the EERs are 3.87% and 5.55% and the accuracies 87.75% and 86.29% respectively; this drop is probably related to the acquisition conditions, so the 13 subjects whose observation state was comparatively good were selected for testing, giving accuracies of 96.03% and 94.36%. The remaining three subjects were in poor condition between tests and may not have watched the flicker stimulus in every trial, which is a consequence of the steady-state visual evoked design: five data segments are generated in one evocation stage, so if one of them is missed all five may be affected. This does not impair the practical use of the electroencephalogram system: in practice only the first enrolment recording needs to be input into the system, so only the best state of that first test has to be guaranteed, and in subsequent identification the problem of a degraded result caused by ignoring the flicker does not arise. SNet is a shallow network and is therefore more effective when the data volume is small, while DNet easily copes with the situation when the number of people in the system grows substantially.
Intruder rejection: 11 subjects are enrolled as system personnel and the other 5 subjects act as intruders; with SNet, the system achieves 95.23% ACC (accuracy), 3.67% FAR (false acceptance rate) and 6.83% FRR (false rejection rate).
The invention processes the input data with a single multi-classification deep learning network and rejects intruders by setting a threshold. Compared with training a separate two-class network for every person, identifying enrolled persons and rejecting intruders with only one classification network is considerably more convenient.
The deep-learning-based electroencephalogram identity recognition of the invention can promote future identification technology, improve the stability and practicability of electroencephalogram identity recognition, and be used in practical settings. The invention is suitable for brain-computer interface equipment with steady-state visual evocation, and with this method the identity recognition system becomes safer and identifies personnel accurately.
A single colour easily causes visual fatigue, so to obtain better input data the invention uses flicker stimulation combining several colours and frequencies to evoke the electroencephalogram signal; multiple colours keep the subject from tiring easily and thus stabilize the subject's mental state. As the frequency rises, the signal evoked by steady-state visual stimulation becomes more durable, but observing flicker above 12 Hz introduces noise such as eye jumps, so the invention uses the two frequencies 10 Hz and 12 Hz to obtain better results. Using a multi-classification network for identity recognition means that only one network has to be trained instead of many two-class networks, which makes the system more convenient; intruders are usually rejected with a two-class model, and a multi-class network is normally used only for class identification, whereas the invention rejects intruders by setting a threshold, which guarantees the security of the system. Because electroencephalogram data are scarce, the invention also increases the data volume by cutting, and uses cutting training to improve performance and remove the redundancy produced by dense cutting.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An electroencephalogram identity recognition method based on deep learning is characterized by comprising the following steps:
s101: stimulating a person to be recorded into the system by using a steady-state visual evoked paradigm, and collecting an electroencephalogram signal generated by stimulation;
s102: preprocessing the electroencephalogram data by band-pass filtering combined with independent component analysis, and performing single cutting on the preprocessed electroencephalogram data to expand the electroencephalogram data set;
s103: training a deep learning multi-classification network model on the cut time-series electroencephalogram signals; the deep learning multi-classification network model comprises two mutually independent convolutional neural networks, DNet and SNet, both of which compress the electroencephalogram information in their first two layers to enhance the time-domain information; the last layer of the convolutional neural network DNet and the last layer of the convolutional neural network SNet both replace the fully connected layer with a convolutional layer; the required back-propagation gradients are computed, back propagation is performed, the weight parameters are optimized by mini-batch stochastic gradient descent, and the deep learning multi-classification network model is trained;
s104: computing the outputs of a plurality of cut samples simultaneously by the cutting training method: putting a plurality of adjacent sub-samples together and reusing the intermediate convolution results by means of dilated convolution; modifying the weight-parameter optimization function by adding a penalty function, and extracting features common to the cut electroencephalogram signal samples;
s105: identity recognition; inputting the electroencephalogram data to be recognized into the trained deep learning multi-classification network model; when the output value O ≥ O_threshold, outputting the label corresponding to that value and reporting that the person is successfully identified; when O < O_threshold, the person is regarded as not yet enrolled and a suspected intruder, and the system reports a recognition failure and asks for another input; when the input fails several times in succession, the system raises an alarm indicating the presence of an intruder; O_threshold is a set threshold.
2. The electroencephalogram identity recognition method based on deep learning of claim 1, wherein in the step S101, the subject watches a first display screen (1) at a distance of 1 m; at second 0 a cross appears on the first display screen (1) to prompt the subject to get ready, and the cross is shown from 0 to 1.2 seconds; at second 1.2, one of the colours red, yellow and blue is generated at random on the first display screen (1), each colour corresponding to the frequencies 10 Hz and 12 Hz, one of the resulting six states is selected for flicker stimulation lasting from 1.2 to 5.2 seconds, and after each trial the subject rests from 5.2 to 10.2 seconds; this constitutes one flicker stimulation trial.
3. The electroencephalogram identity recognition method based on deep learning of claim 1 or 2, wherein the subject wears 32-lead electroencephalogram acquisition equipment, in which 1 electrode serves as the reference electrode and the remaining 31 electrodes acquire data; during this period, the data set of the subject's electroencephalogram signals is recorded by the software inside the electroencephalogram acquisition equipment.
4. The electroencephalogram identity recognition method based on deep learning of claim 1, wherein in the step S102, expanding the electroencephalogram data set specifically comprises:

marking the electroencephalogram data set: representing the preprocessed electroencephalogram data as a single-channel two-dimensional sequence, with the number of electrodes as the height and the number of time sampling points of any electrode as the width; the preprocessed electroencephalogram data of the k-th acquisition test are recorded as {X_j^k, 1 ≤ j ≤ N}, where N denotes the total amount of electroencephalogram data acquired in the k-th acquisition test, and any single electroencephalogram datum of the k-th test is recorded as X_j^k ∈ R^{E×W}, where R^{E×W} denotes the set of two-dimensional data matrices, E denotes the number of electrodes used to acquire the electroencephalogram data, and W denotes the number of sampling points within the recording time of one electroencephalogram datum; each electroencephalogram datum X_j^k is represented by a corresponding identity label y_j^k, where y_j^k corresponds to an element of the G-class set L, L denotes the identity labels of the system personnel, and G is the total number of subjects;

dense cutting: a sliding window is set over every electroencephalogram sample in the data set, the width of the window is recorded as W′, and the total width W of the input electroencephalogram signal equals the sampling rate × the recording time of a single electroencephalogram datum; for any electroencephalogram datum X_j^k, sliding dense cutting with the window generates the data

{X_{j,1}^k, X_{j,2}^k, …, X_{j,W−W′+1}^k}, with each cut piece X_{j,i}^k ∈ R^{E×W′},

i.e. cutting any electroencephalogram datum X_j^k generates W − W′ + 1 data, and each cut electroencephalogram datum carries the same label y_j^k as the datum it was cut from.
5. The electroencephalogram identity recognition method based on deep learning of claim 1, wherein in the step S103, neither the convolutional neural network DNet nor the convolutional neural network SNet has a padding layer, and the first two layers of both networks decompose a filter of size n × m into filters of sizes n × 1 and 1 × m so as to reduce the amount of computation;
the convolutional neural network DNet has 11 layers and uses small filters and small strides to extract features common to electroencephalogram signals; the data processed by the first two layers pass through a square nonlinearity layer, an average pooling layer and a logarithmic activation function layer for feature extraction; batch normalization is used after every convolutional layer so that its output is close to a normal distribution when used as input to the next layer, and Dropout of 0.5 is applied before every convolutional layer except the first two to reduce the overfitting caused by the small number of electroencephalogram samples;
the convolutional neural network SNet has 5 layers and uses large filters and large strides to extract the well-known power-spectrum characteristics of the electroencephalogram signal; after the first two layers compress the data along the time dimension, ELUs are used as the activation function, whose expression is shown in formula (1):
ELU(x) = x, if x > 0;  ELU(x) = α(e^x − 1), if x ≤ 0    (1)
where α is set to 1, x represents the output of any neuron in the convolutional neural network SNet;
after the ELU function, processing continues with a max-pooling layer, followed by a standard processing block consisting of a convolutional layer, an ELU activation function and a pooling layer;
in the dense layer, the convolutional neural network DNet and the convolutional neural network SNet convolve the obtained features with filters of sizes 30 × 1 and 2 × 1 respectively; replacing the fully connected layer with a convolutional layer further convolves the extracted features into a 1 × 1 × G feature matrix, where G denotes the number of people in the system.
6. The electroencephalogram identity recognition method based on deep learning of claim 1, wherein in the step S103, the training of the deep learning multi-classification network model specifically comprises:
the classifier of the deep learning multi-classification network model adopts the softmax function shown in formula (2): the features Ψ(X_b; θ_Ψ) extracted from the input electroencephalogram data X_b are converted into a matrix of G values, each less than 1, where every value in the matrix represents the conditional probability of a label l_g given the features extracted from the input electroencephalogram data, and θ_Ψ denotes the weight parameters of the feature-extraction part of the network;

p(l_g | Ψ(X_b; θ_Ψ)) = exp(Ψ_g(X_b; θ_Ψ)) / Σ_{g′=1}^{G} exp(Ψ_{g′}(X_b; θ_Ψ))    (2)

wherein p(l_g | Ψ(X_b; θ_Ψ)) represents the conditional probability of obtaining the label l_g given Ψ(X_b; θ_Ψ);
during training, the parameters are adjusted by back propagation so as to minimize the sum of the losses between the output that each datum of an input batch obtains through the network and the label corresponding to that datum; the new weight parameter θ′ is updated as shown in formula (3):

θ′ = argmin_θ Σ_{b=1}^{B} Σ_{g=1}^{G} −δ(y_b = l_g) · log p(l_g | X_b; θ)    (3)

wherein B represents the total number of electroencephalogram data input in each training batch, 1 ≤ B ≤ N, N represents the total amount of electroencephalogram data of all people acquired in the acquisition test, 1 ≤ b ≤ B, and G represents the number of people in the system, i.e. the dimensionality of the softmax output vector; g indexes the elements of the vector; θ represents the weight parameters of the deep network; the indicator function δ(y_b = l_g) equals 1 when y_b = l_g and 0 otherwise.
7. The electroencephalogram identity recognition method based on deep learning of claim 1 or 6, wherein in the step S104, a penalty function is added to modify the weight-parameter optimization, as shown in formula (4):

θ′ = argmin_θ Σ_{b=1}^{B} Σ_{g=1}^{G} −δ(y_b = l_g) · log( p(l_g | X_b; θ) · p(l_g | X_{b+1}; θ) )    (4)

the function can be divided into two terms, one being the cross entropy obtained by the sample and the other the negative logarithm of the conditional probability that the next adjacent sample is assigned the same label; the additional factor p(l_g | X_{b+1}; θ) is what distinguishes formula (4) from formula (3).
8. The electroencephalogram identity recognition method based on deep learning of claim 1, wherein in the step S102, the electroencephalogram data preprocessing comprises: down-sampling the acquired electroencephalogram signals to 256 Hz, band-pass filtering them at 0.5-40 Hz, and applying ICA decomposition to remove ocular and myoelectric artifact interference, thereby improving the signal-to-noise ratio of the electroencephalogram signals.
9. An information updating method of an electroencephalogram identity recognition system based on deep learning is characterized by comprising the following steps:
adding a person to the system: after the steps S101-S102 are completed, the expanded electroencephalogram data set is input into the deep learning multi-classification network model; the last layer of the network model is modified in the training stage so that the number of classes increases by 1, i.e. the final output size becomes 1 × 1 × (G + 1); the modified network model is then trained on the data of the person to be added; the parameters of every layer except the last are saved with the deep learning framework; a new deep network model with the enlarged last layer is created, the original G parameters are placed into it, and the (G + 1)-th parameter is trained;
deleting personnel information: the output size of the last layer of the network model is changed to 1 × 1 × (G − 1), and the corresponding label is deleted.
10. An electroencephalogram identity recognition system based on deep learning, characterized by comprising a first display screen (1), a first computer host (2), 32-lead electroencephalogram acquisition equipment, a second computer host (5) and a second display screen (6); the 32-lead electroencephalogram acquisition equipment comprises an electrode cap (3) and an electroencephalogram acquisition instrument (4), the electrode cap (3) is connected with the electroencephalogram acquisition instrument (4), the electrode cap (3) is in contact with the head of the subject, and the first computer host (2) is connected with the second computer host (5);
the first computer host (2) is used for generating the steady-state visual stimulation and playing the flicker segments of the steady-state visual stimulation on the first display screen (1) so as to evoke the electroencephalogram signal of the subject;
the electroencephalogram acquisition instrument (4) is used for acquiring electroencephalogram data acquired through the electrode cap (3) and sending the acquired electroencephalogram data to the second computer host (5);
the second computer host (5) is used for collecting electroencephalogram signals, marking the evoked segments played by the first display screen (1) and the acquired electroencephalogram data correspondingly, and identifying the electroencephalogram signals through a deep learning multi-classification network model;
and the second display screen (6) is used for feeding back the identification result of the second computer host (5) and displaying the success of identity identification or prompting the intrusion of a person into the system.
CN202010143355.1A 2020-03-04 2020-03-04 Electroencephalogram identity recognition method and system based on deep learning and information updating method Active CN111329474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010143355.1A CN111329474B (en) 2020-03-04 2020-03-04 Electroencephalogram identity recognition method and system based on deep learning and information updating method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010143355.1A CN111329474B (en) 2020-03-04 2020-03-04 Electroencephalogram identity recognition method and system based on deep learning and information updating method

Publications (2)

Publication Number Publication Date
CN111329474A true CN111329474A (en) 2020-06-26
CN111329474B CN111329474B (en) 2021-05-28

Family

ID=71174476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010143355.1A Active CN111329474B (en) 2020-03-04 2020-03-04 Electroencephalogram identity recognition method and system based on deep learning and information updating method

Country Status (1)

Country Link
CN (1) CN111329474B (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090063866A1 (en) * 2007-08-29 2009-03-05 Jiri Navratil User authentication via evoked potential in electroencephalographic signals
CN101491441A (en) * 2009-02-26 2009-07-29 江西蓝天学院 Identification method based on electroencephalogram signal
CN101828921A (en) * 2010-06-13 2010-09-15 天津大学 Identity identification method based on visual evoked potential (VEP)
CN102755162A (en) * 2012-06-14 2012-10-31 天津大学 Audio-visual cognitive event-related electroencephalogram-based identification method
CN105942975A (en) * 2016-04-20 2016-09-21 西安电子科技大学 Stable state visual sense induced EEG signal processing method
CN107437011A (en) * 2016-05-26 2017-12-05 华为技术有限公司 The method and apparatus of identification based on EEG signals
CN108959895A (en) * 2018-08-16 2018-12-07 广东工业大学 A kind of EEG signals EEG personal identification method based on convolutional neural networks
CN109766751A (en) * 2018-11-28 2019-05-17 西安电子科技大学 Stable state vision inducting brain electricity personal identification method and system based on Frequency Domain Coding
CN109784023A (en) * 2018-11-28 2019-05-21 西安电子科技大学 Stable state vision inducting brain electricity personal identification method and system based on deep learning
CN110222643A (en) * 2019-06-06 2019-09-10 西安交通大学 A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859338A (en) * 2020-07-08 2020-10-30 成都信息工程大学 Identity recognition method, system, storage medium, computer program and terminal
CN112401908A (en) * 2020-12-02 2021-02-26 中国人民解放军海军特色医学中心 Fatigue monitoring device, fatigue monitoring method, computing equipment and storage medium
CN112604163A (en) * 2020-12-30 2021-04-06 杭州电子科技大学 Auxiliary memory system based on transcranial direct current stimulation
CN113243924A (en) * 2021-05-19 2021-08-13 成都信息工程大学 Identity recognition method based on electroencephalogram signal channel attention convolution neural network
CN113425312A (en) * 2021-07-30 2021-09-24 清华大学 Electroencephalogram data processing method and device
CN113425312B (en) * 2021-07-30 2023-03-21 清华大学 Electroencephalogram data processing method and device
CN113723247A (en) * 2021-08-20 2021-11-30 西安交通大学 Electroencephalogram identity recognition method and system
CN113723247B (en) * 2021-08-20 2024-04-02 西安交通大学 Electroencephalogram identity recognition method and system
CN113627391B (en) * 2021-08-31 2024-03-12 杭州电子科技大学 Cross-mode electroencephalogram signal identification method considering individual difference
CN113627391A (en) * 2021-08-31 2021-11-09 杭州电子科技大学 Cross-mode electroencephalogram signal identification method considering individual difference
CN114424945A (en) * 2021-12-08 2022-05-03 中国科学院深圳先进技术研究院 Brain wave biological feature recognition system and method based on random graphic image flash
CN114424945B (en) * 2021-12-08 2024-05-31 中国科学院深圳先进技术研究院 Brain wave biological feature recognition system and method based on random graphic image flash
CN114578963A (en) * 2022-02-23 2022-06-03 华东理工大学 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion
CN114578963B (en) * 2022-02-23 2024-04-05 华东理工大学 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion
CN115810138A (en) * 2022-11-18 2023-03-17 天津大学 Image identification method based on multi-electrode array in-vitro culture neuron network
CN115828208B (en) * 2022-12-07 2023-09-08 北京理工大学 Touch brain electrolytic locking method and system based on cloud edge cooperation
CN115828208A (en) * 2022-12-07 2023-03-21 北京理工大学 Touch electroencephalogram unlocking method and system based on cloud edge collaboration
CN116369949B (en) * 2023-06-06 2023-09-15 南昌航空大学 Electroencephalogram signal grading emotion recognition method, electroencephalogram signal grading emotion recognition system, electronic equipment and medium
CN116369949A (en) * 2023-06-06 2023-07-04 南昌航空大学 Electroencephalogram signal grading emotion recognition method, electroencephalogram signal grading emotion recognition system, electronic equipment and medium

Also Published As

Publication number Publication date
CN111329474B (en) 2021-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant