CN113752259B - Brain-computer interface control method, device and equipment for mechanical arm - Google Patents


Publication number
CN113752259B
Authority
CN
China
Legal status
Active
Application number
CN202111033569.4A
Other languages
Chinese (zh)
Other versions
CN113752259A
Inventor
郭玉柱
潘康
李莉
魏彦兆
吴淮宁
张宝昌
张磊
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202111033569.4A
Publication of CN113752259A
Application granted
Publication of CN113752259B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent


Abstract

The application relates to a brain-computer interface control method, device and equipment for a mechanical arm. The brain-computer interface control method of the mechanical arm comprises the following steps: acquiring multi-channel electroencephalogram (EEG) signals; preprocessing the EEG signal of each channel to obtain signal features in six directions in three-dimensional space; calculating a power spectrum for each signal feature and, from the power spectrum, the power spectrum feature of a corresponding preset frequency band, which is taken as the frequency domain feature; performing sliding-window convolution on each signal feature to obtain the time domain feature; splicing the frequency domain and time domain features of each channel to obtain the feature results; and inputting the resulting multi-channel feature result into a pre-trained decoding model to obtain the movement speeds of the mechanical arm along three directions in three-dimensional space. Because the motion intensity of the mechanical arm is taken into account, the control method improves the smoothness of mechanical arm motion control and the accuracy of brain control over the arm's motion, and realizes online control of the mechanical arm in three-dimensional space.

Description

Brain-computer interface control method, device and equipment of mechanical arm
Technical Field
The application relates to the technical field of brain-computer interface control, in particular to a method, a device and equipment for controlling a brain-computer interface of a mechanical arm.
Background
A brain-computer interface (BCI) is a connection or pathway between the brain and a computer or an external device. It acquires brain signals, performs feature extraction on the digitized signals to obtain the feature quantities most representative of a given functional activity, and generates instructions for the external device after classification; the computer or external device can in turn generate corresponding information to be fed back to the brain, thereby realizing brain-computer interaction.
In the related art, most existing methods for realizing anthropomorphic motion of a prosthesis or exoskeleton through BCI technology have limitations: the mechanical arm can only move at a fixed speed under brain control, the prediction efficiency of brain control is poor, and the control does not conform well to the user's intention, so more accurate control of the anthropomorphic motion of the mechanical arm is needed.
Disclosure of Invention
In view of this, the present application aims to overcome the technical problems of the prior art, namely that brain-controlled mechanical arm motion is not personified and its accuracy still needs improvement, and provides a brain-computer interface control method, device and equipment for a mechanical arm.
In order to achieve the purpose, the following technical scheme is adopted in the application:
a first aspect of the present application provides a method for controlling a brain-computer interface of a robot arm, including:
acquiring multi-channel electroencephalogram signals;
performing preprocessing on the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space;
calculating a power spectrum for each signal characteristic, calculating a power spectrum characteristic of a corresponding preset frequency band according to the power spectrum, and taking the power spectrum characteristic as a frequency domain characteristic; performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature;
splicing the frequency domain characteristics and the time domain characteristics of each channel to obtain a multi-channel characteristic result;
and inputting the obtained multi-channel characteristic result into a pre-trained decoding model to obtain the movement speeds of the mechanical arm in three directions in a three-dimensional space.
Optionally, the performing preprocessing to obtain signal features in six directions in a three-dimensional space includes:
respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signals by using a common spatial mode;
and taking the spatial distribution components of the six directions as signal characteristics of the corresponding directions.
Optionally, the extracting, by using a common space mode, spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signal respectively includes:
respectively inputting the electroencephalogram signals into six pre-trained spatial filters to obtain spatial distribution components in six directions in a three-dimensional space; the six spatial filters comprise an X-axis first-direction spatial filter, an X-axis second-direction spatial filter, a Y-axis first-direction spatial filter, a Y-axis second-direction spatial filter, a Z-axis first-direction spatial filter and a Z-axis second-direction spatial filter in a three-dimensional space.
Optionally, the calculating a power spectrum for each of the signal features includes:
constructing an autoregressive model according to a preset window length, and determining the autoregressive model parameters of each channel by using the Burg algorithm;
calculating a power spectrum for each of the signal features based on the autoregressive model and the determined autoregressive model parameters.
Optionally, the calculating, according to the power spectrum, a power spectrum characteristic of a corresponding preset frequency band includes:
obtaining a power spectral density map according to the power spectrum;
and calculating the sum of the areas under the preset frequency band in the power spectral density diagram, and correspondingly obtaining the power spectral characteristics of the signal characteristics in six directions in the preset frequency band.
Optionally, the decoding model includes: a Transformer encoder layer, a pooling layer and a fully connected layer.
Optionally, the performing sliding window convolution on each signal feature to obtain a time domain feature by calculation includes:
and performing sliding window convolution on each signal feature with a preset window length to obtain an error-related P300 feature of the signal feature, and taking the P300 feature as the time domain feature.
Optionally, after obtaining the moving speeds of the mechanical arm in three directions in the three-dimensional space, the method further includes:
controlling the mechanical arm to move according to the movement speeds in three directions; the three directions include an X-axis direction, a Y-axis direction and a Z-axis direction in a three-dimensional space.
A second aspect of the present application provides a brain-computer interface control device of a robot arm, including:
the acquisition module is used for acquiring multi-channel electroencephalogram signals;
the forward motion control decoupling module is used for executing preprocessing aiming at the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space;
the time-frequency characteristic extraction module is used for calculating a power spectrum for each signal characteristic, calculating the power spectrum characteristic of a corresponding preset frequency band according to the power spectrum, and taking the power spectrum characteristic as a frequency domain characteristic; performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature;
the splicing module is used for splicing the frequency domain characteristics and the time domain characteristics of each channel to obtain a multi-channel characteristic result;
and the comprehensive instruction generation module is used for inputting the obtained multi-channel characteristic results into a pre-trained decoding model to obtain the movement speeds of the mechanical arm in three directions in a three-dimensional space.
A third aspect of the present application provides a brain-computer interface control device of a robot arm, including:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program;
the processor is configured to invoke and execute the computer program in the memory to perform the method according to the first aspect of the application.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the scheme, after the multi-channel electroencephalogram signals are acquired, preprocessing operation can be performed on the electroencephalogram signals of each channel, signal characteristics in six directions in a three-dimensional space are obtained, and the electroencephalogram signals are decomposed into independent and decoupled signals along different directions. And then, calculating a power spectrum aiming at each signal characteristic, determining the power spectrum characteristic of a corresponding preset frequency band according to the calculated power spectrum, taking the power spectrum characteristic as a frequency domain characteristic, and meanwhile, performing sliding window convolution on each signal characteristic to calculate a time domain characteristic. After the frequency domain features and the time domain features of each channel are obtained, the frequency domain features and the time domain features of each channel are spliced to obtain feature results of a plurality of channels. And finally, inputting the obtained multi-channel characteristic result into a pre-trained decoding model, so that the movement speeds of the mechanical arm in three directions in a three-dimensional space can be obtained. Therefore, the electroencephalogram signals are processed into independent and decoupled signals, the frequency domain characteristics and the time domain characteristics are extracted, and the decoding model trained in advance is combined, so that the motion intensity of the mechanical arm is considered in the control method, the smoothness degree of motion control of the mechanical arm is improved, the accuracy of motion of the mechanical arm controlled by the brain is effectively improved, and the on-line control of the mechanical arm in the three-dimensional space is realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a brain-computer interface control method of a mechanical arm according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a brain-computer interface control device of a mechanical arm according to another embodiment of the present application.
Fig. 3 is a schematic structural diagram of brain-computer interface control equipment of a mechanical arm according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
In recent years, the BCI technology has emerged, and has become a hot spot in the fields of medical rehabilitation, education and training, and the like. The brain-controlled mechanical arm is always a research hotspot as the application of the brain-controlled mechanical arm in the field of medical rehabilitation, and has important significance for patients losing motor ability. Based on this, the embodiment of the application provides a brain-computer interface control method of a mechanical arm. Referring to fig. 1, the brain-computer interface control method of the robot arm may at least include the following steps:
and 11, acquiring multi-channel electroencephalogram signals.
In implementation, the multichannel electroencephalogram signals can be acquired through the electroencephalogram acquisition equipment.
The electroencephalogram acquisition equipment can be an electroencephalogram cap or other devices capable of acquiring electroencephalogram signals.
And 12, performing preprocessing aiming at the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space.
Step 13, calculating a power spectrum for each signal characteristic, calculating a power spectrum characteristic of a corresponding preset frequency band according to the power spectrum, and taking the power spectrum characteristic as a frequency domain characteristic; and performing sliding window convolution on each signal characteristic, and calculating to obtain a time domain characteristic.
And 14, splicing the frequency domain characteristics and the time domain characteristics of each channel to obtain a multi-channel characteristic result.
And step 15, inputting the obtained multi-channel characteristic result into a pre-trained decoding model to obtain the movement speeds of the mechanical arm in three directions in a three-dimensional space.
In this embodiment, after the multi-channel EEG signals are acquired, a preprocessing operation may be performed on the EEG signal of each channel to obtain signal features in six directions in three-dimensional space, so that the EEG signal is decomposed into independent, decoupled signals along different directions. A power spectrum is then calculated for each signal feature, the power spectrum feature of the corresponding preset frequency band is determined from it and taken as the frequency domain feature, and at the same time a sliding-window convolution is performed on each signal feature to obtain the time domain feature. After the frequency domain and time domain features of each channel are obtained, they are spliced to obtain the feature results of the multiple channels. Finally, the multi-channel feature result is input into a pre-trained decoding model, from which the movement speeds of the mechanical arm along three directions in three-dimensional space can be obtained. By processing the EEG signals into independent, decoupled signals, extracting frequency domain and time domain features, and combining them with the pre-trained decoding model, the control method takes the motion intensity of the mechanical arm into account, improves the smoothness of mechanical arm motion control, effectively improves the accuracy of brain control over the arm's motion, and realizes online control of the mechanical arm in three-dimensional space.
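As a minimal sketch of how the steps above fit together (not the patent's implementation; every callable here is a hypothetical stand-in), the data flow might look like:

```python
import numpy as np

def decode_velocities(eeg, spatial_filters, freq_feat, time_feat, model):
    """Sketch of steps 11-15: filter the EEG into six directional components,
    extract frequency- and time-domain features, splice them, and decode
    three axis velocities. All callables are assumed stand-ins."""
    components = [W @ eeg for W in spatial_filters]      # step 12: six directions
    freq = [freq_feat(z) for z in components]            # step 13: frequency domain
    time = [time_feat(z) for z in components]            # step 13: time domain
    features = np.vstack(freq + time)                    # step 14: splice features
    return model(features)                               # step 15: x, y, z speeds

# toy stand-ins just to show the shapes moving through the pipeline
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 100))                      # 4 channels, 100 samples
filters = [rng.standard_normal((3, 4)) for _ in range(6)]
freq_feat = lambda z: z.mean(axis=1, keepdims=True).T    # placeholder PSD feature
time_feat = lambda z: z.std(axis=1, keepdims=True).T     # placeholder P300 feature
model = lambda F: np.tanh(F.mean(axis=0))[:3]            # placeholder decoder
v = decode_velocities(eeg, filters, freq_feat, time_feat, model)
```

In a real system the placeholders would be replaced by the trained spatial filters, the Burg-based power spectrum feature, the P300 convolution, and the trained decoding model described below in the embodiment.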
In specific implementation, the user wears the EEG acquisition equipment and generates EEG signals through motor imagery. The EEG acquisition equipment can be a 64-channel electrode cap, which collects the EEG signals at a sampling frequency of 1000 Hz.
The correspondence between motor imagery instructions and mechanical arm motion commands is as follows:

TABLE 1 Motor imagery instruction correspondence table

  Motor imagery instruction    Mechanical arm motion command
  Left hand motion             Mechanical arm moves left
  Right hand motion            Mechanical arm moves right
  Left hand forward motion     Mechanical arm moves forward
  Right hand forward motion    Mechanical arm moves backward
  Left hand upward motion      Mechanical arm moves up
  Right hand upward motion     Mechanical arm moves down
In step 12, preprocessing is performed to obtain signal features in six directions in a three-dimensional space, which may specifically include: respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signals by using a common space mode; and taking the spatial distribution components of the six directions as signal characteristics of the corresponding directions.
When the common space mode is used and the spatial distribution components in six directions in the three-dimensional space are respectively extracted from the electroencephalogram signals, the electroencephalogram signals can be respectively input into six pre-trained spatial filters to obtain the spatial distribution components in six directions in the three-dimensional space; the six spatial filters comprise an X-axis first-direction spatial filter, an X-axis second-direction spatial filter, a Y-axis first-direction spatial filter, a Y-axis second-direction spatial filter, a Z-axis first-direction spatial filter and a Z-axis second-direction spatial filter in a three-dimensional space.
When the spatial filters are trained, training samples in the six directions of the X axis (left and right), Y axis (up and down) and Z axis (front and back), i.e. the X-axis first and second directions, the Y-axis first and second directions, and the Z-axis first and second directions in three-dimensional space, can be processed separately to obtain six corresponding spatial filters; the EEG signals are then input into the six filters to obtain the decoupled features of the six directions in the EEG signals. Suppose E1 (left), E2 (right), E3 (front), E4 (back), E5 (up) and E6 (down) are the multi-channel evoked-response spatio-temporal signal matrices under the six motor imagery tasks in Table 1; each matrix has dimensions N × T, where N is the number of EEG channels and T is the number of samples collected by each channel at equal time intervals. In implementation, the common spatial pattern is a spatial-domain filtering feature extraction algorithm for binary classification tasks. To obtain the spatial distribution feature in each direction, one of the six classes is treated as one class and the remaining five as the other, converting the six-class task into six binary classification tasks and thereby yielding six spatial filters.
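For illustration only, the one-versus-rest regrouping described above might be expressed as follows; the trials and labels here are hypothetical toy data:

```python
import numpy as np

# hypothetical trials: (N, T) EEG matrices labelled with six classes E1..E6
rng = np.random.default_rng(0)
trials = [rng.standard_normal((8, 50)) for _ in range(12)]
labels = np.array([1, 2, 3, 4, 5, 6] * 2)

# each direction's spatial filter is trained on "this class" vs "the other five"
binary_tasks = {
    c: ([X for X, y in zip(trials, labels) if y == c],   # target class trials
        [X for X, y in zip(trials, labels) if y != c])   # pooled rest trials
    for c in range(1, 7)
}
target, rest = binary_tasks[1]   # e.g. leftward vs non-leftward motor imagery
```

Each (target, rest) pair then feeds one CSP training run, producing one of the six spatial filters.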
In specific implementation, it can be assumed that X1 is the data of leftward motor imagery (i.e. E1) and X2 is the data of non-leftward motor imagery (i.e. E2-E6). The spatial distribution component in the left direction is first extracted from X1 and X2, implemented as follows:

First, the mixed spatial covariance matrix of the two classes of data is obtained. After normalizing X1 and X2, the covariance matrices R_1 and R_2 can be expressed respectively as:

    R_1 = X_1 X_1^T / trace(X_1 X_1^T)    (1)

    R_2 = X_2 X_2^T / trace(X_2 X_2^T)    (2)

where X^T denotes the transpose of X and trace(X) denotes the sum of the elements on the diagonal of matrix X. From these, the left-right mixed spatial covariance matrix R_L can be obtained.

The expression of the left-right mixed spatial covariance matrix R_L is:

    R_L = \bar{R}_1 + \bar{R}_2    (3)

where \bar{R}_1 and \bar{R}_2 are the mean covariance matrices of the leftward and non-leftward motion in the mixed space, respectively.

After obtaining R_L, the whitening eigenvalue matrix P_L can be obtained by principal component analysis. First, R_L is decomposed into eigenvalues according to:

    R_L = U \lambda U^T    (4)

where U is the eigenvector matrix and \lambda is the diagonal matrix formed by the eigenvalues of R_L. With the eigenvalues arranged in descending order, the expression of the whitening eigenvalue matrix is:

    P_L = \lambda^{-1/2} U^T    (5)

After obtaining the whitening eigenvalue matrix P_L, the covariance matrices \bar{R}_1 and \bar{R}_2 can be transformed as follows:

    S_1 = P_L \bar{R}_1 P_L^T    (6)

    S_2 = P_L \bar{R}_2 P_L^T    (7)

Then S_1 and S_2 are decomposed into principal components:

    S_1 = B_1 \lambda_1 B_1^T    (8)

    S_2 = B_2 \lambda_2 B_2^T    (9)

From the above equations it can be proven that the eigenvector matrices of S_1 and S_2 are equal, i.e.:

    B_1 = B_2 = B_L    (10)

At the same time, the diagonal matrices of the two sets of eigenvalues, \lambda_1 and \lambda_2, sum to the identity matrix, i.e.:

    \lambda_1 + \lambda_2 = I    (11)

Since the eigenvalues of the two matrices always sum to 1, the eigenvector corresponding to the largest eigenvalue of S_1 corresponds to the smallest eigenvalue of S_2, and vice versa. Arranging the eigenvalues of \lambda_1 in descending order and those of \lambda_2 in ascending order, it can be deduced that \lambda_1 and \lambda_2 have the following form:

    \lambda_1 = diag(I_1, \sigma_M, 0)    (12)

    \lambda_2 = diag(0, \sigma_M, I_2)    (13)

Projecting the whitened EEG signal onto the eigenvectors corresponding to the largest eigenvalues of \lambda_1 and \lambda_2 is optimal for separating the variance of the two signal matrices. The projection matrix W_L, i.e. the corresponding spatial filter (X-axis first-direction spatial filter), is therefore:

    W_L = B_L^T P_L    (14)

Similarly, the rightward spatial filter W_R (X-axis second direction), the front and back spatial filters W_F and W_B (Z-axis first and second directions), and the up and down spatial filters W_U and W_D (Y-axis first and second directions) can be obtained. Each matrix has size N × N; the first r rows and last r rows (2r < N) of each spatial filter are taken as the final filters, i.e. W_{2r,L}, W_{2r,R}, W_{2r,F}, W_{2r,B}, W_{2r,U} and W_{2r,D}, where r is the number of features determined according to actual requirements when generating the spatial filter and is not limited here. In this embodiment r may be 1. Let X be a segment of motor imagery EEG signal of size N × T; passing X through the six spatial filters W_L, W_R, W_F, W_B, W_U and W_D yields the spatial distribution components Z_L, Z_R, Z_F, Z_B, Z_U and Z_D along the x, y and z axes in three-dimensional space, namely:

    Z_dir = W_{2r,dir} X,  dir = L, R, F, B, U, D    (15)

The six spatial distribution components are longitudinally spliced into Z, of size 12r × T. In this way, the spatial filters for the six directions in three-dimensional space are obtained from the training sets of the different motor imagery tasks, from which the six spatial distribution components are obtained.
After signal characteristics in six directions in a three-dimensional space are obtained, an autoregressive model can be constructed according to each signal characteristic and a preset window length, and autoregressive model parameters of each channel are determined by using a Burg algorithm; a power spectrum for each signal feature is calculated based on the autoregressive model and the determined autoregressive model parameters.
Wherein the preset window length may be 400 ms.
Specifically, for the six preprocessed spatial distribution components, autoregressive models are constructed with 400 ms windows; the order can be 16 and the window sliding step is 10 ms. The model is:

    Z_j(t) = \sum_{i=1}^{p} w_{j,i} Z_j(t-i) + \varepsilon    (16)

where Z_j(t) is the estimated signal of the j-th feature at time t, w_{j,i} are the weight coefficients, \varepsilon is the estimation error, and p = 16 is the order of the autoregressive model. The parameters w_j of the autoregressive model can then be estimated using the Burg algorithm, determining the model and allowing the power spectrum of each channel's signal feature to be calculated.
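The Burg recursion itself is standard; below is a self-contained sketch (my own implementation, not the patent's code) that estimates the AR polynomial, from which the weights of the model above follow as w_i = -a_i:

```python
import numpy as np

def burg_ar(x, p):
    """Burg estimate of the AR(p) polynomial a (a[0] = 1) and the
    prediction error power E; the model weights are w_i = -a[i], i >= 1."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    a = np.zeros(p + 1)
    a[0] = 1.0
    f = x.copy()                                          # forward errors
    b = x.copy()                                          # backward errors
    Dk = 2.0 * np.dot(x, x) - x[0] ** 2 - x[-1] ** 2
    E = np.dot(x, x) / N
    for k in range(p):
        mu = -2.0 * np.dot(f[k + 1:], b[:N - 1 - k]) / Dk  # reflection coeff.
        a[:k + 2] = a[:k + 2] + mu * a[:k + 2][::-1]       # Levinson update
        f_seg = f[k + 1:].copy()
        b_seg = b[:N - 1 - k].copy()
        f[k + 1:] = f_seg + mu * b_seg
        b[:N - 1 - k] = b_seg + mu * f_seg
        E *= 1.0 - mu * mu
        Dk = (1.0 - mu * mu) * Dk - f[k + 1] ** 2 - b[N - 2 - k] ** 2
    return a, E

# sanity check on a synthetic AR(2) process x[t] = 0.75 x[t-1] - 0.5 x[t-2] + e
rng = np.random.default_rng(1)
n = 4000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
a, E = burg_ar(x, 2)   # expect a close to [1, -0.75, 0.5]
```

The power spectrum then follows from the AR transfer function, P(f) = E / |1 + \sum_k a_k e^{-i 2\pi f k}|^2, evaluated on the frequency grid of interest.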
In practice, the phenomenon of event-related desynchronization/event-related synchronization is clearly observed in the motor imagery paradigm, usually in the μ (8-13 Hz) band. The energy of the μ band can therefore be taken as a feature, i.e. the sum of the area under the μ band in the power spectral density map, yielding the μ-band power spectral feature P of the signal Z, which is the frequency domain feature of the signal feature.
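Extracting the area under the μ band from a power spectral density is straightforward; a toy sketch (the PSD and sampling rate here are synthetic assumptions, not values from the patent):

```python
import numpy as np

fs = 100.0                                     # assumed post-processing rate
freqs = np.fft.rfftfreq(400, d=1.0 / fs)       # frequency grid for a 400-sample window
psd = np.exp(-((freqs - 10.0) ** 2) / 8.0)     # toy PSD peaked inside the mu band

df = freqs[1] - freqs[0]
mu_mask = (freqs >= 8.0) & (freqs <= 13.0)
mu_power = np.sum(psd[mu_mask]) * df           # area under 8-13 Hz

beta_mask = (freqs >= 20.0) & (freqs <= 25.0)  # for comparison only
beta_power = np.sum(psd[beta_mask]) * df
```

With a PSD concentrated around 10 Hz, the μ-band area dominates any other band of the same width, which is exactly the discriminative property the feature exploits.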
Specifically, the parameters of the autoregressive model are estimated by using the Burg algorithm, and the specific implementation manner of calculating the power spectrum features may refer to the prior art, which is not described herein again.
While the frequency domain features of the signal features are calculated, error-related temporal information is also taken into account to realize a closed-loop BCI system. Accordingly, when performing sliding window convolution on each signal feature to obtain the time domain feature, the convolution may be carried out with a preset window length to obtain the error-related P300 feature of the signal feature, which is taken as the time domain feature.
In practical implementation, when the subject notices that the mechanical arm has not reached the expected control position, a positive potential known as P300 is generated in the brain about 300 ms after the stimulus. The P300 feature can therefore be taken as the time domain feature: each of the six preprocessed spatial component distributions is convolved with a 300 ms sliding window to obtain the P300 feature Q, with the formula:

Q_j(t) = Σ_{i=1}^{l} g'(i) · Z_j(t − i + 1)

wherein Q_j(t) is the P300 feature of the jth feature at time t, g' is the first derivative of a Gaussian wavelet function, l is the dimension of g', and Z_j(t) is the signal of the jth feature at time t. In this embodiment, l is 30, i.e. the window length is 300 ms, and the step size of the window sliding is 10 ms.
After the power spectrum feature P and the P300 feature Q are obtained, the two features in each channel may be concatenated longitudinally to finally obtain the feature result F, which has a size of 24r × M, where M is the number of samples in the time dimension after T undergoes data preprocessing and feature extraction.
After obtaining the multi-channel feature results, the obtained multi-channel feature results may be input into a pre-trained decoding model, where the decoding model may include: a transformer encoder layer, a pooling layer and a full connection layer. The multi-channel characteristic result can be mapped into the movement speeds of the mechanical arm in the x, y and z directions in the three-dimensional space by using the decoding model so as to realize the three-dimensional continuous movement of the mechanical arm.
The Transformer is composed of multiple encoders and decoders and is mostly used for generative tasks. Because brain-controlled mechanical arm decoding is an understanding task, only the encoder part is needed. The advantage of the Transformer is that features at different moments can be processed in parallel while still exchanging information. The Transformer encoder contains two blocks: a self-attention block and a feed-forward block. The self-attention block produces a new feature for each time point as a linearly weighted sum of the original features of all time points, and the model can learn whether, and how much, the feature of a certain time point deserves attention, thereby improving performance on time series tasks.
In order for the model to understand the temporal order of the input features, sine and cosine relative position codes may be superimposed on the input features. The output of the Transformer encoder is a matrix (24r × M) with the same size as the input features, so a pooling operation is applied after the last encoder layer to aggregate information into a (24r × 1) vector, which a fully connected layer then maps to the movement speeds of the mechanical arm in the three directions. The formula is as follows:
V=FC(pooling(f(F))) (18)
wherein f is a multilayer encoder; considering that the data size is not large and to prevent overfitting, this embodiment may adopt 3 encoder layers. The pooling method uses mean pooling, i.e. averaging the M matrices of size 24r × 1. For dimension matching, FC is a 24r × 3 fully connected layer, transforming the 24r × 1 matrix into a 3 × 1 matrix.
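A hedged PyTorch sketch of such a decoding model; the class name, the choice r = 2 (so that 24r = 48), nhead = 4, and the max_len bound are illustrative assumptions that the patent does not specify:

```python
import math
import torch
import torch.nn as nn

class SpeedDecoder(nn.Module):
    """Sketch: sine/cosine position codes + 3-layer transformer encoder f +
    mean pooling over the M time steps + FC map to the three speeds."""
    def __init__(self, d_model=48, nhead=4, num_layers=3, max_len=512):
        super().__init__()
        # fixed sine/cosine positional encoding, shape (max_len, d_model)
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.fc = nn.Linear(d_model, 3)       # vx, vy, vz

    def forward(self, f):                     # f: (batch, M, 24r)
        h = self.encoder(f + self.pe[: f.size(1)])
        return self.fc(h.mean(dim=1))         # mean pooling over time -> (batch, 3)
```

With an input feature tensor of shape (batch, M, 48), the output is a (batch, 3) velocity, matching V = FC(pooling(f(F))) in formula (18).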
When the parameters of the decoding model are trained, multi-channel feature results can be collected as training samples to form a sample training set, which is input into a decoding model composed of a transformer encoder layer, a pooling layer and a fully connected layer, with preset speeds as the model's output data; the decoding model extracts abstract features related to the brain-controlled mechanical arm and outputs the motion speed of the mechanical arm. The parameter values of the decoding model are then adjusted by an error back propagation algorithm until the degree of fit reaches a preset value, yielding the trained decoding model.
In practice, the decoding model takes F (24r × M) as input and outputs V (3 × 1), the movement speeds along x, y and z respectively; a positive output indicates motion along the positive axis direction and a negative output indicates motion in the reverse direction. The transformer-encoder-based decoding model can thus process current and historical information in parallel; the self-attention block in the model assigns a relevance score to the information at each moment to represent the model's attention to it, reducing the influence of noise, and the decoding model outputs the movement speeds of the mechanical arm in different directions, thereby realizing online continuous control of the mechanical arm in three-dimensional space.
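As a small illustration of how such a signed velocity output drives the arm, the commanded (vx, vy, vz) can be integrated into a Cartesian position once per control tick; the 0.1 s tick length is an assumed value:

```python
def step_position(pos, v, dt=0.1):
    """Advance the end-effector position by one control tick: a positive speed
    moves along the positive axis, a negative speed reverses along it."""
    return tuple(p + vi * dt for p, vi in zip(pos, v))
```

Repeated calls at each decoding step trace out the continuous three-dimensional trajectory.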
Specifically, the decoding model may be trained as follows:
two mechanical arms A and B are placed, the motion of the mechanical arm A is used as supervision information, and a control system of the mechanical arm B is trained to accurately track the motion of the mechanical arm A. In the training process, the mechanical arm A moves along a preset track and speed, and the motion state of the mechanical arm B is determined by the output of the decoding model.
The experimental paradigm is as follows:
Six motion tracks are preset, namely left, right, front, back, up and down, together with several different motion speed grades. First, the parameters of the decoding model are initialized randomly, and the experimenter carries out a number of experiments, each with the following content: the mechanical arm A selects a preset motion track and motion speed grade; after the selection, the motion track direction of the experiment is prompted, and the user obtains the corresponding motor imagery instruction according to Table 1 and the prompt. Then, after the mechanical arm A starts to move, the user watches its motion track and executes the corresponding motor imagery command to generate brain signals, which control the motion state of the mechanical arm B through the output of the decoding model. When the motion states of the mechanical arm B and the mechanical arm A deviate, the parameters of the decoding model are updated by gradient descent based on the deviation, so that the mechanical arm B learns to track the motion track and speed of the mechanical arm A. There is a rest interval between every two experiments.

After the movement speeds of the mechanical arm in the three directions of the three-dimensional space are obtained, the movement of the mechanical arm can be controlled according to these three speeds, where the three directions may be the X-axis, Y-axis and Z-axis directions of the three-dimensional space.
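The update rule described above (arm A's motion as supervision, gradient descent on the deviation) can be sketched as one training step; `decoder` stands for any model mapping a (batch, M, 24r) feature tensor to (batch, 3) velocities, and the optimizer choice is an assumption:

```python
import torch

def train_step(decoder, optimizer, features, v_arm_a):
    """One closed-loop update: arm B's commanded velocity decoder(features) is
    pulled toward arm A's velocity v_arm_a by back-propagating the deviation."""
    optimizer.zero_grad()
    v_arm_b = decoder(features)
    loss = torch.nn.functional.mse_loss(v_arm_b, v_arm_a)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling this once per decoding step while arm A follows its preset trajectory realizes the tracking training the paradigm describes.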
The application provides a brain-computer interface control method for a mechanical arm that maps six motor imagery instructions one-to-one onto the three-dimensional motion of the mechanical arm, realizing continuous motion of the mechanical arm in three-dimensional space; decomposes the EEG signal into independent signal components for each direction of the three-dimensional space using the CSP algorithm, so that the features of each direction are obtained more accurately through the power spectrum of the mu frequency band; and maps the features of each direction to the movement speed of the mechanical arm in the corresponding direction through a decoding model by regression, realizing more complex mechanical arm motion. A decoding model based on a transformer encoder is adopted, which can process current and historical information in parallel; its self-attention block assigns a relevance score to the information at each moment to represent the model's attention to it, which can reduce the influence of noise. The method is therefore of great significance for certain specific populations or in specific situations; the non-invasive scalp electroencephalogram acquisition it adopts is harmless to the human body and is easy to use and popularize. For some patients, the method can help gradually restore motor function, and thereby normal life.
Based on the same technical concept, embodiments of the present application also provide a brain-computer interface control device of a robot arm, as shown in fig. 2, the device may include: an obtaining module 201, configured to obtain multichannel electroencephalogram signals; the forward motion control decoupling module 202 is used for performing preprocessing on the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space; the time-frequency feature extraction module 203 is configured to calculate a power spectrum for each signal feature, calculate a power spectrum feature of a corresponding preset frequency band according to the power spectrum, and use the power spectrum feature as a frequency domain feature; performing sliding window convolution on each signal characteristic, and calculating to obtain a time domain characteristic; the splicing module 204 is configured to splice the frequency domain features and the time domain features of each channel to obtain a multi-channel feature result; and the comprehensive instruction generating module 205 is configured to input the obtained multi-channel feature result into a pre-trained decoding model, so as to obtain the motion speeds of the mechanical arm in three directions in the three-dimensional space.
Wherein the decoding model may include: a transformer encoder layer, a pooling layer and a full connection layer.
Optionally, when preprocessing is performed to obtain signal features in six directions in a three-dimensional space, the forward motion control decoupling module 202 is specifically configured to: respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signals by using a common space mode; and taking the spatial distribution components of the six directions as signal characteristics of the corresponding directions.
Optionally, when the common space mode is used and spatial distribution components in six directions in the three-dimensional space are extracted from the electroencephalogram signal, the forward motion control decoupling module 202 may be specifically configured to: respectively inputting the electroencephalogram signals into six pre-trained spatial filters to obtain spatial distribution components in six directions in a three-dimensional space; the six spatial filters comprise an X-axis first-direction spatial filter, an X-axis second-direction spatial filter, a Y-axis first-direction spatial filter, a Y-axis second-direction spatial filter, a Z-axis first-direction spatial filter and a Z-axis second-direction spatial filter in a three-dimensional space.
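A sketch of training and applying one such direction filter with a one-vs-rest common spatial pattern; the function names and the synthetic two-class data in the test are illustrative, and a full system would train six such filters, one per direction:

```python
import numpy as np

def train_csp_filter(trials_target, trials_rest):
    """One CSP spatial filter (one of the six direction filters): returns the
    weight vector w that maximizes variance for the target imagery class
    relative to the rest. trials_*: (n_trials, n_channels, n_samples) arrays."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(trials_target), mean_cov(trials_rest)
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T    # whitening transform
    _, V = np.linalg.eigh(P @ C1 @ P.T)        # eigh returns ascending eigenvalues
    return V[:, -1] @ P                        # largest-eigenvalue direction

def apply_filter(w, eeg):
    """Project multichannel EEG (n_channels, n_samples) onto one spatial component."""
    return w @ eeg
```

The projected component has high variance when the target motor imagery is performed and low variance otherwise, which is what makes the six components usable as direction-specific signal features.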
Optionally, when calculating the power spectrum for each signal feature, the time-frequency feature extraction module 203 is configured to: constructing an autoregressive model according to a preset window length, and determining the autoregressive model parameter of each channel by using a Burg algorithm; a power spectrum for each signal feature is calculated based on the autoregressive model and the determined autoregressive model parameters.
Optionally, when calculating the power spectrum feature of the corresponding preset frequency band according to the power spectrum, the time-frequency feature extraction module 203 may be specifically configured to: obtaining a power spectral density map according to the power spectrum; and calculating the sum of the areas under the preset frequency band in the power spectral density diagram, and correspondingly obtaining the power spectral characteristics of the signal characteristics in the six directions in the preset frequency band.
Optionally, when performing sliding window convolution on each signal feature and calculating to obtain a time domain feature, the time-frequency feature extraction module 203 may be specifically configured to: and performing sliding window convolution on each signal feature according to a preset window length to obtain a P300 feature of the signal feature related to the error, and taking the P300 feature as a time domain feature.
Optionally, the brain-computer interface control device of the mechanical arm may further include a control module, where the control module is configured to: and controlling the mechanical arm to move according to the movement speeds in the three directions. Wherein, the three directions include X-axis direction, Y-axis direction and Z-axis direction in the three-dimensional space.
In this embodiment, reference may be made to the specific implementation of the brain-computer interface control device of the mechanical arm in any of the above embodiments, and details are not described here.
Based on the same technical concept, embodiments of the present application also provide a brain-computer interface control device of a robot arm, as shown in fig. 3, the device may include: a processor 301, and a memory 302 connected to the processor 301; the memory 302 is used to store computer programs; the processor 301 is configured to call and execute a computer program in the memory 302 to perform the brain-computer interface control method of the robot arm according to any of the above embodiments.
In this embodiment, reference may be made to the specific implementation of the brain-computer interface control device of the mechanical arm described in any embodiment above, and details are not described here again.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (6)

1. A brain-computer interface control method of a mechanical arm is characterized by comprising the following steps:
acquiring multi-channel electroencephalogram signals;
performing preprocessing on the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space;
calculating a power spectrum for each signal characteristic, calculating a power spectrum characteristic of a corresponding preset frequency band according to the power spectrum, and taking the power spectrum characteristic as a frequency domain characteristic; performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature;
splicing the frequency domain features and the time domain features of each channel, thereby realizing a closed-loop BCI system, to obtain a multi-channel feature result;
inputting the obtained multi-channel characteristic result into a pre-trained decoding model to obtain the movement speeds of the mechanical arm in three directions in a three-dimensional space so as to control the continuous movement of the mechanical arm in the three-dimensional space;
wherein, the executing the preprocessing to obtain the signal characteristics of six directions in the three-dimensional space comprises: respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signals by using a common spatial mode; taking the spatial distribution components in six directions as signal characteristics in corresponding directions;
using a common space mode, respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signal comprises: respectively inputting the electroencephalogram signals into six pre-trained spatial filters to obtain spatial distribution components in six directions in a three-dimensional space; the six spatial filters comprise an X-axis first-direction spatial filter, an X-axis second-direction spatial filter, a Y-axis first-direction spatial filter, a Y-axis second-direction spatial filter, a Z-axis first-direction spatial filter and a Z-axis second-direction spatial filter in a three-dimensional space;
said computing a power spectrum for each of said signal features, comprising: constructing an autoregressive model according to a preset window length, and determining the autoregressive model parameter of each channel by using a Burg algorithm; calculating a power spectrum for each of the signal features based on the autoregressive model and the determined autoregressive model parameters;
performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature, including: performing sliding window convolution on each signal feature according to a preset window length to obtain a P300 feature related to the signal feature and an error, and taking the P300 feature as the time domain feature; the formula for calculating the P300 feature is as follows:
Q_j(t) = Σ_{i=1}^{l} g'(i) · Z_j(t − i + 1)

wherein Q_j(t) is the P300 feature of the jth feature at time t, g' is the first derivative of a Gaussian wavelet function, l is the dimension of g', and Z_j(t) is the signal of the jth feature at time t.
2. The brain-computer interface control method of a mechanical arm according to claim 1, wherein the calculating the power spectrum characteristic of the corresponding preset frequency band according to the power spectrum comprises:
obtaining a power spectral density map according to the power spectrum;
and calculating the sum of the areas under the preset frequency band in the power spectral density diagram, and correspondingly obtaining the power spectral characteristics of the signal characteristics in six directions in the preset frequency band.
3. The brain-computer interface control method of a robotic arm of claim 1, wherein the decoding model comprises: a transformer encoder layer, a pooling layer and a full connection layer.
4. The brain-computer interface control method of a robotic arm according to claim 1, wherein after obtaining the moving speeds of the robotic arm in three directions in the three-dimensional space, the method further comprises:
controlling the mechanical arm to move according to the movement speeds in three directions; the three directions include an X-axis direction, a Y-axis direction and a Z-axis direction in a three-dimensional space.
5. A brain-computer interface control device of a robot arm, comprising:
the acquisition module is used for acquiring multi-channel electroencephalogram signals;
the forward motion control decoupling module is used for executing preprocessing aiming at the electroencephalogram signal of each channel to obtain signal characteristics in six directions in a three-dimensional space;
the time-frequency characteristic extraction module is used for calculating a power spectrum for each signal characteristic, calculating the power spectrum characteristic of a corresponding preset frequency band according to the power spectrum, and taking the power spectrum characteristic as a frequency domain characteristic; performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature;
the splicing module is used for splicing the frequency domain features and the time domain features of each channel, thereby realizing a closed-loop BCI system, to obtain a multi-channel feature result;
the comprehensive instruction generation module is used for inputting the obtained multi-channel characteristic result into a pre-trained decoding model to obtain the movement speeds of the mechanical arm in three directions in a three-dimensional space so as to control the continuous movement of the mechanical arm in the three-dimensional space;
wherein, the executing the preprocessing to obtain the signal characteristics of six directions in the three-dimensional space comprises: respectively extracting spatial distribution components in six directions in a three-dimensional space from the electroencephalogram signals by using a common spatial mode; taking the spatial distribution components in six directions as signal characteristics in corresponding directions;
the method for extracting the spatial distribution components of six directions in the three-dimensional space from the electroencephalogram signals respectively by using the common spatial mode comprises the following steps: respectively inputting the electroencephalogram signals into six pre-trained spatial filters to obtain spatial distribution components in six directions in a three-dimensional space; the six spatial filters comprise an X-axis first-direction spatial filter, an X-axis second-direction spatial filter, a Y-axis first-direction spatial filter, a Y-axis second-direction spatial filter, a Z-axis first-direction spatial filter and a Z-axis second-direction spatial filter in a three-dimensional space;
said computing a power spectrum for each of said signal features, comprising: constructing an autoregressive model according to a preset window length, and determining the autoregressive model parameter of each channel by using a Burg algorithm; calculating a power spectrum for each of the signal features based on the autoregressive model and the determined autoregressive model parameters;
performing sliding window convolution on each signal feature, and calculating to obtain a time domain feature, including: performing sliding window convolution on each signal feature according to a preset window length to obtain a P300 feature related to the signal feature and an error, and taking the P300 feature as the time domain feature; the formula for calculating the P300 feature is as follows:
Q_j(t) = Σ_{i=1}^{l} g'(i) · Z_j(t − i + 1)

wherein Q_j(t) is the P300 feature of the jth feature at time t, g' is the first derivative of a Gaussian wavelet function, l is the dimension of g', and Z_j(t) is the signal of the jth feature at time t.
6. A brain-computer interface control apparatus of a robot arm, comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program;
the processor is configured to invoke and execute the computer program in the memory to perform the method of any of claims 1-4.
CN202111033569.4A 2021-09-03 2021-09-03 Brain-computer interface control method, device and equipment for mechanical arm Active CN113752259B (en)


Publications (2)

Publication Number Publication Date
CN113752259A (en) 2021-12-07
CN113752259B (en) 2022-08-05





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant