CN113448440A - Asynchronous brain-computer interface construction method based on potential fusion - Google Patents

Asynchronous brain-computer interface construction method based on potential fusion Download PDF

Info

Publication number
CN113448440A
CN113448440A (application CN202110770060.1A)
Authority
CN
China
Prior art keywords
sample
potential
hyperplane
computer interface
construction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110770060.1A
Other languages
Chinese (zh)
Inventor
李梦凡
宋智勇
杨光
廖文喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN202110770060.1A priority Critical patent/CN113448440A/en
Publication of CN113448440A publication Critical patent/CN113448440A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/372Analysis of electroencephalograms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses
    • A61B5/378Visual stimuli
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Biophysics (AREA)
  • Neurosurgery (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Dermatology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a potential-fusion-based asynchronous brain-computer interface construction method, which comprises the following steps: S1, constructing interfaces corresponding to the working state and the idle state of the asynchronous brain-computer interface, and inducing the related potentials; S2, collecting, by the system, the related potentials of the constructed interfaces in the two states; and S3, fusing and identifying the related potentials through a linear discriminant criterion algorithm to obtain the system state and the output command. The invention has the beneficial effects that, by evoking both event-related potentials and transient visual evoked potentials within a single oddball paradigm, the asynchronous brain-computer interface construction method can effectively identify the system state and output commands.

Description

Asynchronous brain-computer interface construction method based on potential fusion
Technical Field
The invention belongs to the technical field of biomedical engineering brain-computer interfaces, and particularly relates to a potential fusion-based asynchronous brain-computer interface construction method.
Background
A brain-computer interface establishes, by means of devices such as a computer, an information communication pathway between the human brain and external equipment that does not depend on peripheral nerves. The control intention of a subject is obtained by decoding electroencephalogram (EEG) signals and converted into control instructions for the external equipment, finally realizing interaction between the brain and a robot. A traditional brain-computer interface requires the subject and the system to be synchronized in real time, which limits the subject's freedom in time and makes it difficult for the system to adapt to complex and changing environments. An asynchronous brain-computer interface, by contrast, allows the subject to switch at will between a working state and an idle state, so that the subject can react at any time according to the actual situation, improving the flexibility of the system.
To date, the art of constructing asynchronous brain-computer interfaces has a number of deficiencies. For example, constructing an asynchronous brain-computer interface system with two or more paradigms increases the complexity of the system; in an asynchronous brain-computer interface system built on a single potential, the idle state and the working state are difficult to distinguish; and identifying the system using multiple potentials evoked by multiple paradigms greatly increases the task burden on the subject. These problems seriously hinder the development of user-friendly asynchronous brain-computer interfaces.
The invention provides an asynchronous brain-computer interface construction method based on event-related potentials and transient visual evoked potentials, in which both potentials are evoked simultaneously using only a single oddball paradigm, and a method of fusing the two potentials is designed to identify the state of the asynchronous system and output instructions. The construction method is feasible, can effectively identify the system state and output instructions, reduces the experimental burden on the subject, and increases the human-machine friendliness of the brain-computer interface.
Disclosure of Invention
In view of this, the present invention aims to provide a potential-fusion-based asynchronous brain-computer interface construction method, so as to solve the following problems: that constructing an asynchronous brain-computer interface system with two or more paradigms increases system complexity; that an asynchronous brain-computer interface system built on a single potential has difficulty distinguishing the idle state from the working state; and that identifying the system with multiple potentials evoked by multiple paradigms greatly increases the subject's task burden.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
an asynchronous brain-computer interface construction method based on potential fusion comprises the following steps:
s1, constructing interfaces corresponding to the working state and the idle state of the asynchronous brain-computer interface, and inducing related potentials;
s2, collecting the related potential of the constructed interface in two states by the system;
and S3, fusing and identifying the related potentials through a linear judgment criterion algorithm to obtain the state and the output instruction of the system.
Further, in step S1, the working-state and idle-state interfaces of the asynchronous brain-computer interface are constructed by inputting the image of the video interface into the visual instruction interface of the oddball paradigm.
Further, the process of inducing the related potentials is as follows: the pictures in the video interface serve as the corresponding control action instructions; in the visual instruction interface the pictures flash according to an oddball row-column flashing design, and the corresponding related potentials are induced when the corresponding control action instruction is triggered.
Further, the related potentials include event-related potentials and transient visual evoked potentials.
Further, the process of fusing and identifying the related potentials by the linear discriminant criterion algorithm comprises the following steps:
S301, after preprocessing and feature extraction of the two types of collected potentials, obtaining the distance from each sample of each type of potential to the hyperplane using a distance formula;
S302, sorting the samples by the linear discriminant criterion algorithm, and obtaining the maximum distance from the samples of each type of potential to the hyperplane from the sorted sample-to-hyperplane distances;
and S303, converting the distance from each sample to the hyperplane and the maximum distance to the hyperplane into probability values, and comparing the probability values of the rows and the columns respectively to obtain the system state and the instruction.
Further, the distance formula in step S301 is as follows:
f(x) = w^T x + w_0, where w^T represents the hyperplane projection vector, x represents the sample data, and w_0 is a constant.
Further, the formula for the maximum distance from the samples of each type of potential to the hyperplane in step S301 is as follows:
the maximum distance of each class is found from the distance value f(x) of each sample to the hyperplane:
f_max^1 = max |f(X_train^1)| (2)
f_max^2 = max |f(X_train^2)| (3)
where f(X) represents the distance values from the samples to the hyperplane, class represents the sample class (1, 2), and X_train represents the sample set used to train the hyperplane.
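By way of illustration only (this sketch is not part of the claimed method), the distance and per-class maximum-distance computations could be realized as follows; the Fisher-style training routine, the function names and the regularization term are assumptions made for the example:

```python
import numpy as np

def train_flda(X1, X2):
    """Fit a Fisher linear discriminant hyperplane f(x) = w^T x + w0.

    X1, X2: (n_samples, n_features) arrays for class 1 and class 2.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix, regularized for numerical stability.
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1) \
       + 1e-6 * np.eye(X1.shape[1])
    w = np.linalg.solve(Sw, m1 - m2)   # Fisher projection vector
    w0 = -0.5 * w @ (m1 + m2)          # bias placing the boundary between the means
    return w, w0

def distance_to_hyperplane(X, w, w0):
    """Distance value f(x) = w^T x + w0 for each sample (row) of X."""
    return X @ w + w0

def class_max_distance(X_train_class, w, w0):
    """Maximum |f(x)| over one class of training samples, as in eqs. (2)-(3)."""
    return np.abs(distance_to_hyperplane(X_train_class, w, w0)).max()
```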
Further, the preprocessing and feature extraction in step S301 are as follows:
the preprocessing of the event-related potential mainly comprises data interception, band-pass filtering, baseline correction and data down-sampling;
the preprocessing of the transient visual evoked potential comprises data interception and band-pass filtering;
the feature extraction of the event-related potential uses a superposition-averaging method to reduce the interference noise in the signal;
and the feature extraction of the transient visual evoked potential uses superposition averaging combined with a fast algorithm of the discrete Fourier transform.
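As a non-limiting illustration of this preprocessing and feature extraction, a minimal Python sketch might look as follows; the pass bands, the baseline window and the down-sampling factor are assumed values rather than parameters disclosed by the embodiment:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate (Hz), matching the experiments described below

def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter along the last (time) axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def erp_feature(epochs, fs=FS, baseline_ms=100, down=4):
    """ERP branch: band-pass, baseline-correct, down-sample, then
    superposition-average across trials to suppress interference noise.

    epochs: (n_trials, n_channels, n_samples) intercepted data segments.
    """
    x = bandpass(epochs, 0.5, 10.0, fs)                  # assumed ERP band
    n0 = int(fs * baseline_ms / 1000)
    x = x - x[..., :n0].mean(axis=-1, keepdims=True)     # baseline correction
    return x[..., ::down].mean(axis=0)                   # average over trials

def tsvep_feature(epochs, fs=FS):
    """TSVEP branch: band-pass, superposition-average, then DFT magnitudes,
    computed with the FFT (a fast algorithm for the discrete Fourier transform)."""
    avg = bandpass(epochs, 1.0, 30.0, fs).mean(axis=0)   # assumed TSVEP band
    return np.abs(np.fft.rfft(avg, axis=-1))
```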
Further, in step S303, the distance from each sample to the hyperplane and the maximum distance from the samples to the hyperplane are converted into probability values, and the probability values of the rows and columns are compared, as follows:
the distance value from a sample to the hyperplane and the maximum distance value from the samples to the hyperplane are converted into a probability value through the following probability formula, applied separately to the two types of potentials:
P = f(x) / f_max^class (4)
The probability values of the rows and columns are then compared:
P_i = P_i^ERP + P_i^TSVEP, (i = 1, ..., 8)
P_row = max(P_i), (i = 1, 2, 3, 4) (5)
P_column = max(P_i), (i = 5, 6, 7, 8) (6)
If P_row > 0 and P_column > 0, the target stimulus is the one at (Row_max, Column_max); otherwise the idle state is output (7)
where P_i^ERP is the probability value converted from the event-related potential sample generated by the i-th stimulus, P_i^TSVEP is the probability value converted from the transient visual evoked potential sample generated by the i-th stimulus, P_i is the sum of P_i^ERP and P_i^TSVEP and represents the probability of the i-th stimulus, P_row is the maximum probability of the row stimuli, and P_column is the maximum probability of the column stimuli.
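For illustration, a minimal sketch of the probability conversion and row/column comparison of eqs. (4) to (6), using made-up distance values and training maxima:

```python
import numpy as np

def to_probability(f_vals, f_max):
    """Eq. (4) as given above: scale distances by the training maximum so that
    the ERP and TSVEP scores share the same order of magnitude."""
    return np.asarray(f_vals, dtype=float) / f_max

# Made-up distance values for the 8 stimuli (4 rows first, then 4 columns):
f_erp = [1.2, -0.4, -0.6, -0.3, -0.5, 0.9, -0.2, -0.7]
f_tsvep = [0.8, 0.5, 0.4, 0.6, 0.5, 0.9, 0.3, 0.4]
p_i = to_probability(f_erp, 1.5) + to_probability(f_tsvep, 1.0)  # fused P_i
p_row, p_column = p_i[:4].max(), p_i[4:].max()  # eqs. (5) and (6)
print(p_row > 0 and p_column > 0)  # True here, i.e. the working state per eq. (7)
```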
Compared with the prior art, the asynchronous brain-computer interface construction method based on the potential fusion has the following advantages:
(1) the construction method is an asynchronous brain-computer interface construction method based on event-related potentials and transient visual evoked potentials; both potentials are evoked within the oddball paradigm, so the system state can be effectively identified and instructions effectively output;
(2) the construction method of the invention reduces the task burden of a testee in the experimental process, and simplifies the complexity of an asynchronous brain-computer interface system;
(3) according to the construction method, the fusion of the event-related potential and the transient visual evoked potential is used as a basis for judging the state of the asynchronous brain-computer interface system and outputting an instruction, so that the error of a single potential is avoided, and the robustness of the asynchronous brain-computer interface system is improved;
(4) in the construction method, the P-FLDA algorithm converts the classification results of the event-related potential and the transient visual evoked potential into probability values, and the two probability values are added to complete the fusion of the two potentials; the fused value serves as the basis for judging the state of the asynchronous brain-computer interface system and outputting instructions. The P-FLDA algorithm effectively improves the identification accuracy of the system state and instructions.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a diagram of a visual instruction-inducing interface according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of visual command interface evoked event related potentials and transient visual evoked potentials according to an embodiment of the present invention;
FIG. 3 is a diagram of an intelligent vehicle video interface according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the identification of the brain working and idle states of the asynchronous brain-computer interface according to an embodiment of the present invention;
FIG. 5 shows the electrode placement positions of the 32-lead electrode cap used in the construction method according to an embodiment of the invention;
FIG. 6 shows the experimental environment of the construction method according to an embodiment of the present invention;
FIG. 7 is an experimental flow chart of the construction method according to the embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in fig. 1 to 7, a method for constructing an asynchronous brain-computer interface based on potential fusion includes the following steps:
s1, constructing interfaces corresponding to the working state and the idle state of the asynchronous brain-computer interface, and inducing related potentials;
s2, collecting the related potential of the constructed interface in two states by the system;
and S3, fusing and identifying the related potentials through a linear judgment criterion algorithm to obtain the state and the output instruction of the system.
In step S1, the working-state and idle-state interfaces of the asynchronous brain-computer interface are constructed by inputting the image of the video interface into the visual instruction interface of the oddball paradigm.
The process of inducing the related potentials is as follows: the pictures in the video interface serve as the corresponding control action instructions; in the visual instruction interface the pictures flash according to an oddball row-column flashing design, and the corresponding related potentials are induced when the corresponding control action instruction is triggered.
The related potentials include event-related potentials and transient visual evoked potentials; the state of the asynchronous brain-computer interface system is identified using the signal features of the transient visual evoked potential, and instructions are output using the event-related potential.
Adopting the oddball paradigm to construct the asynchronous brain-computer interface system greatly simplifies the complexity of the system: the subject only needs to learn the single oddball paradigm and complete the experimental task as required, which reduces the subject's learning difficulty and experimental burden. The visual instruction interface corresponds to the working state, and the intelligent vehicle video interface corresponds to the idle state. The asynchronous brain-computer interface system outputs the instruction corresponding to the subject's intention in the working state and outputs no instruction in the idle state; the subject can switch arbitrarily between the working and idle states according to the actual situation, achieving free control over when instructions are output while improving the flexibility of the system. The event-related potential and the transient visual evoked potential are probability-fused by the P-FLDA algorithm (Fisher linear discriminant criterion algorithm), and the two potentials jointly serve as features for identifying the asynchronous brain-computer interface system state and the output instruction. The fused potential of the event-related potential and the transient visual evoked potential can effectively identify the system state and output instructions and, compared with traditional methods, improves identification accuracy to a certain extent.
The process of fusing and identifying the related potentials by the Fisher linear discriminant criterion algorithm comprises the following steps:
S301, after preprocessing and feature extraction of the two types of collected potentials, obtaining the distance from each sample of each type of potential to the hyperplane using a distance formula;
S302, sorting the samples by the linear discriminant criterion algorithm, and obtaining the maximum distance from the samples of each type of potential to the hyperplane from the sorted sample-to-hyperplane distances;
and S303, converting the distance from each sample to the hyperplane and the maximum distance to the hyperplane into probability values, and comparing the probability values of the rows and the columns respectively to obtain the system state and the instruction.
The distance formula in step S301 is as follows:
f(x) = w^T x + w_0, where w^T represents the hyperplane projection vector, x represents the sample data, and w_0 is a constant.
The formula for the maximum distance from the samples of each type of potential to the hyperplane in step S301 is as follows:
the maximum distance of each class is found from the distance value f(x) of each sample to the hyperplane:
f_max^1 = max |f(X_train^1)| (2)
f_max^2 = max |f(X_train^2)| (3)
where f(X) represents the distance values from the samples to the hyperplane, class represents the sample class (1, 2), and X_train represents the sample set used to train the hyperplane.
The preprocessing and feature extraction in step S301 are as follows:
the preprocessing of the event-related potential mainly comprises data interception, band-pass filtering, baseline correction and data down-sampling;
the preprocessing of the transient visual evoked potential comprises data interception and band-pass filtering;
the feature extraction of the event-related potential uses a superposition-averaging method to reduce the interference noise in the signal;
and the feature extraction of the transient visual evoked potential uses superposition averaging combined with a fast algorithm of the discrete Fourier transform.
In step S303, the distance from each sample to the hyperplane and the maximum distance from the samples to the hyperplane are converted into probability values, and the probability values of the rows and columns are compared, as follows:
the distance value from a sample to the hyperplane and the maximum distance value from the samples to the hyperplane are converted into a probability value through the following probability formula, applied separately to the two types of potentials:
P = f(x) / f_max^class (4)
The probability values of the rows and columns are then compared:
P_i = P_i^ERP + P_i^TSVEP, (i = 1, ..., 8)
P_row = max(P_i), (i = 1, 2, 3, 4) (5)
P_column = max(P_i), (i = 5, 6, 7, 8) (6)
If P_row > 0 and P_column > 0, the target stimulus is the one at (Row_max, Column_max); otherwise the idle state is output (7)
where P_i^ERP is the probability value converted from the event-related potential sample generated by the i-th stimulus, P_i^TSVEP is the probability value converted from the transient visual evoked potential sample generated by the i-th stimulus, P_i is the sum of P_i^ERP and P_i^TSVEP and represents the probability of the i-th stimulus, P_row is the maximum probability of the row stimuli, and P_column is the maximum probability of the column stimuli.
The specific implementation is as follows:
(1) asynchronous brain-computer interface design
The visual stimulation material of the invention is the control object of the asynchronous brain-computer interface, namely an intelligent vehicle, for which there are 16 intelligent-vehicle action pictures: intelligent vehicle forward, intelligent vehicle backward, intelligent vehicle turn left, intelligent vehicle turn right, mechanical arm forward, mechanical arm backward, mechanical arm turn left, mechanical arm turn right, mechanical arm open, mechanical arm close, camera forward, camera backward, camera turn right, camera turn left, ultrasonic obstacle avoidance and infrared obstacle avoidance; the content of each picture represents the corresponding action instruction for controlling the intelligent vehicle.
The visual instruction interface adopts an oddball row-column flashing design, and the flashing interface is made with the MATLAB-based Psychtoolbox; the interface is shown in FIG. 1(a). The pictures of the 16 intelligent-vehicle actions are arranged in a 4 × 4 grid, and the rows and columns flash randomly one by one; FIG. 1(b) and (c) show the third row and the third column flashing, respectively. The flashing picture is the intelligent-vehicle action picture; each row or column flashes for 200 ms with a 100 ms interval between flashes, so the stimulation frequency is 3.3 Hz. Apart from the flashing pictures, the remaining pictures are masking pictures (a white circle in the middle on a black background), which replace the intelligent-vehicle pictures after they flash. An "experimental period" is defined as one repetition in which every row and every column flashes exactly once; an "experimental unit" is a process consisting of several periods. In the experiment, in each experimental unit the subject is asked to gaze at the picture corresponding to the control instruction to be completed throughout the flashing stimulation. The stimulus gazed at by the subject is called the target stimulus, and the other stimuli are called non-target stimuli. While the subject gazes at the visual target stimulus, the corresponding brain potentials are evoked; FIG. 2 is a schematic diagram of the event-related potentials and transient visual evoked potentials evoked by the visual instruction interface. The rows and columns in the visual instruction interface flash randomly; the blue blocks represent row or column flashes that do not contain the target stimulus, and the red blocks represent row or column flashes that contain the target stimulus. The appearance of a red block indicates that the interface has flashed the picture the subject is gazing at, evoking an event-related potential with time-domain features in the brain; the continuous appearance of red and blue blocks forms a visual stimulus in which all rows and columns flash at a fixed frequency, evoking transient visual evoked potentials with frequency-domain features.
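To clarify the timing logic described above, the following Python sketch reproduces the randomized row/column flash schedule with the stated 200 ms flash and 100 ms interval; the actual interface is implemented with the MATLAB-based Psychtoolbox, so this sketch is illustrative only:

```python
import random

FLASH_MS, GAP_MS = 200, 100  # flash duration and interval from the description

def period_sequence():
    """One experimental period: each of the 4 rows and 4 columns flashes
    exactly once, in random order (oddball row-column design)."""
    events = [("row", i) for i in range(4)] + [("col", j) for j in range(4)]
    random.shuffle(events)
    return events

def unit_schedule(n_periods=10):
    """One experimental unit: flash onset times (ms) over n_periods periods.
    Each flash cycle lasts 300 ms, giving the 3.3 Hz stimulation rate."""
    t, schedule = 0, []
    for _ in range(n_periods):
        for kind, idx in period_sequence():
            schedule.append((t, kind, idx))
            t += FLASH_MS + GAP_MS
    return schedule

# A target picture lies in one row and one column, so over 10 periods it
# flashes 20 times, matching the experimental description.
```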
In the intelligent vehicle video interface, video is collected by a camera on top of the intelligent vehicle with a resolution of 640 × 480. The camera captures the intelligent vehicle's viewpoint in real time and transmits its surroundings to the computer display through wireless communication, providing the subject with an intelligent vehicle video interface whose size on the display is the same as that of the visual instruction interface. The picture transmitted back by the intelligent vehicle is shown in FIG. 3; the video shot in the experiment contains obstacles such as tables, chairs and boxes placed in the laboratory.
(2) Method for identifying brain working and idle states of asynchronous brain-computer interface
The process of identifying the working and idle states of the brain in the asynchronous brain-computer interface is shown in FIG. 4. First, the system collects the EEG signals and extracts from them the Event-Related Potentials (ERP), which carry time-domain features, and the Transient Visual Evoked Potentials (TSVEP), which carry frequency-domain features. The extracted sample values are then converted into probability values by the P-FLDA algorithm. Finally, within one experimental period, the probability values of the rows and of the columns are compared separately: if the maximum values of both the rows and the columns are greater than 0, the brain is in the working state; the maximum row value and the maximum column value are then located, and the target stimulus gazed at by the subject is obtained from the row number and column number of these maxima. Otherwise, the brain is in the idle state. The specific process is as follows:
First, preprocessing and feature extraction are performed on the collected event-related potentials and transient visual evoked potentials, and then the distance from each sample to the hyperplane is obtained with the FLDA algorithm. The formula for this distance value is as follows:
f(x) = w^T x + w_0 (1)
Different sample sets yield distances of different magnitudes, and P-FLDA unifies them to the same scale through the distances of the training samples. The maximum distance of each class is found from the distance value f(x) of each sample to the hyperplane:
f_max^1 = max |f(X_train^1)| (2)
f_max^2 = max |f(X_train^2)| (3)
Then f(x) and f_max^class are combined and converted into a probability:
P = f(x) / f_max^class (4)
P_i = P_i^ERP + P_i^TSVEP, (i = 1, ..., 8)
P_row = max(P_i), (i = 1, 2, 3, 4) (5)
P_column = max(P_i), (i = 5, 6, 7, 8) (6)
P_i is the sum of P_i^ERP and P_i^TSVEP and is, at the same time, the probability of the i-th stimulus. A positive P_i^ERP indicates that the i-th stimulus is classified as a target stimulus; a negative P_i^ERP indicates that it is classified as a non-target stimulus. A positive P_i^TSVEP represents the working state; a negative P_i^TSVEP indicates the idle state. The first four stimuli (i = 1, 2, 3, 4) are the row stimuli, and the last four (i = 5, 6, 7, 8) are the column stimuli. In the integration step, the maximum probabilities of the row stimuli (P_row) and of the column stimuli (P_column) are found; the corresponding row and column are called Row_max and Column_max. If P_row and P_column are both positive, the stimulus at Row_max and Column_max is judged to be the target stimulus; if one or both of the maximum probabilities are negative, the integration step outputs the idle state as the result (7).
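Purely as an illustrative sketch of this integration step (the 4 × 4 ordering of the 16 commands below is a hypothetical assumption, not the layout of FIG. 1):

```python
import numpy as np

def decode_state(p_i, commands):
    """Integration step per eqs. (5) to (7): output the working state only when
    both the best row probability and the best column probability are positive."""
    p = np.asarray(p_i, dtype=float)
    p_row, p_column = p[:4].max(), p[4:].max()
    if p_row > 0 and p_column > 0:
        row_max = int(p[:4].argmax())      # Row_max (0-based)
        column_max = int(p[4:].argmax())   # Column_max (0-based)
        return "working", commands[row_max][column_max]
    return "idle", None                    # no instruction in the idle state

# Hypothetical 4 x 4 command grid:
commands = [
    ["vehicle forward", "vehicle backward", "vehicle left", "vehicle right"],
    ["arm forward", "arm backward", "arm left", "arm right"],
    ["arm open", "arm close", "camera forward", "camera backward"],
    ["camera right", "camera left", "ultrasonic avoidance", "infrared avoidance"],
]
state, cmd = decode_state([0.9, -0.2, -0.1, -0.3, -0.2, 0.7, -0.4, -0.1], commands)
print(state, cmd)  # prints: working vehicle backward
```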
The method combines the event-related potentials and transient visual evoked potentials evoked under the oddball paradigm, and effectively identifies the state and output instruction of the asynchronous brain-computer interface system.
The EEG acquisition platform uses a Neuroscan EEG acquisition and analysis system, mainly comprising a SynAmps2 amplifier, an acquisition panel, an electrode cap and the Curry8 software system. During acquisition, the system ensures that there is no phase deviation between the signals of the channels through its start and time-locking functions. The channel data are processed by a 24-bit A/D chip and transmitted to the computer running Curry8 through a USB 2.0 interface. A 32-lead electrode cap is used; the electrode placement is shown in FIG. 5. The EEG experiments are carried out in a quiet, well sound-insulated laboratory at a room temperature of 25-26 °C, with a comfortable seat; the computer display is 23.3 inches with a resolution of 1920 × 1080 pixels. Before the experiment, the subject is instructed to keep the body as still as possible and to minimize other movements. The subject is required to have sufficient sleep before the experiment to ensure a good mental state during the experiment, and to keep the hair clean to avoid waveform distortion caused by excessive impedance between the electrodes and the scalp; the electrode impedance is required to be below 5 kΩ, and the experimental sampling rate is 1000 Hz.
Two display screens with the same resolution are placed directly in front of the subject: the left screen shows the visual instruction interface, and the right screen shows the video picture transmitted back by the intelligent vehicle, as shown in FIG. 6. During the experiment, when gazing at the visual instruction interface, each picture represents a control instruction; the subject gazes at the 16 control instructions in turn to complete the corresponding instruction-output actions. Completing one instruction output constitutes one experimental unit, and each experimental unit comprises 10 experimental periods, i.e. the picture represented by the target stimulus flashes 20 times in one experimental unit, and the subject silently counts each flash of the target stimulus. When watching the video picture in front of the intelligent vehicle, the subject is required to watch the content of the picture attentively, with the gaze always following the picture, observing the scene ahead from the intelligent vehicle's first-person view. During the experiment, the subject first gazes at a control instruction in the visual instruction interface; after completing one instruction output (i.e. finishing one experimental unit), the subject watches the intelligent vehicle video image for the duration of one experimental unit, and the subject's viewpoint alternates in this way. The experimental flow is shown in FIG. 7, where the black and white blocks represent the experimental units spent gazing at the visual instruction interface and watching the intelligent vehicle picture, respectively. The whole experiment takes about 20 minutes, and the subject is given sufficient preparation time before the experiment begins.
The end of the experiment marks the end of the EEG acquisition; the EEG signals are then processed as described in (2) of this detailed description.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. An asynchronous brain-computer interface construction method based on potential fusion is characterized by comprising the following steps:
s1, constructing interfaces corresponding to the working state and the idle state of the asynchronous brain-computer interface, and inducing related potentials;
s2, collecting the related potential of the constructed interface in two states by the system;
and S3, fusing and identifying the related potentials through a linear judgment criterion algorithm to obtain the state and the output instruction of the system.
2. The asynchronous brain-computer interface construction method based on potential fusion according to claim 1, characterized in that: in step S1, the working-state and idle-state interfaces of the asynchronous brain-computer interface are constructed by inputting the image of the video interface into the visual instruction interface of the oddball paradigm.
3. The asynchronous brain-computer interface construction method based on potential fusion according to claim 2, characterized in that the process of inducing the related potentials is as follows: the pictures in the video interface serve as the corresponding control action instructions; in the visual instruction interface the pictures flash according to an oddball row-column flashing design, and the corresponding related potentials are induced when the corresponding control action instruction is triggered.
4. The asynchronous brain-computer interface construction method based on potential fusion according to claim 3, characterized in that: the related potentials include event-related potentials and transient visual evoked potentials.
5. The asynchronous brain-computer interface construction method based on potential fusion according to claim 1, characterized in that the process of fusing and identifying the related potentials by the linear discriminant criterion algorithm comprises the following steps:
S301, after preprocessing and feature extraction of the two types of collected potentials, obtaining the distance from each sample of each type of potential to the hyperplane using a distance formula;
S302, sorting the samples by the linear discriminant criterion algorithm, and obtaining the maximum distance from the samples of each type of potential to the hyperplane from the sorted sample-to-hyperplane distances;
and S303, converting the distance from each sample to the hyperplane and the maximum distance to the hyperplane into probability values, and comparing the probability values of the rows and the columns respectively to obtain the system state and the instruction.
6. The asynchronous brain-computer interface construction method based on potential fusion according to claim 5, characterized in that the distance formula in step S301 is as follows:
f(x) = w^T x + w_0, where w^T represents the hyperplane projection vector, x represents the sample data, and w_0 is a constant.
7. The asynchronous brain-computer interface construction method based on potential fusion according to claim 5, characterized in that the formula for the maximum distance from the samples of each type of potential to the hyperplane in step S301 is as follows:
the maximum distance of each class is found from the distance value f(x) of each sample to the hyperplane:
f_max^1 = max |f(X_train^1)| (2)
f_max^2 = max |f(X_train^2)| (3)
where f(X) represents the distance values from the samples to the hyperplane, class represents the sample class, and X_train represents the sample set used to train the hyperplane.
8. The asynchronous brain-computer interface construction method based on potential fusion according to claim 5, characterized in that the preprocessing and feature extraction in step S301 are as follows:
the preprocessing of the event-related potential mainly comprises data interception, band-pass filtering, baseline correction and data down-sampling;
the preprocessing of the transient visual evoked potential comprises data interception and band-pass filtering;
the feature extraction of the event-related potential uses a superposition-averaging method to reduce the interference noise in the signal;
and the feature extraction of the transient visual evoked potential uses superposition averaging combined with a fast algorithm of the discrete Fourier transform.
9. The asynchronous brain-computer interface construction method based on potential fusion according to claim 5, characterized in that, in step S303, the distance from each sample to the hyperplane and the maximum distance from the samples to the hyperplane are converted into probability values, and the probability values of the rows and columns are compared, as follows:
the distance value from a sample to the hyperplane and the maximum distance value from the samples to the hyperplane are converted into a probability value through the following probability formula, applied separately to the two types of potentials:
P = f(x) / f_max^class (4)
the probability values of the rows and columns are then compared:
P_i = P_i^ERP + P_i^TSVEP, (i = 1, ..., 8)
P_row = max(P_i), (i = 1, 2, 3, 4) (5)
P_column = max(P_i), (i = 5, 6, 7, 8) (6)
if P_row > 0 and P_column > 0, the target stimulus is the one at (Row_max, Column_max); otherwise the idle state is output (7)
where P_i^ERP is the probability value converted from the event-related potential sample generated by the i-th stimulus, P_i^TSVEP is the probability value converted from the transient visual evoked potential sample generated by the i-th stimulus, P_i is the sum of P_i^ERP and P_i^TSVEP and represents the probability of the i-th stimulus, P_row is the maximum probability of the row stimuli, and P_column is the maximum probability of the column stimuli.
CN202110770060.1A 2021-07-07 2021-07-07 Asynchronous brain-computer interface construction method based on potential fusion Withdrawn CN113448440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110770060.1A CN113448440A (en) 2021-07-07 2021-07-07 Asynchronous brain-computer interface construction method based on potential fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110770060.1A CN113448440A (en) 2021-07-07 2021-07-07 Asynchronous brain-computer interface construction method based on potential fusion

Publications (1)

Publication Number Publication Date
CN113448440A true CN113448440A (en) 2021-09-28

Family

ID=77815324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110770060.1A Withdrawn CN113448440A (en) 2021-07-07 2021-07-07 Asynchronous brain-computer interface construction method based on potential fusion

Country Status (1)

Country Link
CN (1) CN113448440A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843377A (en) * 2016-03-17 2016-08-10 天津大学 Hybrid brain-computer interface based on asynchronous parallel induction strategy
CN110811613A (en) * 2019-11-22 2020-02-21 河北工业大学 Method for improving event-related potential signal-to-noise ratio based on European Debao and DMST paradigm fusion
CN111803066A (en) * 2020-07-14 2020-10-23 河北工业大学 Double stimulation method for visual induction brain-computer interface based on coded modulation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843377A (en) * 2016-03-17 2016-08-10 天津大学 Hybrid brain-computer interface based on asynchronous parallel induction strategy
CN110811613A (en) * 2019-11-22 2020-02-21 河北工业大学 Method for improving event-related potential signal-to-noise ratio based on European Debao and DMST paradigm fusion
CN111803066A (en) * 2020-07-14 2020-10-23 河北工业大学 Double stimulation method for visual induction brain-computer interface based on coded modulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宫铭鸿: "Asynchronous brain-computer interface fusing event-related potentials and visual evoked potentials", CNKI Master's Theses Database (《中国知网硕士学位论文数据库》) *

Similar Documents

Publication Publication Date Title
CN106214391B (en) Intelligent nursing bed based on brain-computer interface and control method thereof
Gerson et al. Cortically coupled computer vision for rapid image search
Pfurtscheller et al. 15 years of BCI research at Graz University of Technology: current projects
CN108415554B (en) Brain-controlled robot system based on P300 and implementation method thereof
CN105549743A (en) Robot system based on brain-computer interface and implementation method
CN111930238B (en) Brain-computer interface system implementation method and device based on dynamic SSVEP (secure Shell-and-Play) paradigm
CN114424945B (en) Brain wave biological feature recognition system and method based on random graphic image flash
CN112162634A (en) Digital input brain-computer interface system based on SEEG signal
Lo et al. Novel non-contact control system for medical healthcare of disabled patients
Li et al. An adaptive P300 model for controlling a humanoid robot with mind
Mazurek et al. Utilizing high-density electroencephalography and motion capture technology to characterize sensorimotor integration while performing complex actions
Wang et al. P300 brain-computer interface design for communication and control applications
Gong et al. An idle state-detecting method based on transient visual evoked potentials for an asynchronous ERP-based BCI
CN114601476A (en) EEG signal emotion recognition method based on video stimulation
Zhang et al. Decoding coordinated directions of bimanual movements from EEG signals
Scherer et al. Kinect-based detection of self-paced hand movements: enhancing functional brain mapping paradigms
Farmaki et al. Application of dry EEG electrodes on low-cost SSVEP-based BCI for robot navigation
CN101339413B (en) Switching control method based on brain electric activity human face recognition specific wave
Wang et al. An eye tracking and brain–computer interface-based human–environment interactive system for amyotrophic lateral sclerosis patients
CN113082448A (en) Virtual immersion type autism children treatment system based on electroencephalogram signal and eye movement instrument
CN117612710A (en) Medical diagnosis auxiliary system based on electroencephalogram signals and artificial intelligence classification
CN113359991A (en) Intelligent brain-controlled mechanical arm auxiliary feeding system and method for disabled people
CN113448440A (en) Asynchronous brain-computer interface construction method based on potential fusion
O'Doherty et al. Exploring gaze-motor imagery hybrid brain-computer interface design
Park et al. Application of EEG for multimodal human-machine interface

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210928

WW01 Invention patent application withdrawn after publication