CN112328072A - Multi-mode character input system and method based on electroencephalogram and electrooculogram - Google Patents

Publication number
CN112328072A
Authority
CN
China
Prior art keywords
signal
electroencephalogram
data
interface
eye
Prior art date
Legal status
Pending
Application number
CN202011074925.2A
Other languages
Chinese (zh)
Inventor
潘家辉
唐秀雯
李享运
陈广源
王帆
Current Assignee
South China Normal University
Original Assignee
South China Normal University
Priority date
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202011074925.2A priority Critical patent/CN112328072A/en
Publication of CN112328072A publication Critical patent/CN112328072A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to a multi-modal character input system and method based on electroencephalogram (EEG) and electrooculogram (EOG) signals, comprising: a stimulation display unit for displaying characters and controlling their flickering; a signal acquisition unit for acquiring and preprocessing the electroencephalogram signals and packaging them into electroencephalogram encapsulation data; a signal analysis unit for parsing the encapsulated data, identifying signals, and outputting control instructions according to the identification results; and a result output unit for displaying the output result. The control instructions include a blink instruction, which controls the selection and withdrawal of characters, and a closed-eye instruction, which controls the suspension and resumption of the system. The stimulation display unit designs its stimulation paradigm using an option coding scheme based on pinyin syllables. The invention offers a simple control-source principle, low signal acquisition cost, high practicability, and a comparatively high input rate and accuracy.

Description

Multi-mode character input system and method based on electroencephalogram and electrooculogram
Technical Field
The invention relates to the field of biomedical signal processing and human-computer interaction, and in particular to a multi-modal character input system and method based on electroencephalography (EEG) and electrooculography (EOG).
Background
Millions of people worldwide suffer from severe motor dysfunction caused by stroke, spinal cord injury, amyotrophic lateral sclerosis, and similar conditions. In patients with severe dyskinesia, all motor systems except vision and the muscles around the eyes are impaired to varying degrees; such patients cannot move autonomously and have essentially lost the ability to communicate with the outside world. Although advances in medical care help motor-impaired patients live longer, their quality of life does not improve when they cannot communicate independently, and this also places a burden on their families and on society.
Barrier-free human-computer interaction offers such patients, who have lost their motor abilities, a useful alternative means of interacting with the outside world. Barrier-free technology studies how to help disabled people participate freely in social activities, while human-computer interaction research covers the design, evaluation, and implementation of interactive computing systems for people and the phenomena surrounding them. A barrier-free human-computer interaction system is a novel human-machine interface that requires no limb movement for communication, allowing users to communicate with the outside world independently through their own intentions. Barrier-free interaction technology that uses bioelectricity as its information carrier replaces the traditional hardware interface with bioelectrical signals so that the user can interact with external equipment. Because bioelectrical signals are rich in information about human activity, interaction based on them can effectively reflect a person's real intentions, help patients with severe dyskinesia overcome the difficulty of communicating with the outside world and of using information technology equipment such as computers, and give people with specific illnesses or disabilities comparable opportunities of use.
At present, character input systems based on bioelectricity have produced some mature results, but they still have the following shortcomings:
systems with a high character input rate and high accuracy typically rely on multi-electrode or high-precision acquisition equipment, which is very expensive. Most dyskinetic patients' families have already spent large sums on daily medical care and cannot afford such equipment, so these systems remain at the laboratory stage and cannot be widely applied in daily life;
if low-cost equipment is used to collect the electroencephalogram signals, applying the same research methods to such lower-precision data yields low system accuracy that falls short of the application level, so the system is not practical.
Disclosure of Invention
Based on this, the invention aims to provide a multi-modal character input system based on electroencephalogram and electrooculogram, which combines the electrooculogram signal and the electroencephalogram signal to realize multi-modal control, compensating for the limitations of a single signal source, reducing cost, and enhancing the practicability of the system.
A multimodal character input system based on electroencephalography and electrooculography, comprising:
the stimulation display unit is used for displaying characters used for character input and controlling the flickering of the characters;
the signal acquisition unit is used for acquiring electroencephalogram signals to acquire original acquisition signals, processing the original acquisition signals to acquire filtered electroencephalogram signals, and packaging the filtered electroencephalogram signals into electroencephalogram packaging data;
the signal analysis unit is used for parsing the electroencephalogram encapsulation data, identifying signals, and outputting control instructions according to the identification results; and
the result output unit is used for receiving the control instruction from the signal analysis unit, and displaying a corresponding output result according to the control instruction and in combination with the characters flickering in the stimulation display unit;
the control instructions comprise a blink instruction and a closed-eye instruction, the blink instruction being used to control the selection and withdrawal of characters, and the closed-eye instruction being used to control the suspension and resumption of the system;
the stimulation response module adopts an option coding mode based on pinyin syllables to design a stimulation paradigm.
The multi-modal character input system based on electroencephalogram and electrooculogram has a simple control-source principle, low signal acquisition cost, and high practicability, together with a comparatively high input rate and accuracy.
Furthermore, the signal acquisition unit comprises a signal extraction chip, a signal preprocessing module and a data packaging module;
the signal extraction chip is a single-channel EEG extraction chip which measures an EEG signal on the forehead of a person so as to acquire the original acquisition signal;
the signal preprocessing module converts the original acquisition signal from a time domain to a frequency domain by adopting fast Fourier transform, performs spectrum analysis, filtering and amplification on the signal converted into the frequency domain, and then performs inverse Fourier transform to restore the signal in the frequency domain into a time domain signal so as to obtain the filtered electroencephalogram signal;
the data packaging module packages the filtered electroencephalogram signals into a data packet with a preset format; the data packet comprises electroencephalogram signals of different frequency bands together with concentration and meditation degrees, and these data packets form the electroencephalogram encapsulation data.
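The preset packet format is not specified further in the text; purely as an illustration, the packaging step might be sketched as follows, where the field layout (eight band-power floats followed by two single-byte concentration/meditation values) is an assumption, not the actual format:

```python
import struct

# Hypothetical packet layout for the electroencephalogram encapsulation data:
# 8 band powers (e.g. delta, theta, low/high alpha, low/high beta, low/high
# gamma) as float32, then concentration and meditation as single bytes.
PACKET_FORMAT = "<8f2B"

def pack_eeg(band_powers, concentration, meditation):
    """Pack filtered EEG features into a fixed-format byte packet."""
    if len(band_powers) != 8:
        raise ValueError("expected 8 band-power values")
    return struct.pack(PACKET_FORMAT, *band_powers, concentration, meditation)

def unpack_eeg(packet):
    """Inverse of pack_eeg: recover the band powers and the two degree values."""
    *bands, concentration, meditation = struct.unpack(PACKET_FORMAT, packet)
    return bands, concentration, meditation
```

A real implementation would follow the actual packet specification of the extraction chip rather than this assumed layout.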
Further, the signal analysis unit comprises a data analysis module, a signal identification module and a signal learning module;
the data analysis module is used for parsing the electroencephalogram encapsulation data to obtain target data, the target data comprising the filtered electroencephalogram data and electroencephalogram Alpha-wave data;
the signal identification module compares the target data acquired in the data analysis module with a data threshold of a corresponding target signal so as to identify the corresponding target signal, wherein the target signal comprises a preset electroencephalogram signal and a preset electrooculogram signal, and the data threshold comprises a closed-eye Alpha threshold and a blink threshold;
when the target signal is identified as the preset electroencephalogram signal, a closed-eye instruction is output; when the target signal is identified as the preset electrooculogram signal, a blink instruction is output.
Further, the stimulation display unit comprises a stimulation paradigm interface for displaying characters;
the stimulation paradigm interface comprises a first-level interface, a second-level interface, a third-level interface and a fourth-level interface;
the primary interface is a syllable-set interface; according to the pinyin characteristics of Chinese characters, the syllable sets are divided into 3 options to be selected, namely the initial set, the final set, and the whole-syllable set;
the secondary interface is the specific-syllable selection interface, entered from the option selected in the primary interface, in which a specific syllable is chosen;
the third-level interface is the final-set interface, in which finals satisfying the grammar rules are selected;
the fourth-level interface is again the final-set interface, in which additional syllables are selected or the spelling is ended directly.
Further, the stimulation display unit further comprises a flicker control module, and the flicker control module is used for controlling the flicker of the characters in the stimulation paradigm interface;
the primary interface adopts a flashing paradigm in which only one option flashes at a time, and the 3 options to be selected flash in sequence;
the secondary interface is divided into 3 regions; one character from each region is selected to form a combination, the characters in a combination flash in quick succession at a minimal time interval, and after all characters in a combination have flashed, the combination is replaced and flashing resumes; this repeats until blink signals have been detected for two combinations that share the same character, which is taken as the target;
the third-level and fourth-level interfaces adopt the one-option-at-a-time flashing paradigm, and syllables that form common characters flash preferentially.
Further, the method for calculating the blink threshold value comprises the following steps:
Electroencephalogram data generated by the user's blinks are collected. Let the amplitude data of k natural blinks be N = {n_1, n_2, n_3, ..., n_k} and the amplitude data of k voluntary blinks be M = {m_1, m_2, m_3, ..., m_k}. Select the maximum value n_max of the natural blinks N and the minimum value m_min of the voluntary blinks M:
n_max = max{n_1, n_2, n_3, ..., n_k}
m_min = min{m_1, m_2, m_3, ..., m_k}
The condition for judging whether the current signal is a voluntary blink is:
A ≥ TH, with n_max ≤ TH ≤ m_min
where A is the amplitude of the current signal and TH is the threshold for judging the amplitude of a voluntary blink signal; in addition, the time T during which the current signal stays above the threshold must satisfy:
200 ms ≤ T ≤ 300 ms.
Further, the method for calculating the blink threshold additionally comprises:
after the user confirms that an output is correct, the corresponding signal is marked as a voluntary blink and recorded; after the user confirms that an output is wrong, the signal is marked as a non-voluntary blink and recorded; for every five recorded signals of the same type, the maximum and minimum values are selected to correct the threshold in real time.
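Under the reading that the threshold TH lies between the strongest natural blink and the weakest voluntary blink, the calculation and the five-sample real-time correction might be sketched as follows; placing TH at the midpoint, and the class and method names, are assumptions for illustration:

```python
class BlinkDetector:
    """Sketch of the blink-threshold calculation described above."""

    def __init__(self, natural, voluntary):
        # Assumed midpoint between the strongest natural blink and the
        # weakest voluntary blink (the populations are assumed separable).
        self.th = (max(natural) + min(voluntary)) / 2.0
        self._vol, self._non = [], []

    def record(self, amplitude, was_voluntary):
        """Log a user-confirmed output; every five same-type records,
        re-centre the threshold between the two populations."""
        bucket = self._vol if was_voluntary else self._non
        bucket.append(amplitude)
        if len(bucket) % 5 == 0 and self._vol and self._non:
            self.th = (max(self._non) + min(self._vol)) / 2.0

    def is_voluntary(self, amplitude, duration_ms):
        """A voluntary blink exceeds the amplitude threshold and stays
        above it for 200-300 ms."""
        return amplitude >= self.th and 200 <= duration_ms <= 300
```

This is a minimal sketch of the thresholding logic, not the patented implementation.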
Further, the method for calculating the eye-closing Alpha threshold value comprises the following steps:
Electroencephalogram Alpha-wave data generated while the user opens and closes the eyes are collected. Let the k collected Alpha total-energy values with eyes open be Z = {z_1, z_2, z_3, ..., z_k} and the k total-energy values with eyes closed be B = {b_1, b_2, b_3, ..., b_k}. Select the maximum value z_max of the eyes-open Alpha total-energy data Z and the minimum value b_min of the eyes-closed total-energy data B:
z_max = max{z_1, z_2, z_3, ..., z_k}
b_min = min{b_1, b_2, b_3, ..., b_k}
The condition for judging that the current user is resting with eyes closed is:
P ≥ P_TH, with z_max ≤ P_TH ≤ b_min
When a marked drop in power is detected, the user's eyes are judged to be open, the condition being:
P < P_TH
where P is the power over a given period of time and P_TH is the closed-eye Alpha threshold.
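A minimal sketch of the closed-eye Alpha detection, assuming the threshold P_TH is placed midway between the eyes-open maximum z_max and the eyes-closed minimum b_min (the midpoint choice is an assumption):

```python
def alpha_threshold(open_energies, closed_energies):
    """Place the closed-eye Alpha threshold between the maximum Alpha
    energy seen with eyes open and the minimum seen with eyes closed
    (Alpha power rises when the eyes close)."""
    z_max = max(open_energies)
    b_min = min(closed_energies)
    return (z_max + b_min) / 2.0

def eyes_closed(power, p_th):
    """Judge a closed-eye rest when Alpha power exceeds the threshold;
    a drop below it is judged as the eyes reopening."""
    return power >= p_th
```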
Based on the above multi-modal character input system, the invention further provides a multi-modal character input method based on electroencephalogram and electrooculogram, comprising the following steps:
displaying characters used for character input, and controlling the flicker of the characters;
acquiring an electroencephalogram signal, preprocessing it, and performing data encapsulation on the preprocessed signal, the result being called electroencephalogram encapsulation data;
parsing the electroencephalogram encapsulation data, identifying signals, and outputting control instructions according to the identification results; and displaying the input result;
the control instructions comprise a blink instruction and a closed-eye instruction, the blink instruction being used to control the selection and withdrawal of characters, and the closed-eye instruction being used to control the suspension and resumption of the system;
the characters comprise letters and letter combinations; the flashing mode of the letter or the letter combination obeys a preset stimulation paradigm, and the stimulation paradigm is designed by adopting an option coding mode based on pinyin syllables.
Further, the method for acquiring the electroencephalogram signals comprises the following steps: a single channel EEG extraction chip is used which measures the brain electrical signal at the forehead of the person, thereby acquiring the raw acquisition signal.
The method for preprocessing the electroencephalogram signals comprises the following steps: and converting the originally acquired signal from a time domain to a frequency domain by adopting fast Fourier transform, performing spectrum analysis, filtering and amplification on the signal converted into the frequency domain, and performing inverse Fourier transform to restore the signal in the frequency domain into a time domain signal so as to acquire the filtered electroencephalogram signal.
The method for performing data encapsulation on the electroencephalogram signals comprises: packaging the filtered electroencephalogram signals into a data packet with a preset format; the data packet comprises electroencephalogram signals of different frequency bands together with concentration and meditation degrees, and these data packets form the electroencephalogram encapsulation data.
Further, the method for parsing the electroencephalogram encapsulation data comprises: parsing the electroencephalogram encapsulation data to obtain target data, the target data comprising the filtered electroencephalogram data and electroencephalogram Alpha-wave data.
The signal identification method comprises the following steps: and comparing the target data acquired in the data analysis module with a data threshold of a corresponding target signal so as to identify the corresponding target signal, wherein the target signal comprises a preset electroencephalogram signal and a preset electro-oculogram signal, and the data threshold comprises a closed-eye Alpha threshold and a blink threshold.
When the target signal is identified as the preset electroencephalogram signal, a closed-eye instruction is output; when the target signal is identified as the preset electrooculogram signal, a blink instruction is output.
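The mapping from identified signals to control instructions can be illustrated as follows; the tuple layout of the target data and the instruction names are hypothetical:

```python
def dispatch(target_data, blink_th, alpha_th):
    """Map identified signals to control instructions: Alpha energy above
    the closed-eye threshold yields the closed-eye instruction
    (suspend/resume the system); filtered-EEG amplitude above the blink
    threshold yields the blink instruction (select/withdraw a character)."""
    amplitude, alpha_energy = target_data
    if alpha_energy >= alpha_th:
        return "CLOSE_EYE"
    if amplitude >= blink_th:
        return "BLINK"
    return None
```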
Further, the method for displaying the characters used for character input comprises: dividing the display into four levels of stimulation paradigm interfaces, namely a first-level interface, a second-level interface, a third-level interface, and a fourth-level interface;
the primary interface is a syllable-set interface; according to the pinyin characteristics of Chinese characters, the syllable sets are divided into 3 options to be selected, namely the initial set, the final set, and the whole-syllable set;
the secondary interface is the specific-syllable selection interface, entered from the option selected in the primary interface, in which a specific syllable is chosen;
the third-level interface is the final-set interface, in which finals satisfying the grammar rules are selected;
the fourth-level interface is again the final-set interface, in which additional syllables are selected or the spelling is ended directly.
Further, the method for controlling the flickering of the characters comprises: the primary interface adopts a flashing paradigm in which only one option flashes at a time, and the 3 options to be selected flash in sequence;
the secondary interface is divided into 3 regions; one character from each region is selected to form a combination, the characters in a combination flash in quick succession at a minimal time interval, and after all characters in a combination have flashed, the combination is replaced and flashing resumes; this repeats until blink signals have been detected for two combinations that share the same character, which is taken as the target;
the third-level and fourth-level interfaces adopt the one-option-at-a-time flashing paradigm, and syllables that form common characters flash preferentially.
Further, the method for calculating the blink threshold value comprises the following steps:
Electroencephalogram data generated by the user's blinks are collected. Let the amplitude data of k natural blinks be N = {n_1, n_2, n_3, ..., n_k} and the amplitude data of k voluntary blinks be M = {m_1, m_2, m_3, ..., m_k}. Select the maximum value n_max of the natural blinks N and the minimum value m_min of the voluntary blinks M:
n_max = max{n_1, n_2, n_3, ..., n_k}
m_min = min{m_1, m_2, m_3, ..., m_k}
The condition for judging whether the current signal is a voluntary blink is:
A ≥ TH, with n_max ≤ TH ≤ m_min
where A is the amplitude of the current signal and TH is the threshold for judging the amplitude of a voluntary blink signal; in addition, the time T during which the current signal stays above the threshold must satisfy:
200 ms ≤ T ≤ 300 ms;
after the user confirms that an output is correct, the corresponding signal is marked as a voluntary blink and recorded; after the user confirms that an output is wrong, the signal is marked as a non-voluntary blink and recorded; for every five recorded signals of the same type, the maximum and minimum values are selected to correct the blink threshold in real time.
Further, the method for calculating the eye-closing Alpha threshold value comprises the following steps:
Electroencephalogram Alpha-wave data generated while the user opens and closes the eyes are collected. Let the k collected Alpha total-energy values with eyes open be Z = {z_1, z_2, z_3, ..., z_k} and the k total-energy values with eyes closed be B = {b_1, b_2, b_3, ..., b_k}. Select the maximum value z_max of the eyes-open Alpha total-energy data Z and the minimum value b_min of the eyes-closed total-energy data B:
z_max = max{z_1, z_2, z_3, ..., z_k}
b_min = min{b_1, b_2, b_3, ..., b_k}
The condition for judging that the current user is resting with eyes closed is:
P ≥ P_TH, with z_max ≤ P_TH ≤ b_min
When a marked drop in power is detected, the user's eyes are judged to be open, the condition being:
P < P_TH
where P is the power over a given period of time and P_TH is the closed-eye Alpha threshold.
For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic structural diagram of a multi-modal electroencephalogram and electrooculogram-based character input system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a stimulation paradigm interface;
FIG. 3 is a schematic diagram of synchronous flashing across different regions;
FIG. 4 is a graph of Alpha band data for a sample taken over a period of time.
Detailed Description
Referring to fig. 1, an example of the multi-modal character input system based on electroencephalogram and electrooculogram provided by the present invention mainly includes a stimulation display unit, a signal acquisition unit, a signal analysis unit, and a result output unit. The following is a detailed description of each unit.
The stimulation display unit is configured to display characters used for inputting the characters, where the characters include letters or letter combinations, and a stimulation paradigm is designed by using an option coding method based on pinyin syllables, please refer to fig. 2, where a stimulation paradigm frame includes 4 levels of stimulation paradigm interfaces, i.e., a first level interface, a second level interface, a third level interface, and a fourth level interface, and each stimulation paradigm interface is described in detail below.
The primary interface is a syllable-set interface: according to the pinyin characteristics of Chinese characters, the syllable sets are divided into 3 options to be selected, namely the initial set, the final set, and the whole-syllable set.
The secondary interface is the specific-syllable selection interface: the option chosen at the primary interface is entered here and a specific syllable is selected. According to the grammar rules of Hanyu Pinyin, if a final or a whole syllable is selected, the pinyin spelling of a character is complete, and input ends once the user confirms it; if an initial is selected, a final satisfying the grammar rules must still be chosen, and the third-level interface is entered.
The third-level interface is the final-set interface, in which a final satisfying the grammar rules is selected. In pinyin, the final following an initial may be composed of several syllable parts: for example, the final of "xia" ("lower") is composed of "i" and "a", and the final of "xiang" ("to want") is composed of "i" and "ang". Therefore, in this interface, if the selected syllable needs additional parts, the final-set interface must be entered again for further selection; if it does not, the process ends directly. Either way, the fourth-level interface is entered after a final is selected in the third-level interface.
and the fourth-level interface is still the interface of the final collection, and additional syllables are selected from the fourth-level interface or the interface is finished directly. For example, the spelling process of "you" is to select the initial "n" and then select the final syllable "i", but since the spelling process coincides with the first half of the spelling process of "bird" (the initial "n" is selected first, the final syllable "i" is selected later, and then the final syllable "ao") still enters the four-level interface, but the "end" option is selected in the interface of this level, and then the input is ended; and for bird, selecting the final syllable "ao" in the four-level interface, and ending the input.
The flicker control module is configured to control the flashing of the syllables in the stimulation paradigm interface (see FIG. 2). Because different stages of the stimulation paradigm interface present different numbers of options, different flashing schemes are chosen for the different character layouts, and the flashing order is optimized.
Specifically, the primary interface has 3 options, which flash in sequence under a paradigm in which only one option flashes at a time; this is referred to as the single-option paradigm. In the secondary interface, the initial interface has 23 options, the final interface has 12, and the whole-syllable interface has 16. The final and whole-syllable interfaces have few enough options that the single-option paradigm can still be used. For the initial interface, however, the option count is large: flashing the options one by one under the single-option paradigm would make the total flashing time of the entire initial set long and slow down character input. The invention therefore uses synchronous flashing across different regions: every two rows of initials form a region, one character from each region is selected to form a combination, and the characters in a combination flash in quick succession at a very small time interval. When all characters of a combination have flashed, each is counted as having flashed once; 6 flashes form one cycle, and two cycles are performed, as shown in FIG. 3. It is guaranteed that the combinations differ on every flash, i.e., the same combination never flashes synchronously twice, so the syllable shared by the two combinations in which voluntary-blink signals are detected is judged to be the target. In the third-level interface, the grammar rules eliminate options so that 8-20 final syllables are displayed, the exact number depending on the situation; the single-option paradigm is used throughout to avoid confusing the user. The fourth-level interface has at most 5 options, so the single-option paradigm is used there as well.
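One possible scheme for the region-wise synchronous flashing and the two-blink target decision can be sketched as follows; the equal-sized regions are an assumption (the patent groups every two rows of initials, which need not be equal):

```python
def make_combinations(regions):
    """One character from each region flashes together. Cycling through the
    regions index-wise ensures every character appears and no combination
    repeats within a cycle (illustrative scheme, not the patented layout)."""
    n = max(len(r) for r in regions)
    return [tuple(r[i % len(r)] for r in regions) for i in range(n)]

def target_from_blinks(combo_a, combo_b):
    """The target is the single character common to the two combinations
    in which voluntary-blink signals were detected."""
    common = set(combo_a) & set(combo_b)
    return common.pop() if len(common) == 1 else None
```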
Further, Chinese characters differ in frequency of use: inevitably, some characters are used constantly and others almost never. For example, characters spelled with the initial "n" and the final "ou", or with the initial "n" and the final "un", occur extremely rarely in daily life, while "ni" ("you"), spelled with the initial "n" and the final "i", is used constantly. Therefore, except for the primary interface, whose options flash in fixed order, the other interfaces preferentially flash the syllables that form common characters, which greatly improves the spelling rate. Common characters are defined by the Modern Chinese Common Character Table, which contains 3500 characters divided into common characters (2500) and sub-common characters (1000). Specifically, to determine the flashing order of the initials, the pinyin syllables of the common and sub-common characters are first split, each initial is tagged, and a count is incremented each time that initial appears in a common or sub-common character; a flashing sequence is then set with the initials arranged in descending order of count.
For the vowels and whole syllables, the flashing sequences are constructed in the same way as for the initials: the pinyin syllables of the common and sub-common characters are split, each vowel or whole syllable is labeled and counted on every occurrence in a common or sub-common character, and separate flashing sequences are built for the vowels and for the whole syllables, each arranged in descending order of the counts.
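As a sketch, this frequency-based flash ordering can be computed as below. The tiny word list and initial subset are hypothetical stand-ins for the 3500-character table, and longest-prefix matching of the initial is an implementation assumption.

```python
from collections import Counter

# Hypothetical mini word list standing in for the Modern Chinese Common
# Word Table, already split into pinyin syllables.
SYLLABLES = ["ni", "nou", "shang", "shen", "ni", "shi", "shi", "shi"]
INITIALS = ["zh", "ch", "sh", "b", "p", "m", "n", "l", "s"]  # subset only

def initial_of(syllable, initials=INITIALS):
    """Longest-prefix match of the initial consonant (an assumption; real
    pinyin splitting has a few more special cases)."""
    for ini in sorted(initials, key=len, reverse=True):
        if syllable.startswith(ini):
            return ini
    return None  # zero-initial or whole syllable

def flash_order(syllables, initials):
    """Initials flash in descending order of their frequency in the word
    list; unseen initials keep their original relative order at the end."""
    counts = Counter(i for i in map(initial_of, syllables) if i)
    return sorted(initials, key=lambda i: (-counts[i], initials.index(i)))
```

The same counting applies unchanged to finals and whole syllables by swapping the matching function.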
Further, after the target letter or letter combination is detected, the user is given a 3s confirmation window: if the system's prediction is correct, the input is confirmed; if it is incorrect, the input is cancelled. Furthermore, after each character input, the system moves any wrongly predicted syllable to the last position of the next flashing round, shortening the time the user waits for the target syllable to appear and increasing the spelling rate.
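The post-error reordering just described (moving the wrongly predicted syllable to the end of the next round) amounts to a one-line list operation; this is a minimal sketch, with the function name my own:

```python
def reorder_after_error(flash_sequence, wrong_syllable):
    """Move the syllable the system wrongly predicted to the last
    position of the next flashing round, so the user reaches the true
    target sooner on the re-flash."""
    if wrong_syllable not in flash_sequence:
        return list(flash_sequence)
    return [s for s in flash_sequence if s != wrong_syllable] + [wrong_syllable]
```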
The signal acquisition unit is used for acquiring and processing electroencephalogram signals and comprises 3 modules, namely a signal extraction chip, a signal preprocessing module and a data packaging module, and the modules are described in detail below.
The signal extraction chip, used for collecting electroencephalogram (EEG) signals, is a single-channel EEG extraction chip that measures high-precision electroencephalogram signals at the person's forehead; the measured signals are referred to as the original acquisition signals.
The signal preprocessing module removes interference and noise from the original acquisition signal. Specifically, the electroencephalogram signal is disturbed during acquisition by the 50Hz mains-frequency signal, and noise arises from movement of the equipment, scalp impedance and the like, so the genuinely useful electroencephalogram component must be separated from this noise and interference. To preserve the signal as faithfully as possible, the signal preprocessing module applies a Fast Fourier Transform (FFT) to convert the original acquisition signal from the time domain to the frequency domain, performs spectrum analysis, filtering, amplification and the like on the frequency-domain signal, and then applies an inverse Fourier transform to restore it to a time-domain signal, referred to as the filtered electroencephalogram signal.
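The FFT round-trip described above can be sketched in a few lines. The sampling rate, pass band and notch width below are illustrative assumptions, not values stated in the patent:

```python
import numpy as np

def preprocess(raw, fs=512.0, notch=50.0, band=(1.0, 45.0)):
    """Frequency-domain cleanup sketch: FFT, zero out the 50 Hz mains
    component and out-of-band noise, then inverse FFT back to the time
    domain (fs, band limits and the 1 Hz notch half-width are assumed)."""
    spectrum = np.fft.rfft(raw)
    freqs = np.fft.rfftfreq(len(raw), d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    keep &= np.abs(freqs - notch) > 1.0          # 50 Hz mains notch
    spectrum[~keep] = 0.0
    return np.fft.irfft(spectrum, n=len(raw))
```

Feeding in a synthetic 10 Hz + 50 Hz mixture and inspecting the output spectrum confirms the mains component is removed while the in-band component survives.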
The data packaging module packages the filtered electroencephalogram signal. Specifically, the single-channel EEG extraction chip outputs the filtered electroencephalogram signal at a certain frequency, and after each extraction the data packaging module packages it in a preset format. For example, the filtered electroencephalogram signal is packaged into a number of data packets, each comprising small-packet data and large-packet data: the small-packet data is the time-domain signal data of the filtered electroencephalogram signal, and the large-packet data is the frequency-domain data obtained by time-frequency transformation of the filtered electroencephalogram signal and comprises brain-wave signals of different frequency bands. Preferably, the large-packet data includes brain-wave signals of eight frequency bands (LowAlpha, HighAlpha, LowBeta, HighBeta, Delta, Theta, LowGamma and MiddleGamma), a concentration value (Attention) and a meditation value (Meditation). The packaged filtered electroencephalogram signals are called electroencephalogram encapsulation data.
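The packet layout just described can be modeled as plain data classes. This is an illustrative reading of the description, not the chip's actual wire format; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

BANDS = ["LowAlpha", "HighAlpha", "LowBeta", "HighBeta",
         "Delta", "Theta", "LowGamma", "MiddleGamma"]

@dataclass
class SmallPacket:
    raw_samples: List[float]        # time-domain filtered EEG data

@dataclass
class LargePacket:
    band_power: Dict[str, int]      # one unsigned value per band in BANDS
    attention: int                  # eSense Attention value
    meditation: int                 # eSense Meditation value

@dataclass
class EEGPacket:
    small: SmallPacket
    large: LargePacket

    def alpha_energy(self) -> int:
        """Total Alpha energy (LowAlpha + HighAlpha), the quantity the
        eye-closure detector thresholds."""
        return (self.large.band_power["LowAlpha"]
                + self.large.band_power["HighAlpha"])
```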
The signal analysis unit is used for analyzing and identifying the electroencephalogram encapsulation data and outputting a control instruction to the result output unit according to a signal identification result. The signal analysis unit comprises a data analysis module, a signal identification module and a signal learning module. These three modules are described in detail below.
The data analysis module is used for analyzing the electroencephalogram encapsulation data to obtain target data. Specifically, the data analysis module obtains required target data according to the preset format of the electroencephalogram encapsulation data, where the target data includes the filtered electroencephalogram signal in the small packet data and an Alpha wave signal (including a LowAlpha wave and a HighAlpha wave) in the large packet data.
The signal identification module is used for identifying the target signal and outputting a control instruction according to the identification result. Specifically, the target data acquired in the data analysis module is compared with a data threshold of a corresponding target signal, so as to identify the corresponding target signal, where the target signal includes a preset electroencephalogram signal and a preset electrooculogram signal. Preferably, the preset electroencephalogram signal is an electroencephalogram signal generated during closed-eye meditation, the preset electro-ocular signal is an electro-ocular signal generated due to blinking, and the data threshold comprises a closed-eye Alpha threshold and a blinking threshold. Correspondingly, when the target signal is identified as the preset electroencephalogram signal, outputting an eye closing instruction; and outputting a blinking instruction when the target signal is identified as the preset eye electric signal. The recognition of the preset electroencephalogram signal is hereinafter referred to as eye closure recognition, and the recognition of the preset eye electrical signal is hereinafter referred to as blink recognition.
Specifically, the invention adopts an eye electrical signal generated by blinking to realize the selection and withdrawal of characters, and uses an electroencephalogram signal Alpha wave generated during the closed-eye meditation to control the pause and the recovery of the system. The working principle of blink recognition and eye closure recognition in the present invention is described below.
First, regarding blink recognition: compared with the baseline signal without blinking, a blink produces a relatively large spike in the electroencephalogram, causing large changes in the peaks and valleys of the acquired raw waveform, so blinks can be detected by thresholding the electroencephalogram waveform. Moreover, the signal amplitudes produced by natural blinks and conscious blinks differ considerably, so the two can be further distinguished.
Because of individual differences and the different angles or positions at which the signal acquisition module is worn, the acquired electroencephalogram amplitudes vary from user to user, so self-blinking cannot be judged with a fixed threshold; the signal learning module solves exactly this problem. The signal learning module calculates and records the data thresholds of the user's target signals, where the target signals comprise the preset electroencephalogram signal and the preset eye electrical signal, and the data thresholds comprise the closed-eye Alpha threshold, the blink threshold and the eye-opening signal threshold.
Specifically, before starting to use the system, the user is required to wear the system's device and perform free activity for 1 minute, during which the signal acquisition module collects amplitude data of the user's natural blinks and self-blinks.
Let the collected amplitude data of k natural blinks be N = {n1, n2, n3, ..., nk}, and the amplitude data of k self-blinks be M = {m1, m2, m3, ..., mk}. Select the maximum value nmax of the natural-blink set N and the minimum value mmin of the self-blink set M:
nmax=max{n1,n2,n3,...,nk} (1)
mmin=min{m1,m2,m3,...,mk} (2)
The condition for determining whether the current signal is a self-blink is:

a > TH, TH = (nmax + mmin)/2 (3)
where a is the amplitude of the current signal and TH is the threshold on signal amplitude for judging a self-blink, i.e. the blink threshold. In addition, the time T during which the current signal stays above the threshold must satisfy:
200ms≤T≤300ms (4)
When the user's eyes are fatigued, the peaks of both natural blinks and self-blinks decrease; if the threshold were fixed, the system's recognition accuracy would drop accordingly. Preferably, therefore, when the user confirms an output as correct, the currently recognized signal is indeed a self-blink signal, and the signal learning module records it; when the user marks an output as wrong, the currently recognized signal was not a self-blink but a natural blink misjudged as one, and the signal learning module records that as well. For every five recorded signals of the same kind, the signal learning module takes the maximum and minimum values to correct the threshold in real time.
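A sketch of the blink-threshold logic of equations (1)-(4) together with the five-sample correction. Placing TH midway between nmax and mmin is an assumption, since the text only requires TH to separate the two amplitude ranges:

```python
def blink_threshold(natural, voluntary):
    """TH sits between the strongest natural blink and the weakest
    self-blink (midpoint placement is an assumption)."""
    return (max(natural) + min(voluntary)) / 2.0

def is_voluntary_blink(amplitude, duration_ms, th):
    """Eqs. (3)-(4): above threshold AND held for 200-300 ms."""
    return amplitude > th and 200 <= duration_ms <= 300

class ThresholdLearner:
    """Re-derives TH from the five most recent confirmed signals of each
    kind, mirroring the max/min-of-five real-time correction."""
    def __init__(self, th):
        self.th = th
        self.natural, self.voluntary = [], []

    def record(self, amplitude, was_voluntary):
        bucket = self.voluntary if was_voluntary else self.natural
        bucket.append(amplitude)
        if len(self.natural) >= 5 and len(self.voluntary) >= 5:
            self.th = (max(self.natural[-5:])
                       + min(self.voluntary[-5:])) / 2.0
```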
Second, regarding closed-eye recognition: the invention recognizes eye closure via the Alpha wave. Referring to fig. 4, in the open-eye state the Alpha energy value is at its lowest, although during focused attention or blinking it rises to a peak and then quickly falls back to the normal level; in the closed-eye, relaxed state, the Alpha energy value rises and falls regularly while its total remains at a high level, markedly higher than in the open-eye state. The closed-eye state can therefore be judged by thresholding the energy value of the Alpha waveform.
Since the Alpha band spans 8Hz to 12Hz, the large-packet data in the electroencephalogram encapsulation data is parsed to obtain the Alpha-band data; according to the preset format of the electroencephalogram encapsulation data, each valid value is represented as an unsigned integer byte with no physical unit, its magnitude representing the Alpha energy value. Analogously to the blink threshold, before using the system the user performs open-eye and closed-eye resting actions during the free-activity period, from which the threshold for judging the Alpha energy value is obtained.
Let the collected Alpha total-energy data set for k open-eye segments be Z = {z1, z2, z3, ..., zk}, and the total-energy data set for k closed-eye segments be B = {b1, b2, b3, ..., bk}. Select the maximum value zmax of the open-eye Alpha total-energy set Z and the minimum value bmin of the closed-eye total-energy set B:
zmax=max{z1,z2,z3,...,zk} (5)
bmin=min{b1,b2,b3,...,bk} (6)
The condition for judging whether the user is currently resting with eyes closed is:

P ≥ PTH, PTH = (zmax + bmin)/2 (7)
When an obvious drop in power is detected, the user's eyes are judged to be open; the judgment condition is:

P < PTH (8)
where P is the power over a certain period of time and PTH is the closed-eye Alpha threshold.
By collecting in advance the signals generated when the user blinks and closes the eyes, the signal learning module calculates the user's blink, closed-eye and open-eye signal thresholds according to equations (1) to (8), records them, and supplies them to the signal identification module as the judgment criteria for target-signal recognition.
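The closed-eye detector of equations (5)-(8) can be sketched the same way; again, placing PTH midway between zmax and bmin is an assumption, and the 8-value averaging window is illustrative:

```python
def alpha_power(values, window=8):
    """Mean Alpha energy over the most recent window of large-packet
    values (window length is an assumption)."""
    recent = values[-window:]
    return sum(recent) / len(recent)

def alpha_threshold(open_vals, closed_vals):
    """PTH placed between the strongest open-eye reading zmax and the
    weakest closed-eye reading bmin."""
    return (max(open_vals) + min(closed_vals)) / 2.0

def classify(power, p_th):
    """Eqs. (7)/(8): power at or above PTH -> eyes closed (pause the
    speller); power below PTH -> eyes open (resume)."""
    return "closed" if power >= p_th else "open"
```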
The result output unit receives control instructions from the signal analysis unit, including the blinking instruction and the closed-eye instruction, and displays the corresponding output result according to the received instruction in combination with the currently flashing letter or letter combination in the stimulation display unit.
The following describes how to use the multi-modal character input system based on brain electricity and eye electricity (hereinafter, referred to as the present system) provided by the present invention by way of example.
A Neurosky Mindwave Mobile headset is selected as the signal acquisition unit; its Neurosky TGAM chip is a single-channel EEG extraction chip that preprocesses the collected bioelectric signals, and parsing its data packets yields the raw electroencephalogram data, the brain-wave signals of the eight frequency bands, and the two eSense index values.
A quiet, comfortable room with suitable temperature, good ventilation and normal lighting is used as the experimental environment. Six subjects, male and female, are randomly selected; each is 19-23 years old with normal or corrected-to-normal vision, the average age being 21.25 years with an age variance of 2.02. Each subject participates in a preliminary experiment and a formal experiment with identical content. The preliminary experiment, conducted one day before the formal one, lets the subject learn the experimental content in advance, become familiar with how to wear the device and with the precautions, and overcome unfamiliarity so as to reduce errors as much as possible. Before the formal experiment begins, the subject wears the device in advance and performs self-blinking, natural-blinking and closed-eye meditation actions under the operator's instruction. The subject is also required to sit still in a calm state of mind, must not remove the device or make violent limb movements during the experiment, and should blink as little as possible when not blinking deliberately; when necessary, the subject may close the eyes to rest and pause the system.
Each subject is asked to spell five Chinese pinyin syllables, namely "wan", "shang", "chi", "shen" and "me"; the experimental results are shown in Table 1.
Table 1 data of experimental results
The load index in the table is the mental effort, physical effort or attention that the user subjectively feels must be spent to complete the system's tasks while using it, typically measured by a questionnaire after use. In this experiment, the NASA-TLX scale developed by NASA is used to evaluate the user's workload; it evaluates six dimensions, namely mental demand, physical demand, temporal demand, performance, effort and frustration, and the load index is finally presented as a number.
The experimental results show that the system's average recognition accuracy is 78.39%. Although this recognition rate is not high, the system provides an output-cancellation function: when the character decided by the system is not the user's target character, a cancel-output command can be issued. Moreover, the system's average information transfer rate is 2.02/min, which keeps the time spent on a second output below the time other systems spend on a first correct output, partly offsetting the lower recognition rate. In addition, the average load index is 8.20, far below the full score of 20, indicating that the proposed paradigm is not only acceptable but gives the user a relatively easy experience. Furthermore, the device adopted in this patent sells for 99.99 dollars on its official website, whereas the related-art systems described in the background use devices selling for thousands or even hundreds of thousands, fees many patients cannot pay. Although the system has no obvious advantage in recognition accuracy, it reaches the same level of application with low-cost equipment; it is highly practical, easy to apply and popularize in daily life, and can move out of the laboratory into real life.
Based on the multi-modal character input system based on the electroencephalogram and electrooculogram, the invention also provides a multi-modal character input method based on the electroencephalogram and the electrooculogram, which comprises the following steps:
displaying characters used for character input, and controlling the flicker of the characters;
acquiring an electroencephalogram signal, preprocessing the electroencephalogram signal, performing data encapsulation on the preprocessed electroencephalogram signal, and calling the electroencephalogram signal subjected to data encapsulation as electroencephalogram encapsulation data;
analyzing the electroencephalogram encapsulation data, identifying signals, and outputting a control instruction according to an identification result of the signal identification; and displaying the input result.
Specifically, the characters include letters and letter combinations; the flashing of a letter or letter combination follows a preset stimulation paradigm designed with a pinyin-syllable-based option coding scheme; the stimulation paradigm comprises a first-level interface, a second-level interface, a third-level interface and a fourth-level interface, which are respectively identical to those of the multi-modal character input system based on electroencephalogram and electrooculogram described above.
The electroencephalogram signal acquisition proceeds as follows: a high-precision electroencephalogram signal is measured at the person's forehead with a single-channel EEG extraction chip, and the measured signal is called the original acquisition signal.
The preprocessing of the electroencephalogram signal comprises removing interference and noise from the original acquisition signal. Specifically, the electroencephalogram signal is disturbed during acquisition by the 50Hz mains-frequency signal, and noise arises from movement of the equipment, scalp impedance and the like, so the genuinely useful electroencephalogram component must be separated from this noise and interference. To preserve the signal as faithfully as possible, a fast Fourier transform converts the original acquisition signal from the time domain to the frequency domain; spectrum analysis, filtering, amplification and the like are performed on the frequency-domain signal, and an inverse Fourier transform then restores it to a time-domain signal, referred to as the filtered electroencephalogram signal.
The data packaging of the electroencephalogram signal proceeds as follows: the single-channel EEG extraction chip outputs the filtered electroencephalogram signal at a certain frequency, and after each extraction the signal is packaged in a preset format. For example, the filtered electroencephalogram signal is packaged into a number of data packets, each comprising small-packet data and large-packet data: the small-packet data is the time-domain signal data of the filtered electroencephalogram signal, and the large-packet data is the frequency-domain data obtained by time-frequency transformation of the filtered electroencephalogram signal and comprises brain-wave signals of different frequency bands. Preferably, the large-packet data includes brain-wave signals of eight frequency bands (LowAlpha, HighAlpha, LowBeta, HighBeta, Delta, Theta, LowGamma and MiddleGamma), a concentration value (Attention) and a meditation value (Meditation). The packaged filtered electroencephalogram signals are called electroencephalogram encapsulation data.
The method for analyzing the electroencephalogram encapsulation data comprises the following steps: and acquiring required target data according to the preset format of the electroencephalogram encapsulation data, wherein the target data comprise the filtered electroencephalogram signals in the small packet data and Alpha wave signals (comprising LowAlpha waves and HighAlpha waves) in the large packet data.
The method for carrying out the signal identification on the electroencephalogram encapsulation data comprises the following steps: comparing the acquired target data with a data threshold of a corresponding target signal, thereby identifying the corresponding target signal, wherein the target signal comprises a preset electroencephalogram signal and a preset electrooculogram signal. Preferably, the preset electroencephalogram signal is an electroencephalogram signal generated during closed-eye meditation, the preset electro-ocular signal is an electro-ocular signal generated due to blinking, and the data threshold comprises a closed-eye Alpha threshold and a blinking threshold. The recognition of the preset electroencephalogram signal is hereinafter referred to as eye closure recognition, and the recognition of the preset eye electrical signal is hereinafter referred to as blink recognition.
The control instruction comprises a blinking instruction and a closed-eye instruction; when the target signal is identified as the preset electroencephalogram signal, outputting an eye closing instruction; and outputting a blinking instruction when the target signal is identified as the preset eye electric signal.
The method for displaying the input result comprises the following steps: and displaying a corresponding output result according to the received control instruction and the letters or the letter combinations which are flickering.
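Putting the method's steps together, one analysed frame can be mapped to a control command roughly as follows; the priority of pause/resume handling over selection, and the command names, are assumptions of this sketch:

```python
def control_command(amplitude, duration_ms, alpha_power, th, p_th, paused):
    """Map one analysed frame to a command, per the method steps:
    eye closure -> pause; eye opening -> resume; a self-blink
    (amplitude above TH, held 200-300 ms) -> select/cancel."""
    if not paused and alpha_power >= p_th:
        return "PAUSE"                       # closed-eye instruction
    if paused:
        return "RESUME" if alpha_power < p_th else "PAUSED"
    if amplitude > th and 200 <= duration_ms <= 300:
        return "SELECT"                      # blinking instruction
    return "NONE"
```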
In a specific application example, the user inputs characters with the multi-modal character input method based on electroencephalogram and electrooculogram; the operation is the same as in the example given for the multi-modal character input system above.
The invention adopts blinking as the control source for character selection: the particular eye electrical signal produced by a blink has distinct signal characteristics and a low acquisition cost, making the system highly practical. For the fatigue caused by repeated blinking, the invention adopts two remedies. First, the number of blinks is reduced at the root: the pinyin-syllable-based option coding scheme improves the system's input rate and accuracy in several respects. Second, the user can pause the system to rest when tired: closed-eye meditation generates electroencephalogram Alpha waves, and the Alpha blocking effect controls the system state. By combining blinking with the Alpha waves generated by eye closure, the invention achieves multi-modal control, relieving the user's blink-induced fatigue and making up for the small number of control instructions the eye electrical signal alone can generate.
The above-described embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the present invention.

Claims (10)

1. A multi-modal character input system based on electroencephalogram and electrooculogram, characterized by comprising:
the stimulation display unit is used for displaying characters and controlling the flickering of the characters;
the signal acquisition unit is used for acquiring electroencephalogram signals of a user, preprocessing the electroencephalogram signals and then packaging the electroencephalogram signals into electroencephalogram packaging data;
the signal analysis unit is used for analyzing and identifying the electroencephalogram packaging data and outputting a control instruction according to an identification result of the signal identification; the control instructions comprise a blinking instruction and a closed-eye instruction, the blinking instruction controls the selection and withdrawal of characters, and the closed-eye instruction controls the suspension and recovery of the system; and
and the result output unit is used for receiving the control instruction from the signal analysis unit and simultaneously outputting a corresponding result by combining the characters which are currently flickering in the stimulation display unit.
2. The brain electricity and eye electricity based multi-modal character input system of claim 1, wherein: the signal acquisition unit comprises a signal extraction chip, a signal preprocessing module and a data packaging module;
the signal extraction chip is a single-channel EEG extraction chip that measures the electroencephalogram signal at the user's forehead to obtain an original acquisition signal;
the signal preprocessing module converts the original acquisition signal from a time domain to a frequency domain by adopting fast Fourier transform, performs spectrum analysis, filtering and amplification on the signal converted into the frequency domain, and then performs inverse Fourier transform to restore the signal in the frequency domain into a time domain signal so as to obtain a filtered electroencephalogram signal;
the data packaging module packages the filtered electroencephalogram signals into a data packet with a preset format, the data packet comprises electroencephalogram signals with different frequency bands, concentration degrees and meditation degrees, and the data packet forms the electroencephalogram packaging data.
3. The electroencephalography and electrooculography based multimodal character input system according to claim 2, wherein: the signal analysis unit comprises a data analysis module, a signal identification module and a signal learning module;
the data analysis module is used for analyzing the electroencephalogram encapsulation data to obtain target data, and the target data comprises the filtering electroencephalogram data and electroencephalogram Alpha wave data;
the signal learning module is used for calculating and recording a data threshold of a target signal of a user, the target signal comprises a preset electroencephalogram signal and a preset eye electric signal, and the data threshold comprises a closed-eye Alpha threshold and a blink threshold;
the signal identification module compares the target data to a data threshold of a corresponding target signal to identify the corresponding target signal;
when the target signal is identified as the preset electroencephalogram signal, the signal identification module outputs an eye closing instruction; when the target signal is identified as the preset eye electric signal, the signal identification module outputs a blinking instruction.
4. The brain electricity and eye electricity based multi-modal character input system of claim 3, wherein: the stimulation display unit comprises a stimulation paradigm interface used for displaying characters;
the stimulation paradigm interface comprises a first-level interface, a second-level interface, a third-level interface and a fourth-level interface;
the first-level interface is a syllable-set interface; according to the pinyin characteristics of Chinese characters, the syllable set is divided into 3 options to be selected, namely the initial-consonant set, the vowel set and the whole-syllable set;
the second-level interface is a specific-syllable selection interface, which the user enters from the chosen option in order to select a specific syllable;
the third-level interface is the vowel-set interface, in which the user selects vowels that conform to the grammar rules;
the fourth-level interface is the syllable-completion interface, in which the user can select additional syllables or directly finish adding syllables.
5. The brain electricity and eye electricity based multi-modal character input system of claim 4, wherein: the stimulation display unit further comprises a flicker control module, and the flicker control module is used for controlling the flicker of the characters in the stimulation paradigm interface;
the first-level interface adopts a flashing paradigm of flashing only one option at a time, and the 3 options to be selected are sequentially flashed;
the second-level interface is divided into 3 regions; one character is selected from each region to form a combination, and the characters of the combination flash in succession at a very small time interval; after all characters of a combination have flashed, the flicker control module replaces the characters of the combination and flashes again, and this repeats until, after the self-blinking signal has been detected twice, the two detected combinations contain the same character;
the third-level interface and the fourth-level interface adopt the flashing paradigm of flashing only one option at a time, and the syllables forming common words are preferentially flashed.
6. The brain electricity and eye electricity based multi-modal character input system of claim 5, wherein: the blink threshold calculation method comprises the following steps:
collecting the electroencephalogram data generated by the user's blinking; letting the amplitude data of k natural blinks be N = {n1, n2, n3, ..., nk} and the amplitude data of k self-blinks be M = {m1, m2, m3, ..., mk}; selecting the maximum value nmax of the natural-blink set N and the minimum value mmin of the self-blink set M:
nmax=max{n1,n2,n3,...,nk}
mmin=min{m1,m2,m3,...,mk}
the condition for determining whether the current signal is a self-blink is:

A > TH, TH = (nmax + mmin)/2
wherein A is the amplitude of the current signal and TH is the threshold on signal amplitude for judging a self-blink; the time T during which the current signal stays above the threshold must satisfy:
200ms≤T≤300ms。
7. the electroencephalography and electrooculography based multimodal character input system according to claim 6, wherein: the method for calculating the blink threshold value further comprises the following steps:
when the user confirms that an output is correct, the corresponding signal is marked as a voluntary blink signal and recorded; when the user confirms that an output is wrong, the signal is marked as a non-voluntary blink signal and recorded; for every five recorded signals of the same kind, a maximum value and a minimum value are selected for correcting the threshold in real time.
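The real-time correction step can be sketched like this; the batch size of five follows the claim, while recomputing the threshold as a midpoint mirrors the initial threshold and is an assumption:

```python
def corrected_threshold(voluntary_log, non_voluntary_log, batch=5):
    """Recompute the blink threshold from the latest five signals of each kind.

    Returns None until five voluntary and five non-voluntary signals have been
    recorded; otherwise takes the midpoint between the largest recent
    non-voluntary amplitude and the smallest recent voluntary amplitude.
    """
    if len(voluntary_log) < batch or len(non_voluntary_log) < batch:
        return None
    m_min = min(voluntary_log[-batch:])
    n_max = max(non_voluntary_log[-batch:])
    return (n_max + m_min) / 2.0

# Five confirmed-correct (voluntary) and five confirmed-wrong amplitudes:
print(corrected_threshold([80, 90, 85, 88, 92], [30, 40, 35, 38, 41]))  # 60.5
```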
8. The electroencephalography and electrooculography based multimodal character input system according to claim 6 or 7, wherein: the method for calculating the eye-closing Alpha threshold value comprises the following steps:
collecting the electroencephalogram Alpha-wave data generated when the user's eyes are open and when they are closed, and setting the k total-energy values collected with eyes open as Z = {z1, z2, z3, ..., zk} and the k total-energy values collected with eyes closed as B = {b1, b2, b3, ..., bk}; selecting the maximum value zmax in the eyes-open data set Z and the minimum value bmin in the eyes-closed data set B:
zmax = max{z1, z2, z3, ..., zk}
bmin = min{b1, b2, b3, ..., bk}
the condition for judging that the current user is resting with eyes closed is:
P ≥ PTH, where PTH = (zmax + bmin)/2
and when an obvious drop in power is detected, the user's eyes are judged to be open, the condition being:
P < PTH
wherein P is the power over a given period of time and PTH is the eye-closure Alpha threshold.
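The eye-state rule can be sketched analogously to the blink threshold; taking PTH as the midpoint between zmax and bmin is an assumption, and the energy values are illustrative:

```python
def alpha_threshold(open_energies, closed_energies):
    """P_TH between the largest eyes-open and smallest eyes-closed Alpha energy.

    Assumption: P_TH = (z_max + b_min) / 2, mirroring the blink threshold.
    """
    z_max = max(open_energies)
    b_min = min(closed_energies)
    return (z_max + b_min) / 2.0

def eyes_closed(power, p_th):
    """Alpha power rises when the eyes close, so P >= P_TH means eyes closed."""
    return power >= p_th

p_th = alpha_threshold([10, 12, 11], [20, 25, 22])  # (12 + 20) / 2 = 16.0
print(eyes_closed(18, p_th))  # resting with eyes closed
print(eyes_closed(9, p_th))   # power dropped: eyes open again
```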
9. A multi-modal character input method based on electroencephalography and electrooculography, characterized in that the method comprises the following steps:
displaying characters and controlling the flickering of the characters;
collecting electroencephalogram signals of a user, preprocessing the electroencephalogram signals, and then packaging the electroencephalogram signals into electroencephalogram packaging data;
analyzing the electroencephalogram encapsulation data, performing signal identification, and outputting a control instruction according to the identification result; the control instructions comprise a blink instruction and an eye-closing instruction, the blink instruction controlling the selection and withdrawal of characters and the eye-closing instruction controlling the suspension and resumption of the system;
and simultaneously, according to the control instruction and the information of the character which is flickering currently, obtaining a selection result of the user and outputting and displaying the selection result.
10. The multimodal character input method based on electroencephalogram and electrooculography according to claim 9, wherein:
the method for collecting the electroencephalogram signals of the user comprises the following steps: measuring a forehead measurement electroencephalogram signal of a user by adopting a single-channel EEG extraction chip to obtain an original acquisition signal;
the method for preprocessing the electroencephalogram signals comprises the following steps: converting the original acquisition signal from a time domain to a frequency domain by adopting fast Fourier transform, performing spectrum analysis, filtering and amplification on the signal converted into the frequency domain, and then performing inverse Fourier transform to restore the signal in the frequency domain into a time domain signal so as to obtain a filtered electroencephalogram signal;
the method for carrying out data encapsulation on the electroencephalogram signals comprises the following steps: the filtering electroencephalogram signals are packaged into a data packet with a preset format, the data packet comprises electroencephalogram signals with different frequency bands, concentration degrees and meditation degrees, and the data packet forms the electroencephalogram packaging data;
the method for analyzing the electroencephalogram encapsulation data comprises the following steps: analyzing the electroencephalogram encapsulation data to obtain target data, wherein the target data comprises the filtered electroencephalogram data and electroencephalogram Alpha wave data;
the signal identification method comprises the following steps: comparing the target data with a data threshold of a corresponding target signal, so as to identify the corresponding target signal, wherein the target signal comprises a preset electroencephalogram signal and a preset eye electrical signal, and the data threshold comprises a closed-eye Alpha threshold and a blink threshold;
when the target signal is identified as the preset electroencephalogram signal, outputting an eye closing instruction; when the target signal is identified as the preset eye electric signal, outputting a blinking instruction;
the method for displaying the characters used for character input comprises the following steps: dividing a display level into four stimulation paradigm interfaces, wherein the stimulation paradigm interfaces comprise a first level interface, a second level interface, a third level interface and a fourth level interface;
the primary interface is a syllable set, and the syllable set is divided into 3 options to be selected according to the pinyin characteristics of the Chinese characters, namely an initial consonant set, a vowel set and an integral reading syllable set;
the secondary interface is a specific syllable selection interface, and a user can enter the option to be selected from the secondary interface and select a specific syllable;
the three-level interface is the vowel set interface, and a user can select vowels which accord with grammar rules in the three-level interface;
the level four interface is the final set interface, and a user can select additional syllables in the level four interface or directly finish the syllable addition;
the method for controlling the flickering of the characters comprises the following steps: the first-level interface adopts a flashing paradigm of flashing only one option at a time, and the 3 options to be selected are sequentially flashed;
dividing the secondary interface into 3 areas, selecting one character from each area to form a combination, flickering the characters in the combination one after another at a minimum time interval, replacing them with a new combination and flickering again after all the characters in the combination have flickered, and repeating until blink signals have been detected twice and the two corresponding combinations include the same character;
the three-level interface and the four-level interface adopt a flashing paradigm that only one option flashes at a time, and syllables forming common words are preferentially selected to flash;
the blink threshold calculation method comprises the following steps:
collecting the electroencephalogram data generated by the user's blinks, and setting the amplitude data of k natural blinks as N = {n1, n2, n3, ..., nk} and the amplitude data of k voluntary blinks as M = {m1, m2, m3, ..., mk}; selecting the maximum value nmax in the natural-blink set N and the minimum value mmin in the voluntary-blink set M:
nmax = max{n1, n2, n3, ..., nk}
mmin = min{m1, m2, m3, ..., mk}
the condition for judging whether the current signal is a voluntary blink is:
A ≥ TH, where TH = (nmax + mmin)/2
wherein A is the amplitude of the current signal and TH represents the threshold for judging the amplitude of a voluntary-blink signal; the time T during which the current signal stays above the threshold must satisfy:
200 ms ≤ T ≤ 300 ms;
when the user confirms that an output is correct, the corresponding signal is marked as a voluntary blink signal and recorded; when the user confirms that an output is wrong, the signal is marked as a non-voluntary blink signal and recorded; for every five recorded signals of the same kind, a maximum value and a minimum value are selected for correcting the blink threshold in real time;
the method for calculating the eye-closing Alpha threshold value comprises the following steps:
collecting the electroencephalogram Alpha-wave data generated when the user's eyes are open and when they are closed, and setting the k total Alpha-energy values collected with eyes open as Z = {z1, z2, z3, ..., zk} and the k total-energy values collected with eyes closed as B = {b1, b2, b3, ..., bk}; selecting the maximum value zmax in the eyes-open data set Z and the minimum value bmin in the eyes-closed data set B:
zmax = max{z1, z2, z3, ..., zk}
bmin = min{b1, b2, b3, ..., bk}
the condition for judging that the current user is resting with eyes closed is:
P ≥ PTH, where PTH = (zmax + bmin)/2
and when an obvious drop in power is detected, the user's eyes are judged to be open, the condition being:
P < PTH
wherein P is the power over a given period of time and PTH is the eye-closure Alpha threshold.
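The preprocessing step of claim 10 (fast Fourier transform to the frequency domain, filtering there, then inverse transform back to the time domain) can be sketched as a frequency-domain band-pass; the sampling rate and band edges below are illustrative, not taken from the patent:

```python
import numpy as np

def bandpass_fft(signal, fs, low_hz, high_hz):
    """Band-pass by zeroing out-of-band FFT bins, then inverse-transforming."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 256                        # illustrative sampling rate, Hz
t = np.arange(fs) / fs          # one second of samples
# A 10 Hz (Alpha-band) component plus 50 Hz interference:
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
alpha = bandpass_fft(raw, fs, 8.0, 13.0)  # keep only the Alpha band (8-13 Hz)
```

On a whole-second window with integer-frequency components, the FFT bins are exact, so the 50 Hz interference is removed completely and the 10 Hz component passes through unchanged.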
CN202011074925.2A 2020-10-09 2020-10-09 Multi-mode character input system and method based on electroencephalogram and electrooculogram Pending CN112328072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011074925.2A CN112328072A (en) 2020-10-09 2020-10-09 Multi-mode character input system and method based on electroencephalogram and electrooculogram


Publications (1)

Publication Number Publication Date
CN112328072A true CN112328072A (en) 2021-02-05

Family

ID=74314733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011074925.2A Pending CN112328072A (en) 2020-10-09 2020-10-09 Multi-mode character input system and method based on electroencephalogram and electrooculogram

Country Status (1)

Country Link
CN (1) CN112328072A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206563942U (en) * 2016-12-08 2017-10-17 华南理工大学 A kind of characters spells system based on blink
CN107390869A (en) * 2017-07-17 2017-11-24 西安交通大学 Efficient brain control Chinese character input method based on movement vision Evoked ptential
CN110018743A (en) * 2019-04-12 2019-07-16 福州大学 Brain control Chinese pinyin tone input method


Non-Patent Citations (2)

Title
OTAVIO G. LINS et al.: "Ocular Artifacts in EEG and Event-Related Potentials I: Scalp Topography", BRAIN TOPOGRAPHY *
TANG Xiuwen et al.: "Character Input System Based on Electrooculography", Computer Systems &amp; Applications *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN115054795A (en) * 2022-05-25 2022-09-16 厦门猫一个文化创意有限公司 Meditation support device and meditation support system
CN115054795B (en) * 2022-05-25 2024-02-06 厦门猫一个文化创意有限公司 Meditation assistance device and meditation assistance system
CN115890655A (en) * 2022-10-11 2023-04-04 人工智能与数字经济广东省实验室(广州) Head posture and electro-oculogram-based mechanical arm control method, device and medium
CN115890655B (en) * 2022-10-11 2024-02-09 人工智能与数字经济广东省实验室(广州) Mechanical arm control method, device and medium based on head gesture and electrooculogram

Similar Documents

Publication Publication Date Title
US11468288B2 (en) Method of and system for evaluating consumption of visual information displayed to a user by analyzing user's eye tracking and bioresponse data
CN110765920B (en) Motor imagery classification method based on convolutional neural network
He et al. Real-time detection of acute cognitive stress using a convolutional neural network from electrocardiographic signal
Pfurtscheller et al. Graz-BCI: state of the art and clinical applications
CN111568446A (en) Portable electroencephalogram depression detection system combined with demographic attention mechanism
CN112259237B (en) Depression evaluation system based on multi-emotion stimulus and multi-stage classification model
KR101854812B1 (en) Psychiatric symptoms rating scale system using multiple contents and bio-signal analysis
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN112890834A (en) Attention-recognition-oriented machine learning-based eye electrical signal classifier
CN113197579A (en) Intelligent psychological assessment method and system based on multi-mode information fusion
CN112328072A (en) Multi-mode character input system and method based on electroencephalogram and electrooculogram
US20220313172A1 (en) Prediabetes detection system and method based on combination of electrocardiogram and electroencephalogram information
CN104771164A (en) Method utilizing event-related potentials equipment to assist in screening mild cognitive impairment
CN114343672A (en) Partial collection of biological signals, speech-assisted interface cursor control based on biological electrical signals, and arousal detection based on biological electrical signals
Kim et al. Meaning based covert speech classification for brain-computer interface based on electroencephalography
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
CN108491792B (en) Office scene human-computer interaction behavior recognition method based on electro-oculogram signals
KR101130761B1 (en) Thematic Apperception Test device Based on BCI.
CN114145745B (en) Graph-based multitasking self-supervision emotion recognition method
CN213423727U (en) Intelligent home control device based on TGAM
Mantri et al. Real time multimodal depression analysis
CN114983434A (en) System and method based on multi-mode brain function signal recognition
DOLEŽAL et al. Exploiting temporal context in high-resolution movement-related EEG classification
Murad et al. Unveiling Thoughts: A Review of Advancements in EEG Brain Signal Decoding into Text
CN111897428B (en) Gesture recognition method based on moving brain-computer interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210205