CN116035592B - Method, system, equipment and medium for identifying turning intention based on deep learning - Google Patents

Method, system, equipment and medium for identifying turning intention based on deep learning

Info

Publication number
CN116035592B
CN116035592B (application CN202310033903.9A)
Authority
CN
China
Prior art keywords
electroencephalogram
intention
feature extraction
signals
turning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310033903.9A
Other languages
Chinese (zh)
Other versions
CN116035592A (en)
Inventor
王党校
张志毫
余济凡
张曜玺
张玉茹
郭卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202310033903.9A
Publication of CN116035592A
Application granted
Publication of CN116035592B

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses a deep-learning-based turning intention recognition method, system, equipment and medium, relating to the field of human-machine interaction. The method comprises: inputting an electroencephalogram signal to be detected into a turning intention recognition model to obtain a turning intention recognition result. The turning intention recognition model is determined by training a deep learning network on an electroencephalogram signal data set. The deep learning network comprises a spatial optimization module, a temporal optimization module, a feature extraction module and a feature fusion module connected in sequence: the spatial optimization module reduces the number of channels of the input electroencephalogram signal; the temporal optimization module extracts, from the signal output by the spatial optimization module, the electroencephalogram signals of a first time period and a second time period before the turning intention occurs; and the feature extraction module performs feature extraction on the first-period and second-period electroencephalogram signals separately to obtain a first and a second electroencephalogram feature. The invention improves the accuracy of turning intention recognition.

Description

Method, system, equipment and medium for identifying turning intention based on deep learning
Technical Field
The invention relates to the technical field of man-machine interaction, in particular to a turning intention recognition method, a system, equipment and a medium based on deep learning.
Background
Existing motor brain-signal decoding technology addresses the decoding and classification of limb motor imagery or motor execution, and usually performs recognition with only a few classes, such as binary or four-class classification. The problem with these methods is that accuracy drops severely as the number of classes grows, so multi-class results are unsatisfactory; that is, classification accuracy remains to be improved.
Disclosure of Invention
The invention aims to provide a method, a system, equipment and a medium for identifying the turning intention based on deep learning, which improve the accuracy of the turning intention identification.
In order to achieve the above object, the present invention provides the following solutions:
a turn intention recognition method based on deep learning, comprising:
Acquiring an electroencephalogram signal to be detected;
Inputting the brain electrical signal to be detected into a turning intention recognition model to obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram data set comprises an electroencephalogram signal and a turning intention state corresponding to the electroencephalogram signal;
The deep learning network comprises a space optimization module, a time optimization module, a feature extraction module and a feature fusion module which are connected in sequence; the space optimization module is used for reducing the channels of the input electroencephalogram signals; the time optimization module is used for extracting electroencephalogram signals of a first time period and a second time period before the turning intention occurs from the signals output by the space optimization module; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, wherein the first feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the first time period to obtain first electroencephalogram features, and the second feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the second time period to obtain second electroencephalogram features; the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting a turning intention recognition result.
Optionally, the turning intention recognition result includes: no turning intention; an intention to turn left; an intention to turn right; an intention to turn upward (head-up); an intention to turn downward (head-down); and intentions to turn back (return to center) from the left, from the right, from above, and from below.
Optionally, the construction process of the electroencephalogram signal data set includes:
An electroencephalogram cap device is adopted to collect the user's electroencephalogram signals for the 9 states before and after head turning;
acquiring signals of the head movement angle of the user according to the inertial measurement unit, and extracting electroencephalogram signals corresponding to the turning intention from the electroencephalogram signals of the user according to the signals of the head movement angle of the user;
preprocessing the electroencephalogram signal corresponding to the turning intention to obtain a preprocessed electroencephalogram signal;
The preprocessed electroencephalogram signals and the tag data form one sample data of the electroencephalogram signal data set; the label data is the turning intention corresponding to the preprocessed electroencephalogram signals.
Optionally, the first feature extraction branch and the second feature extraction branch have the same structure and each comprise a convolutional neural network, a first bidirectional long-short-term memory network and a second bidirectional long-short-term memory network which are sequentially connected.
Optionally, the first time period is from 950 ms to 650 ms before the action occurs, and the second time period is from 350 ms to 50 ms before the action occurs.
Optionally, the turning intention recognition model training process includes:
According to the electroencephalogram signal data set, the deep learning network is trained with an electroencephalogram signal as input and the corresponding turning intention state as output; ten-fold cross validation is used to evaluate the deep learning network during training.
The invention also discloses a turning intention recognition system based on deep learning, which comprises the following steps:
The electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals to be detected;
The turning intention recognition module is used for inputting the brain electrical signals to be detected into a turning intention recognition model to obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram data set comprises an electroencephalogram signal and a turning intention state corresponding to the electroencephalogram signal;
The deep learning network comprises a space optimization module, a time optimization module, a feature extraction module and a feature fusion module which are connected in sequence; the space optimization module is used for reducing the channels of the input electroencephalogram signals; the time optimization module is used for extracting electroencephalogram signals of a first time period and a second time period before the turning intention occurs from the signals output by the space optimization module; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, wherein the first feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the first time period to obtain first electroencephalogram features, and the second feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the second time period to obtain second electroencephalogram features; the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting a turning intention recognition result.
The invention also discloses an electronic device, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to enable the electronic device to execute the turning intention recognition method based on deep learning.
The invention also discloses a computer readable storage medium storing a computer program which when executed by a processor implements the turn intention recognition method based on deep learning.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
The invention reduces the channels of the input electroencephalogram signal through the spatial optimization module, selecting the channels with strong brain-feature activation as output; extracts, through the temporal optimization module, the electroencephalogram signals of a first and a second time period before the turning intention occurs from the signal output by the spatial optimization module; and performs feature extraction on the first-period and second-period electroencephalogram signals separately before fusing them. Feature extraction in both the spatial and temporal dimensions of the electroencephalogram signal is thus realized, which improves the accuracy of turning intention recognition, reduces model complexity, and increases prediction speed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a turning intention recognition method based on deep learning;
FIG. 2 is a schematic diagram of a turning intent recognition model structure according to the present invention;
fig. 3 is a schematic diagram of a turning intention recognition system based on deep learning.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a method, a system, equipment and a medium for identifying the turning intention based on deep learning, which improve the accuracy of the turning intention identification.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1
As shown in fig. 1, the turning intention recognition method based on deep learning of the present invention includes:
Step 101: and acquiring an electroencephalogram signal to be detected.
The step 101 specifically includes:
An electroencephalogram signal of a user is acquired by adopting an electroencephalogram cap device.
According to the inertial measurement unit, the electroencephalogram signal of the 1000 ms before the head-movement time point is extracted from the user's electroencephalogram signals.
Preprocessing the extracted electroencephalogram signals 1000ms before the head movement time point to obtain the electroencephalogram signals to be detected.
The specific preprocessing steps comprise electrode positioning, removal of useless electrodes, re-referencing, filtering, segmentation and baseline correction.
The electroencephalogram signals collected by the electroencephalogram cap device originally comprise 64 channels.
Electrode positioning: spatial registration of the channels is performed by loading channel position information that matches the recorded data, i.e. each channel name is checked against a spatial position (the electrode position on the electroencephalogram cap device).
Removal of useless electrodes: channels that are unwanted or carry no information are removed, and the electroencephalogram signals of the remaining 59 channels are retained.
Re-referencing: taking the average voltage of the two mastoids as the reference, the data of each channel is differenced against this average to obtain the channel's relative value.
Filtering: filtering with a 1-40 Hz band-pass filter.
Segmentation: the electroencephalogram data of the 2 s before the action is extracted in segments from the acquired data, and the remaining data are discarded.
Baseline correction: the average value of the first segment of the electroencephalogram data is taken as a reference value, which is subtracted from the subsequent electroencephalogram data.
The baseline corrected electroencephalogram signal is the preprocessed electroencephalogram signal.
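The numerical preprocessing steps above (re-referencing, band-pass filtering, segmentation, baseline correction) can be sketched on synthetic data with NumPy and SciPy. The mastoid channels, the 100 ms baseline window, and the placement of the action at the end of the recording are assumptions of this sketch, not specified by the description.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000
rng = np.random.default_rng(1)
raw = rng.standard_normal((59, 5 * FS))        # 59 channels, 5 s of EEG
mastoids = rng.standard_normal((2, 5 * FS))    # left/right mastoid channels (assumed)

# Re-referencing: subtract the mean of the two mastoid channels.
ref = mastoids.mean(axis=0)
reref = raw - ref

# Filtering: 1-40 Hz band-pass (4th-order Butterworth, zero-phase).
b, a = butter(4, [1, 40], btype="bandpass", fs=FS)
filtered = filtfilt(b, a, reref, axis=1)

# Segmentation: keep the 2 s immediately before the action
# (the action is assumed to occur at the end of the recording).
segment = filtered[:, -2 * FS:]

# Baseline correction: subtract the mean of an initial baseline interval
# (first 100 ms of the segment, an assumed baseline window).
baseline = segment[:, :100].mean(axis=1, keepdims=True)
corrected = segment - baseline
print(corrected.shape)
```

The zero-phase `filtfilt` avoids shifting the pre-movement features in time, which matters when later steps slice fixed millisecond windows.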
Step 102: inputting the brain electrical signal to be detected into a turning intention recognition model to obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram data set comprises electroencephalogram signals and turning intention states corresponding to the electroencephalogram signals.
As shown in fig. 2, the deep learning network includes a space optimization module, a time optimization module, a feature extraction module and a feature fusion module which are sequentially connected; the space optimization module is used for reducing the channels of the input electroencephalogram signals; the time optimization module is used for extracting electroencephalogram signals of a first time period and a second time period before the turning intention occurs from the signals output by the space optimization module; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, wherein the first feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the first time period to obtain first electroencephalogram features, and the second feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the second time period to obtain second electroencephalogram features; the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting a turning intention recognition result.
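The end-to-end data flow just described can be sketched with NumPy. The sampling rate (1000 Hz), channel counts (59 in, 30 selected), the two window positions and the 9 classes follow the description; the placeholder channel indices, the mean/variance features and the random linear fusion layer merely stand in for the learned modules and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 1000                      # sampling rate (Hz), per the description
N_CH_IN, N_CH_SEL = 59, 30     # channels before/after spatial optimization
N_CLASSES = 9                  # the nine turning-intention states

# One trial: 59 channels x 1000 ms of pre-movement EEG.
eeg = rng.standard_normal((N_CH_IN, FS))

# Spatial optimization: keep a preset subset of 30 strongly activated
# channels (placeholder indices; the patent derives them from activation maps).
sel_idx = np.arange(N_CH_SEL)
eeg_sel = eeg[sel_idx]                      # (30, 1000)

# Temporal optimization: two 300 ms windows before movement onset.
# Sample 0 is -1000 ms; sample 999 is just before onset.
w1 = eeg_sel[:, 50:350]                     # -950 .. -650 ms
w2 = eeg_sel[:, 650:950]                    # -350 .. -50 ms

# Stand-in feature extraction (the real model uses CNN + BiLSTM branches):
# per-channel mean and variance per window.
def features(win):
    return np.concatenate([win.mean(axis=1), win.var(axis=1)])

f1, f2 = features(w1), features(w2)         # (60,) each

# Feature fusion: concatenate and map to 9 class scores with a linear layer.
W = rng.standard_normal((N_CLASSES, f1.size + f2.size)) * 0.01
logits = W @ np.concatenate([f1, f2])       # (9,)
print(w1.shape, w2.shape, logits.shape)
```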
The spatial optimization module reduces the input 59-channel electroencephalogram signal to 30 channels for output, realizing the reduction of spatial channels. According to the known objective law of the brain when performing a given movement-intention task (i.e. the intensity distribution of activation across brain regions before and after the pre-movement intention arises), the pre-movement activation intensity of each brain region is obtained. The brain regions are then ranked by activation intensity to find the regions where different action intentions differ most in activation, and the corresponding 30 channel signals are selected from these regions as a substitute for the whole-brain signal. That is, the 30 channels with the strongest brain-feature activation are selected from the 59 channels as output; the 30 channels output by the spatial optimization module are thus 30 preset channels.
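A minimal sketch of this channel reduction, assuming mean signal power as the activation proxy; the patent derives activation from known physiological laws rather than from the trial itself, so the scoring function here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
eeg = rng.standard_normal((59, 1000))   # 59-channel pre-movement EEG

# Score each channel by an activation proxy, here mean power per channel.
power = (eeg ** 2).mean(axis=1)

# Keep the 30 highest-scoring channels; sort the indices so channel order
# is preserved for the downstream network.
top30 = np.sort(np.argsort(power)[-30:])
reduced = eeg[top30]
print(reduced.shape)
```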
The temporal optimization module performs slicing in the time dimension. According to the known objective law of the brain when performing a given head-movement-intention task (i.e. how the degree of brain activation changes over time in the period around the onset of the movement intention), the activation intensity of each period before the movement occurs is obtained. The 2 periods with the most characteristic activation are extracted, namely the electroencephalogram signals from 950 ms to 650 ms before the action (first time period) and from 350 ms to 50 ms before the action (second time period).
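The two windows can be cut from a 1000 ms pre-movement epoch as follows; the `window` helper and the millisecond-to-sample conversion assume the 1000 Hz sampling rate stated elsewhere in the description.

```python
import numpy as np

FS = 1000          # Hz, so 1 sample per ms
PRE = 1000         # the extracted epoch covers 1000 ms before movement onset

def window(epoch, start_ms_before, end_ms_before):
    """Slice [start_ms_before, end_ms_before) before onset from a
    (channels, PRE) pre-movement epoch; sample PRE-1 is just before onset."""
    i0 = PRE - start_ms_before
    i1 = PRE - end_ms_before
    return epoch[:, i0:i1]

epoch = np.arange(30 * PRE, dtype=float).reshape(30, PRE)
w1 = window(epoch, 950, 650)   # first period:  -950 .. -650 ms
w2 = window(epoch, 350, 50)    # second period: -350 .. -50 ms
print(w1.shape, w2.shape)
```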
The first and second feature extraction branches form a parallel structure whose purpose is to extract the head-movement intention features of the two time periods separately. The two branches have the same structure, each comprising a convolutional neural network, a first bidirectional long short-term memory network and a second bidirectional long short-term memory network connected in sequence. Within each branch, the convolutional neural network (CNN) performs preliminary extraction of the head-movement intention features, turning the 30-channel electroencephalogram signal into several characteristic temporal feature sequences; the two layers of bidirectional long short-term memory networks (BiLSTM) then re-extract the temporal features of the electroencephalogram.
The feature fusion module fuses the extracted electroencephalogram features (the first electroencephalogram feature and the second electroencephalogram feature) in 2 time periods through a fully connected network.
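One possible PyTorch realization of the two CNN + two-layer BiLSTM branches and the fully connected fusion is sketched below. The patent does not specify kernel sizes, hidden widths, pooling or activations, so all such hyperparameters (`conv_ch`, `hidden`, kernel size 7, average pooling by 4) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One feature-extraction branch: Conv1d over 30 EEG channels,
    then a 2-layer bidirectional LSTM over the resulting sequence."""
    def __init__(self, in_ch=30, conv_ch=16, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, conv_ch, kernel_size=7, padding=3),
            nn.BatchNorm1d(conv_ch), nn.ELU(), nn.AvgPool1d(4))
        self.lstm = nn.LSTM(conv_ch, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, x):            # x: (batch, 30, 300)
        x = self.cnn(x)              # (batch, conv_ch, 75)
        x = x.transpose(1, 2)        # (batch, 75, conv_ch)
        _, (h, _) = self.lstm(x)
        # concatenate the last layer's forward and backward hidden states
        return torch.cat([h[-2], h[-1]], dim=1)   # (batch, 2*hidden)

class TurnIntentNet(nn.Module):
    def __init__(self, n_classes=9, hidden=32):
        super().__init__()
        self.b1, self.b2 = Branch(hidden=hidden), Branch(hidden=hidden)
        self.fuse = nn.Linear(4 * hidden, n_classes)  # fully connected fusion

    def forward(self, w1, w2):       # the two 300 ms windows
        return self.fuse(torch.cat([self.b1(w1), self.b2(w2)], dim=1))

net = TurnIntentNet()
out = net(torch.randn(4, 30, 300), torch.randn(4, 30, 300))
print(out.shape)
```

A batch of four trials, each giving the two 300 ms windows, yields a `(4, 9)` score tensor over the nine intention classes.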
The turning intention recognition result includes: no turning intention; an intention to turn left; an intention to turn right; an intention to turn upward (head-up); an intention to turn downward (head-down); and intentions to turn back (return to center) from the left, from the right, from above, and from below.
The construction of the electroencephalogram signal data set is the training-data preparation for the 9 classes and specifically comprises:
Step a: an electroencephalogram cap device is adopted to collect the user's electroencephalogram signals for the 9 states before and after head turning.
Step b: signals of the user's head-movement angle are acquired from the inertial measurement unit, and the electroencephalogram signals corresponding to the turning intention are extracted from the user's electroencephalogram signals according to these angle signals.
Steps a and b specifically proceed as follows. The number of actually usable electroencephalogram channels is 59, generally covering all brain regions. In the acquisition experiment, left, right, up and down turning directions are prompted at random on a computer screen; to eliminate interference from visual factors, the user performs the corresponding voluntary turning movement two seconds after the prompt. During this process, the electroencephalogram signals are acquired by a 1000 Hz electroencephalogram acquisition device, while the head-movement angle is acquired over serial communication from a 1000 Hz inertial measurement unit (IMU); the purpose of the IMU acquisition is accurate calibration of the time phase of the electroencephalogram signals.
The acquired electroencephalogram signals in the resting state and the 8 head-movement-intention states are calibrated on a computer: angle mutation points are determined from the IMU signals to obtain accurate head-movement time points, and the electroencephalogram signal of the 1000 ms before each head-movement time point is then extracted.
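The IMU-based calibration can be illustrated as follows: a synthetic yaw-angle trace with an abrupt change stands in for the IMU signal, the first sample whose angular velocity exceeds a small threshold is taken as the head-movement time point, and the preceding 1000 ms of EEG is extracted. The threshold value and the simple first-difference detector are assumptions; the patent only states that angle mutation points are determined from the IMU signal.

```python
import numpy as np

FS = 1000
n = 3 * FS                                    # 3 s of IMU yaw angle at 1000 Hz
angle = np.zeros(n)
onset_true = 2000                             # head starts turning at t = 2.0 s
angle[onset_true:] = np.linspace(0, 45, n - onset_true)  # a 45-degree turn

# Mutation-point detection: first sample where the angular velocity
# exceeds a small threshold (degrees per sample, assumed value).
vel = np.abs(np.diff(angle))
onset = int(np.argmax(vel > 0.01))

# Extract the 1000 ms of EEG immediately before the detected onset.
eeg = np.random.default_rng(3).standard_normal((59, n))
epoch = eeg[:, onset - 1000:onset]
print(onset, epoch.shape)
```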
Step c: preprocessing the brain electrical signals corresponding to the turning head intention to obtain preprocessed brain electrical signals.
The preprocessing comprises electrode positioning, unnecessary electrode elimination, re-reference, filtering, segmentation and baseline correction processing of the brain electrical signals corresponding to the turning head intention.
Step d: the preprocessed electroencephalogram signals and the tag data form one sample data of the electroencephalogram signal data set; the label data is the turning intention corresponding to the preprocessed electroencephalogram signals.
The turning intention recognition model training process comprises the following steps:
According to the electroencephalogram signal data set, the deep learning network is trained with an electroencephalogram signal as input and the corresponding turning intention state as output; ten-fold cross validation is used to evaluate the deep learning network during training.
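Note that ten-fold cross validation is an evaluation protocol rather than a loss function; for a 9-class problem the training loss would typically be cross-entropy, though the patent does not name one. The ten-fold protocol can be sketched in pure NumPy with a placeholder majority-class "model" standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 12))            # 200 trials, toy feature vectors
y = rng.integers(0, 9, size=200)              # labels for the 9 intent classes

def ten_fold_indices(n, k=10, seed=0):
    """Shuffle trial indices and split them into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = ten_fold_indices(len(y))
accs = []
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Placeholder "model": predict the most frequent training label.
    majority = int(np.bincount(y[train_idx]).argmax())
    accs.append(float((y[test_idx] == majority).mean()))

print(len(folds), round(float(np.mean(accs)), 3))
```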
Through spatial-dimension optimization, temporal-dimension optimization and feature fusion, the invention obtains an optimized model, namely the turning intention recognition model, with higher accuracy and faster operation than the original model.
Compared with existing electroencephalogram decoding technology, the classification accuracy of the invention can still be guaranteed in the multi-class case. Meanwhile, the training time for building a subject's head-movement model is short, and once the subject's head-movement-intention decoding model is established, subsequent head-movement intention recognition is fast.
The invention obtains accurate state division through the IMU-signal calibration method, and thereby more accurate training data. Simple preprocessing of the electroencephalogram signal brings out the most salient features and filters out the influence of redundant interference features. The spatial optimization module optimizes the per-channel signals according to the spatial law of the brain's physiological activity when executing a given movement-intention task, selecting only the most critical channel data as a substitute for the whole-brain data; this further reduces the model parameters, yielding lower training cost and faster classification. The temporal optimization module uses prior knowledge, namely the temporal law of the brain's physiological activity when executing a given movement-intention task, to precisely locate the time periods best suited as classification samples, improving accuracy while controlling the size of the input data. The feature extraction module obtains a strong classification model directly by combining the convolutional neural network's ability to fit spatial features with the bidirectional long short-term memory network's ability to extract temporal features. The feature fusion module fuses the features output for the 2 most strongly activated time periods and can output a better classification result.
Example 2
Fig. 3 is a schematic structural diagram of a turning intention recognition system based on deep learning according to the present invention, as shown in fig. 3, a turning intention recognition system based on deep learning, including:
The electroencephalogram signal to be detected acquisition module 201 is configured to acquire an electroencephalogram signal to be detected.
The turning intention recognition module 202 is configured to input the electroencephalogram signal to be detected into a turning intention recognition model, and obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram data set comprises electroencephalogram signals and turning intention states corresponding to the electroencephalogram signals.
The deep learning network comprises a space optimization module, a time optimization module, a feature extraction module and a feature fusion module which are connected in sequence; the space optimization module is used for reducing the channels of the input electroencephalogram signals; the time optimization module is used for extracting electroencephalogram signals of a first time period and a second time period before the turning intention occurs from the signals output by the space optimization module; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, wherein the first feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the first time period to obtain first electroencephalogram features, and the second feature extraction branch is used for carrying out feature extraction on the electroencephalogram signals in the second time period to obtain second electroencephalogram features; the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting a turning intention recognition result.
Example 3
An embodiment of the present invention provides an electronic device including a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method of embodiment 1.
Alternatively, the electronic device may be a server.
In addition, the embodiment of the present invention also provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the method of embodiment 1.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and for identical or similar parts the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and reference may be made to the description of the method for the relevant details.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these teachings also fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (9)

1. A turning intention recognition method based on deep learning, comprising:
Acquiring an electroencephalogram signal to be detected;
Inputting the electroencephalogram signal to be detected into a turning intention recognition model to obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram signal data set comprise an electroencephalogram signal and the turning intention state corresponding to the electroencephalogram signal;
The deep learning network comprises a spatial optimization module, a temporal optimization module, a feature extraction module and a feature fusion module which are connected in sequence; the spatial optimization module is used for reducing the number of channels of the input electroencephalogram signal; the temporal optimization module is used for extracting, from the signal output by the spatial optimization module, the electroencephalogram signals of a first time period and a second time period before the turning intention occurs; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, the first feature extraction branch being used for performing feature extraction on the electroencephalogram signal of the first time period to obtain a first electroencephalogram feature, and the second feature extraction branch being used for performing feature extraction on the electroencephalogram signal of the second time period to obtain a second electroencephalogram feature; and the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting the turning intention recognition result.
2. The deep learning-based turning intention recognition method according to claim 1, wherein the turning intention recognition result comprises: no turning intention, an intention to turn the head to the left, an intention to turn the head to the right, an intention to raise the head, an intention to lower the head, an intention to turn back from the left, an intention to turn back from the right, an intention to turn back from above, and an intention to turn back from below.
3. The deep learning-based turning intention recognition method according to claim 1, wherein the construction process of the electroencephalogram signal data set comprises:
collecting, with an electroencephalogram cap device, the nine kinds of electroencephalogram signals of the user before and after turning the head;
acquiring a head movement angle signal of the user from an inertial measurement unit, and extracting the electroencephalogram signals corresponding to the turning intention from the user's electroencephalogram signals according to the head movement angle signal;
preprocessing the electroencephalogram signals corresponding to the turning intention to obtain preprocessed electroencephalogram signals; and
forming one sample of the electroencephalogram signal data set from the preprocessed electroencephalogram signals and the label data, the label data being the turning intention corresponding to the preprocessed electroencephalogram signals.
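The IMU-based epoch extraction described in claim 3 can be sketched as follows: detect the head-movement onset from the head-angle signal, then cut the electroencephalogram segment that immediately precedes it. The 15 deg/s velocity threshold, 1 kHz sampling rate, 1 s epoch length and synthetic step signal are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

fs = 1000                                   # sampling rate in Hz (assumed)
n = 10 * fs                                 # 10 s synthetic recording
angle = np.where(np.arange(n) >= 4 * fs, 30.0, 0.0)  # head yaw steps at t = 4 s
eeg = np.zeros((8, n))                      # synthetic 8-channel EEG

# Detect movement onset: the first sample whose angular velocity exceeds
# the threshold (15 deg/s is an assumed value).
velocity = np.diff(angle) * fs              # forward-difference velocity, deg/s
onset = int(np.argmax(np.abs(velocity) > 15.0)) + 1  # first moving sample

# Cut the 1 s of EEG immediately preceding the onset; its label is the
# turning intention corresponding to the executed movement.
epoch = eeg[:, onset - fs : onset]          # (channels, fs)
label = "turn_left"                         # hypothetical label string
print(onset, epoch.shape)
```

A real pipeline would apply this per trial after the preprocessing step of claim 3, pairing each extracted epoch with its movement-derived label.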
4. The deep learning-based turning intention recognition method according to claim 1, wherein the first feature extraction branch and the second feature extraction branch have the same structure, each comprising a convolutional neural network, a first bidirectional long short-term memory network and a second bidirectional long short-term memory network which are connected in sequence.
5. The deep learning-based turning intention recognition method according to claim 1, wherein the first time period is 950 ms to 650 ms before the action occurs, and the second time period is 350 ms to 50 ms before the action occurs.
6. The deep learning-based turning intention recognition method according to claim 1, wherein the training process of the turning intention recognition model comprises:
training the deep learning network according to the electroencephalogram signal data set, with the electroencephalogram signal as input and the turning intention state corresponding to the electroencephalogram signal as output, wherein a ten-fold cross-validation method is used in training the deep learning network.
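The training protocol of claim 6 can be sketched as below. One caveat: ten-fold cross-validation is a model-validation protocol rather than a loss function; within each fold a classification loss (for example cross-entropy) would be minimized. The dataset sizes and the majority-class predictor standing in for the trained network are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_folds = 100, 10
X = rng.standard_normal((n_samples, 32))   # placeholder fused EEG features
y = rng.integers(0, 9, n_samples)          # one of the 9 intention states

# Split the shuffled sample indices into ten disjoint folds.
indices = rng.permutation(n_samples)
folds = np.array_split(indices, n_folds)

accuracies = []
for k in range(n_folds):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
    # The deep learning network would be trained on (X[train_idx],
    # y[train_idx]) here; a majority-class predictor stands in for it.
    majority = int(np.bincount(y[train_idx]).argmax())
    accuracies.append(float(np.mean(y[test_idx] == majority)))

print(len(accuracies), round(float(np.mean(accuracies)), 3))
```

Each sample thus serves as test data exactly once, and the mean fold accuracy estimates how the model generalizes to unseen EEG trials.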
7. A turning intention recognition system based on deep learning, comprising:
The electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals to be detected;
The turning intention recognition module is used for inputting the electroencephalogram signal to be detected into a turning intention recognition model to obtain a turning intention recognition result; the turning intention recognition model is determined by training a deep learning network according to an electroencephalogram signal data set; the sample data in the electroencephalogram signal data set comprise an electroencephalogram signal and the turning intention state corresponding to the electroencephalogram signal;
The deep learning network comprises a spatial optimization module, a temporal optimization module, a feature extraction module and a feature fusion module which are connected in sequence; the spatial optimization module is used for reducing the number of channels of the input electroencephalogram signal; the temporal optimization module is used for extracting, from the signal output by the spatial optimization module, the electroencephalogram signals of a first time period and a second time period before the turning intention occurs; the feature extraction module comprises a first feature extraction branch and a second feature extraction branch, the first feature extraction branch being used for performing feature extraction on the electroencephalogram signal of the first time period to obtain a first electroencephalogram feature, and the second feature extraction branch being used for performing feature extraction on the electroencephalogram signal of the second time period to obtain a second electroencephalogram feature; and the feature fusion module is used for fusing the first electroencephalogram feature and the second electroencephalogram feature and outputting the turning intention recognition result.
8. An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202310033903.9A 2023-01-10 2023-01-10 Method, system, equipment and medium for identifying turning intention based on deep learning Active CN116035592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310033903.9A CN116035592B (en) 2023-01-10 2023-01-10 Method, system, equipment and medium for identifying turning intention based on deep learning


Publications (2)

Publication Number Publication Date
CN116035592A CN116035592A (en) 2023-05-02
CN116035592B true CN116035592B (en) 2024-06-14

Family

ID=86116788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310033903.9A Active CN116035592B (en) 2023-01-10 2023-01-10 Method, system, equipment and medium for identifying turning intention based on deep learning

Country Status (1)

Country Link
CN (1) CN116035592B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105051647A (en) * 2013-03-15 2015-11-11 英特尔公司 Brain computer interface (bci) system based on gathered temporal and spatial patterns of biophysical signals
CN110532887A (en) * 2019-07-31 2019-12-03 郑州大学 A kind of method for detecting fatigue driving and system based on facial characteristics fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242486B2 (en) * 2017-04-17 2019-03-26 Intel Corporation Augmented reality and virtual reality feedback enhancement system, apparatus and method
CN112008725B (en) * 2020-08-27 2022-05-31 北京理工大学 Human-computer fusion brain-controlled robot system
CN114027855B (en) * 2021-12-13 2022-09-23 北京航空航天大学 Electroencephalogram signal decoding method and system for recognizing head movement intention



Similar Documents

Publication Publication Date Title
WO2021184619A1 (en) Human body motion attitude identification and evaluation method and system therefor
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN108446678B (en) Dangerous driving behavior identification method based on skeletal features
EP3642696B1 (en) Method and device for detecting a user input on the basis of a gesture
CN110309813B (en) Model training method, detection method and device for human eye state detection based on deep learning, mobile terminal equipment and server
CN110333783B (en) Irrelevant gesture processing method and system for robust electromyography control
CN106778851B (en) Social relationship prediction system and method based on mobile phone evidence obtaining data
CN110646425B (en) Tobacco leaf online auxiliary grading method and system
CN112732092B (en) Surface electromyogram signal identification method based on double-view multi-scale convolution neural network
CN111368824B (en) Instrument identification method, mobile device and storage medium
US11908137B2 (en) Method, device and equipment for identifying and detecting macular region in fundus image
CN116035592B (en) Method, system, equipment and medium for identifying turning intention based on deep learning
CN110728287A (en) Image recognition method and device, electronic equipment and storage medium
CN113496176B (en) Action recognition method and device and electronic equipment
CN113284563A (en) Screening method and system for protein mass spectrum quantitative analysis result
CN117407748A (en) Electroencephalogram emotion recognition method based on graph convolution and attention fusion
CN105631395A (en) Iris recognition-based terminal control method and device
CN114027855B (en) Electroencephalogram signal decoding method and system for recognizing head movement intention
CN116350239A (en) Electroencephalogram signal concentration degree classification method and system
CN112949544A (en) Action time sequence detection method based on 3D convolutional network
CN114241363A (en) Process identification method, process identification device, electronic device, and storage medium
CN114387678A (en) Method and apparatus for evaluating language readability using non-verbal body symbols
CN112450946A (en) Electroencephalogram artifact restoration method based on loop generation countermeasure network
CN109635776A (en) Pass through the method for procedure identification human action
CN113570566B (en) Product appearance defect development cognition detection method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant