# EEG Motor Imagery Classification Using CNN, Transformer, and MLP


*Figure: An illustration of the CNN-Transformer-MLP model.*

The electroencephalogram (EEG) is a biosignal that records brain activity as time-series data. EEG can help amputees or paralyzed people move a prosthetic arm via a brain-computer interface (BCI). To identify the correct limb to control from the EEG signal, this work combines a CNN, a Transformer, and an MLP for motor imagery (MI) classification. The CNN converts the epoched EEG signal into a meaningful representation that accounts for the signal's non-stationary nature. The Transformer captures the global relationships within the representation produced by the CNN. The MLP then classifies which upper limb is expected to move based on the information extracted by the Transformer. To gauge the capability of the CNN-Transformer-MLP model, PhysioNet's EEG Motor Movement/Imagery Dataset is used. The model attains an accuracy of `76.4%` on the test set.
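
For concreteness, the sketch below shows one way such a CNN-Transformer-MLP pipeline could be wired up, assuming a Keras-style implementation in the spirit of the Keras timeseries examples credited at the end of this README. The layer sizes, kernel widths, and epoch length are illustrative assumptions, not the repository's exact configuration; only the 64-channel input matches the PhysioNet dataset.

```python
# Minimal sketch of a CNN-Transformer-MLP classifier for epoched EEG.
# Hyperparameters are illustrative, not the repository's exact settings.
import tensorflow as tf
from tensorflow.keras import layers


def build_model(n_timesteps, n_channels, head_size=64, num_heads=4, ff_dim=128):
    inputs = tf.keras.Input(shape=(n_timesteps, n_channels))

    # CNN front end: compress the raw epoch into a shorter feature sequence.
    x = layers.Conv1D(64, kernel_size=7, strides=2, padding="same",
                      activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=5, strides=2, padding="same",
                      activation="relu")(x)

    # Transformer encoder block: model global dependencies across the sequence.
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=head_size)(x, x)
    x = layers.LayerNormalization(epsilon=1e-6)(x + attn)
    ff = layers.Dense(ff_dim, activation="relu")(x)
    ff = layers.Dense(x.shape[-1])(ff)
    x = layers.LayerNormalization(epsilon=1e-6)(x + ff)

    # MLP head: decide which upper limb the subject imagined moving.
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # binary: left vs. right

    return tf.keras.Model(inputs, outputs)


# 64 EEG channels as in the PhysioNet dataset; 640 samples per epoch is an assumption.
model = build_model(n_timesteps=640, n_channels=64)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```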

## Experiment

To run the experiment, [click here](https://github.com/reshalfahsi/eeg-motor-imagery-classification/blob/master/EEG_Motor_Imagery_Classification_Using_CNN_Transformer_and_MLP.ipynb).
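
The notebook handles data preparation end to end; for reference, epoching the PhysioNet recordings with MNE-Python typically looks like the hedged sketch below. The subject, run numbers, filter band, and epoch window are illustrative assumptions and may differ from the notebook.

```python
# Sketch: load and epoch motor-imagery runs from the EEG Motor Movement/Imagery
# Dataset with MNE-Python. Subject/runs/filter/window are assumptions.
import mne
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf

runs = [4, 8, 12]  # runs with imagined left/right fist movement
raw_fnames = eegbci.load_data(1, runs)  # subject 1, for illustration
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
eegbci.standardize(raw)  # standardize channel names
raw.set_montage(mne.channels.make_standard_montage("standard_1005"))
raw.filter(7.0, 30.0)    # keep the mu/beta band

# Annotations T1/T2 mark imagined left/right fist movement in these runs.
events, _ = mne.events_from_annotations(raw, event_id=dict(T1=1, T2=2))
epochs = mne.Epochs(raw, events, event_id=dict(left=1, right=2),
                    tmin=-1.0, tmax=4.0, baseline=None, preload=True)

X = epochs.get_data()          # (n_epochs, n_channels, n_times)
X = X.transpose(0, 2, 1)       # (n_epochs, n_times, n_channels) for the model above
y = epochs.events[:, -1] - 1   # 0 = left fist, 1 = right fist
```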

## Result

### Quantitative Result
To quantitatively validate the CNN-Transformer-MLP model, two evaluation metrics are employed: accuracy and loss. Accuracy measures the proportion of correct predictions on a particular split of the dataset. Loss quantifies how close the predicted probability is to the actual label and is also the objective minimized during training. In this work, the binary cross-entropy (BCE) loss is adopted as the loss function; a minimal sketch of both metrics follows the table below.

Dataset Split | Accuracy | Loss
------------ | ------------- | -------------
Train | 82.9% | **0.201**
Validation | **84.4%** | 0.269
Test | 76.4% | 0.856
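
As a reference for how these two numbers are computed, here is a minimal NumPy sketch, assuming sigmoid outputs `probs` and ground-truth labels in `{0, 1}`; the variable names are hypothetical.

```python
# Accuracy and binary cross-entropy over a dataset split (illustrative sketch).
import numpy as np


def accuracy(probs, labels, threshold=0.5):
    # Fraction of epochs whose thresholded prediction matches the label.
    preds = (probs >= threshold).astype(int)
    return (preds == labels).mean()


def bce_loss(probs, labels, eps=1e-7):
    # Mean binary cross-entropy between predicted probabilities and labels.
    probs = np.clip(probs, eps, 1.0 - eps)
    return -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)).mean()


probs = np.array([0.9, 0.2, 0.7, 0.4])
labels = np.array([1, 0, 0, 1])
print(accuracy(probs, labels))  # 0.5
print(bce_loss(probs, labels))
```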

### Accuracy and Loss Curve

*Figure: Accuracy curve on the train set and the validation set.*



*Figure: Loss curve on the train set and the validation set.*

### Qualitative Result

Here, the qualitative performance of the model is presented.

*Figure: Correct prediction on the right arm class.*

*Figure: Correct prediction on the left arm class.*

*Figure: Incorrect prediction on the left arm class.*

## Citation

If you find this repository helpful for your research, you may cite it:

```bibtex
@misc{eegal-fahsi,
  title  = {EEG Motor Imagery Classification Using CNN, Transformer, and MLP},
  url    = {https://github.com/reshalfahsi/eeg-motor-imagery-classification},
  author = {Resha Dwika Hefni Al-Fahsi},
}
```

## Credit

- [EEG Motor Movement/Imagery Dataset](https://physionet.org/content/eegmmidb/1.0.0/)
- [Electroencephalogram Signal Classification for action identification](https://keras.io/examples/timeseries/eeg_signal_classification/)
- [Timeseries Classification With a Transformer Model](https://keras.io/examples/timeseries/timeseries_classification_transformer/)
- [EEG Classification](https://github.com/DavidSilveraGabriel/EEG-classification/blob/master/Using_mne_and_braindecode.ipynb)
- [Loading EEG Data](https://neuro.inf.unibe.ch/AlgorithmsNeuroscience/Tutorial_files/DataLoading.html)
- [MNE-Python](https://mne.tools/stable/glossary.html)
- [Self-Attention and Positional Encoding](https://d2l.ai/chapter_attention-mechanisms-and-transformers/self-attention-and-positional-encoding.html)