Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/137793
| Title: | Mind to message : textual decoding of EEG patterns |
| Authors: | Farrugia, Jeremy (2025) |
| Keywords: | Electroencephalography -- Malta; Data sets -- Malta; Communication -- Malta; Self-help devices for people with disabilities -- Malta; People with disabilities -- Malta |
| Issue Date: | 2025 |
| Citation: | Farrugia, J. (2025). Mind to message: textual decoding of EEG patterns (Bachelor's dissertation). |
| Abstract: | Communication is central to human interaction. However, millions of individuals with severe speech or motor impairments, including those with amyotrophic lateral sclerosis (ALS), locked-in syndrome (LIS), or nonverbal autism, face significant challenges in expressing themselves. Assistive technologies aim to bridge this communication gap, and brain-computer interfaces (BCIs) offer a promising approach by enabling users to convey information using neural signals alone. Many modern BCIs use electroencephalography (EEG) to interpret brain signals, but often depend on stimulus-evoked responses or virtual keyboards, which can be slow and unintuitive. This project explores a more direct approach: converting expressive imagined speech (internal articulation of a word without actual vocalisation) into text, bypassing the need for external stimuli or physical interaction. To address this, a custom EEG dataset was collected from 21 participants (15 male), each imagining 20 commonly used words selected for their utility. The model architecture used was EEG-Deformer, which includes a shallow convolutional encoder for low-level features and deeper transformer-based modules for temporal dynamics. In addition to training a baseline model from scratch, this study presents the use of cross-task transfer learning for imagined speech decoding, a novel approach in this domain. The approach involved pretraining models on motor imagery and visual evoked potential datasets, followed by fine-tuning on imagined speech data. The baseline model achieved a mean within-subject classification accuracy of 84.1%. Models pretrained on motor imagery and visual evoked potentials achieved 54.0% and 77.5%, respectively. While pretrained models showed slightly faster convergence, they did not outperform the baseline, indicating limited knowledge transfer from the chosen tasks.
This work presents a domain-specific imagined speech BCI using a small, fixed vocabulary and EEG signals collected using a low-cost device. It highlights both the potential of imagined speech decoding for faster, more intuitive communication and the current limitations of cross-task transfer learning in this domain. |
| Description: | B.Sc. (Hons) ICT(Melit.) |
| URI: | https://www.um.edu.mt/library/oar/handle/123456789/137793 |
| Appears in Collections: | Dissertations - FacICT - 2025; Dissertations - FacICTAI - 2025 |
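The cross-task transfer procedure described in the abstract (pretrain on a source task such as motor imagery, then reuse the learned encoder and fine-tune on imagined speech) can be illustrated with a minimal sketch. This is not the dissertation's EEG-Deformer implementation: it uses a tiny two-layer NumPy network on synthetic features, and all names (`TwoLayerNet`, `X_src`, `X_tgt`, layer sizes) are illustrative assumptions chosen only to show the pretrain/transfer/fine-tune pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TwoLayerNet:
    """Tiny encoder + classifier head: a stand-in for a full EEG model."""
    def __init__(self, n_in, n_hidden, n_classes):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))       # "encoder" weights
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))  # task-specific head

    def forward(self, X):
        h = np.tanh(X @ self.W1)          # shared feature representation
        return h, softmax(h @ self.W2)    # class probabilities

    def train(self, X, y, epochs=200, lr=0.5, freeze_encoder=False):
        Y = np.eye(self.W2.shape[1])[y]   # one-hot targets
        for _ in range(epochs):
            h, p = self.forward(X)
            self.W2 -= lr * h.T @ (p - Y) / len(X)           # update head
            if not freeze_encoder:                           # optionally fine-tune encoder
                dh = (p - Y) @ self.W2.T * (1 - h ** 2)
                self.W1 -= lr * X.T @ dh / len(X)

def accuracy(net, X, y):
    return float((net.forward(X)[1].argmax(axis=1) == y).mean())

# Synthetic stand-ins for the two tasks (pre-extracted features, not raw EEG).
X_src = rng.normal(size=(200, 16)); y_src = rng.integers(0, 4, 200)    # 4-class source task
X_tgt = rng.normal(size=(100, 16)); y_tgt = rng.integers(0, 20, 100)   # 20-word target task

# 1) Pretrain on the source task (e.g. motor imagery).
src_net = TwoLayerNet(16, 32, 4)
src_net.train(X_src, y_src)

# 2) Transfer the encoder, attach a fresh 20-class head, fine-tune on the target task.
tgt_net = TwoLayerNet(16, 32, 20)
tgt_net.W1 = src_net.W1.copy()   # reuse pretrained encoder weights
tgt_net.train(X_tgt, y_tgt)      # fine-tune end to end on imagined-speech data
print("target-task accuracy:", accuracy(tgt_net, X_tgt, y_tgt))
```

The key design point mirrored from the abstract is that only the encoder transfers between tasks; the classification head must be reinitialised because the source and target label spaces differ (e.g. 4 motor-imagery classes versus 20 imagined words).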
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 2508ICTICT390900015020_1.PDF (Restricted Access) | | 17.67 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
