<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/521" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/521</id>
  <updated>2026-04-23T16:38:46Z</updated>
  <dc:date>2026-04-23T16:38:46Z</dc:date>
  <entry>
    <title>Word-specific properties affect classification performance in brain computer interfaces for decoding imagined speech from EEG</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144577" />
    <author>
      <name>Türk, Stefanie</name>
    </author>
    <author>
      <name>Padfield, Natasha</name>
    </author>
    <author>
      <name>Mujahid, Kamran</name>
    </author>
    <author>
      <name>Camilleri, Tracey A.</name>
    </author>
    <author>
      <name>Camilleri, Kenneth P.</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144577</id>
    <updated>2026-03-04T09:21:06Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Word-specific properties affect classification performance in brain computer interfaces for decoding imagined speech from EEG
Authors: Türk, Stefanie; Padfield, Natasha; Mujahid, Kamran; Camilleri, Tracey A.; Camilleri, Kenneth P.
Abstract: Decoding imagined speech from brain signals has become one of the most significant fields for BCI applications. One of the current challenges that researchers face is insufficient classification performance for real-world applications. In this study, we investigate for the first time the effect of word-specific properties known to modulate brain signals on classification performance. We chose 16 word prompts that vary in age of acquisition (AoA) and word frequency, two word-specific properties known to modulate speech processing, and investigated their classification performance for speech imagery (SI) trials compared to the idle state using a random forest classifier and 10-fold cross-validation. We found highly significant effects of AoA, word frequency and their interaction on classification performance. Our results yield evidence that the word frequency and AoA of word prompts used in SI paradigms significantly influence the classification accuracy in a BCI application when SI trials are compared to the idle state.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Registration of long-term recordings of thermographic video applied to foot temperature monitoring</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144560" />
    <author>
      <name>Gauci, Jean</name>
    </author>
    <author>
      <name>Falzon, Owen</name>
    </author>
    <author>
      <name>Camilleri, Kenneth P.</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144560</id>
    <updated>2026-03-04T06:39:19Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Registration of long-term recordings of thermographic video applied to foot temperature monitoring
Authors: Gauci, Jean; Falzon, Owen; Camilleri, Kenneth P.
Abstract: Dynamic thermal imaging of human subjects presents unique challenges to automated data processing. Variations in background-foreground contrast and diverse patterns on regions of interest of the body mean that classical processing techniques which were developed for RGB images might not be suitable for this kind of data. Additionally, subject movement during recording complicates the process further and necessitates correction for accurate thermal video analysis. In this study, a method for registering thermal video data is presented, allowing each pixel to correspond to the same anatomical location throughout the video. This registration facilitates subsequent processing, such as ROI extraction. The proposed registration method has two steps: the first addresses large linear deformations, while the second uses deep learning based on the SynthMorph architecture to register smaller, elastic deformations. This method manages to reduce the mean displacement of salient points by 71.5% on our test dataset. The algorithm was tested on thermal video data of the plantar aspect of human feet but has the potential to be implemented on other greyscale images and in other medical applications.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Real-time EOG signal baseline drift estimation using passive VOG data</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144557" />
    <author>
      <name>Mifsud, Matthew</name>
    </author>
    <author>
      <name>Camilleri, Tracey A.</name>
    </author>
    <author>
      <name>Camilleri, Kenneth P.</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144557</id>
    <updated>2026-03-03T15:06:50Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Real-time EOG signal baseline drift estimation using passive VOG data
Authors: Mifsud, Matthew; Camilleri, Tracey A.; Camilleri, Kenneth P.
Abstract: One of the main challenges when it comes to electrooculography (EOG)-based eye gaze tracking for the control of human-computer interface systems is the drifting baseline. This slow wander in the signal leads to erroneous gaze angle estimates and, over time, can make operating an application impossible. Baseline component estimation techniques have been proposed in the literature to model and remove the baseline drift component; however, most of these can only be carried out in an offline manner. In this work, we propose a novel drift mitigation technique which may be used to de-drift EOG signals in real time without requiring users to fixate at known target locations. The proposed approach makes use of a low-sampling-rate passive videooculography (VOG) source to model and remove the EOG signal baseline whilst preserving the signal's original morphology. Its performance, in terms of the horizontal and vertical gaze angle estimation error, is evaluated against standard baseline estimation techniques using data from ten subjects, demonstrating improved performance.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>EEG-based speech imagery decoding by dynamic hypergraph learning within projected and selected feature subspaces</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144556" />
    <author>
      <name>Li, Yibing</name>
    </author>
    <author>
      <name>Zhao, Zhenye</name>
    </author>
    <author>
      <name>Liu, Jiangchuan</name>
    </author>
    <author>
      <name>Peng, Yong</name>
    </author>
    <author>
      <name>Camilleri, Kenneth P.</name>
    </author>
    <author>
      <name>Kong, Wanzeng</name>
    </author>
    <author>
      <name>Cichocki, Andrzej</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144556</id>
    <updated>2026-03-03T15:03:08Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: EEG-based speech imagery decoding by dynamic hypergraph learning within projected and selected feature subspaces
Authors: Li, Yibing; Zhao, Zhenye; Liu, Jiangchuan; Peng, Yong; Camilleri, Kenneth P.; Kong, Wanzeng; Cichocki, Andrzej
Abstract: Objective. Speech imagery is a nascent paradigm that is receiving widespread attention in current brain–computer interface (BCI) research. By collecting the electroencephalogram (EEG) data generated when imagining the pronunciation of a sentence or word in the human mind, machine learning methods are used to decode the intention that the subject wants to express. Among existing decoding methods, a graph is often used as an effective tool to model the data structure; however, in the field of BCI research, the correlations between EEG samples may not be fully characterized by simple pairwise relationships. Therefore, this paper attempts to employ a more effective data structure to model EEG data. Approach. In this paper, we introduce a hypergraph to describe the high-order correlations between samples by viewing feature vectors extracted from each sample as vertices and then connecting them through hyperedges. We also dynamically update the weights of hyperedges, the weights of vertices and the structure of the hypergraph in two transformed subspaces, i.e. projected and feature-weighted subspaces. Accordingly, two dynamic hypergraph learning models, i.e. dynamic hypergraph semi-supervised learning within projected subspace (DHSLP) and dynamic hypergraph semi-supervised learning within selected feature subspace (DHSLF), are proposed for speech imagery decoding. Main results. To validate the proposed models, we performed a series of experiments on two EEG datasets. The obtained results demonstrated that both DHSLP and DHSLF achieve statistically significant improvements over existing studies in decoding imagined speech intentions. Specifically, DHSLP achieved accuracies of 78.40% and 66.64% on the two datasets, while DHSLF achieved accuracies of 71.07% and 63.94%. Significance. Our study indicates the effectiveness of the learned hypergraphs in characterizing the underlying semantic information of imagined contents; besides, interpretable results on quantitatively exploring the discriminative EEG channels in speech imagery decoding are obtained, which lay the foundation for further exploration of the physiological mechanisms during speech imagery.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

