<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/8369" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/8369</id>
  <updated>2026-04-23T18:04:02Z</updated>
  <dc:date>2026-04-23T18:04:02Z</dc:date>
  <entry>
    <title>An investigation of foot temperature deviations in individuals with diabetes : insights from wearable in-shoe technology</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/143046" />
    <author>
      <name>Borg, Mark</name>
    </author>
    <author>
      <name>Mizzi, Stephen</name>
    </author>
    <author>
      <name>Farrugia, Robert</name>
    </author>
    <author>
      <name>Mifsud, Tiziana</name>
    </author>
    <author>
      <name>Mizzi, Anabelle</name>
    </author>
    <author>
      <name>Bajada, Josef</name>
    </author>
    <author>
      <name>Falzon, Owen</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/143046</id>
    <updated>2026-01-23T13:22:18Z</updated>
    <published>2025-07-01T00:00:00Z</published>
    <summary type="text">Title: An investigation of foot temperature deviations in individuals with diabetes : insights from wearable in-shoe technology
Authors: Borg, Mark; Mizzi, Stephen; Farrugia, Robert; Mifsud, Tiziana; Mizzi, Anabelle; Bajada, Josef; Falzon, Owen
Abstract: Plantar foot temperature is a valuable indicator&#xD;
of diabetes-related complications, but traditional assessment&#xD;
methods, such as infrared thermography and contact&#xD;
thermometers, require unshod feet and controlled conditions,&#xD;
limiting their practicality for continuous monitoring. In&#xD;
this study, we employ a smart insole with 21 embedded&#xD;
temperature sensors to capture plantar temperature data&#xD;
from shod feet. We introduce a novel approach that leverages&#xD;
per-foot relative temperature values—normalized to the foot’s&#xD;
mean—rather than absolute values or inter-foot asymmetry.&#xD;
Using data collected during static postures (lying, sitting, and&#xD;
standing), we evaluate multiple machine learning classifiers,&#xD;
with Random Forest achieving the highest accuracy (83.20%),&#xD;
alongside high sensitivity (93.75%) but moderate specificity&#xD;
(63.6%). To enhance explainability, we apply SHAP analysis&#xD;
to interpret model predictions and identify key sensor&#xD;
contributions. Additionally, we derive simple decision rules&#xD;
from the Random Forest model, finding that two medial&#xD;
arch sensors can achieve near-equivalent accuracy (80.38%&#xD;
and 79.82%) to the full model. These results suggest that&#xD;
deviations in plantar temperature patterns could serve as an&#xD;
indicator of diabetes-related foot health changes. Future work&#xD;
will expand this approach to ambulatory activities, integrating&#xD;
static and dynamic features to develop an insole-based system&#xD;
for continuous foot health monitoring in real-world settings.</summary>
    <dc:date>2025-07-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Multimodal data fusion for enhanced smart contract reputability analysis</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/141943" />
    <author>
      <name>Malik, Cyrus</name>
    </author>
    <author>
      <name>Ellul, Joshua</name>
    </author>
    <author>
      <name>Bajada, Josef</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/141943</id>
    <updated>2025-12-04T14:04:21Z</updated>
    <published>2025-06-01T00:00:00Z</published>
    <summary type="text">Title: Multimodal data fusion for enhanced smart contract reputability analysis
Authors: Malik, Cyrus; Ellul, Joshua; Bajada, Josef
Abstract: The evaluation of smart contract reputability is&#xD;
essential to foster trust in decentralized ecosystems. However,&#xD;
existing methods that rely solely on static code analysis&#xD;
or transactional data, offer limited insight into evolving&#xD;
trustworthiness.We propose a multimodal data fusion framework&#xD;
that integrates static code features with transactional data&#xD;
to enhance reputability prediction. Our framework initially&#xD;
focuses on static code analysis, utilizing GAN-augmented opcode&#xD;
embeddings to address class imbalance, achieving 97.67%&#xD;
accuracy and a recall of 0.942 in detecting illicit contracts,&#xD;
surpassing traditional oversampling methods. This forms the&#xD;
crux of a reputability-centric fusion strategy, where combining&#xD;
static and transactional data improves recall by 7.25% over&#xD;
single-source models, demonstrating robust performance across&#xD;
validation sets. By providing a holistic view of smart contract&#xD;
behaviour, our approach enhances the model’s ability to&#xD;
assess reputability, identify fraudulent activities, and predict&#xD;
anomalous patterns. These capabilities contribute to more&#xD;
accurate reputability assessments, proactive risk mitigation, and&#xD;
enhanced blockchain security.</summary>
    <dc:date>2025-06-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/141933" />
    <author>
      <name>Bugeja, Mark</name>
    </author>
    <author>
      <name>Bartolo, Matthias</name>
    </author>
    <author>
      <name>Montebello, Matthew</name>
    </author>
    <author>
      <name>Seychell, Dylan</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/141933</id>
    <updated>2025-12-04T10:32:46Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems
Authors: Bugeja, Mark; Bartolo, Matthias; Montebello, Matthew; Seychell, Dylan
Abstract: Traffic monitoring reduces congestion, improves safety, and supports environmental&#xD;
sustainability. Real-time flow tracking, anomaly detection, and efficient management are key. Convolutional&#xD;
Neural Networks (CNNs) have become integral due to their compact size and easy deployment. However,&#xD;
their effectiveness depends heavily on the quality of the input data, especially image resolution. With highresolution&#xD;
cameras, especially 4K, balancing image quality, detection accuracy, and system efficiency is&#xD;
critical. We propose the Multi-Resolution Traffic Monitoring Dataset (MRTMD), which captures transport&#xD;
scenes at resolutions ranging from 2160p to 360p. This dataset serves as a benchmark for standard object&#xD;
detection models, enabling the development of more efficient and cost-effective traffic monitoring solutions.&#xD;
MRTMD will be freely available on GitHub, offering a valuable resource for researchers and practitioners.&#xD;
We evaluate leading object detection models—YOLOv9, YOLOv8, YOLOv7, Faster R-CNN, FCOS,&#xD;
SSD, and RT-DETR—across varied resolutions. Our analysis focuses on mean Average Precision (mAP),&#xD;
recall, and processing time. We also assess the accuracy of Number Plate Recognition (NPR) for tasks&#xD;
that require fine-grained detail extraction. Our findings show that detection performance typically varies&#xD;
within ±0.01 to ±0.03 in mAP and recall across resolutions, suggesting higher resolutions are not always&#xD;
advantageous. However, they remain crucial for tasks like NPR. The multi-resolution dataset enables a&#xD;
comprehensive evaluation of the trade-off between image quality and task performance. Ultimately, our&#xD;
analysis highlights the importance of resolution selection in large-scale deployments, informing system&#xD;
designers and policymakers. This dataset is a vital tool for balancing performance, cost, and practical&#xD;
constraints in real-world traffic monitoring.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Advancing experiential learning through generative AI-powered virtual reality</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/141907" />
    <author>
      <name>Borg, Gabriel</name>
    </author>
    <author>
      <name>Azzopardi, Keith</name>
    </author>
    <author>
      <name>Cini, Karl</name>
    </author>
    <author>
      <name>Cardona, Luke</name>
    </author>
    <author>
      <name>Caruana, Richard</name>
    </author>
    <author>
      <name>Camilleri, Vanessa</name>
    </author>
    <author>
      <name>Seychell, Dylan</name>
    </author>
    <author>
      <name>Montebello, Matthew</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/141907</id>
    <updated>2025-12-03T14:02:55Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Advancing experiential learning through generative AI-powered virtual reality
Authors: Borg, Gabriel; Azzopardi, Keith; Cini, Karl; Cardona, Luke; Caruana, Richard; Camilleri, Vanessa; Seychell, Dylan; Montebello, Matthew
Abstract: The accelerating complexity of professional practice requires higher education institutions to adopt &#xD;
innovative pedagogical approaches that bridge knowledge acquisition and authentic skills application. &#xD;
This paper presents the WAVE project, an educational innovation that integrates Generative Artificial &#xD;
Intelligence (AI) with Virtual Reality (VR) to create adaptive, immersive training environments for water-rescue education. Designed as a proof-of-concept, WAVE addresses key limitations of traditional &#xD;
training including limited scenario variability, resource constraints, and safety risks by leveraging &#xD;
Generative AI to dynamically construct diverse, context-rich emergency situations. Central to WAVE’s &#xD;
design is a generative scenario engine that produces highly realistic virtual environments and variable &#xD;
rescue challenges, adapting to learner profiles, competencies, and progression. The system captures &#xD;
real-time performance data, such as decision-making, response time, and physiological indicators and &#xD;
uses these inputs to personalise the learning pathway, ensuring that each training session evolves &#xD;
according to individual needs and skill development goals. This continuous adaptation supports &#xD;
experiential learning by exposing trainees to an extensive range of lifelike scenarios that would be &#xD;
impractical or unsafe to reproduce physically. The paper outlines the instructional design framework &#xD;
guiding the development of WAVE, with particular attention to how Generative AI enhances experiential &#xD;
learning, reflective practice, and mastery of critical decision-making. Preliminary pilot studies involving &#xD;
water-rescue trainees demonstrate promising outcomes, including increased situational awareness, &#xD;
improved procedural accuracy, and heightened learner engagement. Furthermore, participants report &#xD;
strong perceptions of realism, relevance, and motivation, highlighting the system’s potential to foster &#xD;
deeper learning. Beyond its immediate application to water-rescue training, WAVE offers broader &#xD;
implications for higher education. The modular architecture and adaptive capabilities of Generative AI-powered VR can be extended to various disciplines requiring complex skill acquisition, including &#xD;
healthcare, engineering, crisis management, and teacher education. The paper concludes by discussing &#xD;
scalability, ethical considerations in AI-generated training content, and the essential role of human &#xD;
oversight to ensure pedagogical soundness and learner well-being. This contribution aims to stimulate &#xD;
dialogue on how Generative AI and VR can reshape experiential learning in higher education, offering &#xD;
scalable, safe, and personalised alternatives to traditional skills training.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
</feed>