<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/8369</link>
    <description />
    <pubDate>Sat, 02 May 2026 00:10:35 GMT</pubDate>
    <dc:date>2026-05-02T00:10:35Z</dc:date>
    <item>
      <title>An investigation of foot temperature deviations in individuals with diabetes : insights from wearable in-shoe technology</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/143046</link>
      <description>Title: An investigation of foot temperature deviations in individuals with diabetes : insights from wearable in-shoe technology
Authors: Borg, Mark; Mizzi, Stephen; Farrugia, Robert; Mifsud, Tiziana; Mizzi, Anabelle; Bajada, Josef; Falzon, Owen
Abstract: Plantar foot temperature is a valuable indicator of diabetes-related complications, but traditional assessment methods, such as infrared thermography and contact thermometers, require unshod feet and controlled conditions, limiting their practicality for continuous monitoring. In this study, we employ a smart insole with 21 embedded temperature sensors to capture plantar temperature data from shod feet. We introduce a novel approach that leverages per-foot relative temperature values, normalized to the foot's mean, rather than absolute values or inter-foot asymmetry. Using data collected during static postures (lying, sitting, and standing), we evaluate multiple machine learning classifiers, with Random Forest achieving the highest accuracy (83.20%), alongside high sensitivity (93.75%) but moderate specificity (63.6%). To enhance explainability, we apply SHAP analysis to interpret model predictions and identify key sensor contributions. Additionally, we derive simple decision rules from the Random Forest model, finding that two medial arch sensors can achieve near-equivalent accuracy (80.38% and 79.82%) to the full model. These results suggest that deviations in plantar temperature patterns could serve as an indicator of diabetes-related foot health changes. Future work will expand this approach to ambulatory activities, integrating static and dynamic features to develop an insole-based system for continuous foot health monitoring in real-world settings.</description>
      <pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/143046</guid>
      <dc:date>2025-07-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Multimodal data fusion for enhanced smart contract reputability analysis</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141943</link>
      <description>Title: Multimodal data fusion for enhanced smart contract reputability analysis
Authors: Malik, Cyrus; Ellul, Joshua; Bajada, Josef
Abstract: The evaluation of smart contract reputability is essential to foster trust in decentralized ecosystems. However, existing methods that rely solely on static code analysis or transactional data offer limited insight into evolving trustworthiness. We propose a multimodal data fusion framework that integrates static code features with transactional data to enhance reputability prediction. Our framework initially focuses on static code analysis, utilizing GAN-augmented opcode embeddings to address class imbalance, achieving 97.67% accuracy and a recall of 0.942 in detecting illicit contracts, surpassing traditional oversampling methods. This forms the crux of a reputability-centric fusion strategy, where combining static and transactional data improves recall by 7.25% over single-source models, demonstrating robust performance across validation sets. By providing a holistic view of smart contract behaviour, our approach enhances the model's ability to assess reputability, identify fraudulent activities, and predict anomalous patterns. These capabilities contribute to more accurate reputability assessments, proactive risk mitigation, and enhanced blockchain security.</description>
      <pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141943</guid>
      <dc:date>2025-06-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141933</link>
      <description>Title: MRTMD : a multi-resolution dataset for evaluating object detection in traffic monitoring systems
Authors: Bugeja, Mark; Bartolo, Matthias; Montebello, Matthew; Seychell, Dylan
Abstract: Traffic monitoring reduces congestion, improves safety, and supports environmental sustainability. Real-time flow tracking, anomaly detection, and efficient management are key. Convolutional Neural Networks (CNNs) have become integral due to their compact size and easy deployment. However, their effectiveness depends heavily on the quality of the input data, especially image resolution. With high-resolution cameras, especially 4K, balancing image quality, detection accuracy, and system efficiency is critical. We propose the Multi-Resolution Traffic Monitoring Dataset (MRTMD), which captures transport scenes at resolutions ranging from 2160p to 360p. This dataset serves as a benchmark for standard object detection models, enabling the development of more efficient and cost-effective traffic monitoring solutions. MRTMD will be freely available on GitHub, offering a valuable resource for researchers and practitioners. We evaluate leading object detection models, namely YOLOv9, YOLOv8, YOLOv7, Faster R-CNN, FCOS, SSD, and RT-DETR, across varied resolutions. Our analysis focuses on mean Average Precision (mAP), recall, and processing time. We also assess the accuracy of Number Plate Recognition (NPR) for tasks that require fine-grained detail extraction. Our findings show that detection performance typically varies within ±0.01 to ±0.03 in mAP and recall across resolutions, suggesting higher resolutions are not always advantageous. However, they remain crucial for tasks like NPR. The multi-resolution dataset enables a comprehensive evaluation of the trade-off between image quality and task performance. Ultimately, our analysis highlights the importance of resolution selection in large-scale deployments, informing system designers and policymakers. This dataset is a vital tool for balancing performance, cost, and practical constraints in real-world traffic monitoring.</description>
      <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141933</guid>
      <dc:date>2025-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Advancing experiential learning through generative AI-powered virtual reality</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141907</link>
      <description>Title: Advancing experiential learning through generative AI-powered virtual reality
Authors: Borg, Gabriel; Azzopardi, Keith; Cini, Karl; Cardona, Luke; Caruana, Richard; Camilleri, Vanessa; Seychell, Dylan; Montebello, Matthew
Abstract: The accelerating complexity of professional practice requires higher education institutions to adopt innovative pedagogical approaches that bridge knowledge acquisition and authentic skills application. This paper presents the WAVE project, an educational innovation that integrates Generative Artificial Intelligence (AI) with Virtual Reality (VR) to create adaptive, immersive training environments for water-rescue education. Designed as a proof-of-concept, WAVE addresses key limitations of traditional training, including limited scenario variability, resource constraints, and safety risks, by leveraging Generative AI to dynamically construct diverse, context-rich emergency situations. Central to WAVE's design is a generative scenario engine that produces highly realistic virtual environments and variable rescue challenges, adapting to learner profiles, competencies, and progression. The system captures real-time performance data, such as decision-making, response time, and physiological indicators, and uses these inputs to personalise the learning pathway, ensuring that each training session evolves according to individual needs and skill development goals. This continuous adaptation supports experiential learning by exposing trainees to an extensive range of lifelike scenarios that would be impractical or unsafe to reproduce physically. The paper outlines the instructional design framework guiding the development of WAVE, with particular attention to how Generative AI enhances experiential learning, reflective practice, and mastery of critical decision-making. Preliminary pilot studies involving water-rescue trainees demonstrate promising outcomes, including increased situational awareness, improved procedural accuracy, and heightened learner engagement. Furthermore, participants report strong perceptions of realism, relevance, and motivation, highlighting the system's potential to foster deeper learning. Beyond its immediate application to water-rescue training, WAVE offers broader implications for higher education. The modular architecture and adaptive capabilities of Generative AI-powered VR can be extended to various disciplines requiring complex skill acquisition, including healthcare, engineering, crisis management, and teacher education. The paper concludes by discussing scalability, ethical considerations in AI-generated training content, and the essential role of human oversight to ensure pedagogical soundness and learner well-being. This contribution aims to stimulate dialogue on how Generative AI and VR can reshape experiential learning in higher education, offering scalable, safe, and personalised alternatives to traditional skills training.</description>
      <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141907</guid>
      <dc:date>2025-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

