<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/8337">
    <title>OAR@UM Community:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/8337</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/144591" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/140933" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/140266" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/140265" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T22:28:37Z</dc:date>
  </channel>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/144591">
    <title>Multitemporal and multispectral data fusion for super-resolution of Sentinel-2 images</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/144591</link>
    <description>Title: Multitemporal and multispectral data fusion for super-resolution of Sentinel-2 images
Authors: Tarasiewicz, Tomasz; Nalepa, Jakub; Farrugia, Reuben A.; Valentino, Gianluca; Chen, Mang; Briffa, Johann A.; Kawulok, Michal
Abstract: Multispectral Sentinel-2 (S-2) images are a valuable source of Earth observation data; however, the spatial resolution of their spectral bands, limited to 10-, 20-, and 60-m ground sampling distance (GSD), remains insufficient in many cases. This problem can be addressed with super-resolution (SR), aimed at reconstructing a high-resolution (HR) image from a low-resolution (LR) observation. For S-2, spectral information fusion allows for enhancing the 20- and 60-m bands to the 10-m resolution. There have also been attempts to combine multitemporal stacks of individual S-2 bands; however, these two approaches have not been combined so far. In this article, we introduce DeepSent, a new deep network for super-resolving multitemporal series of multispectral S-2 images. It is underpinned with information fusion performed simultaneously in the spectral and temporal dimensions to generate an enlarged multispectral image (MSI). In our extensive experimental study, we demonstrate that our solution outperforms other state-of-the-art techniques that realize either multitemporal or multispectral data fusion. Furthermore, we show that the advantage of DeepSent results from how these two fusion types are combined in a single architecture, which is superior to performing such fusion in a sequential manner. Importantly, we have applied our method to super-resolve real-world S-2 images, enhancing the spatial resolution of all the spectral bands to 3.3-m nominal GSD, and we compare the outcome with very HR WorldView-2 images. We have made our implementation publicly available, and we expect it will increase the possibilities of exploiting super-resolved S-2 images in real-life applications.</description>
    <dc:date>2023-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/140933">
    <title>Model-driven federated learning for channel estimation in millimeter-wave massive MIMO systems</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/140933</link>
    <description>Title: Model-driven federated learning for channel estimation in millimeter-wave massive MIMO systems
Authors: Yi, Qin; Yang, Ping; Liu, Zilong; Huang, Yiqian; Zammit, Saviour
Abstract: This paper investigates the model-driven federated learning (FL) for channel estimation in multi-user millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems. Firstly, we formulate it as a sparse signal recovery problem by exploiting the beamspace domain sparsity of the mmWave channels. Then, we propose an FL-based learned approximate message passing (LAMP) channel estimation scheme, namely FL-LAMP, where the LAMP network is trained by an FL framework. Specifically, the base station (BS) and users jointly train the LAMP network, where the users update the local LAMP network parameters by local datasets consisting of measurement signals and beamspace channels, and the BS calculates the global LAMP network parameters by aggregating the local network parameters from all the users. The beamspace channel can thus be obtained in real time from the measurement signal based on the parameters of the trained LAMP network. Simulation results demonstrate that the proposed FL-LAMP scheme can achieve better channel estimation accuracy than the existing orthogonal matching pursuit (OMP) and approximate message passing (AMP) schemes, and provides satisfactory prediction capability for multipath channels.</description>
    <dc:date>2024-04-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/140266">
    <title>Real-time multi-camera tracking and OD-matrix estimation of vehicles</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/140266</link>
    <description>Title: Real-time multi-camera tracking and OD-matrix estimation of vehicles
Abstract: With computer vision, it is possible to capture data which is of great use to urban planners and infrastructure engineers. Informed decisions can then be taken to evolve existing and new infrastructure in a more robust and greener way. Data can be captured with the use of a single-camera tracker, which detects and tracks vehicles and pedestrians in the camera view. However, in more complex scenarios, such as a roundabout or intersection, the use of a single camera is not sufficient. For this study, a single-camera tracker, developed by Greenroads Ltd, is readily available [...]
Description: M.Sc. ICT(Melit.)</description>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/140265">
    <title>Detecting anomalies from roadside video streams</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/140265</link>
    <description>Title: Detecting anomalies from roadside video streams
Abstract: The interconnected nature of road networks implies that anomalies on narrow residential roads can ripple through the entire traffic system, particularly in high-traffic areas, as is common for the Maltese Islands. Detecting anomalies in such environments using roadside cameras is challenging due to the multitude of normal and anomalous events, changes in illumination, obstructions, complex anomalies, and difficult viewing angles. This thesis investigates anomaly detection methods tailored to the realistic road and data limitations typical of Maltese urban roads. Classical anomaly detection, which identifies anomalies from structured data, and deep learning-based techniques, which detect anomalies directly from video input, were evaluated. The literature review revealed limited evaluations on realistic datasets for both methods. The classical method was developed to filter out ID switch artifacts and identify specific anomalies using a combination of filtering, DBSCAN clustering, masking, and rule-based techniques. For the deep learning method, an AE model with the STAE [1] architecture was chosen for its ability to capture temporal representation. Both methods were evaluated on video datasets collected in Malta and a relabeled Street Scene [2] dataset. The classical method demonstrated high reliability in detecting anomalies in structured data, achieving an 82% true positive rate and a 3% false positive rate for a local dataset. However, the data acquisition method did not accurately record all anomalies, reducing the true positive rate for actual video anomalies. The deep learning method showed strong performance across all datasets, achieving an 83% AUC and a 25% EER for a dataset recorded in the same location. Performance was slightly reduced for locations with heavy shadows, as shown on a second local dataset. Segmenting frames into tiles and augmenting datasets improved performance in shadow-affected conditions, as did masking irrelevant regions. An event-level comparison showed both methods performed similarly in detecting non-typical vehicle paths. The classical method excelled at identifying non-typical object locations and was more robust against changes in scene dynamics, more modular, and easier to debug. The deep learning method was better at detecting non-typical slow-moving and non-typical vehicles and was more resilient to variations in the data acquisition method within the Intelligent Traffic System (ITS). However, neither method effectively detected unforeseen anomalies. Overall, this thesis provides valuable insights and guidance for choosing the most appropriate anomaly detection methods tailored to different types of anomalies in complex urban road environments.
Description: M.Sc. ICT(Melit.)</description>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>