<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/27307">
    <title>OAR@UM Community:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/27307</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/144793" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/144765" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/144762" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/138669" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-05T07:26:16Z</dc:date>
  </channel>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/144793">
    <title>Towards a model of affect in architectural experience : using virtual environments as spatial elicitors to capture real-time and continuous observer feedback</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/144793</link>
    <description>Title: Towards a model of affect in architectural experience : using virtual environments as spatial elicitors to capture real-time and continuous observer feedback
Abstract: How do spaces make us feel? What is the perceived emotional impact of built forms? Is it possible to model this relationship? Estimating the affective impact of space has long been a challenge in understanding human-environment interaction. Most theories link environmental preference to instinct and evolution, with preferred spaces displaying features of refuge, naturalness, and ease of cognitive processing. Many studies examine the effects of space by focusing on specific elements and recording observer reactions. However, many of these studies rely on passive stimuli, such as photographs or static computer-generated imagery, to gather affective data. Other approaches employ invasive and specialized equipment, such as fMRI or EEG, but these methods are often impractical for widespread application. Advancements in Affective Computing offer effective and non-invasive means of capturing continuous affect annotations of dynamic media like movies, games, and 360-degree content. This thesis adopts a similar perspective, treating architectural experiences as continuous and evolving media experiences, akin to those in interactive media. It conceptualizes the emotional responses elicited by built environments as unfolding over time, shaped by key spatial and temporal elements. The aim of this dissertation is two-fold: (1) to understand the affective impact of key spatial elements under the temporal scope, and (2) to formalize and quantify this relationship. Key questions include: Can we reliably estimate the impact of spatial elements on human affect? To what extent can the affective impact of architectural experience, considering its temporal dimension, be modeled? How can we collect first-person annotations in response to spatial stimuli? To address these questions, four user studies were conducted using three types of stimuli and two approaches to affect annotation.
The findings provide insights into the dynamic interplay between architectural design and human emotion, contributing to both theoretical understanding and practical applications in Architecture and Affective Computing.
Description: Ph.D.(Melit.)</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/144765">
    <title>Procedural content creation in the age of generative AI</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/144765</link>
    <description>Title: Procedural content creation in the age of generative AI
Abstract: Within the domain of Computational Creativity, Procedural Content Generation (PCG) in multimodal domains presents challenges in modal alignment, diversity, and coherence. Although recent generative AI models, including Large Language Models (LLMs), have demonstrated improved cross-modal capabilities, they frequently struggle to balance autonomy and control when producing creative, high-quality artefacts. This thesis explores how generative AI can be structured to enhance diversity, coherence, and evaluation in multimodal PCG, with a focus on text and image generation. This work addresses the fundamental question: how can AI be guided to generate novel, high-quality, and semantically aligned artefacts across modalities? To this end, the research introduces four key approaches. First, evolutionary quality diversity algorithms are combined with generative models to improve output diversity in image generation while maintaining semantic relevance. Second, the MAP-Elites algorithm is augmented with Transverse Assessment to explore multimodal search spaces, discover diverse solutions, and improve coherence across text and image generation. Third, the thesis demonstrates CrawLLM, a zero-shot generative pipeline that uses LLMs to orchestrate the creation of game levels, narrative elements, and visual assets for a video game, ensuring thematic consistency and content structure. Finally, building on CrawLLM, the LLM-driven pipeline is incorporated into an adaptive evaluation loop to dynamically assess and regenerate underperforming artefacts, improving overall quality and thematic fit. These contributions advance methodologies for multimodal PCG by integrating generative AI with structured evaluation and optimisation techniques. 
The findings provide novel insights into search-based creativity, LLM-driven generative orchestration, and AI-assisted evaluation frameworks, with applications in computational creativity, game design, and digital media generation.
Description: Ph.D.(Melit.)</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/144762">
    <title>Advancing affect modelling via representation learning</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/144762</link>
    <description>Title: Advancing affect modelling via representation learning
Abstract: Affect modelling, the process of constructing computational models capable of recognising and interpreting human emotions, has seen significant advancements with the rise of machine learning. However, key challenges still need to be addressed, particularly in learning generalisable affective representations across different modalities and scenarios, especially in contexts where data is scarce or incomplete. This thesis explores these challenges through the lens of representation learning, with a specific focus on contrastive learning principles. The research is structured across three main parts. First, we investigate the use of supervised contrastive learning to model affective states. Through the development of novel methods, we demonstrate improvements in learning multimodal representations of affect, as evidenced by experiments on datasets such as RECOLA and AGAIN. The second part addresses the challenge of missing modalities in affective data. By leveraging privileged information during training, we introduce techniques that bridge the gap between controlled and in-the-wild affect modelling. Additional experiments demonstrate the robustness of these techniques across multiple modalities and datasets. Finally, the thesis tackles the problem of learning affective representations from a small number of samples, proposing a novel approach using contrastive learning to generate robust representations even in data-constrained environments. This work demonstrates the applicability of these methods across various contexts, including cross-game engagement prediction. The thesis concludes with a discussion of the limitations of the proposed methods and potential directions for future research, including the exploration of more diverse datasets and techniques to further enhance model generalisation and robustness in affective computing.
Description: Ph.D.(Melit.)</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/138669">
    <title>The relation between rhythm and combat in action games as explored through the case studies of Sekiro: Shadows Die Twice and Black Myth: Wukong</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/138669</link>
    <description>Title: The relation between rhythm and combat in action games as explored through the case studies of Sekiro: Shadows Die Twice and Black Myth: Wukong
Abstract: This dissertation explores the relation between rhythm and combat in action games. Many modern titles place great importance on the player’s ability to react and time their movements to succeed in a given combat encounter; the enemy presented to the player tends to follow a pattern of attacks, and the player must in turn react by inputting the correct sequence of button presses to counter the presented attack pattern. This dissertation argues that these sequences can be understood for their rhythmic value by applying concepts of rhythm drawn from music studies and game studies alike. Following this, the presence of rhythm in two case studies, Sekiro: Shadows Die Twice and Black Myth: Wukong, will be explored to identify a connection between rhythm and combat in action games. This connection will be further discussed in the conclusion, wherein the rhythmic affordances identified in both case studies will be directly compared, showing that rhythm is present in combat encounters and aids the player in progressing through them.
Description: M.Sc.(Melit.)</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

