<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Community:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/19739" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/19739</id>
  <updated>2026-04-04T19:45:54Z</updated>
  <dc:date>2026-04-04T19:45:54Z</dc:date>
  <entry>
    <title>Towards a model of affect in architectural experience : using virtual environments as spatial elicitors to capture real-time and continuous observer feedback</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144793" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144793</id>
    <updated>2026-03-11T13:03:40Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Towards a model of affect in architectural experience : using virtual environments as spatial elicitors to capture real-time and continuous observer feedback
Abstract: How do spaces make us feel? What is the perceived emotional impact of built forms? Is it possible to model this relationship? Estimating the affective impact of space has long been a challenge in understanding human-environment interaction. Most theories link environmental preference to instincts and evolution, with preferred spaces displaying features of refuge, naturalness, and ease of cognitive processing. Many studies examine the effects of space by focusing on specific elements and recording observer reactions. However, many of these studies rely on passive stimuli, such as photographs or static computer-generated imagery, to gather affective data. Other approaches employ invasive and specialized equipment, such as fMRI or EEG, but these methods are often impractical for widespread application. Advancements in Affective Computing offer effective and non-invasive means of capturing continuous affect annotations of dynamic media like movies, games, and 360-degree content. This thesis adopts a similar perspective, treating architectural experiences as continuous and evolving media experiences, akin to those in interactive media. It conceptualizes the emotional responses elicited by built environments as unfolding over time, shaped by key spatial and temporal elements. The aim of this dissertation is two-fold: (1) to understand the affective impact of key spatial elements within a temporal scope, and (2) to formalize and quantify this relationship. Key questions include: Can we reliably estimate the impact of spatial elements on human affect? To what extent can the affective impact of architectural experience, considering its temporal dimension, be modeled? How can we collect first-person annotations in response to spatial stimuli? To address these questions, four user studies were conducted using three types of stimuli and two approaches to affect annotation.
The findings provide insights into the dynamic interplay between architectural design and human emotion, contributing to both theoretical understanding and practical applications in Architecture and Affective Computing.
Description: Ph.D.(Melit.)</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Procedural content creation in the age of generative AI</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144765" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144765</id>
    <updated>2026-03-10T09:18:57Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Procedural content creation in the age of generative AI
Abstract: Within the domain of Computational Creativity, Procedural Content Generation (PCG) in multimodal domains presents challenges in modal alignment, diversity, and coherence. Although recent generative AI models, including Large Language Models (LLMs), have demonstrated improved cross-modal capabilities, they frequently struggle with balancing autonomy and control to produce creative, high-quality artefacts. This thesis explores how generative AI can be structured to enhance diversity, coherence, and evaluation in multimodal PCG, with a focus on text and image generation. This work addresses the fundamental question: how can AI be guided to generate novel, high-quality, and semantically aligned artefacts across modalities? To this end, the research introduces four key approaches. First, evolutionary quality diversity algorithms are combined with generative models to improve output diversity in image generation while maintaining semantic relevance. Second, the MAP-Elites algorithm is augmented with Transverse Assessment to explore multimodal search spaces, discover diverse solutions, and improve coherence across text and image generation. The third approach demonstrates CrawLLM, a zero-shot generative pipeline that uses LLMs to orchestrate the creation of game levels, narrative elements, and visual assets for a video game, ensuring thematic consistency and content structure. The fourth approach builds on CrawLLM by incorporating the LLM-driven pipeline into an adaptive evaluation loop to dynamically assess and regenerate underperforming artefacts, improving overall quality and thematic fit. These contributions advance methodologies for multimodal PCG by integrating generative AI with structured evaluation and optimisation techniques.
The findings provide novel insights into search-based creativity, LLM-driven generative orchestration, and AI-assisted evaluation frameworks, with applications in computational creativity, game design, and digital media generation.
Description: Ph.D.(Melit.)</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Advancing affect modelling via representation learning</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/144762" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/144762</id>
    <updated>2026-03-10T10:50:49Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Advancing affect modelling via representation learning
Abstract: Affect modelling, the process of constructing computational models capable of recognising and interpreting human emotions, has seen significant advancements with the rise of machine learning. However, key challenges still need to be addressed, particularly in learning generalisable affective representations across different modalities and scenarios, especially in contexts where data is scarce or incomplete. This thesis explores these challenges through the lens of representation learning, with a specific focus on contrastive learning principles. The research is structured across three main parts. First, we investigate the use of supervised contrastive learning to model affective states. Through the development of novel methods, we demonstrate improvements in learning multimodal representations of affect, as evidenced by experiments on datasets such as RECOLA and AGAIN. The second part addresses the challenge of missing modalities in affective data. By leveraging privileged information during training, we introduce techniques that bridge the gap between controlled and in-the-wild affect modelling. Additional experiments demonstrate the robustness of these techniques across multiple modalities and datasets. Finally, the thesis tackles the problem of learning affective representations from a small number of samples, proposing a novel approach using contrastive learning to generate robust representations even in data-constrained environments. This work demonstrates the applicability of these methods across various contexts, including cross-game engagement prediction. The thesis concludes with a discussion of the limitations of the proposed methods and potential directions for future research, including the exploration of more diverse datasets and techniques to further enhance model generalisation and robustness in affective computing.
Description: Ph.D.(Melit.)</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Playing with the dead : dead pools and the case of Fantamorto</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/143910" />
    <author>
      <name>Gualeni, Stefano</name>
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/143910</id>
    <updated>2026-02-20T12:53:04Z</updated>
    <published>2026-02-16T00:00:00Z</published>
    <summary type="text">Title: Playing with the dead : dead pools and the case of Fantamorto
Authors: Gualeni, Stefano
Abstract: Dead people can take part in gameplay. These playful possibilities reflect two broad trends in the social perception of human finitude: death acceptance and death denial. Some of the playful practices discussed in this article align with perspectives that regard mortality as a foundational -- and even affirmative -- aspect of human existence. These are games and videogames designed to sustain a sense of continuity and familiarity with the departed. Other games adopt a more antagonistic stance toward the dead, trivializing and commodifying their personal and historical significance. Among the latter, this article devotes particular attention to a family of playful practices known as “dead pool games,” playful folk phenomena that have thus far been overlooked within game studies.

Foregrounding the representational and ethical stakes of playing with the dead, the second half of the article traces the historical development of dead pool games and examines the ethically contentious design decisions that shape this genre. The inquiry culminates in an ethics-oriented analysis of the gameplay and design of Fantamorto -- a popular contemporary Italian instantiation of the dead pool game formula, with particular attention to how its rules frame death and human suffering as ludic resources within a competitive game economy.</summary>
    <dc:date>2026-02-16T00:00:00Z</dc:date>
  </entry>
</feed>

