<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/5429">
    <title>OAR@UM Community:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/5429</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/145275" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/144842" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/142795" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/141214" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-15T18:52:02Z</dc:date>
  </channel>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/145275">
    <title>Do hand gestures increase perceived prominence in naturally produced utterances?</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/145275</link>
    <description>Title: Do hand gestures increase perceived prominence in naturally produced utterances?
Authors: Paggio, Patrizia; Mitterer, Holger; Attard, Greta; Vella, Alexandra
Abstract: This study investigates the effect of visually perceived gestures on the overall (multimodal) prominence of naturally occurring stimuli extracted from a multimodal corpus of Maltese conversations. Participants were required to rate the prominence of target words in sentences presented to them as audiovisual and audio-only stimuli. In half of the stimuli, the target word was accompanied by a co-speech hand gesture. The results of the experiment show (i) that words produced with a co-speech gesture were rated as more prominent than words produced without one, and (ii) that this was the case regardless of whether raters could see those gestures (audiovisual condition) or not (audio-only condition). An acoustic analysis of the data shows that the presence of a co-occurring gesture has a significant effect on the pitch of the target vowel. The study suggests that gestures may provide the listener with an additional, but not necessary, cue to the perception of prominence.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/144842">
    <title>Investigating the post-vocalic /r/ in Maltese English and its potential intra- and inter-speaker variation</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/144842</link>
    <description>Title: Investigating the post-vocalic /r/ in Maltese English and its potential intra- and inter-speaker variation
Abstract: The postvocalic /r/ is a segment of much interest in several languages and dialects of English. This dissertation uses a set of eight speakers from the Corpus of Spoken Maltese English to find patterns in the realisation of the postvocalic /r/ that could indicate potential trends in the rhoticity of the dialect. Only 10.8% of the postvocalic /r/s measured across all speakers were realised as rhotic phonemes, yet all speakers showed varying distributions of /r/ realisations, suggesting that rhoticity may be a continuum along which Maltese English speakers occupy multiple positions, indicating considerable inter-speaker variation. This is reinforced by the fact that all speakers articulate postalveolar approximant /r/s but only some articulate alveolar taps; compared with the previously common trill, which was articulated in the same contexts, this may indicate a broader pattern of /r/ loss across languages, and it further demonstrates inter-speaker variation. Intra-speaker variation is also present in the frequency of /r/ articulation over time: the first minute of each recording is decidedly less rhotic than the rest, which may suggest that audience design is a factor.
Description: B.A. (Hons)(Melit.)</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/142795">
    <title>Interpreting vision and language generative models with semantic visual priors</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/142795</link>
    <description>Title: Interpreting vision and language generative models with semantic visual priors
Authors: Cafagna, Michele; Rojas-Barahona, Lina M.; van Deemter, Kees; Gatt, Albert
Abstract: When applied to image-to-text models, explainability methods face two challenges. First, they often provide token-by-token explanations; that is, they compute a visual explanation for each token of the generated sequence. This makes explanations expensive to compute and unable to comprehensively explain the model’s output. Second, for models with visual inputs, explainability methods such as SHAP typically consider superpixels as features. Since superpixels do not correspond to semantically meaningful regions of an image, this makes explanations harder to interpret. We develop a framework based on SHAP that allows for generating comprehensive, meaningful explanations by leveraging the meaning representation of the output sequence as a whole. Moreover, by exploiting semantic priors in the visual backbone, we extract an arbitrary number of features that allow the efficient computation of Shapley values on large-scale models, while at the same time generating highly meaningful visual explanations. We demonstrate that our method generates semantically more expressive explanations than traditional methods at a lower compute cost and that it can be generalized to a large family of vision-language models.</description>
    <dc:date>2023-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/141214">
    <title>TextFocus : assessing the faithfulness of feature attribution methods explanations in natural language processing</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/141214</link>
    <description>Title: TextFocus : assessing the faithfulness of feature attribution methods explanations in natural language processing
Authors: Mariotti, Ettore; Arias-Duart, Anna; Cafagna, Michele; Gatt, Albert; Garcia-Gasulla, Dario; Alonso-Moral, Jose Maria
Abstract: Among the existing eXplainable AI (XAI) approaches, Feature Attribution methods are a popular option due to their interpretable nature. However, each method leads to a different solution, thus introducing uncertainty regarding their reliability and coherence with respect to the underlying model. This work introduces TextFocus, a metric for evaluating the faithfulness of Feature Attribution methods for Natural Language Processing (NLP) tasks involving classification. To address the absence of ground truth explanations for such methods, we introduce the concept of textual mosaics. A mosaic is composed of a combination of sentences belonging to different classes, which provides an implicit ground truth for attribution. The accuracy of explanations can then be evaluated by comparing feature attribution scores with the known class labels in the mosaic. The performance of six feature attribution methods is systematically compared on three sentence classification tasks using TextFocus, with Integrated Gradients being the best overall method in terms of faithfulness and computational requirements. The proposed methodology fills a gap in NLP evaluation by providing an objective way to assess Feature Attribution methods while finding their optimal parameters.</description>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>