<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/46032" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/46032</id>
  <updated>2026-04-11T00:51:16Z</updated>
  <dc:date>2026-04-11T00:51:16Z</dc:date>
  <entry>
    <title>The morphology of Ịzọn : materials for establishing the word as a linguistic unit in Tarakiri</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/71912" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/71912</id>
    <updated>2021-03-23T08:00:56Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: The morphology of Ịzọn : materials for establishing the word as a linguistic unit in Tarakiri
Abstract: This study explores the morphology of Tarakiri in the context of the notion of word. After&#xD;
discussing the genetic relationship of Tarakiri to its near relations, and providing a&#xD;
sociolinguistic overview of the role it plays in society and the geo-linguistic background of the&#xD;
Ịzọn (ızɔ) people and their language, we discuss the status of Ịzọn as a main language in Nigeria&#xD;
and among the Ijọ languages of the Niger Delta, as well as the ethnography and demography of&#xD;
the Ịzọn. We give a linguistic classification of Ịzọn and provide background information on the&#xD;
Tarakiri people and dialect, as well as a brief history of Tarakiri. We note that Tarakiri is a&#xD;
North-Western and South-Central dialect of Ịzọn, spoken in Ekeremor Local Government Area&#xD;
(LGA), Sagbama LGA and Southern Ijọ LGA of Bayelsa State and in Bomadi LGA of Delta&#xD;
State, by the people of Adọbụ, Agbere, Amatolo, Angalabiri, Ayamasa, Bulou-Orua, Ebedebiri,&#xD;
Egbemọ-Angalabiri, Isampou, Lalagbene, Obololi, Oduofori, Odurubu, Ofoni and Tọrụ-Orua&#xD;
towns. Using data collected by means of Comrie &amp; Smith (1977), participation in&#xD;
everyday life and cultural activities, observation and informal interviews, then collating and&#xD;
analysing the data, as well as making inferences from relevant available raw linguistic data,&#xD;
textbooks, journals and on-line materials, we discuss the research methods and materials, how&#xD;
the available textbooks, journals and online materials were handled and used, and the grounds for&#xD;
choosing the descriptive approach to record what native speakers actually say or write. After a&#xD;
detailed review of the literature on the notion of the word in cross-linguistic typology and in&#xD;
African languages, tone in African languages, Ịzọn, and Tarakiri, as well as lexical and&#xD;
grammatical tone in Tarakiri, we provide a comprehensive description of the morphology of&#xD;
Tarakiri by describing the various word classes in Tarakiri with a view to determining the status&#xD;
of the word in Tarakiri. We discuss nouns and noun morphology, including gender, number,&#xD;
determiners/articles, pronouns, adjectives and numerals; verbs and verbal morphology, including&#xD;
tense and the formal marking of tense distinctions, aspect, mood and negation; as well as adverbs,&#xD;
postpositions, locatives, ideophones, interjections, greetings, questions, conjunctions,&#xD;
emphasizers, commands and requests. We then discuss the word formation processes in Tarakiri&#xD;
including borrowed words, acronyms, compound morphology and inflectional morphology. We&#xD;
take a preliminary look at materials for establishing the status of the word in Tarakiri, and&#xD;
discuss the problems relating to establishing the word in Tarakiri. We conclude by summarizing&#xD;
the findings of the study in establishing the word as a morphological unit in Tarakiri, giving the&#xD;
salient points necessary for establishing the word and determining its status in Tarakiri, the&#xD;
contributions of this work to knowledge, the applications and the social significance of the&#xD;
research, as well as the relevance of this study on a national and international scale.
Description: M.PHIL.</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>mALTADD : an alternative approach to the teaching of Japanese Kanji character On-readings based on mnemonic cues via phonetic components</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/70082" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/70082</id>
    <updated>2021-02-26T13:24:29Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: mALTADD : an alternative approach to the teaching of Japanese Kanji character On-readings based on mnemonic cues via phonetic components
Abstract: Following the creation of a new kanji database and the application of a distinctive methodology, both of which&#xD;
were specifically conceived for the purpose of this work, this study provides an alternative approach to kanji teaching&#xD;
with the aim of providing a further systematic organization of the On-readings of Jōyō as well as non-Jōyō kanji using&#xD;
patterns representing minimal differences occurring between kanji On-readings and the main readings of their&#xD;
phonetic components. These patterns, referred to as mnemonic alternations and additions (mALTADD), are&#xD;
also extracted from kanji sharing the same phonetic component and are tentatively used to arrange groups, pairs&#xD;
and individual kanji containing an imprecise etymological or pseudo-phonetic component systematically,&#xD;
yielding an arrangement of kanji via grapheme-based cues. The analysis herein provides further evidence of the utility of&#xD;
phonetic components as a pedagogical tool, showing that only a minor percentage of Jōyō kanji cannot be&#xD;
classified under the most frequently occurring patterns. Furthermore, the present work includes a case study on&#xD;
the application of mALTADD within the Japanese as a foreign language class, the results of which are overall&#xD;
positive.&#xD;
Abstract (translated from Japanese): In this study, having created a new kanji database and employed a&#xD;
methodology different from conventional ones (both devised specifically for this research), we offer a&#xD;
distinct approach to the teaching of kanji. In other words, the aim of this study is to construct a more&#xD;
complete phonetic system of On-readings for the Jōyō and non-Jōyō kanji of Japanese, through patterns of&#xD;
minimal contrasts identified between kanji On-readings and their phonetic components. These patterns are&#xD;
called mnemonic alternations and additions, and are extracted from kanji that share the same phonetic&#xD;
component. At the same time, the author uses these patterns to classify and integrate systematically those&#xD;
kanji that have multiple readings or a 'pseudo' phonetic component, uncovering pronunciation regularities&#xD;
through the phonetic component. The analysis in this thesis shows that only a small fraction of kanji cannot&#xD;
be arranged under a phonetic component or under the most common patterns, demonstrating the practical&#xD;
utility of phonetic components as a tool for kanji education. Furthermore, this study also applies mnemonic&#xD;
alternations and additions (mALTADD) in classroom practice, with overall positive results.
Description: M.A.LINGUISTICS</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Morphological process transduction : towards interpretable multi-lingual morphological analysis</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/59678" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/59678</id>
    <updated>2020-08-23T05:46:22Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: Morphological process transduction : towards interpretable multi-lingual morphological analysis
Abstract: The persistent efforts to make valuable annotated corpora in more diverse, morphologically rich languages have driven research in NLP into considering more explicit techniques to incorporate morphological information into the pipeline. Recent efforts have proposed combined strategies to bring together the transducer paradigm and neural architectures, although ingesting one character at a time in a context-agnostic setup. In this thesis, we introduce a technique inspired by the byte-pair-encoding (BPE) compression algorithm in order to obtain transducing actions that resemble word formations more faithfully. Then, we propose a neural transducer architecture that operates over these transducing actions, ingesting one word token at a time and effectively incorporating sentence-level context by encoding per-token action representations in a hierarchical fashion. We investigate the benefit of these word formation representations for the tasks of lemmatization and context-aware morphological tagging for a typologically diverse set of languages.&#xD;
For lemmatization, we investigate an optimization technique that explores possible action sequences and scores them based on task-specific metrics instead of standard log-likelihood. We find that our approach greatly benefits languages that use less commonly studied morphological processes such as templatic processes, with up to 55.73% error reduction in lemmatization for Arabic. Furthermore, we find that projecting these word formation representations into a common multilingual space enables our models to group together action labels signalling the same phenomena in several languages, e.g. plurality, irrespective of the language-specific morphological process that may be involved.&#xD;
For morphological tagging, we investigate the effect of different tagging strategies, e.g. bundle vs. individual tag prediction, as well as the effect of multilingual action representations. We find that our taggers are able to obtain up to 20% error reduction by leveraging multilingual actions with respect to the monolingual scenario.
Description: M.SC.HUMAN LANG.SC.&amp;TECH.</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>On architectures for including visual information in neural language models for image description</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/50207" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/50207</id>
    <updated>2020-04-28T08:19:06Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: On architectures for including visual information in neural language models for image description
Abstract: A neural language model is a neural network that can be used to generate a sentence by suggesting probable next words given a partially complete sentence (a prefix). A recurrent neural network reads in the partial sentence and produces a hidden state vector which represents information about which words can follow. If a likely word from those suggested is selected and attached to the sentence prefix, another word after that can be selected as well, and so on until a complete sentence is generated in an iterative word-by-word fashion. Rather than just generating random sentences, a neural language model can instead be conditioned into generating descriptions for images by also providing visual information apart from the sentence prefix. This visual information can be included in the language model through different points of entry, resulting in different neural architectures. We identify four main architectures, which we call init-inject, pre-inject, par-inject, and merge. We analyse these four architectures and conclude that the best performing one is init-inject, which is when the visual information is injected into the initial state of the recurrent neural network. We confirm this using both automatic evaluation measures and human annotation. We then analyse how much influence the images have on each architecture. This is done by measuring how different the output probabilities of a model are when a partial sentence is combined with a completely different image from the one it is meant to be combined with. We find that init-inject tends to quickly become less influenced by the image as more words are generated. A different architecture called merge, which is when the visual information is merged with the recurrent neural network’s hidden state vector prior to output, loses visual influence much more slowly, suggesting that it would work better for generating longer sentences.
We also observe that the merge architecture can have its recurrent neural network pre-trained as a text-only language model (transfer learning) rather than be initialised randomly as usual. This results in even better performance than the other architectures, provided that the source language model is not too good at language modelling, or it will overspecialise and be less effective at image description generation. Our work opens up new avenues of research in neural architectures, explainable AI, and transfer learning.
Description: PH.D.LINGUISTICS</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
</feed>