<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/307">
    <title>OAR@UM Community:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/307</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/145905" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/145904" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/145903" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/145119" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-26T01:42:19Z</dc:date>
  </channel>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/145905">
    <title>Consumer endorsements in the business-to-business (B2B) context : adapting consumer theories and proposing a research agenda</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/145905</link>
    <description>Title: Consumer endorsements in the business-to-business (B2B) context : adapting consumer theories and proposing a research agenda
Authors: Konietzny, Jirka; Caruana, Albert
Abstract: This conceptual paper addresses the gap in Business-to-Business (B2B) literature regarding how endorsements from high-status "celebrity organizations" influence organizational buying behavior. While traditional B2C theories focus on individual celebrities, this framework explores the multi-person Decision Making Unit (DMU) and identifies three distinct influence mechanisms: status-based signals for risk reduction, cognitive fit for perceived competence, and B2B parasocial relationships that bolster champion confidence. The paper also considers the moderating role of Power Distance Belief (PDB) in how these endorsements are processed.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/145904">
    <title>Social marketing and the pursuit of a legitimacy strategy</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/145904</link>
    <description>Title: Social marketing and the pursuit of a legitimacy strategy
Authors: Caruana, Albert; Vella, Joseph M.; Konietzny, Jirka
Abstract: State gambling organisations like the British Columbia Lottery Corporation (BCLC) face a fundamental paradox: generating public revenue from an activity associated with social harm. This study examines how BCLC uses corporate sustainability and Environmental, Social, and Governance (ESG) reporting as a social marketing tool to reinforce societal legitimacy. Through a content analysis of reports from 2021 and 2022 using Leximancer software, the research identifies how BCLC frames its efforts—such as Indigenous reconciliation and employee wellbeing—to align with stakeholder expectations and mitigate the stigma of gambling. The findings suggest that BCLC employs strategies of Earning, Bargaining, and Construing legitimacy to recast the organisation as a contributor to public wellbeing.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/145903">
    <title>Squaring a circle? Sustainability reports as a legitimacy-seeking strategy in state gambling monopolies</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/145903</link>
    <description>Title: Squaring a circle? Sustainability reports as a legitimacy-seeking strategy in state gambling monopolies
Authors: Caruana, Albert; Vella, Joseph M.; Konietzny, Jirka
Abstract: State gambling organisations are monopolies that increasingly proclaim a commitment to sustainability principles; however, their profits come at a substantial social cost. Gambling raises an array of economic, social and ethical governance concerns. This study examines the evolution of sustainability, corporate social responsibility (CSR), environmental, social and governance (ESG) practices, and the growing trend of publishing annual sustainability reports. These aspects are considered within the literature on organisational legitimacy, and a framework of legitimacy-seeking strategies is identified. Qualitative research is utilised to analyse sustainability reports published during 2021-2022 by two state gambling monopolies: the British Columbia Lottery Corporation (BCLC) in Canada and Veikkaus in Finland. Using Leximancer software, content analysis identified key themes that can be linked to legitimacy strategies and gambling-related concerns. Findings suggest that while sustainability reports enhance organisational legitimacy, fundamental ethical and social challenges persist, requiring more deliberate managerial approaches.</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/145119">
    <title>[Call for Papers] Ethical implications of artificial intelligence (AI) and automation in service industries : addressing algorithmic bias, opacity and unclear accountability mechanisms</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/145119</link>
    <description>Title: [Call for Papers] Ethical implications of artificial intelligence (AI) and automation in service industries : addressing algorithmic bias, opacity and unclear accountability mechanisms
Abstract: Artificial intelligence (AI) and automation technologies are transforming service industries, including finance, healthcare, hospitality, retail, education, public services and digital platforms. While algorithmic decision-making systems, service robots, chatbots, predictive analytics and automated workflows offer enhanced efficiencies, personalization possibilities and scalability potential, these technologies are also raising profound ethical concerns related to their modus operandi and explainability of their outputs (Camilleri, 2024; Hu &amp; Min, 2023).&#xD;
As AI-driven service systems increasingly mediate interactions between organisations and their stakeholders, ethical failures and bias have the potential to reinforce existing social inequalities and undermine trustworthiness, service quality, organisational legitimacy and broader societal well-being (Camilleri et al., 2024). Moreover, opaque “black-box” models reduce transparency and could erode user trust in these machine learning technologies (Kordzadeh &amp; Ghasemaghaei, 2022). Unclear accountability structures may obscure responsibility for service failures or might facilitate unintended harmful outcomes (Novelli et al., 2024). These challenges are particularly evident in service contexts where human–AI interactions are frequent, relational and consequential.&#xD;
Such concerns are clearly illustrated in healthcare services (Procter et al., 2023), where AI-driven diagnostic and triage systems are increasingly used to support clinical decision-making. When these technologies rely on biased or unrepresentative training data, they may systematically underdiagnose or misclassify specific demographic groups. Given the high stakes and the relational nature of healthcare encounters, limited transparency and explainability can significantly diminish patient trust while raising serious ethical and accountability concerns.&#xD;
Similar issues arise in financial and insurance services (Oke &amp; Cavus, 2025), where automated credit scoring, loan approval and underwriting systems directly influence individuals’ financial inclusion and long-term economic prospects. Algorithmic opacity makes it difficult for customers to understand, question or contest adverse decisions. Therefore, biased models may perpetuate or amplify socioeconomic inequalities. Such an outcome is particularly problematic in service relationships characterised by long-term dependency and trust.&#xD;
Ethical challenges are also conspicuous in customer service and frontline interactions (Han et al., 2023), where chatbots and virtual assistants handle large volumes of customer inquiries across retail, telecommunications and travel services (Lv et al., 2022). Although these systems offer efficiency and scalability benefits, there are instances where they fail to recognise emotional distress, cultural differences, or exceptional circumstances. Excessive automation can therefore undermine relational service quality, especially when customers are unable to escalate complex or sensitive issues to human agents (Yang et al., 2022).&#xD;
In public service contexts, governments are progressively deploying AI systems (Willems et al., 2023) to allocate welfare benefits, assess eligibility and detect fraud. In such settings, automated decisions can have profound implications for citizens’ livelihoods and their inclusion in cohesive societies. Ethical concerns become particularly acute when accountability is diffused between public agencies and technology providers, as well as when affected individuals lack meaningful mechanisms for appeal, explanation or redress.&#xD;
Likewise, platform-based and gig economy services are increasingly relying on algorithmic management systems to assign tasks, evaluate performance and compute remuneration (Kadolkar et al., 2025). These systems often operate as “black boxes,” leaving workers uncertain about how ratings, penalties or income calculations are determined. The resulting lack of transparency and clear accountability structures can weaken trust, exacerbate power asymmetries and could intensify worker vulnerability within ongoing service relationships.&#xD;
Furthermore, a growing number of human resource management and recruitment specialists are adopting AI-enabled tools for résumé screening and for assessing candidates’ credentials (Soleimani et al., 2025). Possible bias embedded within these systems may disadvantage certain social groups, and their limited transparency can prevent applicants from understanding how hiring decisions are made. Such practices raise important ethical questions concerning fairness, informed consent and procedural justice within professional service contexts.&#xD;
This special issue seeks to advance novel insights into the above ethical implications of AI and automation in service industries. The guest editors look forward to receiving original, interdisciplinary contributions that critically examine how ethical principles can be embedded into the design, governance, implementation and evaluation of AI-enabled service systems.</description>
    <dc:date>2027-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

