<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/308</link>
    <description />
    <pubDate>Wed, 08 Apr 2026 09:31:16 GMT</pubDate>
    <dc:date>2026-04-08T09:31:16Z</dc:date>
    <item>
      <title>[Call for Papers] Ethical implications of artificial intelligence (AI) and automation in service industries : addressing algorithmic bias, opacity and unclear accountability mechanisms</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/145119</link>
      <description>Title: [Call for Papers] Ethical implications of artificial intelligence (AI) and automation in service industries : addressing algorithmic bias, opacity and unclear accountability mechanisms
Abstract: Artificial intelligence (AI) and automation technologies are transforming service industries, including finance, healthcare, hospitality, retail, education, public services and digital platforms. While algorithmic decision-making systems, service robots, chatbots, predictive analytics and automated workflows offer enhanced efficiencies, personalization possibilities and scalability potential, these technologies are also raising profound ethical concerns related to their modus operandi and explainability of their outputs (Camilleri, 2024; Hu &amp; Min, 2023).&#xD;
As AI-driven service systems increasingly mediate interactions between organisations and their stakeholders, ethical failures and bias have the potential to reinforce existing social inequalities and to undermine trustworthiness, service quality, organisational legitimacy and broader societal well-being (Camilleri et al., 2024). Moreover, opaque “black-box” models reduce transparency and could erode user trust in these machine learning technologies (Kordzadeh &amp; Ghasemaghaei, 2022). Unclear accountability structures may obscure responsibility for service failures or might facilitate unintended harmful outcomes (Novelli et al., 2024). These challenges are particularly evident in service contexts where human–AI interactions are frequent, relational and consequential.&#xD;
Such concerns are clearly illustrated in healthcare services (Procter et al., 2023), where AI-driven diagnostic and triage systems are increasingly used to support clinical decision-making. When these technologies rely on biased or unrepresentative training data, they may systematically underdiagnose or misclassify specific demographic groups. Given the high-stakes and the relational nature of healthcare encounters, limited transparency and explainability can significantly diminish patient trust while raising serious ethical and accountability concerns.&#xD;
Similar issues arise in financial and insurance services (Oke &amp; Cavus, 2025), where automated credit scoring, loan approval and underwriting systems directly influence individuals’ financial inclusion and long-term economic prospects. Algorithmic opacity makes it difficult for customers to understand, question or contest adverse decisions. Moreover, biased models may perpetuate or amplify socioeconomic inequalities. Such an outcome is particularly problematic in service relationships characterised by long-term dependency and trust.&#xD;
Ethical challenges are also conspicuous in customer service and frontline interactions (Han et al., 2023), where chatbots and virtual assistants handle large volumes of customer inquiries across retail, telecommunications and travel services (Lv et al., 2022). Although these systems offer efficiency and scalability benefits, there are instances where they fail to recognise emotional distress, cultural differences, or exceptional circumstances. Excessive automation can therefore undermine relational service quality, especially when customers are unable to escalate complex or sensitive issues to human agents (Yang et al., 2022).&#xD;
In public service contexts, governments are progressively deploying AI systems (Willems et al., 2023) to allocate welfare benefits, assess eligibility and detect fraud. In such settings, automated decisions can have profound implications for citizens’ livelihoods and their inclusion in cohesive societies. Ethical concerns become particularly acute when accountability is diffused between public agencies and technology providers, as well as when affected individuals lack meaningful mechanisms for appeal, explanation or redress.&#xD;
Likewise, platform-based and gig economy services are increasingly relying on algorithmic management systems to assign tasks, evaluate performance and compute remuneration (Kadolkar et al., 2025). These systems often operate as “black boxes,” leaving workers uncertain about how ratings, penalties or income calculations are determined. The resulting lack of transparency and of clear accountability structures can weaken trust, exacerbate power asymmetries and intensify worker vulnerability within ongoing service relationships.&#xD;
Moreover, a growing number of human resource management and recruitment specialists are adopting AI-enabled tools for résumé screening and for assessing candidates’ credentials (Soleimani et al., 2025). Possible biases embedded within these systems may disadvantage certain social groups, and their limited transparency can prevent applicants from understanding how hiring decisions are made. Such practices raise important ethical questions concerning fairness, informed consent and procedural justice within professional service contexts.&#xD;
This special issue seeks to advance novel insights into the above ethical implications of AI and automation in service industries. The guest editors look forward to receiving original, interdisciplinary contributions that critically examine how ethical principles can be embedded into the design, governance, implementation and evaluation of AI-enabled service systems.</description>
      <pubDate>Fri, 01 Jan 2027 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/145119</guid>
      <dc:date>2027-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Environmental, social and governance (ESG) factors for sustainable tourism development : the way forward toward destination resilience and growth</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141666</link>
      <description>Title: Environmental, social and governance (ESG) factors for sustainable tourism development : the way forward toward destination resilience and growth
Authors: Camilleri, Mark Anthony
Abstract: Although environmental, social and governance (ESG) performance is increasingly gaining popularity in corporate and financial domains, its application in the tourism industry is still relatively underexplored. Hence, the objectives of this research are fivefold: (1) A systematic review appraises the extant literature on the intersection of the ESG dimensions and sustainable tourism; (2) It provides a synthesis of the content of the extracted articles and maps thematic intersections related to travel destinations' environmental stewardship, social equity and governance frameworks; (3) It assesses ESG-aligned strategies that are intended to address the destinations' challenges including their carrying capacities and overtourism issues, climate risks, sociocultural tensions as well as institutional accountabilities; (4) It provides a holistic conceptual framework that guides policymakers, practitioners and stakeholders in integrating ESG into tourism planning and development, for sustainable and economically viable outcomes. In conclusion, (5) it advances theoretical and managerial implications.</description>
      <pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141666</guid>
      <dc:date>2025-11-25T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Responsible AI for trustworthy tourism: A framework for mitigating ambiguity and anxiety with generative AI</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141369</link>
      <description>Title: Responsible AI for trustworthy tourism: A framework for mitigating ambiguity and anxiety with generative AI
Authors: Singu, Hari Babu; Chakraborty, Debarun; Troise, Ciro; Camilleri, Mark Anthony; Bresciani, Stefano
Abstract: Generative AI models are increasingly adopted in tourism marketing to generate text, image, video and code content tailored to users’ needs. The potential uses of generative AI are promising; nonetheless, the technology also raises ethical concerns that affect various stakeholders. Therefore, this research, which comprises two experimental studies, investigates the enablers and the inhibitors of generative AI usage. Studies 1 (n = 403 participants) and 2 (n = 379 participants) applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. The first study examined the effect of cognitive load (reduced vs. increased) arising from the manual search for tourism information. The second study considered the effect of receiving personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. Cognitive load produced mixed results in predicting perceived fairness and environmental well-being, with no responsible AI system constructs explaining trust within Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships within Study 2. Hence, to design and execute generative AI systems in the tourism domain, professionals should incorporate ethical considerations and user-empowerment strategies to build trust, thereby supporting the responsible and ethical use of AI that aligns with users and society. From a practical standpoint, the research provides recommendations on increasing user trust through the incorporation of controllability and transparency features in AI-powered tourism platforms.
From a theoretical perspective, it enriches the Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust.</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141369</guid>
      <dc:date>2026-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Green accommodation choices in the sharing economy</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/141367</link>
      <description>Title: Green accommodation choices in the sharing economy
Authors: Farmaki, Anna; Pappas, Nikolaos; Stergiou, Dimitrios; Apostolakis, Alexandros; Camilleri, Mark Anthony
Abstract: Purpose:&#xD;
This study aims to explore how tourist intention to select green peer-to-peer (P2P) accommodation is shaped by complex, interrelated factors. While past research has largely adopted linear approaches, this study addresses the need for a configurational understanding of behavioural intention in green consumption, particularly within the under-examined P2P accommodation sector.&#xD;
&#xD;
Design/methodology/approach:&#xD;
Adopting a configurational perspective grounded in complexity theory, the study uses fuzzy-set Qualitative Comparative Analysis (fsQCA) and Necessary Condition Analysis (NCA) to explore how configurations of causal conditions – together with key demographic and experiential factors – shape tourists’ intention to select green P2P accommodation.&#xD;
&#xD;
Findings:&#xD;
Three distinct orientations were identified: value- and norm-based, concern–capability and socially reinforced habitual control. These reflect differing pathways through which intention is formed, based on the interplay of environmental concern, attitudes, subjective norms, perceived behavioural control and green behaviour at home. The presence of these conditions across configurations highlights that intention may emerge through different causal paths, each shaped by distinct combinations of influencing factors.&#xD;
&#xD;
Originality/value:&#xD;
To the best of the authors’ knowledge, this is the first study to examine tourist intention to choose green P2P accommodation through the lens of complexity theory. Moving beyond the linear assumptions of past research, it adopts a configurational approach to reveal how multiple interacting factors shape behavioural intention. By applying fsQCA and NCA within this framework, this study uncovers distinct pathways to intention, offering both theoretical advancement in the study of green consumption and practical value for platforms and hosts operating in the landscape of P2P accommodation.</description>
      <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/141367</guid>
      <dc:date>2025-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

