Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/141369

| Title: | Responsible AI for trustworthy tourism: A framework for mitigating ambiguity and anxiety with generative AI |
| Authors: | Singu, Hari Babu; Chakraborty, Debarun; Troise, Ciro; Camilleri, Mark Anthony; Bresciani, Stefano |
| Keywords: | Generative artificial intelligence; Natural language generation (Computer science); Tourism; Travel -- Planning; Artificial intelligence; Artificial intelligence -- Moral and ethical aspects |
| Issue Date: | 2026 |
| Publisher: | Elsevier Inc. |
| Citation: | Singu, H. B., Chakraborty, D., Troise, C., Camilleri, M. A. & Bresciani, S. (2025). Responsible AI for trustworthy tourism: A framework for mitigating ambiguity and anxiety with generative AI. Technological Forecasting and Social Change, 223, 124407. |
| Abstract: | Generative AI models, which produce new text, image, video, and code content according to users' needs, are increasingly adopted in tourism marketing. The potential uses of generative AI are promising; nonetheless, it also raises ethical concerns that affect various stakeholders. Therefore, this research, comprising two experimental studies, investigates the enablers and inhibitors of generative AI usage. Studies 1 (n = 403 participants) and 2 (n = 379 participants) each applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. Study 1 examined the effect of cognitive load (reduction vs. increase) arising from the manual search for tourism information. Study 2 considered the effect of receiving personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. Cognitive load produced mixed results (i.e., predicting perceived fairness and environmental well-being), with no responsible AI system constructs explaining trust in Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships within Study 2. Hence, to design and deploy generative AI systems in the tourism domain, professionals should incorporate ethical considerations and user-empowerment strategies to build trust, thereby supporting responsible and ethical use of AI that aligns with users and society. From a practical standpoint, the research provides recommendations for increasing user trust by incorporating controllability and transparency features in AI-powered tourism platforms. From a theoretical perspective, it enriches Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust. |
| URI: | https://www.sciencedirect.com/science/article/pii/S004016252500438X; https://www.um.edu.mt/library/oar/handle/123456789/141369 |
| Appears in Collections: | Scholarly Works - FacMKSCC |
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Responsible AI in tourism.pdf (Restricted Access) |  | 1.69 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
