The well-renowned Elsevier journal Technological Forecasting and Social Change (CiteScore 26.3; Impact Factor 13.3; ABDC A; ABS 3) has recently published Professor Mark Anthony Camilleri's sole-authored article on Explainable Artificial Intelligence (XAI) instruments.
This open access contribution advances a systematic review of leading XAI tools, frameworks and best practices.
This publication integrates XAI with socio-technical, governance and trust theories, linking explainability to adoption, ethics, lifecycle dynamics and responsible AI frameworks. It also provides practical, actionable guidance for developers of AI solutions and for professionals responsible for managing data-driven strategies and governance policies. It supports efforts to ensure that AI systems are not only powerful, but also transparent, interpretable and trustworthy. This research serves as a valuable resource for those aiming to move beyond black-box reliance toward more informed, responsible and accountable AI oversight.
Abstract
As artificial intelligence (AI) models increasingly permeate various domains, there are instances where they generate hallucinations, misinformation and erroneous outputs. Various stakeholders, particularly regulators, are encouraging the developers of machine learning (ML) systems to clarify or justify their models' decisions, actions or predictions in a way that is understandable to their users. In this light, this article raises awareness of Explainable Artificial Intelligence (XAI) principles that are intended to increase transparency, accountability and fairness about the modus operandi of machine learning algorithms. A systematic review of the extant literature identifies key tools, frameworks and best practices that enhance the interpretability of AI models, including open-source techniques like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), among others. The synthesis of the findings also sheds light on XAI challenges and the limitations of black-box models. This contribution advances a conceptual framework for the responsible implementation of XAI and offers practical guidelines that promote the interpretability of AI systems, whilst addressing their opacity as well as their biased outcomes. It puts forward theoretical and managerial implications, as well as future research avenues.
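To give a flavour of the kind of attribution that SHAP provides, the following is a minimal, self-contained sketch of the exact Shapley-value computation that underpins it. The model, feature values and baseline here are toy assumptions for illustration only; production SHAP implementations use far more efficient approximations (e.g. TreeSHAP) rather than this brute-force enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at input x.

    'Absent' features are replaced by the corresponding baseline
    value, which is one common (assumed) choice of value function.
    Runtime is exponential in the number of features, so this is
    only practical for tiny illustrative models.
    """
    n = len(x)

    def v(subset):
        # Evaluate f with features in `subset` taken from x,
        # and all other features taken from the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        # Average feature i's marginal contribution over all
        # coalitions S of the remaining features.
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Toy linear model: for f(z) = 2*z0 + 3*z1 with a zero baseline,
# the Shapley values recover the coefficients times the inputs.
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1, 1], [0, 0]))  # → [2.0, 3.0]
```

For a linear model with a zero baseline the attributions coincide with coefficient-times-input, which makes the toy case easy to verify by hand; the value of SHAP lies in extending this game-theoretic attribution to opaque, non-linear models.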
Suggested citation
Camilleri, M.A. (2026). Opening the black box: Operational principles, tools and frameworks that advance explainable artificial intelligence (XAI) models, Technological Forecasting and Social Change, https://doi.org/10.1016/j.
This article is available through Elsevier and can also be downloaded via the University of Malta's Open Access Repository.
About the publisher
The journal that published this paper, Technological Forecasting and Social Change (TFSC), is a leading international forum for research on the methodology and practice of disruptive innovation. It brings together timely insights on how technological advances intersect with social or environmental factors, and it supports the use of those insights as strategic tools for planning and decision-making.
TFSC offers the means for introducing novel or improved products, services and processes that have the potential to provide additional value to societal actors. It is committed to publishing research with a clear technological focus that significantly contributes to both theory and practice.