Artificial intelligence (AI) has seen a significant shift in focus towards the design and development of intelligent systems that are interpretable and explainable. This shift is driven both by the complexity of models built from data and by the legal requirements imposed by various national and international parliaments. It has resonated in the research literature and in the press alike, attracting scholars worldwide as well as a lay audience.
Explainable Artificial Intelligence (xAI) is an emerging field within AI devoted to producing intelligent systems that allow humans to understand their inferences, assessments, predictions, recommendations and decisions. Initially centred on designing post-hoc methods for explainability, xAI is rapidly expanding its boundaries to neuro-symbolic methods for producing self-interpretable models. Research has also shifted its focus towards the structure of explanations and human-centred AI, since the ultimate users of interactive technologies are humans.
The World Conference on Explainable Artificial Intelligence is an annual event that brings together researchers, academics, and professionals to promote the sharing and discussion of knowledge, new perspectives, experiences, and innovations in the field of xAI. The event is both multidisciplinary and interdisciplinary, gathering scholars from Computer Science, Psychology, Philosophy, Law and the Social Sciences, to mention a few, alongside industry practitioners interested in the practical, social and ethical aspects of explaining the models that emerge from AI.
Several special tracks are proposed for this year's conference. The special track on eXplainable Retrieval-Augmented Generation (RAG) and Graph-RAG aims to foster discussion and innovation around mechanisms for explaining RAG and Graph-RAG models.
These systems rely on intricate interactions between retrieved evidence, graph-structured data, and the generative processes that integrate them, making their inner workings challenging to interpret. Achieving explainability in such systems requires uncovering relationships between multi-hop retrieval pathways, the influence of graph nodes and edges, and the dynamics of integrating structured and unstructured information. This effort is critical not only for ensuring trust in these systems but also for aligning their outputs with user needs in knowledge-intensive and high-stakes domains.
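To make the provenance challenge concrete, the sketch below (plain Python, with a hypothetical toy graph and made-up names rather than any particular system's API) shows one way a Graph-RAG retriever could record its multi-hop pathway, so that every retrieved node carries the chain of edges that led to it and can later be cited as evidence.

```python
# Illustrative sketch only: a toy Graph-RAG retriever that records its
# multi-hop pathway so each retrieved node can be traced back to specific
# nodes and edges. All data and names here are hypothetical.
from collections import deque

# A tiny knowledge graph: node -> list of (relation, neighbour) edges.
GRAPH = {
    "aspirin": [("treats", "headache"), ("interacts_with", "warfarin")],
    "warfarin": [("treats", "thrombosis")],
    "headache": [],
    "thrombosis": [],
}

def retrieve_with_provenance(start, max_hops=2):
    """Breadth-first multi-hop retrieval that keeps the traversal path
    (the evidence chain) alongside each retrieved node."""
    frontier = deque([(start, [])])  # (node, path of (src, relation, dst))
    visited, evidence = {start}, []
    while frontier:
        node, path = frontier.popleft()
        if path:  # record each reached node with the chain that led to it
            evidence.append({"node": node, "path": path})
        if len(path) < max_hops:
            for relation, neighbour in GRAPH.get(node, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(
                        (neighbour, path + [(node, relation, neighbour)])
                    )
    return evidence

for item in retrieve_with_provenance("aspirin"):
    chain = " -> ".join(f"{s} -[{r}]-> {d}" for s, r, d in item["path"])
    print(f"{item['node']}: {chain}")
```

Even in this toy setting, evidence chains lengthen quickly as the hop limit grows, previewing the tension between detailed provenance and digestible explanations discussed below.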
Key areas of interest include advanced methods for provenance tracking, counterfactual reasoning, and multi-hop explainability; novel metrics and frameworks for evaluating the clarity, usability, and factual accuracy of explanations; and techniques for balancing detailed evidence chains with actionable simplicity. Particularly encouraged are contributions that propose scalable algorithms for explaining large-scale graph systems, demonstrate user-centred designs for interactive explanations, or explore the role of explainability in enhancing decision-making and accountability.
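As one concrete illustration of counterfactual reasoning over retrieved evidence, the following minimal sketch scores each evidence triple by whether removing it flips the generated answer. The generate_answer stand-in is hypothetical; in a real system it would be an LLM call conditioned on the retrieved evidence.

```python
# Illustrative sketch only: leave-one-out counterfactual scoring of
# retrieved evidence. The "generator" is a hypothetical stand-in for an
# LLM conditioned on the evidence set.
def generate_answer(evidence):
    """Toy generator: answers 'risky' if an interaction edge is present."""
    relations = {relation for _, relation, _ in evidence}
    return "risky combination" if "interacts_with" in relations else "no known risk"

def counterfactual_influence(evidence):
    """Mark each evidence triple as pivotal if removing it flips the answer."""
    baseline = generate_answer(evidence)
    scores = []
    for i, triple in enumerate(evidence):
        reduced = evidence[:i] + evidence[i + 1:]
        flipped = generate_answer(reduced) != baseline
        scores.append((triple, flipped))
    return baseline, scores

evidence = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
]
answer, scores = counterfactual_influence(evidence)
print("answer:", answer)
for triple, flipped in scores:
    print(f"{triple}: {'pivotal' if flipped else 'non-pivotal'}")
```

Leave-one-out removal is the simplest possible intervention; scalable variants of this idea for large graph systems are exactly the kind of contribution the track seeks.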
The track invites researchers and practitioners to present theoretical advances, practical methodologies, and domain-agnostic applications that further explainability in RAG and Graph-RAG. By delving into these challenges, it aims to illuminate pathways towards more interpretable, trustworthy, and effective AI systems capable of addressing complex, real-world problems.