Publications
Our publications cover a wide range of research areas at the intersection of people, tasks, and technology. Alongside traditional information systems (Wirtschaftsinformatik) topics such as knowledge management and business process management, you will also find contributions on current topics such as blended learning, cloud computing, and smart grids. Use this overview to get an impression of the breadth and scope of information systems research in Essen.
Publication type: Journal article
Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology
- Author(s):
- Meza Martinez, Miguel Angel; Nadj, Mario; Langner, Moritz; Toreini, Peyman; Maedche, Alexander
- Journal:
- ACM Transactions on Interactive Intelligent Systems
- Volume (year of publication):
- 13 (2023)
- Pages:
- 1-47
- Place(s):
- New York, NY, USA
- Keywords:
- Machine learning, explainability, user-centric evaluation, eye-tracking
- Digital Object Identifier (DOI):
- doi:10.1145/3607145
Abstract
In Explainable Artificial Intelligence (XAI) research, various local model-agnostic methods have been proposed to explain individual predictions to users in order to increase the transparency of the underlying Artificial Intelligence (AI) systems. However, the user perspective has received less attention in XAI research, leading to (1) a lack of user involvement in the design of local model-agnostic explanation representations and (2) a limited understanding of how users visually attend to them. Against this backdrop, we refined representations of local explanations from four well-established model-agnostic XAI methods in an iterative design process with users. Moreover, we evaluated the refined explanation representations in a laboratory experiment using eye-tracking technology as well as self-reports and interviews. Our results show that users do not necessarily prefer simple explanations and that their individual characteristics, such as gender and previous experience with AI systems, strongly influence their preferences. In addition, users find that some explanations are only useful in certain scenarios, making the selection of an appropriate explanation highly dependent on context. With our work, we contribute to ongoing research to improve transparency in AI.
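For readers unfamiliar with the class of methods the paper evaluates, the sketch below illustrates the general idea behind a local model-agnostic explanation: perturb a single instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as the explanation (the approach popularized by LIME). This is a minimal illustration under assumed choices, not the paper's method: the dataset, the `black_box` model, and the `explain_locally` helper are hypothetical stand-ins, and the four XAI methods and refined representations studied in the paper are not reproduced here.

```python
# Minimal LIME-style local surrogate sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black-box classifier whose individual predictions we want to explain.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(instance, n_samples=1000, kernel_width=None):
    """Return per-feature weights; larger |weight| means more local
    influence on the predicted probability of the positive class."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)                        # per-feature noise scale
    if kernel_width is None:                     # common LIME-style default
        kernel_width = 0.75 * np.sqrt(X.shape[1])
    # 1. Perturb the instance to probe the model's local behavior.
    samples = instance + rng.normal(0.0, scale, size=(n_samples, X.shape[1]))
    # 2. Query the black box for its predicted probabilities.
    preds = black_box.predict_proba(samples)[:, 1]
    # 3. Weight perturbed samples by proximity to the original instance.
    dists = np.linalg.norm((samples - instance) / scale, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable weighted linear model; its coefficients
    #    are the local explanation.
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

contrib = explain_locally(X[0])
for i in np.argsort(np.abs(contrib))[::-1][:5]:
    print(f"{data.feature_names[i]}: {contrib[i]:+.4f}")
```

The Gaussian perturbation and RBF proximity kernel are the simplest workable choices; tabular LIME implementations typically also discretize features before sampling, which this sketch omits for brevity.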