How Do Doctors Explain? – Mapping Medical Explanations to Explainable AI
- Type: Master Thesis Business Information Systems
- Status: offered
- Tutor:
Abstract
The field of Explainable Artificial Intelligence (XAI) has gained considerable traction in recent years. Motivated by the promise of performance gains through Artificial Intelligence (AI) and by the low interpretability that hinders its adoption in practice, researchers have proposed methods that aim to make AI more usable, e.g., by generating feature-importance values or heatmaps that highlight relevant parts of images. However, these explanation methods have been criticized for reinforcing a “creator-consumer” gap (Ehsan et al., 2024): they target the needs of data scientists and AI engineers rather than those of end users such as medical professionals. Researchers like Miller (2019) have pointed to similar problems, arguing that XAI is too static and does not take the nature of human explanations into account.
In this thesis project, the student will first conduct a systematic literature review (SLR) on the role that social-science theories of explanation play in XAI research. Building on this review, the student will prepare an interview guide and conduct a series of semi-structured interviews with doctors to gain a better understanding of how explanations work in the medical context. The interviews will be recorded, transcribed, and analyzed (e.g., with tools such as MAXQDA).
References
Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I. H., Muller, M., & Riedl, M. O. (2024, May). The who in XAI: How AI background shapes perceptions of AI explanations. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1–32).
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.