Abstract
This article positions explainability as an enabler of ethically justified medical decision-making, emphasizing the combination of pragmatically useful explanations and comprehensive validation of AI decision-support systems in real-life clinical settings. In this setting, post hoc medical explainability is defined as practical yet non-exhaustive explanations that facilitate shared decision-making between a physician and a patient in a specific clinical context. However, even when an explanation-centric approach is given precedence over a validation-centric one in the domain of AI decision-support systems, it remains pivotal to recognize the inherent tension between the eagerness to deploy AI in healthcare and the necessity of thorough, time-consuming external and prospective validation. Consequently, integrating a retrospectively analyzed and prospectively validated AI system, together with post hoc explanations, can meet the explanatory needs of physicians and patients in medical decision-making supported by AI.
| Original language | English |
|---|---|
| Article number | 29 |
| Journal | Discover Artificial Intelligence |
| Volume | 4 |
| Issue number | 1 |
| Number of pages | 7 |
| ISSN | 2731-0809 |
| DOI | |
| Status | Published - Apr. 2024 |