The role of explainability in AI-supported medical decision-making

Publication: Contribution to journal › Journal article › Research › peer review

Abstract

This article positions explainability as an enabler of ethically justified medical decision-making by emphasizing the combination of pragmatically useful explanations and comprehensive validation of AI decision-support systems in real-life clinical settings. In this setting, post hoc medical explainability is defined as practical yet non-exhaustive explanations that facilitate shared decision-making between physician and patient in a specific clinical context. However, even when an explanation-centric approach is given precedence over a validation-centric one in the domain of AI decision-support systems, it remains pivotal to recognize the inherent tension between the eagerness to deploy AI in healthcare and the necessity of thorough, time-consuming external and prospective validation. Consequently, integrating a retrospectively analyzed and prospectively validated AI system, together with post hoc explanations, can meet the explanatory needs of physicians and patients in AI-supported medical decision-making.

Original language: English
Article number: 29
Journal: Discover Artificial Intelligence
Volume: 4
Issue number: 1
Number of pages: 7
ISSN: 2731-0809
DOI
Status: Published - Apr 2024
