When Transparent does not Mean Explainable

Publication: Contribution to book/anthology/report/conference proceeding › Conference contribution in proceedings › Research › peer review

Abstract

Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot's real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.
Original language: English
Title: Papers of Workshop on Explainable Robotic Systems
Number of pages: 3
Publication date: 5 Mar 2018
Status: Published - 5 Mar 2018
Event: Workshop on Explainable Robotic Systems - Chicago, USA
Duration: 5 Mar 2018 - 8 Mar 2018

Workshop

Workshop: Workshop on Explainable Robotic Systems
Country: USA
City: Chicago
Period: 05/03/2018 - 08/03/2018

Fingerprint

robot
interaction
transparency
linguistics

Cite this

Fischer, K. (2018). When Transparent does not Mean Explainable. In Papers of Workshop on Explainable Robotic Systems.