When Transparent does not Mean Explainable

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot’s real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.
Original language: English
Title of host publication: Papers of Workshop on Explainable Robotic Systems
Number of pages: 3
Publication date: 5. Mar 2018
Publication status: Published - 5. Mar 2018
Event: Workshop on Explainable Robotic Systems - Chicago, United States
Duration: 5. Mar 2018 – 8. Mar 2018

Workshop

Workshop: Workshop on Explainable Robotic Systems
Country: United States
City: Chicago
Period: 05/03/2018 – 08/03/2018

Fingerprint

robot
interaction
transparency
linguistics

Cite this

Fischer, K. (2018). When Transparent does not Mean Explainable. In Papers of Workshop on Explainable Robotic Systems.
Fischer, Kerstin. / When Transparent does not Mean Explainable. Papers of Workshop on Explainable Robotic Systems. 2018.
@inproceedings{d0e44514e48747d0b2004b90af8ce665,
title = "When Transparent does not Mean Explainable",
abstract = "Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot’s real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.",
author = "Kerstin Fischer",
year = "2018",
month = "3",
day = "5",
language = "English",
booktitle = "Papers of Workshop on Explainable Robotic Systems",

}

Fischer, K 2018, When Transparent does not Mean Explainable. in Papers of Workshop on Explainable Robotic Systems. Workshop on Explainable Robotic Systems, Chicago, United States, 05/03/2018.


TY - GEN

T1 - When Transparent does not Mean Explainable

AU - Fischer, Kerstin

PY - 2018/3/5

Y1 - 2018/3/5

N2 - Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot’s real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.

AB - Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot’s real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.

M3 - Article in proceedings

BT - Papers of Workshop on Explainable Robotic Systems

ER -

Fischer K. When Transparent does not Mean Explainable. In Papers of Workshop on Explainable Robotic Systems. 2018.