Algorithms as partners in crime: a lesson in ethics by design

Sebastian Krügel*, Andreas Ostermaier, Matthias Uhl

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review



The human in the loop is often advocated as a panacea for concerns about AI-powered machines, which increasingly make consequential decisions in all realms of life. But can we rely on humans to prevent unethical decisions by machines? We run online experiments modeling both the case where the machine serves as a corrective to the human and the case where the human serves as a corrective to the machine. Our results suggest that, in the former case, humans make similar decisions whether the corrective is a machine or another human. In the latter case, humans take advantage of, rather than correct, bad decisions by machines, turning into partners in crime. These findings caution us not to count too much on the human in the loop as a moral corrective. Instead, they argue for human–machine decision-making in which the human makes the decision and the machine serves as the corrective.
Original language: English
Article number: 107483
Journal: Computers in Human Behavior
Issue number: January
Number of pages: 9
Publication status: Published - Jan 2023

