ChatGPT’s inconsistent moral advice influences users’ judgment

Sebastian Krügel*, Andreas Ostermaier, Matthias Uhl

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer review


Abstract

ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it can improve users’ moral judgment and decisions. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, we find in an experiment that it influences users’ moral judgment, even when they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users’ digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.
Original language: English
Article number: 4569
Journal: Scientific Reports
Volume: 13
Pages (from-to): 4569
Number of pages: 5
ISSN: 2045-2322
DOI
Status: Published - Apr. 2023
  • AI-powered moral advisors

    Ostermaier, A. (Speaker), Krügel, S. (Speaker) & Uhl, M. (Speaker)

    21 Sep. 2023

    Activity: Talk or oral contribution › Conference presentation
