Abstract
ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it could improve the moral judgment and decisions of users. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, in an experiment we find that it does influence users’ moral judgment, even when they know they are being advised by a chatbot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users’ digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.
Original language | English |
---|---|
Article number | 4569 |
Journal | Scientific Reports |
Volume | 13 |
Pages (from-to) | 4569 |
Number of pages | 5 |
ISSN | 2045-2322 |
DOI | |
Status | Published - Apr. 2023 |
Related activities
- 1 Conference presentation
- AI-powered moral advisors
  Ostermaier, A. (Speaker), Krügel, S. (Speaker) & Uhl, M. (Speaker)
  21 Sep. 2023 · Activity: Talks and oral contributions › Conference presentation
Related press/media
- ChatGPT’s inconsistent moral advice influences users’ judgment
  Ostermaier, A.
  20/01/2023 → 14/02/2023
  1 item of Media coverage, 1 Media contribution
  Press/media