Do mturkers collude in interactive online experiments?

Razvan Ghita*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

One issue that can threaten the internal validity of interactive online experiments recruiting participants through crowdsourcing platforms is collusion: participants may act on information shared through channels external to the experimental design. Using two experiments, I measure how prevalent collusion is among MTurk workers and whether it depends on experimental design choices. Despite having incentives to collude, MTurk workers show no evidence of collusion in the treatments that resembled the design of most other interactive online experiments. This suggests that collusion is not a concern for data quality in typical interactive online experiments that recruit participants through crowdsourcing platforms. However, approximately 3% of MTurk workers do collude when the payoff from collusion is unusually high. Collusion should therefore not be overlooked as a threat to data validity in crowdsourced interactive experiments when participants have strong incentives to engage in such behavior.
Original language: English
Journal: Behavior Research Methods
Volume: 56
Issue number: 5
Pages (from-to): 4823-4835
ISSN: 1554-351X
Publication status: Published - Aug 2024

Keywords

  • Amazon mechanical turk
  • Behavioral research
  • Collusion
  • Experimental methodology
  • Internet interactive experiments
  • Humans
  • Male
  • Crowdsourcing/methods
  • Motivation
  • Adult
  • Female
  • Internet
  • Research Design

