Abstract
One issue that can threaten the internal validity of interactive online experiments recruiting participants through crowdsourcing platforms is collusion: participants may act on information shared through channels external to the experimental design. Using two experiments, I measure how prevalent collusion is among MTurk workers and whether it depends on experimental design choices. Although workers had incentives to collude, I find no evidence of collusion in the treatments that resembled the design of most other interactive online experiments. This suggests that collusion is not a concern for data quality in typical interactive online experiments recruiting participants through crowdsourcing platforms. However, approximately 3% of MTurk workers collude when the payoff from collusion is unusually high. Collusion should therefore not be overlooked as a threat to data validity in interactive experiments when participants have strong incentives to engage in it.
| Original language | English |
| --- | --- |
| Journal | Behavior Research Methods |
| Volume | 56 |
| Issue number | 5 |
| Pages (from-to) | 4823-4835 |
| ISSN | 1554-351X |
| DOIs | |
| Publication status | Published - Aug 2024 |
Keywords
- Amazon Mechanical Turk
- Behavioral research
- Collusion
- Experimental methodology
- Internet interactive experiments
- Humans
- Male
- Crowdsourcing/methods
- Motivation
- Adult
- Female
- Internet
- Research Design