Article ID: | iaor20133056 |
Volume: | 57 |
Issue: | 11-12 |
Start Page Number: | 2918 |
End Page Number: | 2932 |
Publication Date: | Jun 2013 |
Journal: | Mathematical and Computer Modelling |
Authors: | Hoßfeld Tobias, Hirth Matthias, Tran-Gia Phuoc |
Keywords: | game theory |
Crowdsourcing is becoming increasingly important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge workforce and a large knowledge base can be easily accessed and utilized. However, the anonymity of the workers creates an incentive to cheat employers in order to maximize income. In this paper, we analyze two widely used crowd-based approaches for validating submitted work. Both approaches are evaluated with regard to their detection quality, their costs, and their applicability to different types of typical crowdsourcing tasks.
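One common crowd-based validation scheme is majority decision: the same task is assigned to several workers and the most frequent answer is accepted, so a lone cheater is outvoted. A minimal sketch of this idea (the function name and example data are illustrative, not taken from the paper):

```python
from collections import Counter

def majority_decision(answers):
    """Accept the answer submitted by the majority of workers.

    answers: list of worker-submitted answers for one task.
    Returns the accepted answer and the fraction of agreeing workers.
    """
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Hypothetical example: five workers label the same image; one cheats.
accepted, agreement = majority_decision(["cat", "cat", "dog", "cat", "cat"])
print(accepted, agreement)  # cat 0.8
```

Note that replicating each task across multiple workers multiplies the payment cost, which is exactly the cost-versus-detection-quality trade-off the paper examines.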