Complexity is crucial to characterize tasks performed by humans through computer systems. Yet, the theory and practice of crowdsourcing currently lack a clear understanding of task complexity, hindering the design of effective and efficient execution interfaces and of fair monetary rewards. To understand how complexity is perceived and distributed over crowdsourcing tasks, we designed an experiment in which we asked workers to evaluate the complexity of 61 real-world, re-instantiated crowdsourcing tasks. We show that task complexity, while subjective, is coherently perceived across workers; at the same time, it is significantly influenced by task type. Next, we develop a high-dimensional regression model to assess the influence of three classes of structural features (metadata, content, and visual) on task complexity, and ultimately use them to measure task complexity. Results show that both the appearance of a task and the language used in its description can accurately predict task complexity. Finally, we apply the same feature set to predict task performance, based on five years' worth of tasks from Amazon MTurk. Results show that features related to task complexity improve the quality of task performance prediction, demonstrating the utility of complexity as a task modeling property.
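The regression approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual model: the feature names, the ridge-regression choice, and all data below are hypothetical, standing in for structural task features (metadata, content, visual) mapped to a perceived-complexity score.

```python
import numpy as np

# Toy setup: 61 tasks (as in the experiment), 4 hypothetical structural
# features, e.g. [description length, input-field count, image count,
# average sentence length]. The data is synthetic for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(61, 4))
true_w = np.array([0.8, 1.5, -0.3, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=61)  # synthetic complexity ratings

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)
pred = X @ w
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Regularization matters here because the paper's model is high-dimensional; with many correlated structural features, a plain least-squares fit would overfit.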
Original language: English
Title of host publication: Proceedings - 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
Place of publication: Austin, TX
Publisher: AAAI
Pages: 249-258
Number of pages: 10
State: Published - 1 Oct 2016
Event: 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016 - Austin, TX, United States

Conference

Conference: 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
Abbreviated title: HCOMP
Country: United States
City: Austin, TX
Period: 31/10/16 - 2/11/16

Research areas

• Crowdsourcing, Complexity, Market

ID: 11400936