  • Soude Fazeli
  • Hendrik Drachsler
  • Marlies Bitter-Rijpkema
  • Francis Brouns
  • Wim van der Vegt
  • Peter B. Sloep

Recommender systems provide users with content they might be interested in. Conventionally, they are evaluated mostly in terms of prediction accuracy. But the ultimate goal of a recommender system is to increase user satisfaction; evaluations that measure user satisfaction should therefore also be performed before a recommender system is deployed in a real target environment. Such evaluations, however, are laborious and complicated compared with the traditional, data-centric evaluations. In this study, we carried out a user-centric evaluation of state-of-the-art recommender systems as well as a graph-based approach in the ecologically valid setting of an authentic social learning platform. We also conducted a data-centric evaluation on the same data to investigate the added value of user-centric evaluations and how user satisfaction with a recommender system relates to its performance in terms of accuracy metrics. Our findings suggest that user-centric evaluation results are not necessarily in line with data-centric ones. We conclude that evaluating recommender systems in terms of prediction accuracy alone does not suffice to judge their performance on the user side. Moreover, the user-centric evaluation provides valuable insights into how candidate algorithms perform on each of five quality metrics for recommendations: usefulness, accuracy, novelty, diversity, and serendipity.
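The abstract's core claim, that accuracy metrics alone can miss user-facing qualities such as novelty, can be illustrated with a minimal sketch. The following toy example (not the authors' code; the metric definitions, item names, and popularity counts are illustrative assumptions) computes precision@k alongside a simple popularity-based novelty score for two hypothetical recommendation lists that are equally accurate but differ in novelty:

```python
# Hypothetical sketch: two recommenders with identical precision@k
# can still differ markedly on a user-facing quality like novelty.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items the user actually liked."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

def novelty_at_k(recommended, popularity, k):
    """Mean inverse popularity of the top-k items (higher = more novel)."""
    top_k = recommended[:k]
    return sum(1.0 / popularity[item] for item in top_k) / k

# Toy data: global popularity counts and one user's relevant items.
popularity = {"a": 100, "b": 90, "e": 80, "c": 5, "d": 4}
relevant = {"a", "b", "c"}

popular_recs = ["a", "b", "e"]  # sticks to mainstream items
longtail_recs = ["c", "d", "a"]  # surfaces long-tail items

print(precision_at_k(popular_recs, relevant, 3))   # same hit rate...
print(precision_at_k(longtail_recs, relevant, 3))
print(novelty_at_k(popular_recs, popularity, 3))   # ...different novelty
print(novelty_at_k(longtail_recs, popularity, 3))
```

Both lists score identically on the data-centric accuracy metric, yet the long-tail list is considerably more novel, which is exactly the kind of difference only a richer, user-centric evaluation would surface.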

Original language: English
Pages (from-to): 294-306
Number of pages: 13
Journal: IEEE Transactions on Learning Technologies
Volume: 11
Issue number: 3
Publication status: Published - 2018

    Research areas

  • Recommender Systems, evaluation, social, learning, accuracy, performance

ID: 44919916