Past research has demonstrated that removing implicit gender information from the user-item matrix does not result in substantial performance losses. Such results point towards promising solutions for protecting users’ privacy without compromising prediction performance, which are of particular interest in multistakeholder environments. Here, we investigate BlurMe, a gender obfuscation technique that has been shown to block classifiers from inferring binary gender from users’ profiles. We first point out a serious shortcoming of BlurMe: Simple data visualizations can reveal that BlurMe has been applied to a data set, including which items have been impacted. We then propose an extension to BlurMe, called BlurM(or)e, that addresses this issue. We reproduce the original BlurMe experiments with the MovieLens data set, and point out the relative advantages of BlurM(or)e.
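To make the core move concrete, below is a minimal sketch of the BlurMe idea as summarized in the abstract: adding ratings for items indicative of the opposite gender so a classifier can no longer infer gender from the profile. The toy data, the choice of logistic regression as the gender classifier, the `obfuscate` helper, and the imputed rating value are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_items = 200, 50

# Toy user-item rating matrix (0 = unrated) and binary gender labels.
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)
gender = rng.integers(0, 2, size=n_users)

# Fit a gender classifier on the rating rows; its coefficients rank how
# strongly each item indicates one gender or the other.
clf = LogisticRegression(max_iter=1000).fit(R, gender)
coef = clf.coef_[0]  # positive -> indicative of class 1, negative -> class 0

def obfuscate(R, gender, coef, k=5, fill=3.0):
    """Add k ratings per user for the items most indicative of the
    opposite gender (the BlurMe move, per the abstract). The fill
    value is an assumption of this sketch."""
    R_obf = R.copy()
    for u in range(len(gender)):
        # Items indicating the opposite gender, strongest first.
        order = np.argsort(coef) if gender[u] == 1 else np.argsort(-coef)
        added = 0
        for i in order:
            if R_obf[u, i] == 0:      # only fill unrated cells
                R_obf[u, i] = fill
                added += 1
                if added == k:
                    break
    return R_obf

R_obf = obfuscate(R, gender, coef)
print("classifier accuracy before:", clf.score(R, gender))
print("classifier accuracy after: ", clf.score(R_obf, gender))
```

Note that the detectability problem raised in the abstract follows directly from this construction: the added ratings concentrate on a fixed set of gender-indicative items, so a simple plot of per-item rating counts before and after obfuscation shows tell-tale spikes on exactly those items.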

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2440
Publication status: Published - 2019
Event: 2019 Workshop on Recommendation in Multi-Stakeholder Environments, RMSE 2019 - Copenhagen, Denmark
Duration: 20 Sep 2019 → …

Research areas

• Data Obfuscation, Privacy, Recommender Systems
