Feature-Level Domain Adaptation

Wouter Kouw, Laurens van der Maaten, Jesse Krijthe, Marco Loog

Research output: Contribution to journal › Article › Scientific › peer-review

43 Citations (Scopus)

Abstract

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (FLDA), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. For linear classifiers and a large family of loss functions and transfer models, this expected loss can be computed or approximated analytically, and minimized efficiently. Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that FLDA performs on par with state-of-the-art domain-adaptation techniques.
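The abstract's key computational claim, that the expected loss under a dropout transfer model has an analytic form for linear classifiers, can be illustrated with a small sketch. This is an assumed simplification, not the paper's exact FLDA objective: for a quadratic loss and independent per-feature dropout with rate delta_j, one has E[x̃_j] = (1 − delta_j)·x_j and Var[x̃_j] = delta_j·(1 − delta_j)·x_j², so the expected loss decomposes into the loss on the mean features plus a variance penalty, with no sampling required.

```python
import numpy as np

def expected_quadratic_loss(w, X, y, delta):
    """Analytic E[(w . x_tilde - y)^2] under independent per-feature
    dropout: x_tilde_j = 0 with probability delta_j, else x_j.
    (Illustrative quadratic-loss case; FLDA covers a larger family.)
    """
    mean_X = X * (1.0 - delta)             # E[x_tilde], element-wise
    var_X = X**2 * delta * (1.0 - delta)   # Var[x_tilde], element-wise
    residual = mean_X @ w - y
    # E[(w.x - y)^2] = (w.E[x] - y)^2 + sum_j w_j^2 Var[x_j]
    return np.mean(residual**2 + var_X @ (w**2))

# Monte-Carlo sanity check: sampled dropout masks should agree
# with the closed-form expression.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200)
w = rng.normal(size=5)
delta = rng.uniform(0.1, 0.5, size=5)

analytic = expected_quadratic_loss(w, X, y, delta)
sampled = np.mean([
    np.mean(((X * (rng.random(X.shape) > delta)) @ w - y) ** 2)
    for _ in range(2000)
])
print(analytic, sampled)
```

Because the expectation is available in closed form, the domain-adapted classifier can be trained by minimizing this expression directly over `w`, rather than by averaging over sampled corruptions.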
Original language: English
Pages (from-to): 1-32
Number of pages: 32
Journal: Journal of Machine Learning Research
Volume: 17
Publication status: Published - 2016

Keywords

  • Domain adaptation
  • transfer learning
  • covariate shift
  • risk minimization
