Boosted negative sampling by quadratically constrained entropy maximization

Taygun Kekec, David Mimno, David M.J. Tax

Research output: Contribution to journal › Article › Scientific › peer-review

1 Citation (Scopus)

Abstract

Learning probability densities for natural language representations is a difficult problem because language is inherently sparse and high-dimensional. Negative sampling is a popular and effective way to avoid intractable maximum likelihood problems, but it requires correct specification of the sampling distribution. Previous state-of-the-art methods rely on heuristic distributions that appear to do well in practice. In this work, we define conditions for optimal sampling distributions and demonstrate how to approximate them using Quadratically Constrained Entropy Maximization (QCEM). Our analysis shows that state-of-the-art heuristics are restrictive approximations to our proposed framework. To demonstrate the merits of our formulation, we apply QCEM to matching synthetic exponential family distributions and to finding high-dimensional word embedding vectors for English. We achieve faster inference in the synthetic experiments and improve the correlation on semantic similarity evaluations on the Rare Words dataset by 4.8%.
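The paper's QCEM procedure itself is not reproduced here. As a minimal sketch of the kind of fixed heuristic the abstract says QCEM generalizes, the snippet below builds word2vec's smoothed-unigram noise distribution (unigram frequencies raised to the 3/4 power) and draws negative samples from it; the vocabulary and counts are invented purely for illustration.

```python
import numpy as np

# Hypothetical corpus counts; in practice these come from the training corpus.
vocab = ["the", "model", "entropy", "sampling", "rare"]
counts = np.array([5000, 300, 40, 25, 3], dtype=np.float64)

# Classic word2vec heuristic: raise unigram frequencies to the 3/4 power
# and renormalize. The paper argues such heuristics are restrictive
# approximations of the optimal sampling distributions QCEM characterizes.
alpha = 0.75
noise_dist = counts ** alpha
noise_dist /= noise_dist.sum()

# Draw k negative samples per positive (target, context) pair.
rng = np.random.default_rng(0)
k = 5
negatives = rng.choice(len(vocab), size=k, p=noise_dist)
print([vocab[i] for i in negatives])
```

Raising frequencies to a power below 1 flattens the distribution, so rare words are sampled more often than under the raw unigram distribution; this fixed exponent is exactly the sort of restrictive choice the proposed entropy-maximization framework replaces with an optimized sampling distribution.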

Original language: English
Pages (from-to): 310-317
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 125
Publication status: Published - 2019

Keywords

  • Contrastive learning
  • Entropy maximization
  • Negative sampling
  • Semantic similarity
  • Word embeddings

