Adversarial Cross-Modal Retrieval

Bokun Wang, Yang Yang*, Xing Xu, Alan Hanjalic, Heng Tao Shen

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

617 Citations (Scopus)
470 Downloads (Pure)

Abstract

Cross-modal retrieval aims to enable a flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, a modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with the same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of these two mechanisms, the underlying cross-modal semantic structure of multimedia data is better preserved when the data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning an effective subspace representation and that it significantly outperforms state-of-the-art cross-modal retrieval methods.
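
To make the adversarial interplay concrete, below is a minimal training-step sketch in PyTorch. It illustrates the general scheme the abstract describes and is not the authors' released implementation: the layer sizes, feature dimensions (DIM_IMG, DIM_TXT, DIM_COMMON), optimizer settings, trade-off weight, and the rolled-batch negative sampling for the triplet term are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed feature sizes (illustrative, not from the paper).
DIM_IMG, DIM_TXT, DIM_COMMON = 4096, 300, 200

class FeatureProjector(nn.Module):
    """Maps raw image or text features into the common subspace."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, DIM_COMMON))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class ModalityClassifier(nn.Module):
    """Tries to tell image embeddings (label 0) from text embeddings (label 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(DIM_COMMON, 2)

    def forward(self, z):
        return self.net(z)

proj_img, proj_txt = FeatureProjector(DIM_IMG), FeatureProjector(DIM_TXT)
clf = ModalityClassifier()
opt_proj = torch.optim.Adam(
    list(proj_img.parameters()) + list(proj_txt.parameters()), lr=1e-4)
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-4)

def training_step(img_feat, txt_feat):
    """One adversarial round: the classifier learns to separate the two
    modalities; the projectors then learn to confuse it, while a
    triplet-style term keeps matched image-text pairs close."""
    labels = torch.cat([torch.zeros(len(img_feat)),
                        torch.ones(len(txt_feat))]).long()

    # 1) Update the modality classifier on detached embeddings.
    z = torch.cat([proj_img(img_feat), proj_txt(txt_feat)]).detach()
    loss_clf = F.cross_entropy(clf(z), labels)
    opt_clf.zero_grad(); loss_clf.backward(); opt_clf.step()

    # 2) Update the projectors: maximize the classifier's loss (confusion)
    #    and pull matched pairs together with a margin-based triplet term.
    z_img, z_txt = proj_img(img_feat), proj_txt(txt_feat)
    loss_adv = -F.cross_entropy(clf(torch.cat([z_img, z_txt])), labels)
    # Rolled rows serve as mismatched negatives; a simple stand-in for the
    # paper's label-driven inter-/intra-modal triplet constraints.
    loss_tri = F.triplet_margin_loss(z_img, z_txt,
                                     z_txt.roll(1, dims=0), margin=0.5)
    loss_proj = loss_tri + 0.1 * loss_adv  # 0.1 is an assumed trade-off weight
    opt_proj.zero_grad(); loss_proj.backward(); opt_proj.step()
    return loss_clf.item(), loss_proj.item()

# Example batch of 32 paired image/text features.
img_feat, txt_feat = torch.randn(32, DIM_IMG), torch.randn(32, DIM_TXT)
print(training_step(img_feat, txt_feat))

The alternation mirrors GAN-style training: step (1) strengthens the modality classifier, while step (2) pushes the projected distributions of the two modalities toward each other, which is the modality-invariance objective stated above.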

Original language: English
Title of host publication: MM 2017 - Proceedings of the 2017 ACM Multimedia Conference
Place of Publication: New York
Publisher: Association for Computing Machinery (ACM)
Pages: 154-162
Number of pages: 9
ISBN (Electronic): 978-1-4503-4906-2
DOIs
Publication status: Published - 2017
Event: MM'17: 25th ACM Multimedia Conference - Computer History Museum, Mountain View, CA, United States
Duration: 23 Oct 2017 - 27 Oct 2017
Conference number: 25
http://www.acmmm.org/2017/

Conference

Conference: MM'17
Abbreviated title: ACM MM 2017
Country/Territory: United States
City: Mountain View, CA
Period: 23/10/17 - 27/10/17
Internet address: http://www.acmmm.org/2017/

Keywords

  • Adversarial learning
  • Cross-modal retrieval
  • Modality gap
