Video Captioning by Adversarial LSTM

Yang Yang, Jie Zhou, Jiangbo Ai, Yi Bin, Alan Hanjalic, Heng Tao Shen*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

175 Citations (Scopus)
284 Downloads (Pure)

Abstract

In this paper, we propose a novel approach to video captioning based on adversarial learning and long short-term memory (LSTM). With this solution concept, we aim to compensate for the deficiencies of LSTM-based video captioning methods, which generally show potential to effectively handle the temporal nature of video data when generating captions but typically suffer from exponential error accumulation. Specifically, we adopt a standard generative adversarial network (GAN) architecture, characterized by an interplay of two competing processes: a 'generator' that generates textual sentences given the visual content of a video and a 'discriminator' that controls the accuracy of the generated sentences. The discriminator acts as an 'adversary' toward the generator, and with its controlling mechanism, it helps the generator become more accurate. For the generator module, we take an existing video captioning concept using an LSTM network. For the discriminator, we propose a novel realization specifically tuned for the video captioning problem, taking both the sentences and the video features as input. This leads to our proposed LSTM-GAN system architecture, which we show experimentally to significantly outperform existing methods on standard public datasets.
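To make the generator–discriminator interplay described above concrete, the sketch below shows one possible way such an LSTM-GAN captioning setup could be wired in PyTorch: an LSTM generator decodes a sentence conditioned on a video feature, and a discriminator scores (sentence, video) pairs. This is a minimal, hedged illustration only; the class names, layer sizes, feature dimensions, and the way video features are fed in are assumptions for illustration, not the authors' implementation.

```python
# Minimal illustrative sketch of an LSTM-GAN captioning setup (not the paper's code).
# Assumed interface: a pooled video feature vector per clip and integer-encoded captions.
import torch
import torch.nn as nn


class CaptionGenerator(nn.Module):
    """LSTM decoder that generates a sentence conditioned on video features."""

    def __init__(self, feat_dim, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, embed_dim)  # project video feature into embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, video_feat, captions):
        # Prepend the projected video feature as the first step of the input sequence.
        feat_tok = self.feat_proj(video_feat).unsqueeze(1)   # (B, 1, E)
        word_emb = self.embed(captions)                      # (B, T, E)
        inputs = torch.cat([feat_tok, word_emb], dim=1)      # (B, T+1, E)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                              # per-step word logits


class CaptionDiscriminator(nn.Module):
    """Scores (sentence, video) pairs as human-written vs. generated."""

    def __init__(self, feat_dim, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Sequential(
            nn.Linear(hidden_dim + feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, video_feat, captions):
        emb = self.embed(captions)
        _, (h, _) = self.lstm(emb)                           # final sentence state
        joint = torch.cat([h[-1], video_feat], dim=1)        # fuse sentence and video evidence
        return self.score(joint)                             # real/fake logit


if __name__ == "__main__":
    B, T, feat_dim, vocab = 4, 10, 2048, 5000
    video_feat = torch.randn(B, feat_dim)                    # e.g. pooled CNN features (assumed)
    captions = torch.randint(0, vocab, (B, T))
    gen = CaptionGenerator(feat_dim, vocab)
    disc = CaptionDiscriminator(feat_dim, vocab)
    print(gen(video_feat, captions).shape)                   # (B, T+1, vocab)
    print(disc(video_feat, captions).shape)                  # (B, 1)
```

In a full adversarial training loop, the discriminator would be trained to separate ground-truth captions from generated ones, while the generator would be trained both to predict the next word and to fool the discriminator; the exact losses and sampling strategy used in the paper are not reproduced here.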

Original language: English
Article number: 8410586
Pages (from-to): 5600-5611
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 27
Issue number: 11
DOIs
Publication status: Published - 2018

Bibliographical note

Accepted author manuscript

Keywords

  • adversarial training
  • LSTM
  • Video captioning
