Interactive exploration of journalistic video footage through multimodal semantic matching

Sarah Ibrahimi, Shuo Chen, Devanshu Arya, Arthur Câmara, Yunlu Chen, Tanja Crijns, Maurits Van Der Goes, Thomas Mensink, Emiel Van Miltenburg, More Authors

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

This demo presents a system that lets journalists explore video footage for broadcasts. Daily news broadcasts contain multiple news items, each consisting of many video shots, and searching for relevant footage is a labor-intensive task. Without requiring annotated video shots, our system extracts semantics from footage and automatically matches these semantics to a journalist's query terms. The journalist can then indicate which aspects of the query term to emphasize, e.g., its title or its thematic meaning. The goal of this system is to support journalists in their search process by encouraging interaction and exploration.
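The abstract does not give implementation details, but the core idea of weighting different aspects of a query (e.g., title versus thematic meaning) during matching can be sketched as a weighted combination of similarity scores. The following is a minimal, hypothetical illustration: the function names, the `(shot_id, theme_vec, title_vec)` representation, and the use of cosine similarity over dense embeddings are all assumptions, not the authors' actual system.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def rank_shots(query_vec, shots, w_theme=0.5):
    """Rank video shots against a query embedding.

    shots: list of (shot_id, theme_vec, title_vec) tuples -- a hypothetical
    representation where each shot has a thematic embedding and a title
    embedding. w_theme in [0, 1] lets the journalist emphasize thematic
    meaning (high) or the title (low), mirroring the interactive
    emphasis described in the abstract.
    """
    scored = []
    for shot_id, theme_vec, title_vec in shots:
        score = (w_theme * cosine(query_vec, theme_vec)
                 + (1 - w_theme) * cosine(query_vec, title_vec))
        scored.append((score, shot_id))
    # Highest combined similarity first.
    return [shot_id for _, shot_id in sorted(scored, reverse=True)]


# Toy usage: two shots whose theme and title embeddings disagree,
# so the ranking flips as the emphasis slider moves.
shots = [("shot_a", [1.0, 0.0], [0.0, 1.0]),
         ("shot_b", [0.0, 1.0], [1.0, 0.0])]
query = [1.0, 0.0]
print(rank_shots(query, shots, w_theme=1.0))  # emphasize theme
print(rank_shots(query, shots, w_theme=0.0))  # emphasize title
```

Sliding `w_theme` between 0 and 1 reorders the results, which is one simple way an interactive system could let a user steer which aspect of the query dominates the match.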

Original language: English
Title of host publication: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery (ACM)
Pages: 2196-2198
Number of pages: 3
ISBN (Electronic): 9781450368896
DOIs
Publication status: Published - 15 Oct 2019
Event: 27th ACM International Conference on Multimedia, MM 2019 - Nice, France
Duration: 21 Oct 2019 - 25 Oct 2019

Conference

Conference: 27th ACM International Conference on Multimedia, MM 2019
Country/Territory: France
City: Nice
Period: 21/10/19 - 25/10/19

Keywords

  • Exploration
  • Matching
  • Multimodal
  • Semantics
  • Video
