Decision-theoretic planning under uncertainty with information rewards for active cooperative perception

Matthijs T.J. Spaan*, Tiago S. Veiga, Pedro U. Lima

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

49 Citations (Scopus)

Abstract

Partially observable Markov decision processes (POMDPs) provide a principled framework for modeling an agent's decision-making problem when the agent needs to consider noisy state estimates. POMDP policies take into account an action's influence on the environment as well as its potential information gain. This is a crucial feature for robotic agents, which generally have to consider the effect of actions on sensing. However, building POMDP models that reward information gain directly is not straightforward, yet it is important in domains such as robot-assisted surveillance, in which the value of information is hard to quantify. Common techniques for uncertainty reduction, such as expected entropy minimization, lead to non-standard POMDPs that are hard to solve. We present the POMDP with Information Rewards (POMDP-IR) modeling framework, which rewards an agent for reaching a certain level of belief regarding a state feature. By remaining in the standard POMDP setting, we can exploit many known results as well as successful approximate algorithms. We demonstrate our ideas on a toy problem as well as in real robot-assisted surveillance, showcasing their use in active cooperative perception scenarios. Finally, our experiments show that the POMDP-IR framework compares favorably with a related approach on benchmark domains.

Original language: English
Pages (from-to): 1157-1185
Number of pages: 29
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 29
Issue number: 6
DOIs
Publication status: Published - 2015

Keywords

  • Active cooperative perception
  • Partially observable Markov decision processes
  • Planning under uncertainty for robots

