Safe Exploration Algorithms for Reinforcement Learning Controllers

Research output: Contribution to journal › Article › Scientific › peer-review

69 Citations (Scopus)
40 Downloads (Pure)

Abstract

Self-learning approaches, such as reinforcement learning, offer new possibilities for autonomous control of uncertain or time-varying systems. However, exploring an unknown environment under limited prediction capabilities is a challenge for a learning agent. If the environment is dangerous, free exploration can result in physical damage or otherwise unacceptable behavior. With respect to existing methods, the main contribution of this paper is a new approach that requires neither global safety functions nor specific formulations of the dynamics or the environment; instead, it relies on interval estimation of the agent's dynamics during the exploration phase, assuming the agent has a limited capability to perceive incoming fatal states. Two algorithms based on this approach are presented. The first is the Safety Handling Exploration with Risk Perception Algorithm (SHERPA), which provides safety by identifying temporary safety functions, called backups. SHERPA is demonstrated on a simulated, simplified quadrotor task, in which dangerous states are avoided. The second algorithm, named OptiSHERPA, uses safety metrics to safely handle more dynamically complex systems for which SHERPA alone is insufficient. OptiSHERPA is demonstrated in simulation on an aircraft altitude control task.
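The abstract describes SHERPA only at a high level; the paper itself gives the full algorithm. The toy Python loop below is a minimal conceptual sketch of the core idea — an interval prediction of the dynamics plus a stored "backup" action sequence that is executed whenever an exploratory action cannot be certified safe. The double-integrator dynamics, the safety limits, the random backup search, and all names (`interval_step`, `find_backup`, `is_safe`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_step(state, action, dt=0.05, eps=0.1):
    """Interval prediction for a 1-D double integrator with bounded
    uncertainty eps on the acceleration (illustrative stand-in for the
    paper's interval estimation of the dynamics)."""
    pos, vel = state
    nxt = lambda a: np.array([pos + vel * dt + 0.5 * a * dt**2, vel + a * dt])
    return nxt(action - eps), nxt(action + eps)

def is_safe(lo, hi, pos_limit=1.0, vel_limit=2.0):
    """The whole predicted interval must avoid the fatal region."""
    return (-pos_limit < lo[0] and hi[0] < pos_limit
            and -vel_limit < lo[1] and hi[1] < vel_limit)

def find_backup(state, horizon=10, tries=50):
    """Randomly search for a short action sequence whose interval prediction
    stays safe over the horizon; returns it or None. This stands in for
    SHERPA's 'backup' (a temporary safety function)."""
    for _ in range(tries):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        lo = hi = np.asarray(state, dtype=float)
        ok = True
        for a in seq:
            # Propagate a crude outer bound of the reachable interval
            # (valid here because the dynamics are monotone in pos/vel/acc).
            lo_n, _ = interval_step(lo, a)
            _, hi_n = interval_step(hi, a)
            if not is_safe(lo_n, hi_n):
                ok = False
                break
            lo, hi = lo_n, hi_n
        if ok:
            return seq
    return None

state = np.array([0.0, 0.0])
backup = find_backup(state)
assert backup is not None, "no initial backup found"
for step in range(200):
    candidate = rng.uniform(-1.0, 1.0)  # exploratory action
    lo, hi = interval_step(state, candidate)
    # Accept the exploratory action only if its predicted interval is safe
    # AND a backup exists from the predicted next state.
    nxt_backup = find_backup(0.5 * (lo + hi)) if is_safe(lo, hi) else None
    if nxt_backup is not None:
        action, backup = candidate, nxt_backup
    else:
        # Otherwise consume the stored backup. (The real algorithm
        # re-validates backups as it goes; this sketch does not.)
        action, backup = backup[0], np.append(backup[1:], 0.0)
    # "True" dynamics: nominal model plus a bounded disturbance.
    disturbance = rng.uniform(-0.1, 0.1)
    state, _ = interval_step(state, action + disturbance, eps=0.0)
```

The key design point the sketch illustrates is that exploration is never free: an exploratory action is taken only when the entire predicted interval of outcomes is safe and a fresh backup is known to exist from there; otherwise the agent retreats along the last certified backup.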
Original language: English
Pages (from-to): 1069-1081
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 29
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2018

Keywords

  • Adaptive controllers
  • model-free control
  • reinforcement learning (RL)
  • safe exploration
