

Reinforcement learning (RL) enables the autonomous formation of optimal, adaptive control laws for systems with complex, uncertain dynamics. This process generally requires a learning agent to directly interact with the system in an online fashion. However, if the system is safety-critical, such as an Unmanned Aerial Vehicle (UAV), learning may result in unsafe behavior. Moreover, irrespective of the safety aspect, learning optimal control policies from scratch can be inefficient and therefore time-consuming. In this research, the safe curriculum learning paradigm is proposed to address the problems of learning safety and efficiency simultaneously. Curriculum learning makes the process of learning more tractable, thereby allowing the intelligent agent to learn desired behavior more effectively. This is achieved by presenting the agent with a series of intermediate learning tasks, where the knowledge gained from earlier tasks is used to expedite learning in succeeding tasks of higher complexity. This framework is united with views from safe learning to ensure that safety constraints are adhered to during the learning curriculum. This principle is first investigated in the context of optimal regulation of a generic mass-spring-damper system using neural networks and is subsequently applied in the context of optimal attitude control of a quadrotor UAV with uncertain dynamics.
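The abstract describes a curriculum of intermediate tasks whose learned knowledge seeds harder tasks, combined with safety constraints enforced during learning. As a rough illustration of that idea (not the paper's method, which uses neural networks on a mass-spring-damper and a quadrotor), the sketch below runs tabular Q-learning on a toy 1-D regulation task. The curriculum is a sequence of start states of increasing distance from the origin, the Q-table is carried forward between stages, and a hard safety filter blocks any action that would leave the allowed state range. All names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)

BOUND = 10          # hard safety limit on the state magnitude (assumed)
ACTIONS = (-1, +1)  # discrete control: step left or right

def safe_actions(s):
    # Safety filter: only permit actions that keep the next state in bounds.
    return [a for a in ACTIONS if abs(s + a) <= BOUND]

def train_stage(Q, start_dist, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    # One curriculum stage: learn to regulate from +/- start_dist to the origin.
    # Q is shared across stages, so earlier (easier) stages seed later ones.
    for _ in range(episodes):
        s = random.choice((-start_dist, start_dist))
        for _ in range(4 * start_dist):
            allowed = safe_actions(s)
            if random.random() < eps:
                a = random.choice(allowed)          # explore (within safe set)
            else:
                a = max(allowed, key=lambda act: Q.get((s, act), 0.0))
            s2 = s + a
            r = -abs(s2)  # penalise distance from the origin
            target = r + gamma * max(Q.get((s2, act), 0.0)
                                     for act in safe_actions(s2))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s = s2
            if s == 0:
                break
    return Q

# Curriculum: easy stages first; the Q-table carries knowledge forward.
Q = {}
for dist in (2, 5, 9):
    train_stage(Q, dist)

# Greedy rollout from the hardest start state, still under the safety filter.
s, trajectory = 9, [9]
for _ in range(20):
    s += max(safe_actions(s), key=lambda act: Q.get((s, act), 0.0))
    trajectory.append(s)
    if s == 0:
        break

print(trajectory)
```

The key structural points mirror the abstract: the same value function is reused across progressively harder tasks rather than learned from scratch, and the safety constraint is enforced at every step of every stage, including during exploration.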
Original language: English
Title of host publication: AIAA Scitech 2020 Forum
Subtitle of host publication: 6-10 January 2020, Orlando, FL
Publisher: AIAA
Number of pages: 23
ISBN (Electronic): 978-1-62410-595-1
Publication status: Published - 2020
Event: AIAA Scitech 2020 Forum - Orlando, United States
Duration: 6 Jan 2020 - 10 Jan 2020

Conference

Conference: AIAA Scitech 2020 Forum
Country: United States
City: Orlando
Period: 6/01/20 - 10/01/20