
The use of Reinforcement Learning (RL) methods in adaptive flight control has been an active research field over the past few years. Controllers that autonomously learn by interacting with the surrounding environment are of great interest to the aerospace domain because their adaptive and reconfigurable properties are directly connected to in-flight safety. Approximate Dynamic Programming (ADP) is a class of RL methods able to deal with more complex systems online. The core idea is to approximate the cost function, which is then used to determine which action to take so that a certain goal is achieved. In incremental Approximate Dynamic Programming (iADP), a cost function that is quadratic in the state is combined with an incremental form of the nonlinear system. This novel class of controllers is applied to an F-16 aircraft model. A nonlinear rate control system based on full state feedback is first trained offline to obtain a baseline controller. Experiments with failures are then carried out, during which online adaptation takes place. Results show good tracking performance both before and after failure. A simple angle-of-attack controller based on output feedback is also designed and implemented, likewise showing satisfactory tracking performance.
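The iADP scheme the abstract describes — a quadratic-in-the-state value approximation combined with an incremental model of the system identified online — can be illustrated on a toy problem. The scalar plant, cost weights, dither level, and recursive-least-squares setup below are all illustrative assumptions for a one-dimensional tracking task, not the paper's actual F-16 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant, unknown to the controller (illustrative scalar system):
#   x_{t+1} = 0.9 x_t + 0.5 u_t
def plant(x, u):
    return 0.9 * x + 0.5 * u

# Recursive least squares (RLS) for the incremental model
#   dx_{t+1} ~= F dx_t + G du_t
theta = np.zeros(2)        # estimates of [F, G]
P = np.eye(2) * 100.0      # RLS covariance

def rls_update(theta, P, phi, y):
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi @ P)
    return theta, P

# Quadratic-in-the-state value approximation V(e) = p e^2, with
# one-step cost q e^2 + r u^2 (weights chosen by hand here)
q, r, p = 1.0, 0.1, 1.0

ref = 1.0                  # reference to track
x = x_prev = 0.0
u_prev = 0.0
for t in range(200):
    dx = x - x_prev
    e = x - ref
    F, G = theta
    if abs(G) > 0.05:
        # Control increment minimizing the predicted cost
        #   (q+p) (e + F dx + G du)^2 + r (u_prev + du)^2  over du
        du = -((q + p) * G * (e + F * dx) + r * u_prev) / ((q + p) * G**2 + r)
    else:
        du = 0.0           # model not yet identified; rely on dither
    du += 0.02 * rng.standard_normal()   # small dither for identifiability
    u = u_prev + du
    x_next = plant(x, u)
    # Update the incremental model from the observed state increment
    theta, P = rls_update(theta, P, np.array([dx, du]), x_next - x)
    x_prev, x, u_prev = x, x_next, u

print(round(x, 3), np.round(theta, 3))
```

Because the controller acts on control *increments* and identifies only the local linearized dynamics (F, G), it needs no global model of the plant — the property that makes the incremental formulation attractive for online adaptation after failures.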

Original language: English
Title of host publication: AIAA Scitech 2019 Forum
Subtitle of host publication: 7-11 January 2019, San Diego, California, USA
Number of pages: 16
ISBN (Electronic): 978-1-62410-578-4
Publication status: Published - 2019
Event: AIAA Scitech Forum, 2019 - San Diego, United States
Duration: 7 Jan 2019 - 11 Jan 2019
https://arc.aiaa.org/doi/book/10.2514/MSCITECH19

Conference

Conference: AIAA Scitech Forum, 2019
Country: United States
City: San Diego
Period: 7/01/19 - 11/01/19
