This paper describes an implementation of a reinforcement-learning-based framework applied to the control of a multi-copter rotorcraft. The controller is based on continuous-state, continuous-action Q-learning. The policy is stored in a radial basis function (RBF) neural network. Distance-based neuron activation is used to improve the computational performance of the generalization algorithm. Training proceeds offline, using a reduced-order model of the controlled system; the model is identified and stored in the form of a neural network. The framework also incorporates a dynamics inversion controller based on the identified model. Simulated flight tests confirm the controller's ability to track the reference state signal and to outperform a conventional proportional-derivative (PD) controller. The contributions of the developed framework are a computationally efficient method for storing a Q-function generalization, continuous action selection based on a local Q-function approximation, and a combination of model identification and offline learning for inner-loop control of a UAV system.
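The core ideas in the abstract — a Q-function stored in an RBF network, a distance cutoff so that only nearby neurons are evaluated, and continuous action selection from a local approximation of the Q-function — can be illustrated with a minimal sketch. All sizes, widths, and learning parameters below are illustrative assumptions, not values from the paper, and the candidate-grid action search is a simple stand-in for the paper's local Q-function approximation.

```python
import numpy as np

# RBF centres tiling a 1-D state x 1-D action space (hypothetical sizes).
state_centres = np.linspace(-1.0, 1.0, 11)
action_centres = np.linspace(-1.0, 1.0, 11)
centres = np.array([(s, a) for s in state_centres for a in action_centres])
width = 0.2                       # RBF width (assumed)
weights = np.zeros(len(centres))  # linear weights = the stored Q-function

def activations(state, action, cutoff=3.0):
    """Distance-based activation: only neurons within `cutoff` widths of
    (state, action) contribute, the computational shortcut the abstract
    alludes to; all other activations are set to zero."""
    d = np.linalg.norm(centres - np.array([state, action]), axis=1)
    return np.where(d < cutoff * width, np.exp(-(d / width) ** 2), 0.0)

def q_value(state, action):
    """Q(s, a) as a linear combination of the active RBF features."""
    return activations(state, action) @ weights

def greedy_action(state):
    """Continuous action selection by scanning candidate actions and
    picking the maximiser of the approximated Q-function."""
    cands = np.linspace(-1.0, 1.0, 41)
    return cands[np.argmax([q_value(state, a) for a in cands])]

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One TD(0) Q-learning update pushed through the RBF features."""
    global weights
    phi = activations(state, action)
    target = reward + gamma * q_value(next_state, greedy_action(next_state))
    weights += alpha * (target - q_value(state, action)) * phi
```

In an offline training loop of the kind the abstract describes, `q_update` would be driven by transitions generated from the identified reduced-order model rather than from flight hardware.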

Original language: English
Title of host publication: AIAA Scitech 2019 Forum
Subtitle of host publication: 7-11 January 2019, San Diego, California, USA
Number of pages: 19
ISBN (Electronic): 978-1-62410-578-4
Publication status: Published - 2019
Event: AIAA Scitech Forum, 2019 - San Diego, United States
Duration: 7 Jan 2019 - 11 Jan 2019
https://arc.aiaa.org/doi/book/10.2514/MSCITECH19

Conference

Conference: AIAA Scitech Forum, 2019
Country: United States
City: San Diego
Period: 7/01/19 - 11/01/19

ID: 51888093