Optimal control via reinforcement learning with symbolic policy approximation

Jiří Kubalík, Eduard Alibekov, Robert Babuška

Research output: Contribution to journal › Conference article › Scientific › peer-review


Abstract

Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper addresses the problem of finding a smooth policy based on a value function represented by means of a basis-function approximator. We first show that policies derived directly from the value function, or represented explicitly by the same type of approximator, lead to inferior control performance, manifested by non-smooth control signals and steady-state errors. We then propose a novel method to construct a smooth policy represented by an analytic equation, obtained by means of symbolic regression. The proposed method is illustrated on a reference-tracking problem for a 1-DOF robot arm operating under the influence of gravity. The results show that the analytic control law performs at least as well as the original numerically approximated policy, while producing much smoother control signals. In addition, the analytic function is readable (as opposed to black-box approximators) and can be used in further analysis and synthesis of the closed loop.
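To make the workflow described in the abstract concrete, below is a minimal, self-contained Python sketch of the general idea: extract a greedy policy from an approximate value function via a one-step model lookahead, then fit a readable analytic policy to sampled (state, action) pairs. This is not the authors' code. The paper uses genetic-programming-based symbolic regression; here an ordinary least-squares fit over a small hand-picked library of analytic terms stands in as a simplified proxy. All model parameters, grids, and feature choices are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch (not the authors' implementation): greedy policy from an
# approximate value function, then an analytic policy fitted to it.
# The paper uses symbolic regression via genetic programming; the fixed
# feature library + least squares below is a simplified stand-in.

import numpy as np

# Illustrative 1-DOF arm under gravity: state x = (angle, velocity), input u.
DT, G, L, M, B = 0.02, 9.81, 0.5, 1.0, 0.1   # assumed parameters

def step(x, u):
    """One Euler step of the assumed gravity-loaded 1-DOF arm model."""
    th, om = x
    dom = (u - B * om - M * G * L * np.sin(th)) / (M * L**2)
    return np.array([th + DT * om, om + DT * dom])

def reward(x, u, x_ref=np.array([np.pi, 0.0])):
    """Quadratic tracking reward with a small input penalty (assumed weights)."""
    e = x - x_ref
    return -(e @ np.diag([5.0, 0.1]) @ e) - 0.01 * u**2

U_GRID = np.linspace(-5.0, 5.0, 41)          # discretized input set

def greedy_action(V, x, gamma=0.99):
    """argmax_u [ r(x,u) + gamma * V(f(x,u)) ] over the input grid."""
    q = [reward(x, u) + gamma * V(step(x, u)) for u in U_GRID]
    return U_GRID[int(np.argmax(q))]

def features(x):
    """Hand-picked candidate terms; symbolic regression would instead
    search over such expressions automatically."""
    th, om = x
    return np.array([1.0, th, om, np.sin(th), np.cos(th), th * om])

def fit_analytic_policy(V, n=2000, rng=np.random.default_rng(0)):
    """Sample states, query the greedy policy, fit weights by least squares."""
    X = np.column_stack([rng.uniform(0.0, 2 * np.pi, n),
                         rng.uniform(-8.0, 8.0, n)])
    U = np.array([greedy_action(V, x) for x in X])
    Phi = np.array([features(x) for x in X])
    w, *_ = np.linalg.lstsq(Phi, U, rcond=None)
    return lambda x: float(features(x) @ w), w

# Crude quadratic value surrogate standing in for the basis-function
# approximation that value iteration would produce:
V_hat = lambda x: -np.sum((x - np.array([np.pi, 0.0]))**2)
policy, w = fit_analytic_policy(V_hat)
print("fitted weights:", np.round(w, 3))
```

The fitted weight vector makes the resulting control law an explicit analytic expression in the state variables, which is the property the paper exploits: unlike a black-box approximator, such an expression can be inspected and used in closed-loop analysis.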

Original language: English
Pages (from-to): 4162-4167
Journal: IFAC-PapersOnLine
Volume: 50
Issue number: 1
Publication status: Published - 2017
Event: 20th World Congress of the International Federation of Automatic Control (IFAC 2017), Toulouse, France
Duration: 9 Jul 2017 - 14 Jul 2017
Conference number: 20
Conference URL: https://www.ifac2017.org

Keywords

  • genetic programming
  • nonlinear model-based control
  • optimal control
  • reinforcement learning
  • symbolic regression
  • value iteration

