Standard

Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards. / Neustroev, Greg; de Weerdt, Mathijs; Verzijlbergh, Remco.

Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling. ed. / J. Benton; N. Lipovetzky; E. Onaindia; D.E. Smith; S. Srivastava. Vol. 29. Association for the Advancement of Artificial Intelligence (AAAI), 2019. p. 292-300.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Harvard

Neustroev, G, de Weerdt, M & Verzijlbergh, R 2019, Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards. in J Benton, N Lipovetzky, E Onaindia, DE Smith & S Srivastava (eds), Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling. vol. 29, Association for the Advancement of Artificial Intelligence (AAAI), pp. 292-300, 29th International Conference on Automated Planning and Scheduling, Berkeley, United States, 11/07/19.

APA

Neustroev, G., de Weerdt, M., & Verzijlbergh, R. (2019). Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards. In J. Benton, N. Lipovetzky, E. Onaindia, D. E. Smith, & S. Srivastava (Eds.), Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling (Vol. 29, pp. 292-300). Association for the Advancement of Artificial Intelligence (AAAI).

Vancouver

Neustroev G, de Weerdt M, Verzijlbergh R. Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards. In Benton J, Lipovetzky N, Onaindia E, Smith DE, Srivastava S, editors, Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling. Vol. 29. Association for the Advancement of Artificial Intelligence (AAAI). 2019. p. 292-300.

Author

Neustroev, Greg ; de Weerdt, Mathijs ; Verzijlbergh, Remco. / Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards. Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling. editor / J. Benton ; N. Lipovetzky ; E. Onaindia ; D.E. Smith ; S. Srivastava. Vol. 29. Association for the Advancement of Artificial Intelligence (AAAI), 2019. pp. 292-300.

BibTeX

@inproceedings{b53d955e75d6481f932e3bd98cc5439b,
title = "Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards",
abstract = "Infinite-horizon non-stationary Markov decision processes provide a general framework to model many real-life decision-making problems, e.g., planning equipment maintenance. Unfortunately, these problems are notoriously difficult to solve, due to their infinite dimensionality. Often, only the optimality of the initial action is of importance to the decision-maker: once it has been identified, the procedure can be repeated to generate a plan of arbitrary length. The optimal initial action can be identified by finding a time horizon so long that data beyond it has no effect on the initial decision. This horizon is known as a solution horizon and can be discovered by considering a series of truncations of the problem until a stopping rule guaranteeing initial decision optimality is satisfied. We present such a stopping rule for problems with unbounded rewards. Given a candidate policy, the rule uses a mathematical program that searches for other possibly optimal initial actions within the space of feasible truncations. If no better action can be found, the candidate action is deemed optimal. Our rule runs faster than comparable rules and discovers shorter, more efficient solution horizons.",
author = "Greg Neustroev and {de Weerdt}, Mathijs and Remco Verzijlbergh",
note = "Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.",
year = "2019",
language = "English",
volume = "29",
pages = "292--300",
editor = "J. Benton and N. Lipovetzky and E. Onaindia and D.E. Smith and S. Srivastava",
booktitle = "Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling",
publisher = "Association for the Advancement of Artificial Intelligence (AAAI)",

}

RIS

TY - GEN

T1 - Discovery of Optimal Solution Horizons in Non-Stationary Markov Decision Processes with Unbounded Rewards

AU - Neustroev, Greg

AU - de Weerdt, Mathijs

AU - Verzijlbergh, Remco

N1 - Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.

PY - 2019

Y1 - 2019

N2 - Infinite-horizon non-stationary Markov decision processes provide a general framework to model many real-life decision-making problems, e.g., planning equipment maintenance. Unfortunately, these problems are notoriously difficult to solve, due to their infinite dimensionality. Often, only the optimality of the initial action is of importance to the decision-maker: once it has been identified, the procedure can be repeated to generate a plan of arbitrary length. The optimal initial action can be identified by finding a time horizon so long that data beyond it has no effect on the initial decision. This horizon is known as a solution horizon and can be discovered by considering a series of truncations of the problem until a stopping rule guaranteeing initial decision optimality is satisfied. We present such a stopping rule for problems with unbounded rewards. Given a candidate policy, the rule uses a mathematical program that searches for other possibly optimal initial actions within the space of feasible truncations. If no better action can be found, the candidate action is deemed optimal. Our rule runs faster than comparable rules and discovers shorter, more efficient solution horizons.

AB - Infinite-horizon non-stationary Markov decision processes provide a general framework to model many real-life decision-making problems, e.g., planning equipment maintenance. Unfortunately, these problems are notoriously difficult to solve, due to their infinite dimensionality. Often, only the optimality of the initial action is of importance to the decision-maker: once it has been identified, the procedure can be repeated to generate a plan of arbitrary length. The optimal initial action can be identified by finding a time horizon so long that data beyond it has no effect on the initial decision. This horizon is known as a solution horizon and can be discovered by considering a series of truncations of the problem until a stopping rule guaranteeing initial decision optimality is satisfied. We present such a stopping rule for problems with unbounded rewards. Given a candidate policy, the rule uses a mathematical program that searches for other possibly optimal initial actions within the space of feasible truncations. If no better action can be found, the candidate action is deemed optimal. Our rule runs faster than comparable rules and discovers shorter, more efficient solution horizons.

M3 - Conference contribution

VL - 29

SP - 292

EP - 300

BT - Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling

A2 - Benton, J.

A2 - Lipovetzky, N.

A2 - Onaindia, E.

A2 - Smith, D.E.

A2 - Srivastava, S.

PB - Association for the Advancement of Artificial Intelligence (AAAI)

ER -

ID: 55227725
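
For orientation only, below is a minimal, self-contained Python sketch (not from the paper) of the truncation-based procedure the abstract describes: solve successively longer finite truncations of the non-stationary MDP and stop once the initial action can be taken as settled. All names (solve_truncation, discover_solution_horizon) are hypothetical, and the stopping test used here is a naive action-stability heuristic, not the authors' optimality-guaranteeing stopping rule based on a mathematical program over feasible truncations.

import numpy as np


def solve_truncation(T, rewards, transitions, gamma):
    """Backward induction on the T-step truncation of a non-stationary MDP.

    rewards(t)     -> array of shape (S, A): reward at decision epoch t
    transitions(t) -> array of shape (S, A, S): transition kernel at epoch t
    Returns the greedy initial action per state and the truncated values.
    """
    S, A = rewards(0).shape
    value = np.zeros(S)  # the truncation ends with zero salvage value
    policy = np.zeros(S, dtype=int)
    for t in reversed(range(T)):
        q = rewards(t) + gamma * np.einsum("sap,p->sa", transitions(t), value)
        value = q.max(axis=1)
        policy = q.argmax(axis=1)  # after t = 0 this holds the initial actions
    return policy, value


def discover_solution_horizon(rewards, transitions, gamma, max_T=200):
    """Grow the truncation until the initial action stabilises.

    NOTE: "initial action unchanged between consecutive truncations" is only a
    heuristic surrogate for the paper's stopping rule, which certifies initial
    decision optimality by searching the space of feasible truncations.
    """
    prev = None
    for T in range(1, max_T + 1):
        action, _ = solve_truncation(T, rewards, transitions, gamma)
        if prev is not None and np.array_equal(action, prev):
            return T, action  # candidate solution horizon
        prev = action
    raise RuntimeError("no solution horizon found within max_T truncations")


# Toy usage: 2 states, 2 actions, rewards that grow with time (unbounded).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(2), size=(2, 2))  # shape (S, A, S), rows sum to 1
    horizon, a0 = discover_solution_horizon(
        rewards=lambda t: np.array([[1.0 + 0.1 * t, 0.5], [0.2, 0.3 + 0.05 * t]]),
        transitions=lambda t: P,
        gamma=0.9,
    )
    print("candidate solution horizon:", horizon, "initial actions:", a0)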