Improved sampling strategies for ensemble-based optimization

K. R. Ramaswamy, R. M. Fonseca, O. Leeuwenburgh*, M.M. Siraj, P.M.J. Van den Hof

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

We are concerned with the efficiency of stochastic gradient estimation methods for large-scale nonlinear optimization in the presence of uncertainty. These methods aim to estimate an approximate gradient from a limited number of random input vector samples and the corresponding objective function values. Ensemble methods usually employ Gaussian sampling to generate the input samples. It is known from optimal design theory that the quality of sample-based approximations is affected by the distribution of the samples. We therefore apply six different sampling strategies to the optimization of a high-dimensional analytical benchmark problem and, in a second example, to the optimization of oil reservoir management strategies with and without geological uncertainty. The effectiveness of the sampling strategies is analyzed based on the quality of the estimated gradient, the final objective function value, the rate of convergence, and the robustness of the gradient estimate. Based on the results, an improved version of the stochastic simplex approximate gradient method is proposed that uses UE(s2) sampling designs for supersaturated cases and outperforms all alternative approaches. We additionally introduce two new strategies that outperform the UE(s2) designs previously suggested in the literature.
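The abstract describes estimating an approximate gradient from Gaussian-perturbed input samples and their objective function values. The following is a minimal illustrative sketch of that general idea, not the paper's stochastic simplex approximate gradient method or its UE(s2) designs; the function name, signature, and test objective are hypothetical, and the gradient is recovered by least-squares regression of objective differences onto the perturbations.

```python
import numpy as np

def ensemble_gradient(objective, u, n_samples=10, sigma=0.1, rng=None):
    """Sketch of an ensemble-based gradient estimate (hypothetical helper):
    perturb the control vector u with Gaussian samples and regress the
    resulting objective changes onto the perturbations."""
    rng = np.random.default_rng(rng)
    d = u.size
    # Gaussian perturbations of the control vector, one row per sample
    dU = sigma * rng.standard_normal((n_samples, d))
    # Objective differences relative to the unperturbed controls
    dJ = np.array([objective(u + du) - objective(u) for du in dU])
    # Least-squares fit dU @ g ~= dJ yields the approximate gradient
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)
    return g

# Usage: quadratic test objective with known gradient 2 * (u - 1)
f = lambda u: np.sum((u - 1.0) ** 2)
g_est = ensemble_gradient(f, np.zeros(5), n_samples=50, sigma=0.05, rng=0)
```

With a small perturbation scale the linear term dominates the objective differences, so even this naive regression recovers the true gradient (here, −2 in every component) to good accuracy; the paper's point is that *how* the perturbations are distributed further affects this quality.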

Original language: English
Pages (from-to): 1057–1069
Number of pages: 13
Journal: Computational Geosciences
Volume: 24
Issue number: 3
DOIs
Publication status: Published - 2020

Keywords

  • Approximate gradient
  • Ensemble methods
  • Robust optimization
  • Sampling strategies
