Parallel Nonstationary Direct Policy Search for Risk-Averse Stochastic Optimization

Article ID: iaor20171391
Volume: 29
Issue: 2
Start Page Number: 332
End Page Number: 349
Publication Date: May 2017
Journal: INFORMS Journal on Computing
Authors:
Keywords: stochastic processes, risk, Markov processes, simulation, decision, programming: Markov decision, heuristics, energy
Abstract:

This paper presents an algorithmic strategy for nonstationary policy search in finite‐horizon, discrete‐time Markovian decision problems with large state spaces, constrained action sets, and a risk‐sensitive optimality criterion. The methodology relies on modeling time‐variant policy parameters by a nonparametric response surface model for an indirect parametrized policy motivated by Bellman’s equation. The policy structure is heuristic when the optimization of the risk‐sensitive criterion does not admit a dynamic programming reformulation. Through the interpolating approximation, the level of nonstationarity of the policy and, consequently, the size of the resulting search problem can be adjusted. The computational tractability and the generality of the approach follow from a nested parallel implementation of derivative‐free optimization in conjunction with Monte Carlo simulation. We demonstrate the efficiency of the approach on an optimal energy storage charging problem, and illustrate the effect of the risk functional on the improvement achieved by allowing a higher complexity in time variation for the policy. The online supplement is available at https://doi.org/10.1287/ijoc.2016.0733.
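To make the abstract's main ingredients concrete, here is a minimal sketch (not the authors' implementation, and available only through the paper and its online supplement): time‐varying policy parameters placed at a few knot times and interpolated across the horizon, a Monte Carlo rollout of a toy storage problem, a risk‐sensitive objective, and a plain random search standing in for the nested parallel derivative‐free optimizer. All problem data (price model, storage dynamics, risk weight, knot count) are illustrative assumptions, and the CVaR‐style tail penalty is only one possible risk functional.

```python
# Minimal sketch of nonstationary direct policy search on a toy storage
# problem. All names and problem data below are hypothetical assumptions,
# not the paper's actual model or solver.
import numpy as np

rng = np.random.default_rng(0)

T = 24             # horizon (hours), assumed
N_KNOTS = 4        # knots controlling the level of nonstationarity
N_PATHS = 200      # Monte Carlo rollouts per policy evaluation
RISK_WEIGHT = 0.5  # weight on a CVaR-style tail penalty (assumed criterion)

def interpolate_policy(theta):
    """Expand knot-level parameters into a full time-varying threshold
    profile by linear interpolation; more knots => more nonstationarity."""
    knot_times = np.linspace(0, T - 1, N_KNOTS)
    return np.interp(np.arange(T), knot_times, theta)

def rollout_costs(theta, n_paths=N_PATHS):
    """Simulate a toy battery: charge when price < time-varying threshold,
    discharge when price > threshold; return per-path total costs."""
    thresholds = interpolate_policy(theta)
    prices = (50 + 10 * np.sin(2 * np.pi * np.arange(T) / T)
              + 5 * rng.standard_normal((n_paths, T)))  # synthetic prices
    soc = np.zeros(n_paths)                             # state of charge in [0, 1]
    cost = np.zeros(n_paths)
    for t in range(T):
        charge = (prices[:, t] < thresholds[t]) & (soc < 1.0)
        discharge = (prices[:, t] > thresholds[t]) & (soc > 0.0)
        soc = np.clip(soc + 0.25 * charge - 0.25 * discharge, 0.0, 1.0)
        cost += prices[:, t] * 0.25 * (charge.astype(float)
                                       - discharge.astype(float))
    return cost

def risk_objective(theta):
    """Risk-sensitive criterion: mean cost plus a weighted tail average
    (a CVaR-like functional; the paper's exact risk measure may differ)."""
    c = rollout_costs(theta)
    tail = np.mean(np.sort(c)[-int(0.1 * len(c)):])     # worst 10% of paths
    return np.mean(c) + RISK_WEIGHT * tail

# Derivative-free search: plain random search as a stand-in for the nested
# parallel DFO solver described in the abstract.
best_theta, best_val = None, np.inf
for _ in range(500):
    theta = rng.uniform(30, 70, size=N_KNOTS)           # candidate knot values
    val = risk_objective(theta)
    if val < best_val:
        best_theta, best_val = theta, val

print("best knot thresholds:", np.round(best_theta, 2))
print("risk-adjusted cost  :", round(best_val, 2))
```

Raising N_KNOTS in this sketch plays the role of allowing a higher complexity in the policy's time variation; the abstract's experiments examine how the payoff from doing so depends on the choice of risk functional.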
