Optimizing long-term hydro-power production using Markov decision processes
Article ID: iaor19971534
Country: United Kingdom
Volume: 3
Issue: 3/4
Start Page Number: 223
End Page Number: 241
Publication Date: Jul 1996
Journal: International Transactions in Operational Research
Authors: ,
Keywords: energy
Abstract:

Modelling the long-term operation of hydroelectric systems is one of the classic applications of Markov decision processes (MDP). The computation of optimal policies for MDP models is usually done by dynamic programming (DP) on a discretized state space. A major difficulty arises when optimizing multi-reservoir systems, because the computational complexity of DP increases exponentially with the number of sites. This so-called ‘curse of dimensionality’ has so far restricted the applicability of DP to very small systems (2 or 3 sites). Practitioners have thus had to resort to other methodologies for long-term planning, often at the expense of rigour, and without reliable error estimates. This paper surveys recent research on MDP computation, with application to hydro-power systems. Three main approaches are discussed: (i) discrete DP, (ii) numerical approximation of the expected future reward function, and (iii) analytic solution of the DP recursion.
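To make the discrete-DP approach concrete, the following is a minimal sketch of value iteration for a hypothetical single-reservoir model (not the paper's model): storage is discretized into a handful of levels, inflow is a random variable with a known distribution, the decision is how much water to release each stage, and the stage reward is proportional to the release. All constants below (grid size, inflow distribution, discount factor) are illustrative assumptions.

```python
import numpy as np

# Illustrative single-reservoir MDP (assumed parameters, not from the paper)
N_LEVELS = 11            # storage discretized into levels 0..10
RELEASES = range(0, 4)   # candidate release decisions per stage
INFLOWS = [0, 1, 2]      # possible stochastic inflows
P_INFLOW = [0.3, 0.5, 0.2]
GAMMA = 0.95             # discount factor

def value_iteration(tol=1e-6):
    """Solve the discounted DP recursion on the discretized state space."""
    V = np.zeros(N_LEVELS)
    while True:
        V_new = np.empty_like(V)
        policy = np.zeros(N_LEVELS, dtype=int)
        for s in range(N_LEVELS):
            best_q = -np.inf
            for u in RELEASES:
                if u > s:
                    continue  # cannot release more water than is stored
                q = 0.0
                for w, p in zip(INFLOWS, P_INFLOW):
                    # next storage; excess above capacity is spilled
                    s_next = min(s - u + w, N_LEVELS - 1)
                    # stage reward ~ energy generated from release u
                    q += p * (u + GAMMA * V[s_next])
                if q > best_q:
                    best_q, policy[s] = q, u
            V_new[s] = best_q
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, policy
        V = V_new

V, policy = value_iteration()
```

Even in this toy form, the source of the curse of dimensionality is visible: with `k` reservoirs discretized into `N_LEVELS` points each, the state loop runs over `N_LEVELS**k` states, which is why practical DP has been limited to 2 or 3 sites.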