Given a linear iteration of the form x := F(x), we consider modified versions of the form x := F(x + γd), where d is a fixed direction and γ is chosen to minimize the norm of the residual ‖x + γd − F(x + γd)‖. We propose ways to choose d so that the convergence rate of the modified iteration is governed by the subdominant eigenvalue of the original. In the special case where F arises from a Markovian decision problem, we obtain a new extrapolation method for value iteration. In particular, our method accelerates the Gauss–Seidel version of the value iteration method for discounted problems in the same way that MacQueen's error bounds accelerate the standard version. Furthermore, our method applies equally well to Markov renewal and undiscounted problems.
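A minimal numerical sketch of the modified iteration, under assumptions not fixed by the abstract: F is affine, F(x) = Ax + b, and the norm is Euclidean. In that case the residual at x + γd is r + γ(I − A)d with r = x − F(x), so the minimizing γ has the closed form γ = −rᵀ(I − A)d / ‖(I − A)d‖². The function name and the demo's choice d = e (the all-ones vector, the dominant eigenvector of a row-stochastic matrix in the discounted case) are illustrative, not the paper's prescription.

```python
import numpy as np

def extrapolated_iteration(A, b, x0, d, num_iters=50):
    """Modified iteration x := F(x + gamma*d) for the affine map F(x) = A x + b,
    with gamma chosen each step to minimize ||y - F(y)|| over y = x + gamma*d."""
    x = np.asarray(x0, dtype=float).copy()
    Md = d - A @ d                      # (I - A) d, fixed because d is fixed
    denom = Md @ Md
    for _ in range(num_iters):
        r = x - (A @ x + b)             # residual of the current iterate
        gamma = 0.0 if denom == 0.0 else -(r @ Md) / denom
        x = A @ (x + gamma * d) + b     # x := F(x + gamma*d)
    return x

# Illustrative use: discounted evaluation x = g + alpha*P x, taking d = e,
# the all-ones vector (dominant eigenvector of the stochastic matrix P).
rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)       # make P row-stochastic
alpha, g = 0.95, rng.random(4)
x = extrapolated_iteration(alpha * P, g, np.zeros(4), np.ones(4))
print(np.linalg.norm(x - np.linalg.solve(np.eye(4) - alpha * P, g)))  # ≈ 0
```

With d = e and Pe = e, the extrapolation removes the component of the residual along (I − αP)e = (1 − α)e, so the remaining error contracts at a rate set by the subdominant eigenvalue of αP, consistent with the claim above.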