This paper deals with a Markovian decision process with an absorbing set J_0. We are interested in the largest number β* ≥ 1, called the critical discount factor, such that for all discount factors β smaller than β* the limit V of the N-stage value function V_N as N → ∞ exists and is finite for each choice of the one-stage reward function. Several representations of β* are given. The equality of 1/β* with the maximal Perron/Frobenius eigenvalue of the MDP links our problem and our results to topics studied intensively (mostly for β = 1) in the literature. We derive in a unified way a large number of conditions, some of which are known, which are equivalent either to β < β* or to 1 < β*. In particular, the latter is shown to be equivalent to transience of the MDP. A few of our findings are extended, with the aid of results of Rieder, to models with standard Borel state and action spaces. We also complement an algorithm of policy iteration type, due to Mandl and Seneta, for the computation of β*. Finally, we determine β* explicitly in two models with stochastically monotone transition law.
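To make the Perron/Frobenius characterization concrete, the following is a minimal brute-force sketch (not the paper's algorithm, and not the Mandl/Seneta policy-iteration scheme the paper complements). It assumes the characterization 1/β* = max spectral radius of the substochastic transition matrices, taken over the deterministic stationary policies with the absorbing set J_0 removed, and, as a hypothetical simplification, that every action is admissible in every state.

```python
import itertools
import numpy as np

def critical_discount_factor(P):
    """beta* for a finite MDP via the Perron/Frobenius characterization.

    P is a list of substochastic matrices, one per action, already
    restricted to the non-absorbing states (rows may sum to less than 1
    because transitions into the absorbing set J_0 are dropped).
    Assumes 1/beta* equals the maximal spectral radius over the
    deterministic stationary policies; brute-force enumeration, not
    the Mandl/Seneta-type policy iteration.
    """
    n = P[0].shape[0]
    rho_max = 0.0
    # A deterministic stationary policy f picks one action per state.
    for f in itertools.product(range(len(P)), repeat=n):
        # Row i of the policy's transition matrix is row i of P[f[i]].
        P_f = np.array([P[f[i]][i] for i in range(n)])
        rho_max = max(rho_max, max(abs(np.linalg.eigvals(P_f))))
    # Substochastic matrices have spectral radius <= 1, hence beta* >= 1;
    # rho_max == 0 means absorption is certain in one step (beta* = inf).
    return np.inf if rho_max == 0.0 else 1.0 / rho_max

# Hypothetical two-state, two-action example: under action 0 each state
# escapes to J_0 with probability 1/2, under action 1 with probability 1/4.
P = [np.array([[0.25, 0.25], [0.25, 0.25]]),
     np.array([[0.50, 0.25], [0.25, 0.50]])]
print(critical_discount_factor(P))  # 4/3: the worst policy has rho = 3/4
```

In this toy example every policy is eventually absorbed, so the MDP is transient and, consistent with the abstract, β* = 4/3 > 1. The enumeration grows as (number of actions)^(number of states), which is why a policy-iteration-type algorithm for β* is the practical route beyond toy state spaces.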