Consider a system which can be modeled by a continuous-time Markov chain with state space {0,1,2,...}. An observer, after observing this system to be in state i at a given time, faces three alternatives: (1) pay a termination cost f(i) and stop the process, (2) pay a continuation cost B and observe the system again after a random amount of time, or (3) pay a reneging cost C and stop the process. A policy is called simple (or monotone) if there are two integers r<s such that the decision in state i is 1 if i≤r, 2 if r<i<s and 3 if i≥s. The concept of a restrained Markov chain is introduced and necessary and sufficient conditions are given for a Markov chain to be restrained. It is shown that if the Markov chain under observation is restrained, a simple optimal policy exists. Algorithms are given to compute it. In the special case of exponential retrial times a simpler algorithm is developed. Applications to queueing systems are given.
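The threshold structure of a simple (monotone) policy can be made concrete with a short sketch. The thresholds r, s and the example values below are illustrative placeholders, not quantities computed by the paper's algorithms.

```python
# A minimal sketch of the simple (monotone) policy described above: two thresholds
# r < s partition the states into a "terminate and pay f(i)" region, a "continue
# and pay B" region, and a "renege and pay C" region. The thresholds are assumed
# to be given; computing the optimal r, s requires the model data (f, B, C and
# the chain's transition rates) treated in the paper.

def simple_policy(i: int, r: int, s: int) -> int:
    """Return the alternative (1, 2, or 3) chosen in state i under thresholds r < s."""
    if i <= r:
        return 1  # pay the termination cost f(i) and stop
    elif i < s:
        return 2  # pay the continuation cost B and observe the system again later
    else:
        return 3  # pay the reneging cost C and stop


# Example: with r = 2 and s = 6, states 0..2 terminate, 3..5 continue, 6 and above renege.
print([simple_policy(i, r=2, s=6) for i in range(8)])  # [1, 1, 1, 2, 2, 2, 3, 3]
```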