In the simulation of Markov chains, importance sampling involves replacing the original transition matrix, say P, with a suitably chosen transition matrix Q that tends to visit the states of interest more frequently. The likelihood ratio of P relative to Q is an important random variable in the importance sampling method. It always has expectation one, and for any interesting pair of transition matrices P and Q, there is a sample path length that causes the likelihood ratio to be close to zero with probability close to one. This may cause the variance of the importance sampling estimator to be larger than the variance of the traditional estimator. We develop sufficient conditions for ensuring the tightness of the distribution of the logarithm of the likelihood ratio for all sample path lengths, and we show that when these conditions are satisfied, the likelihood ratio is approximately lognormally distributed with expected value one. These conditions can be used to eliminate some choices of the alternative transition matrix Q that are likely to result in a variance increase. We also show that if the likelihood ratio is to remain well behaved for all sample path lengths, the alternative transition matrix Q must approach the original transition matrix P as the sample path length increases. The practical significance of this result is that importance sampling can be difficult to apply successfully in simulations involving long sample paths.
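For concreteness, the following Python sketch illustrates the degeneracy described above on a hypothetical two-state chain (the matrices P and Q below are illustrative choices, not taken from the analysis). Paths are simulated under Q, and the likelihood ratio L of P relative to Q is accumulated along each path; the sample mean of L stays near one for every path length, but the fraction of replications with L near zero grows as the paths get longer.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-state chain: original transition matrix P and an
    # alternative matrix Q that visits state 1 more frequently.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    Q = np.array([[0.6, 0.4],
                  [0.5, 0.5]])

    def likelihood_ratio(P, Q, n_steps, x0=0):
        """Simulate a path of length n_steps under Q and return the
        likelihood ratio L = prod_k P(x_k, x_{k+1}) / Q(x_k, x_{k+1})."""
        x, log_L = x0, 0.0
        for _ in range(n_steps):
            x_next = rng.choice(len(Q), p=Q[x])
            log_L += np.log(P[x, x_next]) - np.log(Q[x, x_next])
            x = x_next
        return np.exp(log_L)

    # E_Q[L] = 1 for any path length, yet the distribution of L degenerates:
    # as n grows, most replications yield L near zero, with rare very large
    # values keeping the mean at one.
    for n in (5, 50, 500):
        L = np.array([likelihood_ratio(P, Q, n) for _ in range(10_000)])
        print(f"n={n:4d}  mean(L)={L.mean():6.3f}  P(L < 0.1)={np.mean(L < 0.1):.3f}")

In this sketch the expectation of L under Q equals one by construction (summing Q(path) * P(path)/Q(path) over paths gives the total P-probability, which is one), so the widening gap between the mean and the typical value of L is exactly the degeneracy that motivates the tightness conditions developed in the paper.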