In this paper the authors consider a singularly perturbed Markov decision process with the limiting average cost criterion. They assume that the underlying process is composed of n separate irreducible processes, and that the small perturbation is such that it ‘unites’ these processes into a single irreducible process. The authors formulate the underlying control problem for the singularly perturbed MDP, and call it the ‘limit Markov control problem’ (limit MCP). They prove the validity of the ‘limit control principle’, which states that an optimal solution to the perturbed MDP can be approximated by an optimal solution of the limit MCP for any sufficiently small perturbation. The authors also demonstrate that the limit Markov control problem is equivalent to a suitably constructed nonlinear program in the space of long-run state-action frequencies. This approach combines the solutions of the original separate irreducible MDPs with the stationary distribution of a certain ‘aggregated MDP’, and creates a framework for future algorithmic approaches.
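To make the state-action frequency viewpoint concrete, the following is a minimal sketch (not the authors' construction) of the classical linear program over long-run state-action frequencies for a single irreducible average-cost MDP, the building block that the paper's nonlinear program generalizes. The transition tensor `P`, cost matrix `c`, and the two-state, two-action sizes are hypothetical toy data chosen for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical irreducible MDP: 2 states, 2 actions (illustrative data only).
# P[s, a, s'] = transition probability; c[s, a] = one-step cost.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.4, 0.6]]])
c = np.array([[1.0, 0.5],
              [2.0, 0.3]])
S, A = c.shape

# Decision variables: long-run state-action frequencies x[s, a],
# flattened to a vector of length S*A.  Balance constraints: for each s,
#   sum_a x[s, a] - sum_{s', a'} P[s', a', s] * x[s', a'] = 0,
# plus the normalization sum_{s, a} x[s, a] = 1.
A_eq = np.zeros((S + 1, S * A))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] = (1.0 if sp == s else 0.0) - P[sp, a, s]
A_eq[S, :] = 1.0
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

# Minimize the long-run average cost over the frequency polytope.
res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(S, A)
avg_cost = res.fun
# A vertex solution induces a deterministic stationary policy:
policy = x.argmax(axis=1)
```

In the singularly perturbed setting the feasible region no longer decomposes state by state, which is why the limit MCP leads to a nonlinear rather than a linear program; the sketch above only illustrates the unperturbed, irreducible case.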