This paper is concerned with the linear programming formulation of Markov control processes with Borel state and action spaces and the average cost (AC) criterion. The one-stage cost function may be unbounded. A linear program EP and its dual EP* are introduced. Their values, inf EP and sup EP*, bound the value (say, inf AC) of the AC problem, i.e., sup EP* ≤ inf AC ≤ inf EP. Conditions are provided for the absence of a duality gap, viz., sup EP* = inf EP, and also for strong duality, so that both EP and EP* are solvable and their optimal values satisfy max EP* = min EP. The latter implies (i) the existence of an AC-optimal control policy and (ii) that the AC optimality equation holds almost everywhere. These results are applied to a general vector-valued, additive-noise system with quadratic costs.
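For orientation, the duality relations and the optimality equation referred to above can be written schematically as follows. The notation (state space X, admissible action sets A(x), one-stage cost c, transition kernel Q, relative value function h, optimal average cost ρ*) is not fixed in this abstract; the display is only an illustrative sketch of the standard average-cost setting, not the paper's precise statements.

\[
  \sup EP^{*} \;\le\; \inf AC \;\le\; \inf EP,
  \qquad
  \sup EP^{*} = \inf EP \ \ \text{(no duality gap)},
  \qquad
  \max EP^{*} = \min EP \ \ \text{(strong duality)},
\]
and, under the strong-duality conditions, the AC optimality equation
\[
  \rho^{*} + h(x) \;=\; \min_{a \in A(x)} \Bigl[\, c(x,a) + \int_{X} h(y)\, Q(dy \mid x,a) \Bigr]
\]
holds for almost every state $x \in X$, which in turn yields an AC-optimal control policy.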