A convergence proof for linear mean value cross decomposition

Article ID: iaor19942414
Country: Germany
Volume: 39
Start Page Number: 157
End Page Number: 186
Publication Date: Jun 1994
Journal: Mathematical Methods of Operations Research (Heidelberg)
Authors:
Keywords: game theory
Abstract:

The mean value cross decomposition method for linear programming problems is a modification of ordinary cross decomposition that eliminates the need for a Benders or Dantzig-Wolfe master problem. It is a generalization of the Brown-Robinson method for finite matrix games and can also be viewed as a generalization of the Kornai-Liptak method. The method is based on the subproblem phase of cross decomposition, iterating between the dual subproblem and the primal subproblem. As input to the dual subproblem it uses the average of a part of all previous dual solutions of the primal subproblem, and as input to the primal subproblem it uses the average of a part of all previous primal solutions of the dual subproblem. This paper gives a new proof of convergence for this procedure. Previously, convergence had been shown only for a special separable case (which covers the Kornai-Liptak method), by establishing equivalence to the Brown-Robinson method.
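The averaging mechanism described in the abstract is easiest to see in the special case the method generalizes: Brown-Robinson fictitious play for a finite matrix game, where each player best-responds to the average of the opponent's past plays, just as mean value cross decomposition feeds averaged primal and dual subproblem solutions into the opposite subproblem. The following is a minimal illustrative sketch of that special case (the function name, iteration count, and example payoff matrix are our own choices, not taken from the paper):

```python
import numpy as np

def brown_robinson(A, iters=5000):
    """Brown-Robinson fictitious play for a zero-sum matrix game with
    payoff matrix A (row player maximizes, column player minimizes).

    Each player best-responds to the average of the opponent's past
    pure strategies -- the same averaging idea that mean value cross
    decomposition applies to the primal and dual subproblem solutions.
    """
    m, n = A.shape
    row_counts = np.zeros(m)   # how often each row strategy was played
    col_counts = np.zeros(n)   # how often each column strategy was played
    i, j = 0, 0                # arbitrary initial pure strategies
    for _ in range(iters):
        row_counts[i] += 1
        col_counts[j] += 1
        # Best response to the opponent's empirical mixed strategy
        # (the average of the opponent's plays so far).
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmin((row_counts / row_counts.sum()) @ A))
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    lower = float(np.min(x @ A))   # value the row mix x guarantees
    upper = float(np.max(A @ y))   # value the column mix y concedes
    return x, y, lower, upper

if __name__ == "__main__":
    # Rock-paper-scissors: game value 0, optimal mix (1/3, 1/3, 1/3).
    A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
    x, y, lo, hi = brown_robinson(A)
    print(x, y, lo, hi)
```

The averaged strategies converge to an optimal mixed-strategy pair, with `lower` and `upper` bracketing the game value ever more tightly; the paper's contribution is a convergence proof for the general linear programming procedure that does not rely on reduction to this matrix-game case.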
