Reduction constraints for the global optimization of nonlinear problems

Article ID: iaor20051542
Country: United Kingdom
Volume: 11
Issue: 1
Start Page Number: 33
End Page Number: 41
Publication Date: Jan 2004
Journal: International Transactions in Operational Research
Abstract:

Convergence of branch-and-bound algorithms for the solution of NLPs is obtained by finding ever-nearer lower and upper bounds to the objective function. The lower bound is calculated by constructing a convex relaxation of the NLP. Reduction constraints are new linear problem constraints which are (a) linearly independent of the existing constraints; (b) redundant with reference to the original NLP formulation; (c) not redundant with reference to its convex relaxation. Thus, they can be successfully employed to reduce the feasible region of the convex relaxation without cutting the feasible region of the original NLP.
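Properties (a)–(c) can be illustrated on a toy problem (this is a hypothetical sketch, not an example from the article): take the linear constraint x1 + x2 = 1 with x in [0,1]^2, and lift the bilinear terms as w11 = x1*x1 and w12 = x1*x2 with the usual McCormick envelopes. Multiplying the linear constraint by x1 yields the linear constraint w11 + w12 = x1 in the lifted variables; it holds identically on the original feasible set, yet cuts off points of the McCormick relaxation:

```python
# Illustrative sketch of a reduction-type constraint (assumed toy problem,
# not taken from the article). Original NLP data: x1 + x2 = 1, x in [0,1]^2,
# lifted variables w11 ~ x1*x1 and w12 ~ x1*x2 relaxed with McCormick
# envelopes. Multiplying the linear constraint by x1 gives
#     w11 + w12 = x1,
# a linear constraint that is redundant for the original NLP but not for
# its convex relaxation.

def mccormick_ok(x, y, w, lo=0.0, hi=1.0, tol=1e-9):
    """Check the four McCormick inequalities for w ~ x*y on [lo,hi]^2."""
    return (w >= lo * y + lo * x - lo * lo - tol and
            w >= hi * y + hi * x - hi * hi - tol and
            w <= hi * y + lo * x - hi * lo + tol and
            w <= lo * y + hi * x - lo * hi + tol)

# A point that is feasible for the McCormick relaxation...
x1, x2, w11, w12 = 0.5, 0.5, 0.5, 0.3
assert abs(x1 + x2 - 1.0) < 1e-9      # linear constraint holds
assert mccormick_ok(x1, x1, w11)      # w11 inside the x1*x1 envelope
assert mccormick_ok(x1, x2, w12)      # w12 inside the x1*x2 envelope

# ...but violating the reduction constraint, which therefore cuts the
# relaxation's feasible region.
violation = abs(w11 + w12 - x1)
print(f"reduction-constraint violation: {violation:.2f}")  # -> 0.30

# On the original feasible set w11 = x1**2 and w12 = x1*x2, so
# w11 + w12 = x1*(x1 + x2) = x1 holds identically: the constraint never
# removes a feasible point of the original NLP.
for k in range(101):
    t = k / 100.0                     # sample x1 along x1 + x2 = 1
    assert abs(t * t + t * (1.0 - t) - t) < 1e-9
print("redundant on the original feasible set: True")
```

The point of the sketch is exactly the abstract's claim: the cut removes relaxation-only points such as (x1, x2, w11, w12) = (0.5, 0.5, 0.5, 0.3) while leaving every point of the original NLP's feasible region intact.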
