Smoothing methods for convex inequalities and linear complementarity problems

Article ID: iaor19971121
Country: Netherlands
Volume: 71
Issue: 1
Start Page Number: 51
End Page Number: 69
Publication Date: Nov 1995
Journal: Mathematical Programming (Series A)
Authors: Chen Chunhui, Mangasarian O.L.
Keywords: neural networks
Abstract:

A smooth approximation p(x, α) to the plus function (x)₊ = max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex, unconstrained minimization problems, whose solutions approximate the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of size equ6, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10,000 variables, the proposed approach was as much as 63 times faster than Lemke's method.
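The smoothing described in the abstract can be sketched as follows. Integrating the sigmoid 1/(1 + e^(−αx)) gives the closed form p(x, α) = x + log(1 + e^(−αx))/α, which overestimates (x)₊ by at most log(2)/α. The code below is an illustrative sketch, not the paper's implementation: the numerically stable rewriting of p, the one-dimensional LCP demo, and all function names (`plus`, `p`, `solve_lcp_1d`) are our own additions; the paper treats full vector LCPs, whereas the demo is scalar for brevity.

```python
import math

def plus(x):
    """The plus function (x)_+ = max(x, 0)."""
    return max(x, 0.0)

def p(x, alpha):
    """Smooth approximation to plus(x), obtained by integrating the
    sigmoid 1/(1 + exp(-alpha*x)):  p(x, a) = x + log(1 + e^(-a*x)) / a.
    Written in an equivalent, numerically stable form; as a -> infinity,
    p(x, a) -> plus(x), with 0 <= p(x, a) - plus(x) <= log(2)/a."""
    return max(x, 0.0) + math.log1p(math.exp(-alpha * abs(x))) / alpha

def solve_lcp_1d(m, q, alpha=1e4, tol=1e-10, max_iter=50):
    """Hypothetical 1-D illustration of the smoothed-LCP Newton idea:
    find z >= 0 with w = m*z + q >= 0 and z*w = 0.  The LCP is equivalent
    to the fixed point z = plus(z - (m*z + q)); replacing plus by its
    smoothing p gives a smooth equation F(z) = z - p(z - (m*z + q)) = 0,
    solved here by Newton's method."""
    def sigmoid(x):
        # Derivative of p with respect to its first argument; stable form.
        if x >= 0:
            return 1.0 / (1.0 + math.exp(-alpha * x))
        t = math.exp(alpha * x)
        return t / (1.0 + t)
    z = 0.0
    for _ in range(max_iter):
        r = z - (m * z + q)           # inner residual fed to the smoothing
        f = z - p(r, alpha)           # smoothed LCP equation F(z)
        if abs(f) < tol:
            break
        # F'(z) = 1 - p'(r) * dr/dz = 1 - sigmoid(r) * (1 - m)
        df = 1.0 - sigmoid(r) * (1.0 - m)
        z -= f / df
    return z
```

For example, with m = 2 and q = −4 the complementary solution is z = 2 (then w = 0), and Newton on the smoothed equation reaches it in a couple of iterations; with m = 1 and q = 3 the solution is z = 0 with w = 3 > 0.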
