A comparison of nonlinear optimization methods for supervised learning in multilayer feedforward neural networks

Article ID: iaor1999413
Country: Netherlands
Volume: 93
Issue: 2
Start Page Number: 358
End Page Number: 368
Publication Date: Sep 1996
Journal: European Journal of Operational Research
Authors:
Keywords: neural networks
Abstract:

One impediment to the use of neural networks in pattern classification problems is the excessive time required for supervised learning in larger multilayer feedforward networks. The use of nonlinear optimization techniques to perform neural network training offers a means of reducing that computing time. Two key issues in the implementation of nonlinear programming are the choice of a method for computing the search direction and the degree of accuracy required of the subsequent line search. This paper examines these issues through a designed experiment using six different pattern classification tasks, four search direction methods (conjugate gradient, quasi-Newton, and two levels of limited memory quasi-Newton), and three levels of line search accuracy. It was found that for the simplest pattern classification problems, the conjugate gradient method performed well. For more complicated pattern classification problems, the limited memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) method or full BFGS should be preferred. For very large problems, the best choice seems to be the limited memory BFGS. It was also determined that, for the line search methods used in this study, increasing line search accuracy did not improve efficiency.
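The following is a minimal sketch, not the authors' code, of the kind of comparison the abstract describes: training the same small feedforward network with the three families of search direction methods via scipy.optimize. The 2-2-1 network, the XOR task, the random seed, and all tolerances are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy pattern-classification task (assumed for illustration): XOR, 2-2-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

n_in, n_hid = 2, 2
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]  # W1, b1, W2, b2

def unpack(w):
    """Split the flat weight vector into layer matrices and bias vectors."""
    parts, i = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        parts.append(w[i:i + n].reshape(shape))
        i += n
    return parts

def loss(w):
    """Mean squared error of the feedforward network on the training set."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                              # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))).ravel()    # sigmoid output
    return np.mean((out - y) ** 2)

w0 = rng.normal(scale=0.5, size=sum(int(np.prod(s)) for s in shapes))

# Search direction methods corresponding to the families compared in the study:
# nonlinear conjugate gradient, full BFGS, and limited memory BFGS.
for method in ["CG", "BFGS", "L-BFGS-B"]:
    res = minimize(loss, w0, method=method)
    print(f"{method:8s}  final loss = {res.fun:.6f}  function evals = {res.nfev}")
```

In this sketch the gradient is approximated numerically by scipy; the paper's study would have used analytic (backpropagated) gradients. The "two levels of limited memory quasi-Newton" in the experiment correspond to varying the number of stored correction pairs, which in this setup could be approximated by passing options={"maxcor": m} to the L-BFGS-B method.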
