Training neural networks with the GRG2 nonlinear optimizer

Article ID: iaor1997291
Country: Netherlands
Volume: 69
Issue: 1
Start Page Number: 83
End Page Number: 91
Publication Date: Aug 1993
Journal: European Journal of Operational Research
Authors: ,
Keywords: programming: nonlinear
Abstract:

Neural networks represent a new approach to artificial intelligence. By using biologically motivated, densely interconnected networks of simple processing elements, certain pattern recognition tasks can be accomplished much faster than with currently used techniques. The most popular means of training these networks is back propagation, a gradient descent technique. The introduction of back propagation revolutionized research in neural networks, but the method has serious drawbacks in training speed and scalability to large problems. This paper compares the use of a general-purpose nonlinear optimizer, GRG2, with back propagation in training neural networks. Parity problems of increasing size are used to evaluate the scalability of each method to larger problems. It was found that GRG2 not only found solutions much faster, but also found much better solutions. The use of nonlinear programming methods in training therefore has the potential to allow neural networks to be applied to problems that have previously been beyond their capabilities.
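
The comparison described in the abstract can be illustrated on the smallest parity problem (XOR). The sketch below is a minimal illustration, not the paper's experimental setup: it assumes a 2-2-1 sigmoid network with sum-of-squares error, uses plain gradient descent with a finite-difference gradient in place of true back propagation, and uses SciPy's BFGS routine as a stand-in for GRG2 (a commercial generalized-reduced-gradient solver not used here). The network size, learning rate, and iteration counts are illustrative assumptions.

```python
# Minimal sketch: training a tiny network on 2-bit parity (XOR) with
# (1) plain gradient descent and (2) a general-purpose nonlinear optimizer.
# BFGS stands in for GRG2; all hyperparameters are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# XOR / 2-bit parity training patterns
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 2                        # 2-2-1 network
N_W = N_IN * N_HID + N_HID + N_HID + 1    # weights and biases, flattened

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # Split the flat parameter vector into layer weights and biases
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    # Sum-of-squares output error over the four parity patterns
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return np.sum((out - y) ** 2)

def numgrad(w, eps=1e-6):
    # Finite-difference gradient; back propagation would compute this analytically
    g = np.zeros_like(w)
    for k in range(w.size):
        d = np.zeros_like(w)
        d[k] = eps
        g[k] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

w0 = rng.normal(scale=0.5, size=N_W)

# 1) Plain gradient descent, standing in for back propagation
w = w0.copy()
for _ in range(5000):
    w -= 0.5 * numgrad(w)
print("gradient descent loss:   ", loss(w))

# 2) General-purpose nonlinear optimizer (BFGS as a stand-in for GRG2)
res = minimize(loss, w0, method="BFGS")
print("nonlinear optimizer loss:", res.fun, "in", res.nit, "iterations")
```

On this toy problem the second-order method typically reaches a low error in far fewer iterations than fixed-step gradient descent, which is the qualitative pattern the paper reports for GRG2 versus back propagation; the exact numbers here depend on the random initialization and the assumed hyperparameters.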
