Training the random neural network using quasi-Newton methods

Article ID: iaor20011985
Country: Netherlands
Volume: 126
Issue: 2
Start Page Number: 331
End Page Number: 339
Publication Date: Oct 2000
Journal: European Journal of Operational Research
Authors: ,
Abstract:

Training in the random neural network (RNN) is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (the weights corresponding to positive and negative connections). We propose here a technique for error minimization based on quasi-Newton optimization methods. Such methods exploit the gradient information more thoroughly than simple gradient descent, but are computationally more expensive and harder to implement. In this work we specify the details needed to apply quasi-Newton methods to the training of the RNN, and provide comparative experimental results from the application of these methods to some well-known test problems, which confirm the superiority of the approach.
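The abstract does not reproduce the algorithm itself, so the following is only an illustrative sketch of the quasi-Newton idea it refers to: a minimal BFGS minimizer with a backtracking (Armijo) line search, applied to a stand-in error function (the Rosenbrock function) rather than the RNN error of the paper. All names here (`bfgs`, `rosenbrock`, the step-size constants) are assumptions for the illustration, not identifiers from the article.

```python
import numpy as np

def rosenbrock(w):
    # Stand-in error function; the paper minimizes the RNN training error instead.
    return (1 - w[0])**2 + 100 * (w[1] - w[0]**2)**2

def rosenbrock_grad(w):
    # Analytic gradient of the stand-in error function.
    return np.array([
        -2 * (1 - w[0]) - 400 * w[0] * (w[1] - w[0]**2),
        200 * (w[1] - w[0]**2),
    ])

def bfgs(f, grad, w0, max_iter=200, tol=1e-8):
    """Minimal BFGS quasi-Newton minimizer with Armijo backtracking."""
    w = np.asarray(w0, dtype=float)
    n = w.size
    H = np.eye(n)              # inverse-Hessian approximation
    g = grad(w)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g             # quasi-Newton search direction
        # Backtracking line search enforcing the Armijo sufficient-decrease condition.
        t, fw = 1.0, f(w)
        while f(w + t * p) > fw + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p              # step taken
        w_new = w + s
        g_new = grad(w_new)
        y = g_new - g          # gradient change
        ys = y @ s
        if ys > 1e-12:         # curvature condition keeps H positive definite
            rho = 1.0 / ys
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w
```

Compared with plain gradient descent, the rank-two update of `H` lets the step direction incorporate curvature information accumulated from earlier iterations, which is the "more sophisticated exploitation of the gradient information" the abstract mentions.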
