Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation

Article ID: iaor2001460
Country: Netherlands
Volume: 22
Issue: 2
Start Page Number: 171
End Page Number: 185
Publication Date: Feb 1998
Journal: Decision Support Systems
Authors: Sexton Randall S., Dorsey Robert E., Johnson John D.
Keywords: constraint handling languages, genetic algorithms
Abstract:

The recent surge of neural network research in business is not surprising, since the underlying functions governing business data are generally unknown and the neural network offers a tool that can approximate the unknown function to any desired degree of accuracy. The vast majority of these studies rely on a gradient algorithm, typically a variation of backpropagation, to obtain the parameters (weights) of the model. The well-known limitations of gradient search techniques applied to complex nonlinear optimization problems such as artificial neural networks have often resulted in inconsistent and unpredictable performance. Many researchers have attempted to address the problems associated with the training algorithm by imposing constraints on the search space or by restructuring the architecture of the neural network. In this paper we demonstrate that such constraints and restructuring are unnecessary if a sufficiently complex initial architecture and an appropriate global search algorithm are used. We further show that the genetic algorithm can not only serve as a global search algorithm but, by appropriately defining the objective function, can simultaneously achieve a parsimonious architecture. The value of using the genetic algorithm over backpropagation for neural network optimization is illustrated through a Monte Carlo study that compares the two algorithms on in-sample, interpolation, and extrapolation data for seven test functions.
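
To make the approach concrete, the following is a minimal sketch, not the authors' implementation: a real-valued genetic algorithm searches the weights of a one-hidden-layer network, and a penalty on effectively nonzero weights stands in for the kind of parsimony-encouraging objective the abstract describes. The network size, GA operators, thresholds, and parameter settings are all assumptions for illustration.

```python
# Minimal sketch (not the paper's code): GA search over the weights of a
# one-hidden-layer network, with an assumed parsimony penalty.
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 6                      # assumed "sufficiently complex" hidden layer
POP, GENS = 60, 300             # assumed GA settings, not from the paper
PENALTY = 1e-3                  # assumed weight of the parsimony term

def forward(w, x):
    """Evaluate y = W2 . tanh(W1 * x + b1) + b2 for one scalar input."""
    n = HIDDEN
    W1, b1 = w[:n], w[n:2 * n]
    W2, b2 = w[2 * n:3 * n], w[3 * n]
    return W2 @ np.tanh(W1 * x + b1) + b2

def fitness(w, xs, ys):
    """Sum of squared errors plus a penalty on 'active' weights, so that
    among equally accurate solutions the GA favors the parsimonious one."""
    preds = np.array([forward(w, x) for x in xs])
    sse = np.sum((preds - ys) ** 2)
    active = np.sum(np.abs(w) > 1e-2)   # weights that are effectively nonzero
    return sse + PENALTY * active

# Target: one simple test function (the paper's study uses seven).
xs = np.linspace(-1.0, 1.0, 40)
ys = np.sin(np.pi * xs)

dim = 3 * HIDDEN + 1
pop = rng.uniform(-1.0, 1.0, size=(POP, dim))

for gen in range(GENS):
    scores = np.array([fitness(ind, xs, ys) for ind in pop])
    elite = pop[np.argsort(scores)[:POP // 2]]          # truncation selection
    # Arithmetic crossover between random elite parents, Gaussian mutation.
    parents = elite[rng.integers(0, len(elite), size=(POP - len(elite), 2))]
    alpha = rng.random((POP - len(elite), 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0.0, 0.05, size=children.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(ind, xs, ys) for ind in pop])]
print("best fitness:", fitness(best, xs, ys))
```

In a setup like this, the penalty term plays the role the abstract attributes to the objective function: selection pressure drives effectively unused weights toward zero, pruning an oversized initial architecture without any explicit restructuring or constraints on the search space.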
