A neural network realization of a gradient method with a convolution integral by using neurons with ‘memory’ (inertial neurons)

Article ID: iaor1999846
Country: Japan
Volume: 11
Issue: 3
Start Page Number: 112
End Page Number: 119
Publication Date: Mar 1998
Journal: Transactions of the Institute of Systems, Control and Information Engineers
Authors: , ,
Keywords: optimization, neural networks
Abstract:

First, a new type of model for trajectory methods for solving optimization problems is considered. In this model, the velocity of the trajectory is given in convolution-integral form over all past gradients of the objective function along the trajectory. The new trajectory method can therefore be called a gradient method with the optimizer's ‘memory’ of past gradient information, and the model can be transformed into a second-order differential equation model whose trajectory escapes local optima given a suitable initial velocity. Next, to solve quadratic programming problems with variables constrained to the closed hypercube [0,1]^s, the gradient method with ‘memory’ is realized by neural networks as operational circuits composed of neurons, each of which has two integral elements. The trajectory of the realized neural networks can escape local minima, whereas the Hopfield type with a first-order differential equation model is trapped by them. Last, numerical simulation results for simple test problems demonstrate the properties of the presented neural networks.
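The second-order reduction mentioned in the abstract can be illustrated with a minimal sketch. Assuming an exponential memory kernel k(t) = e^{-at} (an assumption for illustration; the paper's actual kernel and circuit realization are not reproduced here), the convolution model dx/dt = -∫₀ᵗ e^{-a(t-s)} ∇f(x(s)) ds differentiates into the inertial ("heavy ball") equation x'' + a x' + ∇f(x) = 0. The objective f below is an invented double-well example, not one of the paper's test problems:

```python
def grad_f(x):
    # Gradient of an illustrative double-well objective
    # f(x) = (x**2 - 1)**2 + 0.3*x (hypothetical, not from the paper):
    # a shallow local minimum near x = +0.96 and a deeper one near x = -1.03.
    return 4.0 * x * (x * x - 1.0) + 0.3

def inertial_trajectory(x0, v0, a=1.0, dt=0.01, steps=4000):
    """Forward-Euler integration of x'' + a*x' + grad_f(x) = 0,
    the second-order form of the gradient method with 'memory'."""
    x, v = x0, v0
    for _ in range(steps):
        v += dt * (-a * v - grad_f(x))  # inertia plus damped memory of gradients
        x += dt * v
    return x

# Started at rest in the shallow basin, the trajectory stays there,
# as a first-order (Hopfield-type) gradient flow would.
x_rest = inertial_trajectory(x0=0.9, v0=0.0)

# A suitable initial velocity lets the inertial trajectory cross the
# barrier and settle in the deeper well instead.
x_push = inertial_trajectory(x0=0.9, v0=-3.0)
print(x_rest, x_push)
```

This reproduces only the qualitative behavior the abstract claims: the first-order flow is trapped by the nearby local minimum, while the second-order trajectory with a suitable initial velocity escapes it.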
