Stability of neural networks and convergence of their sensitivity computation algorithms

Article ID: iaor1989634
Country: Japan
Volume: J72-D-2
Issue: 3
Start Page Number: 427
End Page Number: 432
Publication Date: Mar 1989
Journal: Transactions of the Institute of Electronics, Information and Communication Engineers
Authors:
Keywords: gradient methods, numerical analysis, neural networks
Abstract:

Asymptotic stability of equilibrium states and the convergence properties of several algorithms for computing sensitivity are investigated for a continuous-time model of neural networks and for a discrete-time one. On the basis of the stability theory of dynamical systems, a class of networks is defined in which every interaction between neurons is sufficiently weak. This class includes any feedforward network, such as a perceptron, with weak (possibly zero) feedback, as well as networks in which all neurons interact weakly with one another. A continuous-time model, i.e. a system of differential equations describing the dynamics of this class of networks, is proven to be globally asymptotically stable; its rate of convergence to the equilibrium state is inversely proportional to the strength of the interaction between neurons. A discrete-time model describing generally asynchronous behavior of the networks is also shown to converge globally. In addition, both an analog method and a general digital method for computing the sensitivity, i.e. the gradient vectors of each neuron's potential with respect to any synapse weight, are shown to be globally convergent for this class of networks. [In Japanese.]
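The global-stability claim for weakly interacting networks can be illustrated numerically. The sketch below is an assumption, not the paper's exact model: it simulates a standard continuous-time additive network du/dt = -u + W·tanh(u) + b by forward-Euler integration. When the coupling matrix W is weak (spectral norm below 1, since tanh is 1-Lipschitz), the dynamics are a contraction, so trajectories from any two initial states approach the same unique equilibrium.

```python
import numpy as np

# Illustrative sketch (hypothetical model, not taken from the paper):
# continuous-time additive network  du/dt = -u + W @ tanh(u) + b.
# Weak coupling (||W|| < 1) makes the flow a contraction, giving
# global asymptotic stability of a unique equilibrium state.
rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))   # weak interactions: ||W|| << 1
b = rng.standard_normal(n)

def simulate(u0, dt=0.01, steps=5000):
    """Integrate the network dynamics with forward Euler."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(u) + b)
    return u

# Two very different initial states converge to the same equilibrium.
u_a = simulate(rng.standard_normal(n))
u_b = simulate(10.0 * rng.standard_normal(n))
print(np.linalg.norm(u_a - u_b))  # near zero: trajectories have merged
```

Strengthening W so that its norm exceeds 1 breaks the contraction argument, which is consistent with the abstract's restriction to networks whose interactions are sufficiently weak.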
