Article ID: | iaor19981577 |
Country: | Japan |
Volume: | E80-A |
Issue: | 6 |
Start Page Number: | 1150 |
End Page Number: | 1156 |
Publication Date: | Jun 1997 |
Journal: | Transactions of the Institute of Electronics, Information and Communication Engineers A |
Authors: | Tanaka Toshiyuki, Kuriyama Hideki, Ochiai Yoshiko, Taki Masao |
Keywords: | cybernetics, gradient methods |
Neural networks can be used as associative memories that learn input–output relations presented through examples. The learning-time problem asks how long it takes a neural network to learn a given problem with a given learning algorithm. As a solvable model of this problem we analyze the learning dynamics of a linear associative memory trained with the least-mean-square (LMS) algorithm. Our result shows that the learning time τ of the linear associative memory diverges as τ ∝ (1 − ρ)^−2 when the memory rate ρ approaches 1. It also shows that the learning time depends exponentially on ρ when ρ is small.
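A minimal numerical sketch, not taken from the paper, of the kind of experiment the abstract describes: a linear associative memory is trained on p random pattern pairs with the batch LMS (delta) rule, and the number of epochs until the mean squared error falls below a tolerance serves as the learning time. The identification of the memory rate as ρ = p/N, the network size N, the learning rate η, and the error tolerance are all illustrative assumptions.

```python
import numpy as np

def lms_learning_time(N=64, rho=0.5, eta=0.2, tol=1e-3, max_epochs=10_000, seed=0):
    """Train a linear associative memory W (N x N) on p = rho * N random
    pattern pairs with the batch LMS (delta) rule and return the number of
    epochs until the mean squared error drops below `tol`.
    Assumption for illustration: memory rate rho = p / N."""
    rng = np.random.default_rng(seed)
    p = max(1, int(rho * N))
    X = rng.choice([-1.0, 1.0], size=(p, N))   # input patterns (one per row)
    Y = rng.choice([-1.0, 1.0], size=(p, N))   # target output patterns
    W = np.zeros((N, N))
    for epoch in range(1, max_epochs + 1):
        E = Y - X @ W.T                        # output errors for all patterns
        W += (eta / N) * E.T @ X               # batch LMS (gradient-descent) update
        if np.mean(E ** 2) < tol:
            return epoch
    return max_epochs                          # cap reached (rho very close to 1)

# Learning time grows sharply as the memory rate approaches 1.
for rho in (0.2, 0.5, 0.8, 0.9):
    print(f"rho = {rho:.2f}  epochs = {lms_learning_time(rho=rho)}")
```

Under these assumptions the epoch count increases steeply as ρ → 1, qualitatively matching the divergence τ ∝ (1 − ρ)^−2 stated in the abstract; the sketch is not the paper's analytical derivation.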