Article ID: | iaor2002103 |
Country: | Netherlands |
Volume: | 99 |
Issue: | 1 |
Start Page Number: | 385 |
End Page Number: | 400 |
Publication Date: | Dec 2000 |
Journal: | Annals of Operations Research |
Authors: | Perantonis Stavros J., Ampazis Nikolaos, Virvilis Vassilis |
Keywords: | neural networks |
Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge in the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in the learning behaviour of neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third problem-specific example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma–pi networks.
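As an illustration of the general idea (not the authors' specific algorithms), additional knowledge can be folded into supervised training by replacing the unconstrained cost with a constrained minimization, here approximated with a quadratic penalty. The sketch below trains a small feedforward sigmoid network on XOR while penalizing deviation of the overall weight norm from a target value; the network shape, penalty form, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: constrained learning via a quadratic penalty (illustrative only).
# Cost = MSE(data)  +  mu * (||w|| - target_norm)^2
# The norm constraint stands in for "network specific" knowledge; a
# Lagrangian or projection method could enforce it more strictly.

rng = np.random.default_rng(0)

# Toy task: XOR with a 2-3-1 sigmoid network (hypothetical setup).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)

target_norm = 3.0   # assumed constraint level: ||w|| should stay near this
mu = 0.01           # penalty coefficient (assumption)
lr = 0.2            # learning rate (assumption)

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y

    # Penalized cost: data misfit plus constraint violation.
    wnorm = np.sqrt((W1 ** 2).sum() + (W2 ** 2).sum())
    loss = (err ** 2).mean() + mu * (wnorm - target_norm) ** 2
    losses.append(loss)

    # Backpropagation of the data term.
    d_out = (2.0 / len(X)) * err * out * (1.0 - out)
    gW2 = h.T @ d_out
    gb2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)

    # Gradient of the penalty term w.r.t. each weight matrix.
    coef = 2.0 * mu * (wnorm - target_norm) / wnorm
    gW1 += coef * W1
    gW2 += coef * W2

    # Gradient descent step.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

After training, `losses` should decrease overall, and the weight norm is pulled toward `target_norm` rather than growing freely as it might under unconstrained minimization.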