Learning by structural constraints: Structurally constrained resource-allocating networks

Article ID: iaor20002988
Country: Poland
Issue: 3/4
Start Page Number: 151
End Page Number: 159
Publication Date: Jan 1996
Journal: Badania Operacyjne I Decyzje
Authors:
Abstract:

Classical neural network algorithms generally construct classification systems from hyperplanes, which are very general and not specific to the problem at hand. In order to build efficient classifiers with good generalization power from a restricted number of input factors, we propose structurally constrained neural networks (SCNN). The SCNN idea can be applied to hetero- and auto-associative memories, topographic feature maps, and supervised learning. In this article, a particular class of neural networks, Structurally Constrained Resource-Allocating Networks (SCRAN), is introduced. The structures of the individual hidden neurons are constrained by continuity. In geometric terms, the constraint corresponds to irregular, concave polytopes, which take the place of the hyperplanes used in classical networks. The method was tested on the problem of creditworthiness analysis in specific industry sectors. A sensitivity analysis with cross-validation showed that SCRAN outperforms standard resource-allocating networks in learning and generalization ability. Moreover, the number of input factors needed is considerably reduced.
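As a rough illustration of the resource-allocating network family on which SCRAN builds, the sketch below implements the standard allocation rule: a new hidden unit is added when an input is both novel (far from existing centers) and poorly predicted, otherwise the existing output weights are adapted by gradient descent. It is a minimal sketch in Python; the class name, thresholds, and parameter values are hypothetical, and the continuity constraint on the hidden-neuron structures that distinguishes SCRAN is not reproduced here, since the abstract does not specify it in enough detail.

import numpy as np

class ResourceAllocatingNetwork:
    # Minimal resource-allocating network with RBF hidden units allocated on demand.
    # Hypothetical sketch of the standard scheme; SCRAN's structural (continuity)
    # constraint on the hidden units is not implemented here.

    def __init__(self, novelty_dist=1.0, novelty_err=0.1, lr=0.05, overlap=0.8):
        self.centers = []                 # RBF centers, one per hidden unit
        self.widths = []                  # RBF widths, one per hidden unit
        self.weights = []                 # output weights, one per hidden unit
        self.bias = 0.0
        self.novelty_dist = novelty_dist  # distance threshold for allocating a unit
        self.novelty_err = novelty_err    # error threshold for allocating a unit
        self.lr = lr                      # learning rate for weight adaptation
        self.overlap = overlap            # width of a new unit relative to nearest center

    def _activations(self, x):
        # Gaussian activations of all hidden units for input x.
        if not self.centers:
            return np.zeros(0)
        c = np.asarray(self.centers)
        w = np.asarray(self.widths)
        d2 = np.sum((c - x) ** 2, axis=1)
        return np.exp(-d2 / (w ** 2))

    def predict(self, x):
        a = self._activations(x)
        return float(np.dot(a, self.weights) + self.bias) if a.size else self.bias

    def fit_one(self, x, y):
        # Present one (input, target) pair; either allocate a unit or adapt weights.
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        dist = (min(np.linalg.norm(c - x) for c in self.centers)
                if self.centers else np.inf)
        if abs(err) > self.novelty_err and dist > self.novelty_dist:
            # Novel, poorly predicted input: allocate a new hidden unit centered on it.
            self.centers.append(x.copy())
            self.widths.append(self.overlap * dist if np.isfinite(dist) else 1.0)
            self.weights.append(err)
        else:
            # Familiar input: adjust existing output weights and bias by gradient descent.
            a = self._activations(x)
            self.weights = list(np.asarray(self.weights) + self.lr * err * a)
            self.bias += self.lr * err

Trained sample by sample (for example, looping fit_one over normalized credit-scoring input factors and their class labels), such a network grows only as many hidden units as the data require, which is the behavior the abstract's structural constraints are intended to regularize further.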
