Parameter redundancy in neural networks: an application of Chebyshev polynomials

Article ID: iaor20082761
Country: Netherlands
Volume: 4
Issue: 3
Start Page Number: 227
End Page Number: 242
Publication Date: Jul 2007
Journal: Computational Management Science
Authors:
Abstract:

This paper deals with feedforward neural networks containing a single hidden layer with a sigmoid (logistic) activation function. Training such a network is equivalent to performing nonlinear regression with a flexible functional form, but that functional form is not easy to work with analytically. Chebyshev polynomials are suggested as a way forward, providing an approximation to the network that is superior to a Taylor series expansion. Applying these approximations suggests that the network is liable to a 'naturally occurring' parameter redundancy, which has implications both for the training process and for statistical inference. On the other hand, parameter redundancy does not appear to damage the fundamental property of universal approximation.
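The abstract's central claim, that a Chebyshev expansion approximates the logistic sigmoid better than a Taylor expansion of the same degree, is easy to illustrate numerically. The sketch below is not the paper's construction; it simply fits a degree-5 Chebyshev interpolant to the sigmoid on an assumed interval [-4, 4] (chosen for illustration) and compares its worst-case error against the degree-5 Taylor expansion about 0.

```python
import math

def sigmoid(x):
    """Logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def cheb_fit(f, a, b, n):
    """Chebyshev coefficients c_0..c_{n-1} of the degree-(n-1)
    interpolant of f on [a, b], using Chebyshev-Gauss nodes."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fvals = [f(0.5 * (b - a) * t + 0.5 * (b + a)) for t in nodes]
    coeffs = []
    for k in range(n):
        s = sum(fvals[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
        coeffs.append(2.0 * s / n)
    coeffs[0] *= 0.5  # constant term carries half weight
    return coeffs

def cheb_eval(coeffs, a, b, x):
    """Evaluate the Chebyshev series at x via the Clenshaw recurrence."""
    t = (2.0 * x - a - b) / (b - a)  # map x into [-1, 1]
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

def taylor_sigmoid(x):
    """Degree-5 Taylor expansion of the logistic sigmoid about 0."""
    return 0.5 + x / 4 - x**3 / 48 + x**5 / 480

a, b = -4.0, 4.0
coeffs = cheb_fit(sigmoid, a, b, 6)  # 6 coefficients = degree 5
grid = [a + (b - a) * i / 400 for i in range(401)]
cheb_err = max(abs(cheb_eval(coeffs, a, b, x) - sigmoid(x)) for x in grid)
taylor_err = max(abs(taylor_sigmoid(x) - sigmoid(x)) for x in grid)
print(f"max Chebyshev error: {cheb_err:.4f}")
print(f"max Taylor error:    {taylor_err:.4f}")
```

The Taylor expansion is accurate only near 0 and diverges badly toward the ends of the interval, while the Chebyshev interpolant spreads its error roughly uniformly across [-4, 4], which is why it serves as a better global stand-in for the activation function.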
