Continuity of approximation by neural networks in Lp spaces

Article ID: iaor20021320
Country: Netherlands
Volume: 101
Issue: 1
Start Page Number: 143
End Page Number: 147
Publication Date: Jan 2001
Journal: Annals of Operations Research
Authors:
Keywords: statistics: data envelopment analysis
Abstract:

Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional subspaces. It is shown that if X = Lp(Ω) (1 < p < ∞, Ω ⊂ Rd), then for any positive constant Γ and any continuous function φ from X to M, ∥f − φ(f)∥ > ∥f − M∥ + Γ for some f in X, where ∥f − M∥ denotes the distance from f to M. Thus, no continuous finite neural-network approximation scheme can come within any fixed positive constant of a best approximation in the Lp norm.
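The obstruction behind this result can be glimpsed in a toy setting. The sketch below is not from the paper: it replaces Lp(Ω) with R2 and takes M to be the union of two one-dimensional subspaces span{g1} and span{g2} (the vectors g1, g2 and the tie-breaking rule are assumptions made for illustration). Near a point equidistant from the two pieces, the best-approximation map jumps between the subspaces, so no continuous map into M can track it closely there.

```python
import math

# Illustrative sketch (not from the paper, and in R^2 rather than Lp):
# best approximation onto a finite union of subspaces,
# M = span{g1} ∪ span{g2}, is discontinuous near points equidistant
# from the two pieces.

g1 = (1.0, 0.0)  # hypothetical basis direction of the first subspace
g2 = (0.0, 1.0)  # hypothetical basis direction of the second subspace

def proj(f, g):
    """Orthogonal projection of f onto span{g}."""
    c = (f[0] * g[0] + f[1] * g[1]) / (g[0] ** 2 + g[1] ** 2)
    return (c * g[0], c * g[1])

def dist(a, b):
    """Euclidean distance between points a and b."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_in_M(f):
    """Nearest point to f in M (ties broken toward span{g1})."""
    p1, p2 = proj(f, g1), proj(f, g2)
    return p1 if dist(f, p1) <= dist(f, p2) else p2

eps = 1e-6
b_plus = best_in_M((1.0 + eps, 1.0))   # input leans slightly toward g1
b_minus = best_in_M((1.0, 1.0 + eps))  # input leans slightly toward g2

# The two inputs differ by about eps, yet their best approximations
# land on different lines, roughly sqrt(2) apart:
print(b_plus, b_minus, dist(b_plus, b_minus))
```

The theorem is stronger than this picture: it says the gap between any continuous φ and the best approximation cannot even be bounded by a constant Γ, but the jump exhibited here is the basic reason continuity fails.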
