Article ID: iaor19991411
Country: United States
Volume: 44
Issue: 3
Start Page Number: 416
End Page Number: 430
Publication Date: Mar 1998
Journal: Management Science
Authors: Shaw Michael J., Piramuthu Selwyn, Ragavan Harish
Keywords: artificial intelligence: decision support
Recent years have seen growing popularity of neural networks for business decision support because of their capabilities for modeling, estimating, and classifying. Compared with other AI problem-solving methods such as expert systems, neural network approaches are especially useful for their ability to learn adaptively from observations. However, neural network learning performed by algorithms such as back-propagation is known to be slow, owing both to the size of the search space involved and to the iterative manner in which the algorithm works. In this paper, we show that the degree of difficulty in neural network learning is inherent in the given set of training examples. We propose a technique for measuring such learning difficulty and then develop a feature construction methodology that transforms the training data so that both the learning speed and the classification accuracy of neural network algorithms are improved. We show the efficacy of the proposed method for financial risk classification, a domain characterized by frequent data noise, lack of functional structure, and high attribute interactions. The empirical studies also provide insights into the structural characteristics of neural networks with respect to the input data used, as well as possible mechanisms for improving learning performance.
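The abstract describes feature construction only at a high level; the sketch below is an illustrative example of the general idea rather than the authors' actual algorithm. It assumes NumPy and scikit-learn are available, uses a hypothetical synthetic dataset in which the class label depends on an XOR-style interaction between two attributes, and adds hand-built conjunction features so a small feed-forward network trained by back-propagation can be compared on the raw versus the transformed inputs.

```python
# Illustrative sketch (not the paper's method): construct compound features
# that encode an attribute interaction, then compare how a small neural
# network learns with and without them. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic data: the class depends on the XOR-style interaction of two
# binary attributes; three noisy attributes are irrelevant distractors.
n = 1000
a = rng.integers(0, 2, size=(n, 2))          # two interacting attributes
noise = rng.normal(size=(n, 3))              # irrelevant noisy attributes
y = np.logical_xor(a[:, 0], a[:, 1]).astype(int)

X_raw = np.hstack([a, noise])

# Feature construction: add the conjunctions a1*a2 and (1-a1)*(1-a2),
# which jointly capture the interaction and make the target concept
# nearly linearly separable for the network.
constructed = np.column_stack([a[:, 0] * a[:, 1],
                               (1 - a[:, 0]) * (1 - a[:, 1])])
X_new = np.hstack([X_raw, constructed])

def epochs_to_fit(X, y):
    """Train a small MLP with back-propagation; report epochs and accuracy."""
    net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
    net.fit(X, y)
    return net.n_iter_, net.score(X, y)

print("raw attributes:          epochs=%d accuracy=%.2f" % epochs_to_fit(X_raw, y))
print("with constructed feats:  epochs=%d accuracy=%.2f" % epochs_to_fit(X_new, y))
```

The intent of the comparison is simply to show the mechanism the abstract alludes to: when interacting attributes are pre-combined into constructed features, the network's search space is effectively simplified, which typically shows up as fewer training epochs and higher training accuracy on data of this kind.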