Article ID: | iaor200953666 |
Country: | United States |
Volume: | 20 |
Issue: | 1 |
Start Page Number: | 46 |
End Page Number: | 54 |
Publication Date: | Jan 2008 |
Journal: | INFORMS Journal on Computing |
Authors: | Sheng Olivia R Liu, Chang Namsik |
Keywords: | information, decision: rules |
One widely used knowledge-discovery technique is a decision-tree inducer that generates classifiers in the form of a single decision tree. As the number of prespecified decision-outcome classes increases, however, the trees so generated often become overly complex with regard to the number of leaves and nodes, and classification accuracy consequently drops. In contrast, the multi-decision-tree induction (MDTI) approach, which constructs a separate decision tree for each decision-outcome class, may reduce rule cardinality and improve both rule conciseness and classification accuracy relative to a traditional single-decision-tree inducer. This paper analytically and empirically compares the two techniques on these measures. The analysis and results show that, in some situations, MDTI outperforms the traditional approach in terms of the cardinality, conciseness, and classification accuracy of the acquired knowledge structures.
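A minimal sketch of the contrast described above, assuming a one-vs-rest construction as a stand-in for the general MDTI idea of building one tree per decision-outcome class; the paper's actual MDTI procedure, datasets, and evaluation are not reproduced here, and the use of scikit-learn and the iris data is purely illustrative.

```python
# Illustrative sketch only (not the authors' MDTI implementation):
# contrast a single multiclass decision tree with a one-vs-rest scheme
# that grows one binary tree per decision-outcome class.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Traditional approach: one decision tree covering all classes at once.
single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# MDTI-style approach (approximated here as one-vs-rest):
# one binary tree per class, combined at prediction time.
per_class = OneVsRestClassifier(DecisionTreeClassifier(random_state=0)).fit(X_tr, y_tr)

# Compare classification accuracy and tree complexity (leaf counts).
print("single tree accuracy:    ", accuracy_score(y_te, single.predict(X_te)))
print("per-class trees accuracy:", accuracy_score(y_te, per_class.predict(X_te)))
print("single tree leaves:      ", single.get_n_leaves())
print("leaves across class trees:",
      sum(est.get_n_leaves() for est in per_class.estimators_))
```

On a given dataset, the per-class trees may individually be smaller and more concise than the single multiclass tree, which is the kind of trade-off in cardinality, conciseness, and accuracy the paper examines.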