Article ID: iaor200214
Country: United States
Volume: 118
Issue: 1/2
Start Page Number: 69
End Page Number: 113
Publication Date: Apr 2000
Journal: Artificial Intelligence
Authors: Craven M., DiPasquo D., Freitag D., McCallum A., Mitchell T., Nigam K., Slattery S.
Keywords: world-wide-web
The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer-understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and would promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) and relations (e.g., employed by, produced by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.
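The abstract names the system's two inputs (an ontology and labeled training pages) but not the specific learning algorithms. As one concrete illustration of the page-classification half of such a system, the following is a minimal Python sketch: an ontology of classes and relations, a few labeled training pages, and a multinomial naive Bayes bag-of-words classifier that assigns new pages to ontology classes. All class names, relations, and training snippets below are hypothetical placeholders, not taken from the article's actual system or data.

```python
import math
from collections import Counter, defaultdict

# Hypothetical ontology input: class names and typed relations between them.
ONTOLOGY_CLASSES = ["course", "person", "project"]
ONTOLOGY_RELATIONS = [
    ("instructor_of", "person", "course"),
    ("member_of", "person", "project"),
]

# Hypothetical labeled training data: (page text, class label) pairs
# standing in for labeled regions of hypertext.
TRAINING_PAGES = [
    ("syllabus lecture homework exam grading", "course"),
    ("office hours phone email advisor", "person"),
    ("research funding publications group members", "project"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayesPageClassifier:
    """Multinomial naive Bayes over bag-of-words page representations."""

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing  # Laplace smoothing constant

    def fit(self, pages):
        self.word_counts = defaultdict(Counter)  # class -> token counts
        self.class_counts = Counter()            # class -> number of pages
        self.vocab = set()
        for text, label in pages:
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.class_counts[label] += 1
            self.vocab.update(tokens)
        self.total_pages = sum(self.class_counts.values())

    def predict(self, text):
        tokens = tokenize(text)
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(self.class_counts[label] / self.total_pages)
            total_words = sum(self.word_counts[label].values())
            denom = total_words + self.smoothing * len(self.vocab)
            for tok in tokens:
                count = self.word_counts[label][tok] + self.smoothing
                score += math.log(count / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

classifier = NaiveBayesPageClassifier()
classifier.fit(TRAINING_PAGES)
print(classifier.predict("lecture notes and homework for the exam"))  # -> "course"
```

Relation extraction (e.g., instructor_of) would then build on such page-level predictions, for instance by examining the hyperlinks that connect pages assigned to the appropriate classes.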