Title

Proposal for Evaluating Ontology Refinement Methods

Authors

Enrique Alfonseca (Departamento de Ingenieria Informatica, Universidad Autonoma de Madrid, 28049 Madrid, Spain)

Suresh Manandhar (Computer Science Department, University of York, YO10 5DD York, U.K.)

Session

EP1: Evaluation

Abstract

Ontologies are a widely used tool for Knowledge Representation, but the effort required to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added to the ontology, and another in which only the high-frequency unknown terms have been annotated.

Keywords

Ontology, Ontology learning, Ontology refinement, Benchmark corpora, Evaluation

Full Paper

38.pdf