Summary of the paper

Title: Benchmarking Textual Annotation Tools for the Semantic Web
Authors: Diana Maynard
Abstract: This paper investigates the state of the art in automatic textual annotation tools, and examines the extent to which they are ready for use in the real world. We define some benchmarking criteria for measuring the usability of annotation tools, and examine those factors which are particularly important for a real user to be able to determine which is the most suitable tool for their use. We discuss factors such as usability, accessibility, interoperability and scalability, and evaluate a set of annotation tools according to these factors. Finally, we draw some conclusions about the current state of research in annotation and make some suggestions for the future.
Language: Language-independent
Topics: Named Entity Recognition, Information Extraction, Information Retrieval, Tools, systems, applications
Full paper: Benchmarking Textual Annotation Tools for the Semantic Web
Slides: -
Bibtex @InProceedings{MAYNARD08.11,
  author = {Diana Maynard},
  title = {Benchmarking Textual Annotation Tools for the Semantic Web},
  booktitle = {Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)},
  year = {2008},
  month = {may},
  date = {28-30},
  address = {Marrakech, Morocco},
  editor = {Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Daniel Tapias},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {2-9517408-4-0},
  note = {},
  language = {english}
}
