LREC 2000 2nd International Conference on Language Resources & Evaluation  


Title: Dialogue Annotation for Language Systems Evaluation
Authors: Charfuelan Marcela (Dep. SSR, ETSIT-UPM, Ciudad Universitaria, Madrid, Spain, marcela@gaps.ssr.upm.es)
Relano Gil Jose (Dep. SSR, ETSIT-UPM, Ciudad Universitaria, Madrid, Spain, jrelanio@gaps.ssr.upm.es)
Rodriguez Gancedo M. Carmen (Dep. SSR, ETSIT-UPM, Ciudad Universitaria, Madrid, Spain, mcarmen@gaps.ssr.upm.es)
Tapias Merino Daniel (Speech Technology Group, Telefonica Investigacion y Desarrollo, S.A., C. Emilio Vargas 6, 28043 Madrid, Spain, daniel@craso.tid.es)
Gomez Luis Hernandez (Dep. SSR, ETSIT-UPM, Ciudad Universitaria, Madrid, Spain, luis@gaps.ssr.upm.es)
Keywords: Annotated Dialogue Corpora, Annotation Frameworks, Annotation Tools, Benchmark, Log Files, SLDSs Evaluation Procedures
Session: SP4 - Tools for Evaluation and Processing of Spoken Language Resources
Abstract: The evaluation of Natural Language Processing (NLP) systems remains an open problem, and further progress is needed from the research community to establish general evaluation frameworks. In this paper we present an experimental multilevel annotation process to be followed during the testing phase of Spoken Language Dialogue Systems (SLDSs). Based on this process, we address issues related to an annotation scheme for evaluation dialogue corpora, along with particular annotation tools and processes.
