LREC 2000 2nd International Conference on Language Resources & Evaluation  


Title How to Evaluate Your Question Answering System Every Day ... and Still Get Real Work Done
Authors Breck Eric J. (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, ebreck@mitre.org)
Burger John D. (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, john@mitre.org)
Ferro Lisa (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, lferro@mitre.org)
Hirschman Lynette (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, lynette@mitre.org)
House David (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, dhouse@mitre.org)
Light Marc (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, light@mitre.org)
Mani Inderjeet (The MITRE Corporation, 202 Burlington Rd., Bedford, Massachusetts 01730-1420 USA, imani@mitre.org)
Keywords Automatic Evaluation, Question Answering, Reading Comprehension
Session Session EO5 - Information Retrieval and Question Answering Evaluation
Abstract In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessments of answer correctness in question answering tasks. Qaviar judges a response by computing its recall against the stemmed content words in the human-generated answer key, and counts the answer correct if the recall exceeds a given threshold. We determined that the answer correctness predicted by Qaviar agreed with the human judgments 93% to 95% of the time. Forty-one question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's tau of 0.920, compared to a correlation of 0.956 between the human assessors themselves on the same data.
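
The recall-threshold judgment described in the abstract is simple to sketch. The following Python fragment is a minimal illustration, not MITRE's actual Qaviar code: the Porter stemmer from NLTK, the small inline stopword list, the regex tokenizer, and the 0.5 threshold are all assumptions made for the example.

```python
# Minimal sketch of a Qaviar-style answer judgment (illustrative only).
# Assumptions: NLTK's Porter stemmer, a tiny inline stopword list, a
# regex tokenizer, and a 0.5 recall threshold -- none of these are
# documented details of MITRE's system.
import re
from nltk.stem import PorterStemmer

STOPWORDS = {"a", "an", "the", "of", "in", "on", "at", "to", "and", "or",
             "is", "are", "was", "were", "be", "it", "this", "that"}
stemmer = PorterStemmer()

def content_stems(text):
    """Lowercase and tokenize, drop stopwords, stem what remains."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {stemmer.stem(tok) for tok in tokens if tok not in STOPWORDS}

def judge(response, answer_key, threshold=0.5):
    """Count the response correct if its recall of the answer key's
    stemmed content words meets or exceeds the threshold."""
    key = content_stems(answer_key)
    if not key:
        return False  # degenerate key: nothing to recall
    recall = len(key & content_stems(response)) / len(key)
    return recall >= threshold

# Example: the response recalls both content stems of the key ("born", "1879").
print(judge("Einstein was born in 1879 in Ulm, Germany.", "born in 1879"))  # True
```

Comparing sets of stemmed content words makes the measure insensitive to word order and inflection, which is what allows a bag-of-words recall score to approximate a human correctness judgment.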