SUMMARY: Session O32-EW Coreference, Summarisation and Evaluation


Title Automated Summarization Evaluation with Basic Elements.
Authors E. Hovy, C. Lin, L. Zhou, J. Fukumoto
Abstract As part of evaluating a summary automatically, it is usual to determine how much of the content of one or more human-produced "ideal" summaries it contains. Past automated methods such as ROUGE perform this comparison using fixed word n-grams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that address some of the shortcomings of n-grams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments.
Keywords Basic Elements, evaluation, correlation, DUC
Full paper Automated Summarization Evaluation with Basic Elements.
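For readers unfamiliar with the two comparison styles the abstract contrasts, the following Python sketch illustrates them side by side: a ROUGE-style n-gram recall computed over surface word sequences, and a Basic-Element-style recall computed over small (head, modifier, relation) content triples. This is a minimal illustration only: the triple format, the exact-match rule, and the example sentences are assumptions made for the demonstration; the actual BE method extracts its units from syntactic parses and supports matching strategies not shown here.

    from collections import Counter

    def ngrams(tokens, n):
        # Multiset of contiguous word n-grams in a token list.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def ngram_recall(candidate, reference, n=2):
        # ROUGE-style recall: fraction of the reference's n-grams that
        # also occur in the candidate, with counts clipped.
        cand = ngrams(candidate.lower().split(), n)
        ref = ngrams(reference.lower().split(), n)
        if not ref:
            return 0.0
        overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
        return overlap / sum(ref.values())

    def be_recall(candidate_bes, reference_bes):
        # Basic-Element-style recall over (head, modifier, relation) triples,
        # here assumed to be pre-extracted (e.g. from a dependency parse).
        if not reference_bes:
            return 0.0
        return sum(1 for be in reference_bes if be in candidate_bes) / len(reference_bes)

    if __name__ == "__main__":
        ref = "two Libyans were indicted for the Lockerbie bombing"
        cand = "two Libyans indicted over the Lockerbie bombing"
        print(f"bigram recall: {ngram_recall(cand, ref):.2f}")

        # Hypothetical hand-written BE triples for the same sentence pair.
        ref_bes = {("libyans", "two", "mod"), ("indicted", "libyans", "obj"),
                   ("bombing", "lockerbie", "mod")}
        cand_bes = {("libyans", "two", "mod"), ("indicted", "libyans", "obj"),
                    ("bombing", "lockerbie", "mod")}
        print(f"BE recall: {be_recall(cand_bes, ref_bes):.2f}")

On this pair the paraphrase drops the bigram recall to 0.43 while the BE recall stays at 1.00, which conveys the intuition behind preferring small content units over fixed surface n-grams.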