|LREC 2000 2nd International Conference on Language Resources & Evaluation
|A Comparison of Summarization Methods Based on Task-based Evaluation
|Hajime Mochizuki (School of Information Science, Japan Advanced Institute of Science and Technology, Tatsunokuchi, Ishikawa 923-1292, Japan, E-mail: email@example.com)
Manabu Okumura (School of Information Science, Japan Advanced Institute of Science and Technology, Tatsunokuchi, Ishikawa 923-1292, Japan, E-mail: firstname.lastname@example.org)
|Automatic Text Summarization, Information Retrieval, Task-Based Evaluation
|Session EO2 - Evaluation of Tools
|A task-based evaluation scheme has been adopted as a new method of evaluating automatic text summarization systems. It evaluates the performance of a summarization system on a given task, such as information retrieval or text categorization. This paper compares ten different summarization methods on information retrieval tasks. To evaluate system performance, we measure subjects’ speed and accuracy in judging the relevance of texts from their summaries. We also analyze the similarity of the summaries in order to investigate how similar the methods are to one another. Furthermore, we analyze what factors can affect the evaluation results and describe the problems that arose from our experimental design, with the aim of establishing a better evaluation scheme.