Title Modelling Legitimate Translation Variation for Automatic Evaluation of MT Quality
Author(s) Bogdan Babych (1), Anthony Hartley (2)

(1) Centre for Translation Studies, University of Leeds, Leeds, LS2 9JT, UK, bogdan@comp.leeds.ac.uk; (2) Centre for Translation Studies, University of Leeds, Leeds, LS2 9JT, UK, a.hartley@leeds.ac.uk

Session O21-EW
Abstract Automatic methods for MT evaluation are often based on the assumption that MT quality is related to some kind of distance between the evaluated text and a professional human translation (e.g., an edit distance or the precision of matched N-grams). However, independently produced human translations are necessarily different, conveying the same content by dissimilar means. Such legitimate translation variation is a serious problem for distance-based evaluation methods, because mismatches do not necessarily mean degradation in MT quality. In this paper we explore the link between legitimate translation variation and statistical measures of a word's salience within a given document, such as tf.idf scores. We show that the use of such scores extends the N-gram distance measures in a way that allows us to accurately predict multiple quality parameters of the text, such as translation adequacy and fluency. However, legitimate translation variation also reveals fundamental limits on the applicability of distance-based MT evaluation methods and on data-driven architectures for MT.
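The idea of extending N-gram distance measures with salience weights can be sketched as follows: words that are frequent across the corpus (and thus likely to vary legitimately between translations, or to match trivially) receive low tf.idf weight, so mismatches on them cost little. This is a minimal illustrative sketch, not the authors' actual metric; the function names, the unigram-only formulation, and the assumption that the scored document appears in the corpus are all simplifications introduced here.

```python
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    # tf.idf salience for each word of doc_tokens, computed against a
    # corpus given as a list of token lists. Assumes every word of
    # doc_tokens occurs in at least one corpus document, so the
    # document frequency is never zero.
    n_docs = len(corpus)
    scores = {}
    for word, tf in Counter(doc_tokens).items():
        df = sum(1 for doc in corpus if word in doc)
        scores[word] = tf * math.log(n_docs / df)
    return scores

def weighted_unigram_precision(candidate, reference, corpus):
    # Unigram precision of a candidate translation against one human
    # reference, with each candidate word weighted by its tf.idf
    # salience: mismatches on high-frequency function words are cheap,
    # mismatches on salient content words are expensive.
    weights = tfidf_scores(candidate, corpus)
    ref_counts = Counter(reference)
    total = matched = 0.0
    for word, count in Counter(candidate).items():
        total += weights[word] * count
        matched += weights[word] * min(count, ref_counts.get(word, 0))
    return matched / total if total else 0.0
```

For example, with a toy corpus in which "the" occurs in every document, a candidate that mistranslates only "the" still scores a perfect weighted precision, whereas mistranslating a rare content word is penalised in proportion to its salience.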
Keyword(s) Machine translation, automatic evaluation, legitimate translation variation, term weighting, translatability limits
Language(s) English, French
Full Paper 707.pdf