Summary of the paper

Title Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests
Authors Takuya Matsuzaki, Akira Fujita, Naoya Todo and Noriko H. Arai
Abstract This paper reports on an experiment in which 795 human participants answered questions taken from second language proficiency tests that had been translated into their native language. The output of three machine translation systems and two different human translations were used as the test material. We classified the translation errors in the questions according to an error taxonomy and analyzed the participants' responses on the basis of the type and frequency of the translation errors. Through this analysis, we identified several types of errors that most deteriorated the accuracy of the participants' answers, their confidence in their answers, and their overall evaluation of the translation quality.
Topics Evaluation Methodologies, Machine Translation, Speech-to-Speech Translation, Usability, User Satisfaction
Full paper Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests
Bibtex @InProceedings{MATSUZAKI16.921,
  author = {Takuya Matsuzaki and Akira Fujita and Naoya Todo and Noriko H. Arai},
  title = {Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
  year = {2016},
  month = {may},
  date = {23-28},
  location = {Portorož, Slovenia},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {978-2-9517408-9-1},
  language = {english}
 }