Summary of the paper

Title: Involving Language Professionals in the Evaluation of Machine Translation
Authors: Eleftherios Avramidis, Aljoscha Burchardt, Christian Federmann, Maja Popović, Cindy Tscherwinka and David Vilar
Abstract: Significant breakthroughs in machine translation only seem possible if human translators are brought into the loop. While automatic evaluation and scoring mechanisms such as BLEU have enabled the fast development of systems, it is not clear how systems can meet real-world (quality) requirements in industrial translation scenarios today. The taraXÜ project paves the way for wide usage of hybrid machine translation outputs through various feedback loops in system development. In a consortium of research and industry partners, the project integrates human translators into the development process for rating and post-editing of machine translation outputs, thus collecting feedback for possible improvements.
Topics: Machine Translation, Speech-to-Speech Translation, Evaluation Methodologies, Usability, User Satisfaction
Full paper: Involving Language Professionals in the Evaluation of Machine Translation
BibTeX:
@InProceedings{AVRAMIDIS12.294,
  author = {Eleftherios Avramidis and Aljoscha Burchardt and Christian Federmann and Maja Popović and Cindy Tscherwinka and David Vilar},
  title = {Involving Language Professionals in the Evaluation of Machine Translation},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
 }