Summary of the paper

Title: Language Modeling Approach for Retrieving Passages in Lecture Audio Data
Authors: Koichiro Honda and Tomoyosi Akiba
Abstract: Spoken Document Retrieval (SDR) is a promising technology for enhancing the utility of spoken materials. After the spoken documents have been transcribed by using a Large Vocabulary Continuous Speech Recognition (LVCSR) decoder, a text-based ad hoc retrieval method can be applied directly to the transcribed documents. However, recognition errors will significantly degrade the retrieval performance. To address this problem, we have previously proposed a method that aimed to fill the gap between automatically transcribed text and correctly transcribed text by using a statistical translation technique. In this paper, we extend the method by (1) using neighboring context to index the target passage, and (2) applying a language modeling approach for document retrieval. Our experimental evaluation shows that context information can improve retrieval performance, and that the language modeling approach is effective in incorporating context information into the proposed SDR method, which uses a translation model.
Topics: Speech resource/database, Information Extraction, Information Retrieval, Machine Translation, Speech-to-Speech Translation
Bibtex: @InProceedings{HONDA10.462,
  author = {Koichiro Honda and Tomoyosi Akiba},
  title = {Language Modeling Approach for Retrieving Passages in Lecture Audio Data},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year = {2010},
  month = {may},
  date = {19-21},
  address = {Valletta, Malta},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis and Mike Rosner and Daniel Tapias},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {2-9517408-6-7},
  language = {english}
}