Summary of the paper

Title: Towards Fully Automatic Annotation of Audio Books for TTS
Authors: Olivier Boeffard, Laure Charonnat, Sébastien Le Maguer and Damien Lolive
Abstract: Building speech corpora is the first and crucial step for every text-to-speech synthesis system. Nowadays, the use of statistical models requires very large corpora that need to be recorded, transcribed, annotated and segmented to be usable. The variety of corpora required by recent applications (in content, style, etc.) makes the use of existing digital audio resources very attractive. Among the available resources, audiobooks are particularly interesting given their quality. In this framework, we propose a complete acquisition, segmentation and annotation chain for audiobooks that tends to be fully automatic. The proposed process relies on a data structure, Roots, which establishes relations between the different annotation levels, each represented as a sequence of items. This methodology has been applied successfully to 11 hours of speech extracted from an audiobook. A manual check of part of the corpus shows the efficiency of the process.
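The abstract describes annotation levels as sequences of items linked by relations. A minimal sketch of that idea, assuming a word level aligned with a phone level (class names, the example items and the alignment pairs are illustrative, not the actual Roots toolkit API):

```python
# Hypothetical sketch of multi-level annotation sequences connected by
# alignment relations, in the spirit of the Roots structure described
# in the abstract. All names here are illustrative assumptions.

class Sequence:
    """One annotation level: an ordered list of items (words, phones, ...)."""
    def __init__(self, name, items):
        self.name = name
        self.items = list(items)

class Relation:
    """Links a source sequence to a target sequence via index pairs (i, j):
    source.items[i] is aligned with target.items[j]."""
    def __init__(self, source, target, pairs):
        self.source, self.target, self.pairs = source, target, pairs

    def aligned(self, i):
        """Return the target items aligned with source item i."""
        return [self.target.items[j] for (s, j) in self.pairs if s == i]

# Example: two words aligned with their phones.
words = Sequence("words", ["hello", "world"])
phones = Sequence("phones", ["HH", "AH", "L", "OW", "W", "ER", "L", "D"])
rel = Relation(words, phones, [(0, 0), (0, 1), (0, 2), (0, 3),
                               (1, 4), (1, 5), (1, 6), (1, 7)])

print(rel.aligned(0))  # phones of "hello": ['HH', 'AH', 'L', 'OW']
print(rel.aligned(1))  # phones of "world": ['W', 'ER', 'L', 'D']
```

Chaining such relations (e.g. words to syllables, syllables to phones) is what lets an annotation chain navigate between levels without duplicating data at each one.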
Topics: Corpus (creation, annotation, etc.), Speech resource/database, Tools, systems, applications
BibTeX:
@InProceedings{BOEFFARD12.632,
  author = {Olivier Boeffard and Laure Charonnat and Sébastien Le Maguer and Damien Lolive},
  title = {Towards Fully Automatic Annotation of Audio Books for TTS},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
}