Summary of the paper

Title AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis
Authors Kevin El Haddad, Huseyin Cakmak, Stéphane Dupont and Thierry Dutoit
Abstract It has been shown that adding expressivity and emotional expressions to an agent's communication system improves the quality of interaction between the agent and a human user. In this paper we present a multimodal database of affect bursts, which are very short non-verbal expressions with facial, vocal, and gestural components that are highly synchronized and triggered by an identifiable event. The database contains motion capture and audio data of affect bursts expressing disgust, startle, and surprise, each recorded at three different levels of arousal. It is intended for synthesis purposes, in order to generate affect bursts of these emotions on a continuous arousal scale.
Topics Emotion Recognition/Generation, Speech Resource/Database, Speech Synthesis
Full paper AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis
Bibtex @InProceedings{ELHADDAD16.407,
  author = {Kevin El Haddad and Huseyin Cakmak and Stéphane Dupont and Thierry Dutoit},
  title = {AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
  year = {2016},
  month = {may},
  date = {23-28},
  location = {Portorož, Slovenia},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {978-2-9517408-9-1},
  language = {english}
}