

Keynote Speeches and Invited Talk


Keynote Speech 1: Sören Auer - Thursday, May 24, 9:00 - Chairperson: Jan Odijk

The Web of Data: Decentralized, collaborative, interlinked and interoperable - Sören Auer and Sebastian Hellmann

Recently the publishing and integration of structured data on the Web have gained traction with initiatives such as Linked Data and RDFa. In this article we outline some fundamental principles and aspects of the emerging Web of Data. We stress the importance of open licenses as an enabler for collaboration, sharing and reuse of structured data on the Web. We discuss some features of the RDF data model and its suitability for integrating structured data on the Web. When using the Web as a medium for data integration, two particularly crucial aspects are performance and scalability as well as conceptual interoperability. Last but not least, we outline our vision of a Web of interlinked linguistic resources, which includes the establishment of a distributed ecosystem of heterogeneous NLP tools and services by means of structural, conceptual and access interoperability, employing background knowledge from the Web of Data.

Invited Talk: Kemal Oflazer - Thursday, May 24, 13:10 - Chairperson: Mehmed Özkan

The Turkish Language and its Challenges for Language Processing - Kemal Oflazer

Turkish is the language of over 70 million people in and around Turkey, and the Turkic language family has over 150 million speakers. Yet work on Turkish NLP has a relatively short history. This talk aims to present an overview of the aspects of Turkish that make it interesting for NLP in general, and the challenges one faces as the language evolves relatively rapidly. The talk will cover the current state of Turkish NLP and the computational resources available to the community.

Keynote Speech 2: Catherine Pelachaud - Friday, May 25, 9:00 - Chairperson: Asuncion Moreno

Socio-emotional virtual agent - Catherine Pelachaud

In this talk I will present our current work toward endowing virtual agents with communicative and emotional capabilities. I will start by describing an interactive system in which an agent dialogues with human users in an emotionally colored manner. The agent is an active listener and can display a large range of back-channels. Four agents have been defined with specific emotional traits. They are characterized not only by specific facial models but also by their behaviors. They can also mimic the user's expressions, such as smiles and head movements. Through their behaviors, the agents convey various attitudes and levels of engagement. Lately we have concentrated our study on smiles. Smiles can serve many communicative functions: a smile can signal politeness, happiness or embarrassment. These smiles differ in their morphology, being of shorter or longer duration, symmetric or not. Agents are perceived as having different attitudes depending on which of these smiles they display.

Update Tuesday 8 May 2012


