Title

An Annotated Japanese Sign Language Corpus

Authors

Atsuko Koizumi (Central Research Laboratory, Hitachi, Ltd. 1-280, Higashi-koigakubo, Kokubunji-shi, Tokyo 185-8601, Japan)

Hirohiko Sagawa (Central Research Laboratory, Hitachi, Ltd. 1-280, Higashi-koigakubo, Kokubunji-shi, Tokyo 185-8601, Japan)

Masaru Takeuchi (Central Research Laboratory, Hitachi, Ltd. 1-280, Higashi-koigakubo, Kokubunji-shi, Tokyo 185-8601, Japan)

Session

MMP2: Resources Of The Sign Languages

Abstract

Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese Sign Language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs) - including head movements, facial actions, and posture - that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we discuss methods for collecting elicited data, the annotation required for grammatical analysis, and the corpus tool required for annotation and grammatical analysis. As a result of annotating 2,800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, involving the head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), and body posture (five kinds). We use this corpus for designing and testing the algorithm and grammatical rules of the sign-language translation system under development.
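
To illustrate how such multi-tier annotations might be represented, here is a minimal sketch in Python. The abstract does not specify the corpus tool's data format, so everything below is an assumption: the class and field names (NMSAnnotation, AnnotatedUtterance, the articulator table) are hypothetical, chosen only to mirror the articulator categories and counts reported above.

```python
from dataclasses import dataclass, field

# Articulator categories and the number of NMS kinds reported in the
# abstract (7 + 6 + 18 + 1 + 4 + 7 + 2 + 5 = 50 kinds in total).
NMS_ARTICULATORS = {
    "head": 7, "jaw": 6, "mouth": 18, "cheeks": 1,
    "eyebrows": 4, "eyes": 7, "eye_gaze": 2, "body_posture": 5,
}

@dataclass
class NMSAnnotation:
    """One non-manual sign on a single articulator tier (hypothetical format)."""
    articulator: str   # one of NMS_ARTICULATORS
    kind: str          # label for the specific NMS, e.g. "head_nod" (hypothetical)
    start_frame: int   # video frame where the NMS begins
    end_frame: int     # video frame where the NMS ends

    def __post_init__(self):
        if self.articulator not in NMS_ARTICULATORS:
            raise ValueError(f"unknown articulator: {self.articulator}")

@dataclass
class AnnotatedUtterance:
    """A video utterance with manual-sign glosses plus parallel NMS tiers."""
    video_id: str
    glosses: list[str] = field(default_factory=list)        # manual signs (lexical tier)
    nms: list[NMSAnnotation] = field(default_factory=list)  # non-manual tiers

# Example: a hypothetical utterance with one gloss and one head-tier NMS.
utt = AnnotatedUtterance(
    video_id="jsl_0001",
    glosses=["GO"],
    nms=[NMSAnnotation("head", "head_nod", start_frame=12, end_frame=30)],
)
print(utt)
```

A tiered record like this keeps the lexical (manual) and grammatical (non-manual) information in parallel, which is what grammatical analysis over 2,800 utterances would require; the actual corpus tool may of course organize the data differently.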

Keywords

Sign language, Japanese sign language, Corpus, Multimodal corpus, Annotation, Non-manual sign

Full Paper

318.pdf