International Workshop OntoImage’2006: Language Resources for Content-Based Image Retrieval

in conjunction with LREC’2006



Deadline for Abstract Submissions: February 20, 2006
Submissions by email to:


The “semantic gap” between low-level image descriptions (texture, shape, color) and what people perceive in images is recognized as a difficult, unsolved problem by researchers in content-based image retrieval. With the growing number of images being created and stored every day, progress must be made on automatic methods for indexing and retrieving images based on their content. The vocabulary that would be most effective for this indexing is still poorly understood.

A number of approaches to using text for image retrieval have been proposed: hand assignment of descriptions (e.g., FLICKR, GRAPHICOBSESSION); extraction of terms from captions and from text linked to images (GOOGLE); and association of word and image characteristics [Barnard et al., 2003; Dance et al., 2004]. More recently, ontologies have been suggested as a suitable representation for image indexing [Town & Sinclair, 2004]. Much work remains to be done to understand the problem from a lexical point of view.
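
As an informal illustration of the caption-based approach mentioned above (and not a description of any of the cited systems), the following Python sketch derives candidate index terms from an image caption by tokenizing the text and filtering common stopwords; the caption, stopword list, and simple frequency weighting are invented for the example.

    # Illustrative sketch only: derive candidate index terms from an image
    # caption via tokenization, stopword filtering, and raw frequency counts.
    # The caption and stopword list are invented examples, not taken from
    # any system cited above.
    import re
    from collections import Counter

    STOPWORDS = {"a", "an", "the", "of", "in", "on", "at", "against", "and", "with"}

    def caption_terms(caption):
        """Return candidate index terms with raw frequency weights."""
        tokens = re.findall(r"[a-z]+", caption.lower())
        return Counter(t for t in tokens if t not in STOPWORDS)

    if __name__ == "__main__":
        print(caption_terms("A bird in flight against a clear blue sky"))
        # Counter({'bird': 1, 'flight': 1, 'clear': 1, 'blue': 1, 'sky': 1})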

This half-day workshop will present up to 9 papers examining problems concerning the lexical characteristics of the language that can or should be used to describe image content. The goal of the workshop is to address the following questions:

· What vocabulary should be used to describe images?
· How can vocabulary items be automatically associated with images for indexing and retrieval?
· What visual characteristics of images (color, size, location, shape, texture) can be discovered by text processing?
· What elements in a lexicon correspond to visualisable objects?
· What language resources exist, and what others are needed, for content-based image retrieval?
· Can specific ontologies be created and automatically populated for content-based image retrieval?
· Can text processing identify which objects would appear against which background (e.g., a bird against the sky)?




References

K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. Blei, and M. Jordan, "Matching words and pictures", Journal of Machine Learning Research, 3:1107–1135, 2003.

C. Dance, J. Willamowski, L. Fan, C. Bray, and G. Csurka, "Visual categorization with bags of keypoints", ECCV International Workshop on Statistical Learning in Computer Vision, Prague, 2004.

C. Town and D. Sinclair, "Language-based querying of image collections on the basis of an extensible ontology", Image and Vision Computing, 22(3):251–267, March 2004.


Submission of Papers

Extended abstracts of about 1,000 words, describing research addressing one of the questions above, should be submitted as Word, PDF, or ASCII documents by email to the following address:

Final papers should be no more than 4 pages.



Important Dates

Deadline for abstracts: February 20, 2006
Notification of acceptance: March 10, 2006
Final papers due: March 31, 2006


Location

The workshop will take place on the afternoon of May 22, 2006, at the Magazzini del Cotone Conference Center, Genoa, Italy, co-located with LREC’2006.



Organizers

Gregory Grefenstette, Commissariat à l’Energie Atomique (CEA LIST)

Mark Sanderson, Department of Information Studies, University of Sheffield

Françoise Preteux, Institut National des Télécommunications (ARTEMIS)




Questions about the workshop can be addressed to:


