Title: Face2Text: collecting an annotated image description corpus for the generation of rich face descriptions
Authors: Gatt, Albert
Tanti, Marc
Muscat, Adrian
Paggio, Patrizia
Farrugia, Reuben
Borg, Claudia
Camilleri, Kenneth
Rosner, Michael
Keywords: Optical data processing
Natural language processing (Computer science)
Image transmission
Natural language generation (Computer science)
Issue Date: 2018-05
Publisher: European Language Resources Association (ELRA)
Citation: Gatt, A., Tanti, M., Muscat, A., Paggio, P., Farrugia, R. A., Borg, C., ... & van der Plas, L. (2018). Face2Text: collecting an annotated image description corpus for the generation of rich face descriptions. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Abstract: The past few years have witnessed renewed interest in NLP tasks at the interface between vision and language. One intensively-studied problem is that of automatically generating text from images. In this paper, we extend this problem to the more specific domain of face description. Unlike scene descriptions, face descriptions are more fine-grained and rely on attributes extracted from the image, rather than objects and relations. Given that no data exists for this task, we present an ongoing crowdsourcing study to collect a corpus of descriptions of face images taken ‘in the wild’. To gain a better understanding of the variation we find in face description and the possible issues that this may raise, we also conducted an annotation study on a subset of the corpus. Primarily, we found descriptions to refer to a mixture of attributes, not only physical but also emotional and inferential, which is bound to create further challenges for current image-to-text methods.
Appears in Collections:Scholarly Works - InsLin

Files in This Item:
File: L18-1525.pdf (316.84 kB, Adobe PDF)

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.