Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104595
Title: Face2Text revisited : improved data set and baseline results
Authors: Tanti, Marc
Abdilla, Shaun
Muscat, Adrian
Borg, Claudia
Farrugia, Reuben A.
Gatt, Albert
Keywords: Natural language generation (Computer science)
Face perception
Visual perception
Issue Date: 2022
Publisher: European Language Resources Association (ELRA)
Citation: Tanti, M., Abdilla, S., Muscat, A., Borg, C., Farrugia, R. A., & Gatt, A. (2022). Face2Text revisited : improved data set and baseline results. Workshop on People in Vision, Language, and the Mind, Marseille, 41-47.
Abstract: Current image description generation models do not transfer well to the task of describing human faces. To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set. We describe the properties of this data set, and present results from a face description generator trained on it, which explores the feasibility of using transfer learning from VGGFace/ResNet CNNs. Comparisons are drawn through both automated metrics and human evaluation by 76 English-speaking participants. The descriptions generated by the VGGFace-LSTM + Attention model are closest to the ground truth according to human evaluation whilst the ResNet-LSTM + Attention model obtained the highest CIDEr and CIDEr-D results (1.252 and 0.686 respectively). Together, the new data set and these experimental results provide data and baselines for future work in this area.
URI: https://www.um.edu.mt/library/oar/handle/123456789/104595
Appears in Collections:Scholarly Works - InsLin

Files in This Item:
File: Face2Text_revisited_improved_data_set_and_baseline_results_2022.pdf
Size: 1.04 MB
Format: Adobe PDF
Access: Restricted Access


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.