Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104611
Title: Visually grounded generation of entailments from premises
Authors: Jafaritazehjani, Somaye
Gatt, Albert
Tanti, Marc
Keywords: Natural language processing (Computer science)
Semantics
Artificial intelligence
Issue Date: 2019
Publisher: Association for Computational Linguistics
Citation: Jafaritazehjani, S., Gatt, A., & Tanti, M. (2019). Visually grounded generation of entailments from premises. Proceedings of the 12th International Conference on Natural Language Generation, Japan, 178-188.
Abstract: Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis. In this paper, we focus on the generation of hypotheses from premises in a multimodal setting: generating a sentence (the hypothesis) given an image and/or its description (the premise) as input. The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in visual information is beneficial to generation. We compare different neural architectures, showing through automatic and human evaluation that entailments can indeed be generated successfully. We also show that multimodal models outperform unimodal models in this task, albeit marginally.
URI: https://www.um.edu.mt/library/oar/handle/123456789/104611
Appears in Collections:Scholarly Works - InsLin

Files in This Item:
File: Visually_grounded_generation_of_entailments_from_premises_2019.pdf
Description: Restricted Access
Size: 1.09 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.