Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/22385
Title: What is the role of recurrent neural networks (RNNs) in an image caption generator?
Authors: Tanti, Marc
Gatt, Albert
Camilleri, Kenneth P.
Keywords: Computational linguistics
Image analysis
Natural language processing (Computer science)
Linguistic analysis (Linguistics)
Corpora (Linguistics)
Issue Date: 2017
Publisher: Cornell University
Citation: Tanti, M., Gatt, A., & Camilleri, K. P. (2017). What is the role of recurrent neural networks (RNNs) in an image caption generator? arXiv preprint arXiv:1708.02043.
Abstract: In neural image captioning systems, a recurrent neural network (RNN) is typically viewed as the primary 'generation' component. This view suggests that the image features should be 'injected' into the RNN. This is in fact the dominant view in the literature. Alternatively, the RNN can instead be viewed as only encoding the previously generated words. This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be 'merged' with the image features at a later stage. This paper compares these two architectures. We find that, in general, late merging outperforms injection, suggesting that RNNs are better viewed as encoders, rather than generators.
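To make the contrast concrete, the sketch below outlines the two conditioning strategies described in the abstract. It is not the authors' code: it assumes Keras, and the layer sizes, vocabulary size, caption length, and image-feature dimension are all hypothetical. It shows only one inject variant (image features concatenated with the word embedding at every timestep) next to late merging, where the image meets the caption prefix only after the RNN.

# Minimal sketch (not the authors' implementation) of 'inject' vs. 'merge'
# conditioning in a neural caption generator, assuming Keras.
from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense,
                                     RepeatVector, Concatenate, add)
from tensorflow.keras.models import Model

vocab_size, max_len, img_dim = 10000, 30, 4096   # hypothetical values

img_in = Input(shape=(img_dim,))                  # precomputed image features
word_in = Input(shape=(max_len,))                 # caption prefix (word indices)

# --- Inject: image features enter the RNN together with the words ---
img_vec = Dense(256, activation='relu')(img_in)   # project image features
img_seq = RepeatVector(max_len)(img_vec)          # one copy per timestep
word_emb = Embedding(vocab_size, 256)(word_in)
rnn_in = Concatenate()([img_seq, word_emb])       # image is part of the RNN input
rnn_out = LSTM(256)(rnn_in)
inject_out = Dense(vocab_size, activation='softmax')(rnn_out)
inject_model = Model([img_in, word_in], inject_out)

# --- Merge: the RNN encodes only the word prefix; image features are
# --- combined with the RNN's final representation outside the RNN ---
prefix_enc = LSTM(256)(Embedding(vocab_size, 256)(word_in))
merged = add([Dense(256, activation='relu')(img_in), prefix_enc])
merge_out = Dense(vocab_size, activation='softmax')(merged)
merge_model = Model([img_in, word_in], merge_out)

In both sketches the output is a distribution over the next word; captions would be generated by repeatedly feeding the growing prefix back in. The structural difference is only where the image features are introduced: inside the RNN's input (inject) or after the RNN has produced its prefix encoding (merge).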
URI: https://www.um.edu.mt/library/oar//handle/123456789/22385
Appears in Collections:Scholarly Works - FacEngSCE
Scholarly Works - InsLin

Files in This Item:
File: inlg2017-rnn.pdf (163.47 kB, Adobe PDF)

