Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/50207
Title: On architectures for including visual information in neural language models for image description
Authors: Tanti, Marc
Keywords: Neurolinguistics
Neural networks (Computer science)
Natural language processing (Computer science)
Image processing -- Digital techniques
Issue Date: 2019
Citation: Tanti, M. (2019). On architectures for including visual information in neural language models for image description (Doctoral dissertation).
Abstract: A neural language model is a neural network that generates a sentence by suggesting probable next words given a partially complete sentence (a prefix). A recurrent neural network reads in the partial sentence and produces a hidden state vector that represents information about which words can follow. If a likely word from those suggested is selected and appended to the sentence prefix, the word after that can be selected in turn, and so on until a complete sentence is generated in an iterative, word-by-word fashion. Rather than generating random sentences, a neural language model can be conditioned to generate descriptions of images by providing visual information in addition to the sentence prefix. This visual information can be incorporated into the language model at different points of entry, resulting in different neural architectures. We identify four main architectures, which we call init-inject, pre-inject, par-inject, and merge. We analyse these four architectures and conclude that the best-performing one is init-inject, in which the visual information is injected into the initial state of the recurrent neural network. We confirm this using both automatic evaluation measures and human annotation. We then analyse how much influence the image has on each architecture. This is done by measuring how different a model's output probabilities are when a partial sentence is combined with a completely different image from the one it is meant to be combined with. We find that init-inject tends to quickly become less influenced by the image as more words are generated. A different architecture called merge, in which the visual information is merged with the recurrent neural network's hidden state vector just prior to output, loses visual influence much more slowly, suggesting that it would work better for generating longer sentences.
We also observe that the merge architecture can have its recurrent neural network pre-trained as a text-only language model (transfer learning) rather than initialised randomly as usual. This results in even better performance than the other architectures, provided that the source language model is not too good at language modelling, or it will overspecialise and become less effective at image description generation. Our work opens up new avenues of research in neural architectures, explainable AI, and transfer learning.
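The word-by-word decoding loop and the init-inject/merge distinction described above can be sketched in a few lines of code. The following is a minimal toy illustration only — the weights are random and untrained, and the vocabulary, dimensions, and function names are all hypothetical, not the dissertation's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<start>", "<end>", "a", "dog", "runs"]  # made-up toy vocabulary
HID = 8  # hidden state size (image vector assumed the same size here)
E = rng.normal(size=(len(VOCAB), HID))              # word embeddings
W_h = rng.normal(size=(HID, HID)) * 0.1             # recurrent weights
W_x = rng.normal(size=(HID, HID)) * 0.1             # input weights
W_out = rng.normal(size=(HID, len(VOCAB))) * 0.1            # output (init-inject)
W_out_merge = rng.normal(size=(2 * HID, len(VOCAB))) * 0.1  # output (merge)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(h, word_idx):
    """One simple (Elman-style) RNN step over one word of the prefix."""
    return np.tanh(h @ W_h + E[word_idx] @ W_x)

def next_word_probs(image_vec, prefix, architecture):
    """Probabilities over the next word, given an image and a sentence prefix."""
    if architecture == "init-inject":
        h = image_vec                     # image enters as the initial state
        for w in prefix:
            h = step(h, VOCAB.index(w))
        return softmax(h @ W_out)
    elif architecture == "merge":
        h = np.zeros(HID)                 # the RNN itself never sees the image
        for w in prefix:
            h = step(h, VOCAB.index(w))
        joint = np.concatenate([h, image_vec])  # merge just before output
        return softmax(joint @ W_out_merge)
    raise ValueError(architecture)

def generate(image_vec, architecture, max_len=5):
    """Iterative generation: pick a likely word, append it, repeat."""
    sentence = ["<start>"]
    while len(sentence) < max_len:
        p = next_word_probs(image_vec, sentence, architecture)
        word = VOCAB[int(np.argmax(p))]   # greedy choice of the next word
        sentence.append(word)
        if word == "<end>":
            break
    return sentence

image = rng.normal(size=HID)              # stand-in for extracted image features
print(generate(image, "init-inject"))
print(generate(image, "merge"))
```

With trained weights, the merge variant's output layer sees the image at every time step, which is consistent with the abstract's observation that merge retains visual influence for longer as the prefix grows.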
Description: PH.D.LINGUISTICS
URI: https://www.um.edu.mt/library/oar/handle/123456789/50207
Appears in Collections:Dissertations - InsLin - 2019

Files in This Item:
File: 19PHDLIN001.pdf
Size: 5.13 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.