Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/52959
Title: Using deep learning in a board game support system
Authors: Attard, Owen
Keywords: Board games
Machine learning
Question-answering systems
Issue Date: 2019
Citation: Attard, O. (2019). Using deep learning in a board game support system (Master's dissertation).
Abstract: Nowadays, almost anyone with a smartphone has access to a question answering (QA) system such as Siri, Google Assistant or Bixby which, given a question, is able to fetch an answer. QA systems provide instant answers, usually about a particular domain, by gathering information from a dataset. This research proposes a support system that uses a question answering solution to answer questions about board game rules. The proposed system works as follows: given the raw rules, it generates a set of questions from those rules and then trains its models on them. This allows the system to learn any board game, provided that the game has written rules. The proposed solution is split into three modules: the question generation module, the processing module and the interface. The question generation solution managed to generate good questions using only the rules. Its one drawback was that, while it could generate questions requiring ‘yes’ as an answer, it could not generate questions requiring ‘no’; the ‘no’ questions had to be created by hand, a limitation of the question generation system evaluated. For the processing module, two solutions were implemented: one based on the Dynamic Memory Tensor Network (DMTN) and another based on the Dynamic Memory Network Plus (DMN+). Two games, Chess and Checkers, were used for the evaluation, as they are similar yet still radically different, which allowed the system to be evaluated under scaling complexity. For each game, three datasets were created: the first is the unmodified version of the dataset; the second is a human-modified dataset in which the questions were adjusted to make more sense and to replicate what users would ask; and the third contains only the automatically generated ‘yes’ questions along with the manually created ‘no’ questions.
This approach was replicated for both Chess and Checkers. The research showed that DMTN performed better than DMN+: DMTN was ≈ 0.6% better at answering wh- questions (i.e. where, how, what, which) and ≈ 5% better at answering ‘yes-no’ questions. When comparing the split (required) rules with the full rules, the split rules performed ≈ 7.6% better, and the DMTN solution achieved an accuracy of 80.23% when using the split rules. The solution did, however, perform poorly when answering wh- questions, with an average accuracy of 19.52%. This is due to the evaluation approach used: the system provided answers that did not exactly match the expected output, even though in some cases both answers were correct. Ideally, the generation of question-answer pairs from a set of textual game instructions would have been fully automated; however, human modification of the generated questions led to an overall improvement (from ≈ 73% to ≈ 83% accuracy on yes-no questions). Future research will target these shortcomings and will also investigate alternative ways of evaluating the accuracy of answers to wh- questions, as the current evaluation method expects a single-word answer while the system generates a sentence as an answer.
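The exact-match limitation described above can be illustrated with a minimal sketch. This is a hypothetical example, not code from the dissertation: the function names, the sample question-answer pair and the looser containment check are all assumptions, shown only to clarify why a sentence-length answer scores zero under strict exact-match evaluation even when it contains the correct answer.

```python
# Hypothetical illustration of the evaluation issue described above:
# the evaluator expects a single-word answer, while the QA system may
# return a full sentence that contains the correct answer.

def exact_match(predicted: str, expected: str) -> bool:
    """Strict comparison: a sentence answer never matches a one-word key."""
    return predicted.strip().lower() == expected.strip().lower()

def contains_match(predicted: str, expected: str) -> bool:
    """Looser check: credit the answer if the expected token appears in it."""
    return expected.strip().lower() in predicted.strip().lower().split()

# Assumed sample pair (not taken from the dissertation's datasets).
expected = "diagonally"
predicted = "Checkers pieces move diagonally"

print(exact_match(predicted, expected))     # False: scored as wrong
print(contains_match(predicted, expected))  # True: correct answer is present
```

A containment or token-overlap check like the one above is one possible direction for the alternative wh- question evaluation the author proposes as future work.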
Description: M.Sc. Artificial Intelligence
URI: https://www.um.edu.mt/library/oar/handle/123456789/52959
Appears in Collections:Dissertations - FacICT - 2019
Dissertations - FacICTAI - 2019

Files in This Item:
File: 19MAIPT002.pdf (1.16 MB, Adobe PDF; Restricted Access)


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.