Title: Rotation, translation, and cropping for zero-shot generalization
Authors: Ye, Chang
Khalifa, Ahmed
Bontrager, Philip
Togelius, Julian
Keywords: Computer games -- Design
Image processing
Artificial intelligence
Machine learning
Issue Date: 2020
Publisher: Institute of Electrical and Electronics Engineers
Citation: Ye, C., Khalifa, A., Bontrager, P., & Togelius, J. (2020). Rotation, translation, and cropping for zero-shot generalization. 2020 IEEE Conference on Games (CoG), Osaka, pp. 57-64.
Abstract: Deep Reinforcement Learning (DRL) has shown impressive performance on domains with visual inputs, in particular various games. However, the agent is usually trained on a fixed environment, e.g. a fixed number of levels. A growing body of evidence suggests that these trained models fail to generalize to even slight variations of the environments they were trained on. This paper advances the hypothesis that the lack of generalization is partly due to the input representation, and explores how rotation, cropping, and translation could increase generality. We show that a cropped, translated, and rotated observation achieves better generalization on unseen levels of two-dimensional arcade games from the GVGAI framework. The generality of the agents is evaluated on both human-designed and procedurally generated levels.
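The observation transformation the abstract describes can be sketched for a tile-based 2D game as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the tile-grid representation, and the convention that `agent_dir` counts clockwise quarter-turns from "up" are all assumptions made for the example.

```python
def transform_observation(obs, agent_pos, agent_dir, crop_size=5):
    """Translate, crop, and rotate a tile-grid observation.

    obs: H x W grid (list of rows) of tile ids.
    agent_pos: (row, col) of the agent.
    agent_dir: clockwise quarter-turns from facing "up" (0..3).
    All names and conventions here are illustrative assumptions.
    """
    h, w = len(obs), len(obs[0])
    r = crop_size // 2
    pad = 0  # tile id used for cells outside the level bounds
    ar, ac = agent_pos
    # Translation + cropping: take a fixed-size window centered on the
    # agent, padding any off-grid cells.
    window = [[obs[i][j] if 0 <= i < h and 0 <= j < w else pad
               for j in range(ac - r, ac + r + 1)]
              for i in range(ar - r, ar + r + 1)]
    # Rotation: turn the window counterclockwise once per clockwise
    # quarter-turn of the agent, so the agent always "faces up".
    for _ in range(agent_dir):
        window = [list(row) for row in zip(*window)][::-1]
    return window
```

Centering and rotating the view in this way makes the observation egocentric, so a policy trained on one level layout sees the same local pattern regardless of where (or which way) the agent is placed in an unseen level.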
Appears in Collections:Scholarly Works - InsDG

Files in This Item:
File: Restricted Access, 2.66 MB, Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.