Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/92538
Title: Blending output from generative adversarial networks to texture high-resolution 2D town maps for roleplaying games
Authors: Siracusa, Gianfranco; Seychell, Dylan; Bugeja, Mark
Keywords: Computer networks; Image transmission; Concept mapping; Games
Issue Date: 2021
Publisher: IEEE
Citation: Siracusa, G., Seychell, D., & Bugeja, M. (2021). Blending output from generative adversarial networks to texture high-resolution 2D town maps for roleplaying games. 2021 IEEE Conference on Games (CoG), Copenhagen, 1-8.
Abstract: The recent success of Generative Adversarial Networks (GANs) in image and video applications has led to the development of numerous variants specialised for particular tasks, such as conditional GANs for image-to-image translation. Despite the research done in fine-tuning architectures and applying them to different subjects, these techniques still deal with stand-alone images, such as nature scenes, city landmarks and faces. The task of producing contiguous colour data - namely adjacent parts of the same image, not textures - has not previously been attempted in the literature on generative machine learning techniques. Achieving this would allow large images to be processed in smaller parts, removing the architectural ceiling on the output resolution the network can achieve. Current state-of-the-art architectures for conditional image-to-image translation are limited to roughly 2k x 1k pixels and typically take several days to train on powerful hardware. The proposed contiguous technique, applied here to fantasy maps for roleplaying games, can achieve higher resolutions with smaller networks that can be trained faster, within a single day. The technique maintains as much quality as the detail of the provided semantic layouts allows, even at 4k and beyond, but it suffers when detail in these layouts is too sparse. A sample of images produced by the system was shown to survey participants, who judged their appeal as 3.49 on a 5-point Likert scale, and segmentation analysis reported an average weighted inter-class accuracy score of 0.689 (0.448 unweighted).
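The core idea the abstract describes - generating a large image in smaller parts and merging adjacent outputs into one contiguous result - can be sketched as follows. This is an illustrative example only: the `blend_tiles` helper and the linear alpha ramp across an overlap region are assumptions for the sketch, not the blending method used in the paper.

```python
import numpy as np

def blend_tiles(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Merge two horizontally adjacent tiles whose last/first `overlap`
    columns cover the same image region, using a linear alpha ramp so
    the seam fades smoothly from the left tile to the right tile."""
    assert left.shape[0] == right.shape[0] and left.shape[2] == right.shape[2]
    # Alpha goes 1 -> 0 across the overlap, weighting the left tile.
    alpha = np.linspace(1.0, 0.0, overlap).reshape(1, overlap, 1)
    seam = alpha * left[:, -overlap:, :] + (1.0 - alpha) * right[:, :overlap, :]
    # Keep the non-overlapping parts of each tile and splice in the seam.
    return np.concatenate(
        [left[:, :-overlap, :], seam, right[:, overlap:, :]], axis=1
    )
```

Tiling in this way keeps each generator pass small, so the final map resolution is bounded by how many tiles are stitched, not by the network's output size.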
URI: https://www.um.edu.mt/library/oar/handle/123456789/92538
Appears in Collections: Scholarly Works - FacICTAI
Files in This Item:

| File | Description | Size | Format |
|---|---|---|---|
| Blending_output_from_generative_adversarial_networks_to_texture_high_resolution_2D_town_maps_for_roleplaying_games.pdf (Restricted Access) | | 6.66 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.