Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/135741
Title: The procedural content generation benchmark: an open-source testbed for generative challenges in games
Authors: Khalifa, Ahmed
Gallotta, Roberto
Barthet, Matthew
Liapis, Antonios
Togelius, Julian
Yannakakis, Georgios N.
Keywords: Application software
Computational intelligence
User interfaces (Computer systems)
Video games -- Programming
Artificial intelligence
Benchmarking (Management)
Issue Date: 2025
Publisher: ACM
Citation: Khalifa, A., Gallotta, R., Barthet, M., Liapis, A., Togelius, J., & Yannakakis, G. N. (2025, April). The Procedural Content Generation Benchmark: An Open-source Testbed for Generative Challenges in Games. In Proceedings of the 20th International Conference on the Foundations of Digital Games, Vienna & Graz, pp. 1-12.
Abstract: This paper introduces the Procedural Content Generation Benchmark for evaluating generative algorithms on different game content creation tasks. The benchmark comes with 12 game-related problems, each with multiple variants. Problems range from creating levels of different kinds to creating rule sets for simple arcade games. Each problem has its own content representation, control parameters, and evaluation metrics for quality, diversity, and controllability. This benchmark is intended as a first step towards a standardized way of comparing generative algorithms. We use the benchmark to score three baseline algorithms: a random generator, an evolution strategy, and a genetic algorithm. Results show that some problems are easier to solve than others, and reveal the impact the chosen objective has on the quality, diversity, and controllability of the generated artifacts.
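The evaluation setup described in the abstract, scoring a baseline generator along the three axes of quality, diversity, and controllability, can be sketched in a few lines of Python. Everything below is an illustrative assumption for a toy tile-map problem: the level representation, the metric definitions (reachability-based quality, Hamming-distance diversity, density-target controllability), and all names are hypothetical, not the benchmark's actual API or metric implementations, which are defined per problem in the open-source testbed.

    # Illustrative sketch only: the metrics below are toy assumptions,
    # NOT the PCG Benchmark's actual per-problem implementations.
    import random
    from itertools import combinations

    WIDTH, HEIGHT = 11, 7  # hypothetical level size

    def random_level(rng):
        """Baseline 'random generator': each tile is empty (0) or solid (1)."""
        return [[rng.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def quality(level):
        """Toy quality: fraction of empty tiles reachable from the top-left tile."""
        if level[0][0] == 1:
            return 0.0
        seen, stack = {(0, 0)}, [(0, 0)]
        while stack:  # depth-first flood fill over empty tiles
            x, y = stack.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nx < WIDTH and 0 <= ny < HEIGHT
                        and (nx, ny) not in seen and level[ny][nx] == 0):
                    seen.add((nx, ny))
                    stack.append((nx, ny))
        empties = sum(row.count(0) for row in level)
        return len(seen) / empties if empties else 0.0

    def diversity(levels):
        """Toy diversity: mean pairwise normalized Hamming distance."""
        def dist(a, b):
            diff = sum(ta != tb for ra, rb in zip(a, b) for ta, tb in zip(ra, rb))
            return diff / (WIDTH * HEIGHT)
        pairs = list(combinations(levels, 2))
        return sum(dist(a, b) for a, b in pairs) / len(pairs)

    def controllability(level, target_density):
        """Toy controllability: how closely solid-tile density hits a control parameter."""
        density = sum(row.count(1) for row in level) / (WIDTH * HEIGHT)
        return 1.0 - abs(density - target_density)

    rng = random.Random(0)
    levels = [random_level(rng) for _ in range(20)]
    print("quality        ", sum(quality(l) for l in levels) / len(levels))
    print("diversity      ", diversity(levels))
    print("controllability", sum(controllability(l, 0.4) for l in levels) / len(levels))

Swapping random_level for an evolution strategy or a genetic algorithm, as the paper's other baselines do, would change only the generator while the three metrics stay fixed; standardizing that comparison across problems is the point of the benchmark.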
URI: https://www.um.edu.mt/library/oar/handle/123456789/135741
Appears in Collections:Scholarly Works - InsDG

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.