Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/115294
Title: | Comparative performance analysis of AI algorithms in competitive Pokémon Showdown battles |
Authors: | Bonavia Zammit, Nathan (2023) |
Keywords: | Pokémon video games; Artificial intelligence; Algorithms |
Issue Date: | 2023 |
Citation: | Bonavia Zammit, N. (2023). Comparative performance analysis of AI algorithms in competitive Pokémon Showdown battles (Bachelor's dissertation). |
Abstract: | In this Final Year Project, we investigate Artificial Intelligence bots that play Pokémon Showdown, a web-based Pokémon battle simulator. We first explain the mechanics of Pokémon battling and how it has evolved over the years. We then propose two algorithms, each assigned to its own bot. These bots compete on Pokémon Showdown, which provides an ideal environment for them to face online players. A set number of battles takes place and is recorded. The two algorithms are compared using the Elo rating that the platform provides; we also examine the bots' win rates, the advantages and disadvantages they encountered, and proposals for future work. Through this, we come to understand the biggest challenges each algorithm faces, which one performed best, and why. The two algorithms are a Minimax algorithm, a search algorithm that looks at possible future turns and always prioritises the safest route available, and a Q-Learning algorithm that takes the results of previous battles and tries to learn after each one. These bots are then compared to each other and to two further bots, referred to as the ‘simple’ bots: a random bot that picks a move at random without any reasoning behind it, and a high-damage bot that always tries to pick the highest-damaging move regardless of the battle taking place. These simple bots acted as baselines for the other algorithms. The Minimax bot achieved a high win rate in its earlier games but struggled once it met harder opponents, which affected its ability to predict the safer routes, whilst the Q-Learning bot, having no knowledge at the start, plateaued and then dropped drastically in results.
The simple bots were used to showcase the effectiveness of the Minimax and Q-Learning bots by comparison, since they had very low win rates and Elo ratings, although the Q-Learning bot achieved similar if not worse results. After our implementation, we evaluated the performance of the two primary AI algorithms in the context of competitive Role-Playing Game (RPG) gameplay, specifically Pokémon battles. Our analysis determined that a search algorithm such as Minimax had a higher success rate than a reinforcement learning algorithm such as Q-Learning when competing in an uncontrolled online environment. Although neither had any great success in the competitive scene, Minimax struggled less in adapting to different online users and therefore achieved a higher win rate. Q-Learning, on the other hand, struggled in the early stages since it had no prior knowledge whatsoever, and because each loss paired it with players of ever lower Elo ratings, it never managed to improve. |
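The abstract describes the Minimax bot as always "picking the safest route available" among possible future turns. A minimal one-turn sketch of that idea is the maximin rule below; this is an illustrative reconstruction, not the dissertation's actual code, and the move names and payoff values are invented for the example.

```python
def safest_move(payoffs):
    """Maximin choice: for each of our moves, assume the opponent replies
    with the response that is worst for us, then pick the move whose
    worst-case outcome is best."""
    best_move, best_worst = None, float("-inf")
    for move, outcomes in payoffs.items():
        worst = min(outcomes.values())  # opponent's strongest counter
        if worst > best_worst:
            best_move, best_worst = move, worst
    return best_move

# Hypothetical payoff matrix: keys are our moves, inner keys are
# opponent replies, values are estimated payoff for us (higher is better).
payoffs = {
    "Thunderbolt": {"Protect": 10, "Earthquake": -80},
    "Volt Switch": {"Protect": 5, "Earthquake": -20},
}
print(safest_move(payoffs))  # Volt Switch: its worst case (-20) beats -80
```

A full Minimax bot would recurse over several turns of move choices rather than a single payoff table, but the safety-first preference, taking the best of the worst cases, is the same at every level.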
Description: | B.Sc. IT (Hons)(Melit.) |
URI: | https://www.um.edu.mt/library/oar/handle/123456789/115294 |
Appears in Collections: | Dissertations - FacICT - 2023; Dissertations - FacICTAI - 2023 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2308ICTICT390905071903_1.PDF (Restricted Access) | | 1.12 MB | Adobe PDF | View/Open / Request a copy |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.