Advisor: Zhao, Richard
Author: Saadat, Kimiya
Date accessioned: 2024-11-21
Date available: 2024-11-21
Date issued: 2024-11-18
Citation: Saadat, K. (2024). Single-player to two-player knowledge transfer in Atari 2600 games (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca
URI: https://hdl.handle.net/1880/120090

Abstract: Playing two-player games using reinforcement learning and self-play can be challenging due to the complexity of two-player environments and the potential instability of the training process. It is proposed that a reinforcement learning algorithm can train more efficiently and achieve better performance in a two-player game by leveraging knowledge from the single-player version of the same game. This study examines the proposed idea in ten different Atari 2600 environments, using the Atari 2600 RAM as the input state. The advantages of transfer learning from a single-player training process over training in a two-player setting from scratch are discussed, and the results are reported on several metrics, such as training time and average total reward. Finally, a method for calculating RAM complexity and its relationship to post-transfer performance is discussed. Results show that in most cases the transferred agent performs better than the agent trained from scratch while taking less time to train. Moreover, RAM complexity is shown to be a weak predictor of transfer effectiveness.

Language: en
Rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
Subjects: Reinforcement Learning; Transfer Learning; Game-Playing AI; Atari 2600; Deep Q Networks; Multiplayer Games; Deep Reinforcement Learning; Deep Learning; Computer Science; Artificial Intelligence
Title: Single-player to Two-player Knowledge Transfer in Atari 2600 Games
Type: master thesis
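The transfer idea described in the abstract — reusing knowledge from a single-player agent to initialize training in the two-player version of the same game — can be sketched as copying the learned feature layers of a Q-network over the 128-byte Atari 2600 RAM state while re-initializing the output head for the new action set. The network shape, function names, and sizes below are illustrative assumptions, not the architecture actually used in the thesis:

```python
import numpy as np

RAM_SIZE = 128  # the Atari 2600 exposes 128 bytes of RAM as the state


def init_qnet(n_actions, hidden=64, rng=None):
    # Parameters of a small two-layer Q-network (hypothetical architecture).
    rng = rng or np.random.default_rng(0)
    return {
        "W1": rng.normal(0, 0.1, (RAM_SIZE, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, n_actions)),
        "b2": np.zeros(n_actions),
    }


def q_values(params, ram_state):
    # Forward pass: ReLU hidden layer, linear output of one Q-value per action.
    h = np.maximum(0, ram_state @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]


def transfer(single_player_params, n_actions_two_player, rng=None):
    # Keep the feature layer trained on the single-player game; replace only
    # the output head, since the two-player action set may differ.
    rng = rng or np.random.default_rng(1)
    new = {k: v.copy() for k, v in single_player_params.items()}
    hidden = new["W1"].shape[1]
    new["W2"] = rng.normal(0, 0.1, (hidden, n_actions_two_player))
    new["b2"] = np.zeros(n_actions_two_player)
    return new


# Usage: warm-start a two-player agent from single-player weights.
sp_params = init_qnet(n_actions=6)          # trained single-player network (stand-in)
tp_params = transfer(sp_params, n_actions_two_player=18)
q = q_values(tp_params, np.zeros(RAM_SIZE))  # Q-values for the two-player action set
```

In this sketch only the output head restarts from scratch; whether the thesis fine-tunes all layers or freezes the transferred ones is not stated in the abstract.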