Learning and planning in videogames via task decomposition

Dann, M 2019, Learning and planning in videogames via task decomposition, Doctor of Philosophy (PhD), Science, RMIT University.


Document type: Thesis
Collection: Theses

Attached Files
Dann.pdf — Thesis (application/pdf, 6.05 MB)
Title Learning and planning in videogames via task decomposition
Author(s) Dann, M
Year 2019
Abstract Artificial intelligence (AI) methods have come a long way in tabletop games, with computer programs having now surpassed human experts in the challenging games of chess, Go and heads-up no-limit Texas hold'em. However, a significant simplifying factor in these games is that individual decisions have a relatively large impact on the state of the game. The real world, by contrast, is granular. Human beings are continually presented with new information and must make a multitude of tiny decisions every second. Viewed in these terms, feedback is often sparse, meaning that it arrives only after one has made a great number of decisions. Moreover, many real-world problems offer a continuous range of actions to choose from, and attaining meaningful feedback from the environment often requires a strong degree of action coordination. Videogames, in which players must likewise contend with granular time scales and continuous action spaces, are in this sense a better proxy for real-world problems, and are thus regarded by many as the new frontier in games AI.

Human players appear to approach granular decision-making in videogames by decomposing complex tasks into high-level subproblems, thereby allowing them to focus on the "big picture". For example, in Super Mario World, human players seem to look ahead in extended steps, such as climbing a vine or jumping over a pit, rather than planning one frame at a time. Currently, though, this type of reasoning does not come easily to machines, leaving many open research problems related to task decomposition. This thesis focuses on three such problems in particular: (1) the challenge of learning subgoals autonomously, so as to lessen the issue of sparse feedback; (2) the challenge of combining discrete planning techniques with extended actions whose durations and effects on the environment are uncertain; (3) the questions of when and why it is beneficial to reason over high-level continuous control variables, such as the velocity of a player-controlled ship, rather than over the lowest-level actions available. We address these problems via new algorithms and novel experimental design, demonstrating empirically that our algorithms are more efficient than strong baselines that do not leverage task decomposition, and yielding insight into the types of environment where task decomposition is likely to be beneficial.
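The extended actions described above are commonly formalised via the options framework (listed in the record's keywords): an option bundles an initiation set, an intra-option policy, and a termination condition, and runs for many primitive steps before control returns to the high-level planner. The sketch below is purely illustrative — the class and the toy corridor environment are invented for this note and are not the thesis's code.

```python
class Option:
    """A temporally extended action in the options framework: an
    initiation set, an intra-option policy, and a termination condition."""
    def __init__(self, name, can_start, policy, is_done):
        self.name = name
        self.can_start = can_start  # state -> bool: where the option may begin
        self.policy = policy        # state -> primitive action
        self.is_done = is_done      # state -> bool: termination condition

def run_option(state, option, step):
    """Execute the option's policy until its termination condition fires.
    Returns the resulting state and the number of primitive steps taken."""
    n = 0
    while not option.is_done(state):
        state = step(state, option.policy(state))
        n += 1
    return state, n

# Toy 1-D corridor: each primitive action shifts the agent's position.
step = lambda x, a: x + a

# A "walk to the vine" option: keep moving right until reaching x = 5.
walk_to_vine = Option(
    name="walk-to-vine",
    can_start=lambda x: x < 5,
    policy=lambda x: 1,       # primitive action: move right one cell
    is_done=lambda x: x >= 5,
)

state, steps = run_option(0, walk_to_vine, step)
print(state, steps)  # reaches x = 5 after 5 primitive steps
```

A high-level planner reasoning over such options makes one decision per option execution rather than one per frame, which is the sense in which task decomposition mitigates sparse feedback.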
Degree Doctor of Philosophy (PhD)
Institution RMIT University
School, Department or Centre Science
Subjects Adaptive Agents and Intelligent Robotics
Virtual Reality and Related Simulation
Keyword(s) reinforcement learning
videogames
options framework
sparse rewards
infinite mario
exploration
video games
Created: Fri, 31 May 2019, 12:05:11 EST by Keely Chapman