DOI: 10.1145/1178418.1178432

Modern, commercial computer games rely primarily on AI techniques that were developed several decades ago, and until recently there has been little impetus to change this. Despite the fact that the computer-controlled agents in such games often possess abilities far beyond the limits imposed on human participants, competent players can easily beat their artificial opponents, suggesting that approaches based on the analysis and imitation of human play may produce superior agents, in terms of both performance and believability.

In this article, we describe our work in imitating the observed goal-oriented behaviors of a human player, based on concepts from data analysis and reinforcement learning. Since even the most intelligent artificial agent will be quickly identified as such if it is observed to move in a robotic manner, we also seek to incorporate mechanisms that will result in believably human-like motion. We then present some illustrative examples demonstrating the effectiveness of our model. Finally, we discuss future work in this field.