Current computer games are set in increasingly complex and dynamic virtual environments. Massively multiplayer online games, for example, are played in persistent virtual worlds, which evolve and change as players create and personalize their own virtual property. In contrast, technologies for controlling the behavior of nonplayer characters that populate virtual game worlds are frequently limited to preprogrammed rules. Characters governed by fixed rule sets cannot adapt as their environment changes. Motivated reinforcement learning offers an alternative approach to character design, producing nonplayer characters that both evolve and adapt in dynamic environments. This article presents and evaluates two computational models of motivation for use in nonplayer characters in persistent computer game worlds. These models represent motivation as an ongoing search for novelty, interest, and competence. Two metrics are introduced to evaluate the adaptability of characters controlled by motivated reinforcement learning agents using different models of motivation. These metrics characterize the behavior of nonplayer characters in terms of the variety and complexity of learned behaviors. An empirical evaluation of characters in simulated game scenarios shows that characters motivated by the search for competence are more adaptable in dynamic environments than those motivated by interest and novelty alone.
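To make the idea of motivated reinforcement learning concrete, the sketch below shows a tabular Q-learning agent whose reward signal is an intrinsic novelty measure rather than a task reward. The inverse-visit-count novelty used here is a deliberately simple stand-in assumed for illustration; the motivation models evaluated in the article (novelty, interest, and competence) are richer than this. All class and function names are illustrative, not from the article.

```python
from collections import defaultdict
import random

class NoveltyMotivatedAgent:
    """Tabular Q-learning agent driven by an intrinsic novelty reward.

    Novelty is approximated as the inverse of a state's visit count,
    so frequently revisited states become progressively less rewarding.
    This is a minimal sketch of the motivated-RL idea, not the article's model.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)     # (state, action) -> estimated value
        self.visits = defaultdict(int)  # state -> visit count

    def novelty_reward(self, state):
        # Rarely seen states yield high reward; reward decays with familiarity.
        self.visits[state] += 1
        return 1.0 / self.visits[state]

    def choose_action(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, next_state):
        # Standard Q-learning update, but the reward is the novelty signal.
        r = self.novelty_reward(next_state)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = r + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
        return r
```

Because the reward for a state shrinks each time it is revisited, an agent like this keeps drifting toward unexplored parts of the world; the article's competence-based model adds a further drive to master behaviors, which is what the evaluation finds makes characters more adaptable.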