Accepted Papers: Main Track (no abstracts)

  1. Garry Greenwood and Phillipa Avery
    Does the Moran Process Hinder Our Understanding of Cooperation in Human Populations?

  2. Scott Watson, Wolfgang Banzhaf and Andrew Vardy
    Automated Design for Playability in Computer Game Agents

  3. Markus Guhe and Alex Lascarides
    The Effectiveness of Persuasion in The Settlers of Catan

  4. Joan Marc Llargues Asensio, Juan Peralta Donate and Paulo Cortez
    Evolving Artificial Neural Networks Applied to Generate Virtual Characters

  5. Tobias Graf and Marco Platzner
    Common Fate Graph Patterns in Monte Carlo Tree Search for Computer Go

  6. Maciej Świechowski and Jacek Mańdziuk
    Prolog versus specialized logic inference engine in General Game Playing

  7. Christian Bauckhage, Rafet Sifa, Anders Drachen, Christian Thurau and Fabian Hadiji
    Beyond Heatmaps: Spatio-Temporal Clustering using Behavior-Based Partitioning of Game Levels

  8. Luciano Gualà, Stefano Leucci and Emanuele Natale
    Bejeweled, Candy Crush and other Match-Three Games are (NP-)Hard

  9. Daniel Ashlock and Cameron Mcguinness
    Automatic Generation of Fantasy Role-playing Modules

  10. Diego Perez, Spyridon Samothrakis and Simon Lucas
    Knowledge-based Fast Evolutionary MCTS for General Video Game Playing

  11. Wendy Ashlock and Daniel Ashlock
    Shaped Prisoner's Dilemma Automata

  12. Markus Thill, Samineh Bagheri, Patrick Koch and Wolfgang Konen
    Temporal Difference Learning with Eligibility Traces for the Game Connect Four

  13. Paolo Burelli, Georgios Triantafyllidis and Ioannis Patras
    Non-invasive Player Experience Estimation from Body Motion and Game Context

  14. Daniel Scales and Tommy Thompson
    SpelunkBots API - An AI Toolset for Spelunky

  15. Antonios Liapis, Georgios Yannakakis and Julian Togelius
    Designer Modeling for Sentient Sketchbook

  16. Stefan Edelkamp and Erion Plaku
    Multi-Goal Motion Planning with Physics-based Game Engines

  17. Julian Runge, Peng Gao, Florent Garcin and Boi Faltings
    Churn Prediction for High-value Players in Casual Social Games

  18. Fabian Hadiji, Rafet Sifa, Anders Drachen, Christian Thurau, Kristian Kersting and Christian Bauckhage
    Predicting Player Churn in Free-to-play Games

  19. Michael Leece and Arnav Jhala
    Opponent State Modeling in RTS games with limited information using Markov Random Fields

  20. Roman Garnett, Thomas Gärtner, Timothy Ellersiek, Eyjólfur Guðmondsson and Pétur Óskarsson
    Predicting Unexpected Influxes of Players in EVE Online

  21. Nicolas A. Barriga, Marius Stanescu and Michael Buro
    Parallel UCT Search on GPUs

  22. Jayden Ivanovic, Fabio Zambetta, Xiaodong Li and Jessica Rivera Villicana
    Reinforcement Learning to Control a Commander for Capture The Flag

  23. Trevor Sarratt, David Pynadath and Arnav Jhala
    Converging to a Player Model in Monte-Carlo Tree Search

  24. Stefan Edelkamp and Christoph Greulich
    Solving Physical Traveling Salesman Problems with Policy Adaptation

  25. Hyun-Tae Kim and Kyung-Joong Kim
    Learning to Recommend Game Contents for Real-Time Strategy Gamers

  26. Markus Guhe and Alex Lascarides
    Game strategies for The Settlers of Catan

  27. Shaun Bangay and Owen Makin
    Generating an Attribute Space for analyzing Balance in Single Unit RTS Game Combat

  28. Maxime Sanselone, Stéphane Sanchez, Cédric Sanza, David Panzoli and Yves Duthen
    Control of non-playing characters in a medical learning game with Monte Carlo Tree Search

  29. Swen Gaudl and Joanna J. Bryson
    Extended Ramp Goal Module: Low-Cost Behaviour Arbitration for Real-Time Controllers based on Biological Models of Dopamine Cells

  30. Spyridon Samothrakis, Samuel Roberts, Diego Perez and Simon Lucas
    Rolling Horizon methods for Games with Continuous States and Actions

  31. M.J.W. Tak, Marc Lanctot and Mark H. M. Winands
    Monte Carlo Tree Search Variants for Simultaneous Move Games

  32. Sylvain Labranche, Nicolas Sola, Sophie Callies and Eric Beaudry
    Using Partial Satisfaction Planning to Automatically Select NPCs' Goals and Generate Plans in a Simulation Game

  33. Ibrahim Mahmoud, Lianchao Li, Dieter Wloka and Mostafa Ali
    Believable NPCs in Serious Games: HTN Planning Approach Based on Visual Perception

  34. Nick Sephton, Peter Cowling and Edward Powley
    Heuristic Move Pruning in Monte Carlo Tree Search for the Strategic Card Game Lords of War

  35. Vanus Vachiratamporn, Koichi Moriyama, Ken-Ichi Fukui and Masayuki Numao
    An Implementation of Affective Adaptation in Survival Horror Games

  36. Mateusz Bialas, Shoshannah Tekofsky and Pieter Spronck
    Cultural Influences on Play Style

  37. Tom Vodopivec and Branko Ster
    Enhancing Upper Confidence Bounds for Trees with Temporal Difference Values

  38. Chong-U Lim and D. Fox Harrell
    An Approach to General Videogame Evaluation and Automatic Generation using a Description Language

  39. David Lupien St-Pierre and Olivier Teytaud
    The Nash and the Bandit Approaches for Adversarial Portfolios

  40. Jonathan Tremblay, Pedro Andrade Torres and Clark Verbrugge
    An Algorithmic Approach to Analyzing Combat and Stealth Games

  41. Jiao Jian Wang and Olana Missura
    Racing Tracks Improvisation

  42. Niels Justesen, Bálint Tillman, Julian Togelius and Sebastian Risi
    Script- and Cluster-based UCT for StarCraft

  43. Pier Luca Lanzi, Daniele Loiacono and Riccardo Stucchi
    Evolving Maps for Match Balancing in First Person Shooters

  44. Jose-Maria Pena, Javier Viedma Ortiz-Canavate, Santiago Muelas, Antonio Latorre and Luis Pena
    Designer-driven 3D Buildings Generated Using Variable Neighborhood Search

  45. Marc Lanctot, Mark H. M. Winands, Tom Pepels and Nathan R. Sturtevant
    Monte Carlo Tree Search with Heuristic Evaluations using Implicit Minimax Backups

  46. Siming Liu, Sushil Louis and Christopher Ballinger
    Evolving Effective Micro Behaviors in RTS Game

  47. Christopher Ballinger and Sushil Louis
    Learning Robust Build-Orders from Previous Opponents with Coevolution

  48. Rafet Sifa, Christian Bauckhage and Anders Drachen
    The Playtime Principle: Large-scale Cross-games Interest Modeling

  49. Marcin Szubert and Wojciech Jaskowski
    Temporal Difference Learning of N-Tuple Networks for the Game 2048

  50. Mike Preuss, Antonios Liapis and Julian Togelius
    Searching for Good and Diverse Game Levels

  51. Steve Dahlskog and Julian Togelius
    A Multi-level Level Generator

  52. Christoffer Holmgård, Antonios Liapis, Julian Togelius and Georgios N. Yannakakis
    Evolving Personas for Player Decision Modeling

  53. Lucas Ferreira and Claudio Toledo
    A Search-based Approach for Generating Angry Birds Levels

  54. Luca Galli, Pier Luca Lanzi and Daniele Loiacono
    Applying Data Mining to Extract Design Patterns from Unreal Tournament Levels

Accepted Papers: Main Track (with abstracts)

Garry Greenwood and Phillipa Avery. Does the Moran Process Hinder Our Understanding of Cooperation in Human Populations?
Abstract: Iterated Prisoner’s Dilemma games in lattices or regular networks provide a natural framework for studying cooperation in finite populations. The Moran process, coupled with frequency dependent fitness, is widely used to model the evolutionary population dynamics. However, over the past decade there seems to be little progress made in understanding the proliferation of cooperation in human populations. In this paper we will show Moran updating cannot accurately mimic evolutionary dynamics in human populations and its continued use in mathematical models—particularly in spatial games—is unlikely to provide any new insights about human cooperation.
Scott Watson, Wolfgang Banzhaf and Andrew Vardy. Automated Design for Playability in Computer Game Agents
Abstract: This paper explores whether a novel approach to the creation of agent controllers has the potential to overcome some of the drawbacks that have prevented novel controller architectures from being widely implemented. This is done by using an evolutionary algorithm to generate finite state machine controllers for agents in a simple role playing game. The concept of minimally playable games is introduced to serve as the basis of a method of evaluating the fitness of a game's agent controllers.
Markus Guhe and Alex Lascarides. The Effectiveness of Persuasion in The Settlers of Catan
Abstract: We present a method for obtaining a useful symbolic model of persuasion in a complex game, where players' preferences over the outcomes of negotiations over resources are incomplete, uncertain, and dynamic. We focus on the problem of identifying the stage in the game where successfully persuading an agent to perform a particular action will have the most impact on one's chances to win. Our approach exploits empirical data via game simulations, where the setup allows us to investigate individual aspects of the policy in suitably controlled ways. We demonstrate the effectiveness of the approach within the domain of The Settlers of Catan and present some specific lessons that can be learned for this particular game, e.g. that a tipping point in the game occurs for persuasion moves when the leading player reaches 7 Victory Points.
Joan Marc Llargues Asensio, Juan Peralta Donate and Paulo Cortez. Evolving Artificial Neural Networks Applied to Generate Virtual Characters
Abstract: The computer game industry is one of the most profitable today. Although this industry has evolved fast in recent years in different fields, Artificial Intelligence (AI) seems to be stuck. Many games still make use of simple state machines to simulate AI. New models can be designed and proposed to replace this jurassic technique. In this paper we propose the use of Artificial Neural Networks (ANN) as a new model. The ANN is then in charge of receiving information from the game (sensors) and carrying out actions (actuators) according to the information received. The search for the best ANN is a complex task that strongly affects the task performance while often requiring a high computational time. In this work, we present ADANN, a system for the automatic evolution and adaptation of artificial neural networks based on evolutionary ANN (EANN). This approach uses a Genetic Algorithm (GA) that evolves fully connected Artificial Neural Networks. The testing game is Unreal Tournament 2004. The new agent obtained has been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition [1], and has shown a significant improvement in the humanness ratio. Additionally, we have subjected our approach and CCBot3 (winner of the BotPrize competition in 2010) to first-person believability assessment (the original BotPrize judging protocol), demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.
Tobias Graf and Marco Platzner. Common Fate Graph Patterns in Monte Carlo Tree Search for Computer Go
Abstract: In Monte Carlo Tree Search often extra knowledge in form of patterns is used to guide the search and improve the playouts. Shape patterns, which are frequently used in Computer Go, do not describe tactical situations well, so that this knowledge has to be added manually. This is a tedious process which cannot be avoided as it leads to big improvements in playing strength. The common fate graph which is a special graphical representation of the board provides an alternative which handles tactical situations much better. In this paper we use the results of linear time graph kernels to extract features from the common fate graph and use them in a Bradley-Terry model to predict expert moves. We include this prediction model into the tree search and the playout part of a Go program using Monte Carlo Tree Search. This leads to better prediction rates and an improvement in playing strength of about 190 ELO.
Maciej Świechowski and Jacek Mańdziuk. Prolog versus specialized logic inference engine in General Game Playing
Abstract: In this paper, we study the problem of applying an inference engine, i.e. a mechanism able to derive answers from a knowledge base, to General Game Playing (GGP). Our focus is on the General Game Playing Competition framework proposed by the Stanford Logic Group. This particular embodiment of multi-game playing uses the Game Description Language (GDL) for representing the knowledge base by means of game rules. GDL is a variant of Datalog and a subset of Prolog, so there is a strong bond with formal logic languages. We present the basic principles of our system, which is dedicated to GDL. The motivation is to exceed the performance of Prolog engines applied to GDL, which is currently a common approach. Empirical results show that our solution is indeed faster in 6 out of 7 tested games.
Christian Bauckhage, Rafet Sifa, Anders Drachen, Christian Thurau and Fabian Hadiji. Beyond Heatmaps: Spatio-Temporal Clustering using Behavior-Based Partitioning of Game Levels
Abstract: Evaluating the spatial behavior of players allows for comparing design intent with emergent behavior. However, spatial analytics for game development is still in its infancy and current analysis mostly relies on aggregate visualizations such as heatmaps. In this paper, we propose the use of advanced spatial clustering techniques to evaluate player behavior. In particular, we consider the use of DEDICOM and DESICOM, two techniques that operate on asymmetric spatial similarity matrices and can simultaneously uncover preferred locations and likely transitions between them. Our results highlight the ability of asymmetric techniques to partition game maps into meaningful areas and to retain information about player movements between these areas.
Luciano Gualà, Stefano Leucci and Emanuele Natale. Bejeweled, Candy Crush and other Match-Three Games are (NP-)Hard
Abstract: The twenty-first century has seen the rise of a new type of video game targeted at a mass audience of "casual" gamers. Many of these games require the player to swap items in order to form matches of three and are collectively known as tile-matching match-three games. Among these, the most influential one is arguably Bejeweled, in which the matched items (gems) pop and the gems above fall into their place. Bejeweled has been ported to many different platforms and influenced an incredible number of similar games. Very recently one of them, named Candy Crush Saga, enjoyed huge popularity and quickly went viral on social networks. We generalize this kind of game by parameterizing only the size of the board, while all the other elements (such as the rules or the number of gems) remain unchanged. Then, we prove that answering many natural questions regarding such games is actually NP-Hard. These questions include determining if the player can reach a certain score, play for a certain number of turns, and others. We also provide a playable web-based implementation of our reduction, which is accessible at http://candycrush.isnphard.com.
Daniel Ashlock and Cameron Mcguinness. Automatic Generation of Fantasy Role-playing Modules
Abstract: Fantasy role playing games are a type of pencil-and-paper game in which a group of players, moderated by a referee, engage in cooperative storytelling. This study leverages earlier level map generation technology to substantially automate the production of modules for fantasy role playing games. A goblin outpost is used as an example type of module for demonstrating the system. Advantages of the system include being able to produce a large number of different but tactically similar adventure modules with substantial control of content and feel by the designer. In addition to presenting a modified version of the map generator, the system presented in this study also automatically identifies rooms, abstracts the room adjacency relation, and populates the rooms with monsters, traps, and treasure. An example module, complete except for monster descriptions, is presented as a proof of concept. Multiple maps for the goblin outpost are presented for contrast, and a number of future directions for improvement are outlined.
Diego Perez, Spyridon Samothrakis and Simon Lucas. Knowledge-based Fast Evolutionary MCTS for General Video Game Playing
Abstract: General Video Game Playing is a game AI domain in which the usage of game-dependent domain knowledge is very limited or even non-existent. This imposes obvious difficulties when seeking to create agents able to play sets of different games. Taken more broadly, this issue can be seen as an introduction to the field of General Artificial Intelligence. This paper explores the performance of a vanilla Monte Carlo Tree Search algorithm, and analyzes the main difficulties encountered when tackling this kind of scenario. Modifications are proposed to overcome these issues, strengthening the algorithm's ability to gather and discover knowledge, and taking advantage of past experiences. Results show that the performance of the algorithm is significantly improved, although there remain unresolved problems that require further research. The framework employed in this research is publicly available and will be used in the General Video Game Playing competition at the IEEE Conference on Computational Intelligence and Games in 2014.
Wendy Ashlock and Daniel Ashlock. Shaped Prisoner's Dilemma Automata
Abstract: Finite state automata are one of the common representations used to encode agents to play the iterated prisoner's dilemma. A shape for a finite state automaton is a restriction on what transitions are permitted out of each state in the automaton. Eight shapes for agents that play iterated prisoner's dilemma, including a baseline shape with all possible transitions, are tested with an evolutionary algorithm by enforcing the shape during the co-evolution of agents. All eight shapes yield distinct distributions of behaviors in the evolved agents. Some of the observed types of play are entirely novel.
Markus Thill, Samineh Bagheri, Patrick Koch and Wolfgang Konen. Temporal Difference Learning with Eligibility Traces for the Game Connect Four
Abstract: Computational-intelligence-based systems that learn to play board games are often trained by self-play on the basis of temporal difference learning. Successful examples include Tesauro’s well known TD-Gammon and Lucas’ Othello agent. For other board games of moderate complexity like Connect Four, we found in previous work that a successful system requires a very rich initial feature set with more than half a million of weights and several millions of training game plays. In this work we study the benefits of eligibility traces added to this system. To the best of our knowledge, eligibility traces have not been used before for such a large system. Different versions of eligibility traces (standard, resetting, and replacing traces) are compared. We show that eligibility traces speed up the learning by a factor of two and that they increase the asymptotic playing strength.
Paolo Burelli, Georgios Triantafyllidis and Ioannis Patras. Non-invasive Player Experience Estimation from Body Motion and Game Context
Abstract: In this paper, we investigate the relationship between player experience and body movements in a non-physical 3D computer game. During an experiment, the participants played a series of short game sessions and rated their experience while their body movements were tracked using a depth camera. The data collected was analysed and a neural network was trained to find the mapping between player body movements, player in-game behaviour and player experience. The results reveal that some aspects of player experience, such as anxiety or challenge, can be detected with high accuracy (up to 81%). Moreover, taking the playing context into account, the accuracy can be raised to 86%. Following such a multi-modal approach, it is possible to estimate the player experience in a non-invasive fashion during the game and, based on this information, the game content could be adapted accordingly.
Daniel Scales and Tommy Thompson. SpelunkBots API - An AI Toolset for Spelunky
Abstract: This paper describes the SpelunkBots API, a proposed benchmark for computational and artificial intelligence algorithms. The benchmark, developed by the authors, is an API designed to allow AI controllers to be written for the game Spelunky: a challenging 2D platforming game that requires players to commit quick reactive behaviours as well as long-term deliberation in order to maximise reward and minimise completion time. In this paper we highlight the features of the Spelunky game and a rationale for its relevance as an AI benchmark, followed by a description of how the original Spelunky source code has been modified to provide a testing environment for bots. As well as giving some examples of simple hand-written bots, we discuss the relevance of this benchmark in the context of current AI methods.
Antonios Liapis, Georgios Yannakakis and Julian Togelius. Designer Modeling for Sentient Sketchbook
Abstract: This paper documents the challenges in creating a computer-aided level design tool which incorporates computer-generated suggestions which appeal to the human user. Several steps are suggested in order to make the suggestions more appropriate to a specific user's overall style, current focus, and end-goals. Designer style is modeled via choice-based interactive evolution, which adapts the impact of different dimensions of quality based on the designer's choice of certain suggestions over others. The design process is modeled similarly to style, but adapts to the current focus of the designer's actions. Goals are modeled by estimating the visual patterns of the designer's final artifact and changing the parameters of the algorithm to enforce such patterns on generated suggestions.
Stefan Edelkamp and Erion Plaku. Multi-Goal Motion Planning with Physics-based Game Engines
Abstract: Toward enhancing automation in video games, this paper proposes an efficient approach for multi-goal motion planning, where a mobile agent needs to visit several regions in a complex environment containing numerous obstacles. The approach works in conjunction with differential equations and physics-based simulations of vehicle dynamics, efficiently planning collision-free, dynamically-feasible, and low-cost solution trajectories. We combine discrete search with sampling-based motion planning to map this challenging task to graph search. The approach imposes a discrete abstraction obtained by a workspace decomposition and then precomputes shortest paths to each goal. As the sampling-based motion planner expands a tree of collision-free and dynamically-feasible trajectories, it relies on a fast TSP solver to compute low-cost tours which can effectively guide the motion-tree expansion. The tours are adjusted based on progress made and a partition of the motion tree into equivalent groups, giving the approach the flexibility to discover new tours that are compatible with the vehicle dynamics and collision constraints. Comparisons to related work show significant computational speedups and reduced solution costs.
Julian Runge, Peng Gao, Florent Garcin and Boi Faltings. Churn Prediction for High-value Players in Casual Social Games
Abstract: Predicting when a player will leave a game creates a unique opportunity to increase a player's lifetime and revenue contribution. The player can be incentivized to stay, strategically cross-linked to other games in the company's portfolio or, as a last step, passed on to competing companies. This paper focuses on predicting churn for high-value players of casual social games and attempts to assess the business impact that can be derived from a predictive churn model. We compare the prediction performance of four common classification algorithms over two casual social games with millions of players each. Further, we implement a hidden Markov model to explicitly address temporal dynamics. We find that a neural network achieves the best prediction performance in terms of area under curve (AUC). To assess the business value of churn prediction, we design and implement an AB test over one of the games. We use free in-game currency as an incentive to retain players. Test results indicate that contacting players shortly before the churn event, based on the model's predictions, improves the effectiveness of communication efforts with players by a factor of three to four. They further show that giving out free in-game currency does not significantly impact the churn rate of players. This suggests that players can only be retained by remarkably changing their gameplay experience ahead of the churn event and that cross-linking may be the more effective measure to deal with churning players.
Fabian Hadiji, Rafet Sifa, Anders Drachen, Christian Thurau, Kristian Kersting and Christian Bauckhage. Predicting Player Churn in Free-to-play Games
Abstract: Free-to-Play or "freemium" games represent a fundamental shift in the business models of the game industry, facilitated by the increasing use of online distribution platforms and the introduction of increasingly powerful mobile platforms. The ability of a game development company to analyze and derive insights from behavioral telemetry is crucial to the success of these games, which rely on in-game purchases and in-game advertising to generate revenue, and for the company to remain competitive in a global marketplace. The ability to model, understand and predict future player behavior has a crucial value, allowing developers to obtain data-driven insights to inform design, development and marketing strategies. One of the key challenges is modeling and predicting player churn. This paper presents the first cross-game study of churn prediction in Free-to-Play games. Churn in games is discussed and thoroughly defined as a formal problem, aligning with industry standards. Furthermore, a range of features which are generic to games are defined and evaluated for their usefulness in predicting player churn, e.g. playtime, session length and session intervals. Using these behavioral features, combined with the individual retention model for each game in the dataset used, we develop a broadly applicable churn prediction model which does not rely on game-design-specific features. The presented classifiers are applied to a dataset covering five free-to-play games, resulting in high-accuracy churn prediction.
Michael Leece and Arnav Jhala. Opponent State Modeling in RTS games with limited information using Markov Random Fields
Abstract: One of the critical problems in adversarial and imperfect information domains is modeling an opponent's state from the information available to the acting agent. In the domain of real time strategy games, this information consists of the portion of the map and enemy units visible to the agent at any given point in the match. From this, we wish to infer the true values of the opponent's state, to inform both current actions and planning ahead. We present a graphical model for opponent modeling in StarCraft: Brood War that uses observed quantities to infer distributions for unseen features. We train and test this model using replays of professional play, and show that our results improve upon prior work. In addition, we present a new metric for measuring aggregate performance of a model within this domain. Finally, we consider possible use cases and extensions for this model.
Roman Garnett, Thomas Gärtner, Timothy Ellersiek, Eyjólfur Guðmondsson and Pétur Óskarsson. Predicting Unexpected Influxes of Players in EVE Online
Abstract: EVE Online is a massively multiplayer online role-playing game (MMORPG) taking place in a large galaxy consisting of about 7 500 star systems. In comparison to many other online role-playing games, the users interact in the same instance of a persistent player-driven universe. Given the number of simultaneous pilots online at the same time—a number which at times reaches more than 50 000 concurrent accounts logged on to the same server—the EVE Online universe can present atypically difficult load-balancing challenges when the users decide to operate in a coordinated fashion, for example, to launch an attack on a particular system. We present a scalable, automated statistical method for predicting such unexpected user gatherings by considering the evolving shortest-path distances from each user to each system. Here we present a case study analyzing nearly 300 million user movements in the EVE Online universe from over 700 thousand user accounts over a period of three months. We demonstrate an ability to predict sudden spikes in user presence (corresponding to actual events) before they happen, suggesting our techniques could be useful for automated load-balancing in such massive online games.
Nicolas A. Barriga, Marius Stanescu and Michael Buro. Parallel UCT Search on GPUs
Abstract: We propose two parallel UCT search (Upper Confidence bounds applied to Trees) algorithms that take advantage of modern GPU hardware. Experiments using the game of Ataxx are conducted, and the algorithms' speed and playing strength are compared to sequential UCT running on the CPU and to Block Parallel UCT, which runs its simulations on a GPU. Empirical results show that our proposed Multiblock Parallel algorithm outperforms other approaches and can take advantage of the GPU hardware without the added complexity of searching multiple trees.
Jayden Ivanovic, Fabio Zambetta, Xiaodong Li and Jessica Rivera Villicana. Reinforcement Learning to Control a Commander for Capture The Flag
Abstract: Capture the flag (CTF) is a popular game mode for many blockbuster games. Agents in these games struggle against players who learn to adapt to their strategies, leading to player dissatisfaction. We present our work on using Reinforcement Learning (RL) algorithms to learn a controller for a commander in the AI Sandbox platform, a flexible simulation environment which allows users across the world to participate in a variety of challenges and competitive games. As a result of building an RL controller for a commander, we found that performance varies significantly across opponents, maps and team sizes: the RL controller shows adequate performance in a subset of the games played and struggles in others.
Trevor Sarratt, David Pynadath and Arnav Jhala. Converging to a Player Model in Monte-Carlo Tree Search
Abstract: Player models allow search algorithms to account for differences in agent behavior according to a player's preferences and goals. However, it is often not until the first actions are taken that an agent can begin assessing which models are relevant to its current opponent. This paper investigates the integration of belief distributions over player models in the Monte-Carlo Tree Search algorithm. We describe a method of updating belief distributions by leveraging information sampled during the Monte-Carlo Tree Search. We then characterize the effect of tuning parameters of the Monte-Carlo Tree Search on the convergence of belief distributions. Evaluation of this approach is done in comparison with value iteration for an iterated version of the prisoner's dilemma problem. We show that for a sufficient quantity of iterations, our approach converges to the correct model faster than the same model under value iteration.
Stefan Edelkamp and Christoph Greulich. Solving Physical Traveling Salesman Problems with Policy Adaptation
Abstract: In this paper we propose algorithms for solving variants of the so-called Physical Traveling Salesman Problem (PTSP), which simulate the continuous operation of a vehicle on a two-dimensional board with obstacles. Our solutions avoid the fragmented allocation of memory and precompute cell-precise single-source shortest paths for each waypoint using an engineered implementation of Dijkstra's algorithm. In contrast to previous approaches, graph search shows optimal time performance. To determine a first tour to follow, we solve ordinary and general TSPs. For moderately sized problems, we apply an optimal depth-first branch-and-bound TSP solver that warrants constant time per search tree node. For larger problems, we apply randomized search with policy adaptation to learn from good tours.
Hyun-Tae Kim and Kyung-Joong Kim. Learning to Recommend Game Contents for Real-Time Strategy Gamers
Abstract: Game videos in which viewers can watch professional players' matches are increasingly prevalent on the web, so a recommendation system is needed to select the most preferable contents. Although many personalization techniques have been proposed, there is little work on the recommendation of personal game contents. In this paper, we propose using machine learning on game contents together with users' explicit preferences (like or dislike). In particular, we incorporate game domain knowledge into the design of features for learning. As a test bed, we select a famous real-time strategy game, StarCraft, because many game replays played by professional players are available. To generate training samples, each participant is invited to review replays and express his/her preference. Using the data, machine learning algorithms build a set of models to predict preference on new replays. For five participants, we labeled two hundred StarCraft replays. An exploratory study shows the importance of carefully selecting the features and classification algorithms. The experimental results show that the proposed recommendation system can predict users' preferences with an accuracy of 79% when appropriate models and features are selected.
Markus Guhe and Alex Lascarides. Game strategies for The Settlers of Catan
Abstract: We present an empirical framework for testing game strategies in The Settlers of Catan, a complex win–lose game that lacks any analytic solution. This framework provides the means to change different components of an autonomous agent's strategy, and to test them in suitably controlled ways via performance metrics in game simulations and via comparisons of the agent's behaviours with those exhibited in a corpus of humans playing the game. We present changes to the game strategy that not only improve the agent's playing strength but, as corpus analysis shows, also bring the agent closer to a model of human players.
Shaun Bangay and Owen Makin. Generating an Attribute Space for analyzing Balance in Single Unit RTS Game Combat
Abstract: Achieving balance in a large real-time strategy (RTS) game requires consideration of all contributing factors. We propose an attribute space representation as a common framework for reasoning about balance in the combat scenarios found in these games. Attribute space search is used to identify balanced configurations and game features such as special abilities and terrain act as modifiers to a basis attribute space. The approach is evaluated through the development of an attribute space representation for single unit conflict. An efficient and accurate predictive model is developed corresponding to a simulation of a common RTS combat representation employing the typical attributes of range, speed, health and damage. This attribute space is used to relate the properties of each attribute to the resulting game balance and to identify new balance points when individual attributes are enhanced.
Maxime Sanselone, Stéphane Sanchez, Cédric Sanza, David Panzoli and Yves Duthen. Control of non-playing characters in a medical learning game with Monte Carlo Tree Search
Abstract: In this paper, we apply the Monte Carlo Tree Search (MCTS) method to control several virtual characters at once in a 3D multi-player learning game. The MCTS is used as a search algorithm to explore a search space where every potential solution reflects a specific state of the game environment. Functions representing the interaction abilities of each character are provided to the algorithm to leap from one state to another. We show that the MCTS algorithm successfully plans the actions of several virtual characters in a synchronized fashion, from the initial state to one or more desirable end states. In addition, we demonstrate the ability of this algorithm to fulfill a specific requirement of a learning game AI: guiding the non-playing characters to follow a predefined and constrained learning scenario.
Swen Gaudl and Joanna J. Bryson. Extended Ramp Goal Module: Low-Cost Behaviour Arbitration for Real-Time Controllers based on Biological Models of Dopamine Cells
Abstract: The current industrial focus in virtual agents and digital games is on complex and realistic systems that more accurately simulate the real world. This trend introduces a multitude of control parameters, generally accompanied by high computational costs. The resulting complexity limits the applicability of AI systems in those domains. A possible solution to this problem is to focus on light-weight, flexible systems which can be simultaneously created, controlled and run in parallel. The resulting systems are able to control individual game characters, scaling up to large numbers of characters and even forming complex social systems. Here we demonstrate our light-weight systems-engineering approach for enriching behaviour arbitration by employing a biomimetic model. The augmentation we provide for existing light-weight action selection systems focuses on easy control of behaviour maintenance, inhibition and switching of high-level goals. This can aid agent design in cases where static priorities and pre-defined arbitration are undesirable. The model underlying our approach is biomimetic, based on neuro-cognitive research on the dopaminergic cells responsible for controlling goal switching and maintenance in the mammalian brain. This promising model is applicable to selection problems with multiple conflicting goals. We evaluate our mechanism against an existing approach as a baseline, according to our stated success and evaluation criteria.
Spyridon Samothrakis, Samuel Roberts, Diego Perez and Simon Lucas. Rolling Horizon methods for Games with Continuous States and Actions
Abstract: It is often the case that games have continuous dynamics and allow for continuous actions, possibly with some added noise. For larger games with complicated dynamics, having agents learn offline behaviours in such a setting is a daunting task. On the other hand, provided a generative model is available, one might try to spread the cost of search/learning in a rolling horizon fashion (e.g. as in Monte Carlo Tree Search). In this paper we compare T-HOLOP (Truncated Hierarchical Open Loop Planning), an open loop planning algorithm at least partially inspired by MCTS, with a version of evolutionary planning that uses CMA-ES (which we call EVO-P) in two planning benchmark problems (Inverted Pendulum and the Double Integrator) and in Lunar Lander, a classic arcade game. We show that EVO-P outperforms T-HOLOP in the classic benchmarks, while T-HOLOP is unable to find a solution using the same heuristics. We conclude that off-the-shelf evolutionary algorithms can be used successfully in a rolling horizon setting, and that different types of heuristics might be needed under different optimisation algorithms.
M.J.W. Tak, Marc Lanctot and Mark H. M. Winands. Monte Carlo Tree Search Variants for Simultaneous Move Games
Abstract: Monte Carlo Tree Search (MCTS) is a widely-used technique for game-tree search in sequential turn-based games. The extension to simultaneous move games, where all players choose moves simultaneously each turn, is non-trivial due to the complexity of this class of games. In this paper, we describe simultaneous move MCTS and analyze its application in a set of nine disparate simultaneous move games. We consider several variants: Decoupled UCT, Sequential UCT, Exp3, and Regret Matching. These variants include both deterministic and stochastic selection strategies, and we characterize the game-play performance of each one. The results indicate that the relative performance of each variant depends strongly on the game and the opponent, and that parameter tuning may not be as straightforward as in the purely sequential case. Overall, Decoupled UCT performs best despite its theoretical shortcomings.
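Of the selection variants compared above, Regret Matching is easily sketched in isolation; the two-action payoffs below are invented for illustration and are not from the paper:

```python
# Illustrative Regret Matching: play each action with probability
# proportional to its accumulated positive regret.

def regret_matching_policy(regrets):
    """Mixed strategy derived from cumulative regrets."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    n = len(regrets)
    if total == 0.0:
        return [1.0 / n] * n  # uniform when no positive regret yet
    return [p / total for p in positive]

def update_regrets(regrets, payoffs, chosen):
    """Regret of each action = its payoff minus the payoff received."""
    for a, u in enumerate(payoffs):
        regrets[a] += u - payoffs[chosen]

regrets = [0.0, 0.0]
payoffs = [1.0, 0.0]              # action 0 would have paid more this round
update_regrets(regrets, payoffs, chosen=1)
policy = regret_matching_policy(regrets)
```

In simultaneous move MCTS each player at a node maintains such regrets over its own moves, which is what makes the resulting selection strategy stochastic.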
Sylvain Labranche, Nicolas Sola, Sophie Callies and Eric Beaudry. Using Partial Satisfaction Planning to Automatically Select NPCs' Goals and Generate Plans in a Simulation Game
Abstract: Controlling and coordinating a large number of Non-Playable Characters (NPCs) is an important challenge in video games. To obtain realistic behaviour, traditional approaches require hand-written rule-based scripts or finite state machines. In the last decade, a new approach to artificial intelligence has emerged: more and more game developers are considering planning as a way to model autonomous agents. However, the gap between theory and application is still far from closed. In this paper, we address different approaches to i) goal distribution and selection for autonomous planning agents by a coordination module and ii) autonomous goal selection by planning agents using partial satisfaction planning. Our techniques result in a simpler way to model NPCs that remains effective in terms of CPU usage and behaviour.
Ibrahim Mahmoud, Lianchao Li, Dieter Wloka and Mostafa Ali. Believable NPCs in Serious Games: HTN Planning Approach Based on Visual Perception
Abstract: Toward the creation of virtual autonomous characters that are lifelike, intelligent and convey empathy, a strategic planning architecture is presented in the domain of serious games, where games are designed and built for a primary educational purpose other than pure entertainment. Our serious game is in the domain of emergency management: firefighters and first responders in accident situations. Two contributions are presented in this paper. First, a visual perception system for non-player characters (NPCs), along with a short-term (working) memory, is implemented so that NPCs have access to only limited information and must build their plans and make their decisions based on what they can perceive from their surrounding environment. Second, an HTN planning approach based on SHOP is implemented for the domain of our serious game. Five modules were developed in this part: Controller, World Model, Domain, Interface and HTN Planner. The plan is generated online based on the information gathered from the visual perception system and stored in the working memory. Our experiments show the effectiveness of the proposed approach in creating believable NPC behavior.
Nick Sephton, Peter Cowling and Edward Powley. Heuristic Move Pruning in Monte Carlo Tree Search for the Strategic Card Game Lords of War
Abstract: Move pruning is a technique used in game tree search which incorporates heuristic knowledge to reduce the number of moves under consideration from a particular game state. This paper investigates heuristic move pruning in the strategic card game Lords of War. We use heuristics to guide our pruning, experimenting with different techniques for applying pruning and comparing their relative effectiveness. We also present a technique of artificially rolling forward a game state in an attempt to more accurately determine which moves are appropriate to prune from the decision tree. We demonstrate that heuristic move pruning is effective in Lords of War, and also that artificially rolling forward the game state can increase the effectiveness of heuristic move pruning.
Vanus Vachiratamporn, Koichi Moriyama, Ken-Ichi Fukui and Masayuki Numao. An Implementation of Affective Adaptation in Survival Horror Games
Abstract: In this research, we investigated two important aspects of affective gaming, the recognition of the player's affective states and the adaptation of the game based on the player's current affective state, using survival horror games as an experimental environment. In previous work, we analyzed the affective states of players collected through our own affect annotation tool during their gameplay and constructed affective-state prediction models that predicted their affective states from their brainwave and heart rate signals collected concurrently. In this work, we developed an affective survival horror game based on the previous findings and conducted another experiment to compare affective and non-affective versions of the game. In the affective version, the timing of horror events was implicitly changed based on the player's affective state. The result showed that the implicit change of timing made no significant difference in the players' evaluations, but the game itself showed good potential for future research in the affective gaming field.
Mateusz Bialas, Shoshannah Tekofsky and Pieter Spronck. Cultural Influences on Play Style
Abstract: In general, video game researchers do not differentiate between players’ nationalities. Cultural theories, however, show that cultural differences concern numerous values, including values associated with interaction with media. We therefore ask whether cross-cultural differences in play style exist. For our investigation we use a large sample database containing Battlefield 3 game statistics. Hofstede’s cultural dimensions theory was used to construct three game style categories in which players are most likely to exhibit cultural differences: competitiveness, cooperation, and tactical choices. Using ANOVA tests, we found clear differences between the play styles of players of different nationalities in the competitiveness and cooperation categories. MANOVA tests showed that national culture accounts for 5.6% of variance in competitive, and 4.2% in cooperative, play style. Pairwise comparisons showed that in particular German and Swedish players demonstrated cooperative behavior significantly more often than players from the United Kingdom and United States.
Tom Vodopivec and Branko Šter. Enhancing Upper Confidence Bounds for Trees with Temporal Difference Values
Abstract: Upper confidence bounds for trees (UCT) is one of the most popular and generally effective Monte Carlo tree search (MCTS) algorithms. However, in practice it is relatively weak when not aided by additional enhancements. Improving its performance without reducing generality is a current research challenge. We introduce a new domain-independent UCT enhancement based on the theory of reinforcement learning. Our approach estimates state values in the UCT tree by employing temporal difference (TD) learning, which is known to outperform plain Monte Carlo sampling in certain domains. We present three variants of a new method that adapts the TD(λ) algorithm to the tree policy and backpropagation step. Experimental evaluations on four games (Gomoku, Hex, Connect Four, and Tic Tac Toe) reveal that our approach increases UCT’s level of play comparably to the rapid action value estimation (RAVE) enhancement. Furthermore, it proves highly compatible with a modified all moves as first heuristic, where it considerably outperforms RAVE. The findings suggest that integration of TD learning into MCTS deserves further research, which may form a new class of MCTS enhancements.
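The core idea above — backing up temporal-difference values along a simulation path instead of a plain Monte Carlo average — can be sketched as a TD(0) backward pass; the states and learning rate are illustrative assumptions, not the authors' settings:

```python
# Illustrative TD(0) backup over an MCTS simulation path: each state
# bootstraps on its successor's value instead of averaging raw outcomes.

def td_backup(values, path, reward, alpha=0.1, gamma=1.0):
    """One backward pass from the terminal outcome to the root.

    values: dict state -> current value estimate
    path: visited states, root first
    reward: terminal outcome of the simulation
    """
    next_value = reward  # value of what follows the last state
    for state in reversed(path):
        v = values.get(state, 0.0)
        values[state] = v + alpha * (gamma * next_value - v)
        next_value = values[state]

values = {}
td_backup(values, ["root", "child"], reward=1.0)
```

The paper's TD(λ) variants generalize this by interpolating between such bootstrapped targets and the plain Monte Carlo return, and feed the resulting state values into the tree policy.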
Chong-U Lim and D. Fox Harrell. An Approach to General Videogame Evaluation and Automatic Generation using a Description Language
Abstract: In this paper, we present an approach for automated evaluation and generation of videogames made with PuzzleScript, a description-based scripting language for authoring games, which was created by game designer Stephen Lavelle. We have developed a system that automatically discovers solutions for a multitude of videogames that each possess different game mechanics, rules, level designs, and win conditions. In our approach, we first developed a set of general level state heuristics, which estimate how close a given game level is to being solved. These are used to adapt the best-first search algorithm into a general evaluation approach for PuzzleScript games called GEBestFS. Next, we developed an evolutionary framework that automatically generates novel game mechanics from scratch by evolving game design rulesets and evaluating them using GEBestFS. This was achieved by developing a set of general ruleset heuristics to assess the playability of a game based on its game mechanics. From the results of our approach, we showcase that a description-based language enables the development of general methods for automatically evaluating games authored with it. Additionally, we illustrate how an evolutionary approach can be used together with these methods to automatically design alternate or novel game mechanics for authored games.
David Lupien St-Pierre and Olivier Teytaud. The Nash and the Bandit Approaches for Adversarial Portfolios
Abstract: In this paper we study the use of a portfolio of policies for adversarial problems. We use two different portfolios of policies and apply them to the game of Go. The first portfolio is composed of different versions of the GnuGo agent. The second portfolio is composed of fixed random seeds, which create a fully observable game. First, we demonstrate that learning an offline combination of these policies using the notion of Nash Equilibrium generates a stronger opponent. Second, we show that we can learn such a distribution online through a bandit approach. The advantages of our approach are (i) diversity (the Nash-Portfolio is more variable than its components), (ii) adaptivity (the Bandit-Portfolio adapts to the opponent), (iii) simplicity, and (iv) increased performance. Given the importance of games on mobile devices, designing artificial intelligences for small computational power is crucial; our approach is particularly suited to mobile devices since it creates a stronger opponent simply by biasing the distribution over the policies, and moreover it generalizes quite well.
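The online bandit approach over a portfolio can be sketched with a standard UCB1 selector; the reward tallies below are invented for illustration and nothing here is the authors' implementation:

```python
import math

# Illustrative UCB1 bandit over a portfolio of fixed policies: each game,
# pick the policy whose empirical win rate plus exploration bonus is highest.

def ucb1_select(counts, rewards, c=1.4):
    """counts[i] = games played by policy i, rewards[i] = total wins."""
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every policy at least once first
        score = rewards[i] / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

first = ucb1_select([0, 0, 0], [0.0, 0.0, 0.0])  # an untried policy
later = ucb1_select([5, 5, 5], [4.0, 2.0, 1.0])  # best empirical policy
```

Against a fixed opponent this concentrates play on whichever portfolio member exploits that opponent best, which is the adaptivity advantage the abstract claims for the Bandit-Portfolio.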
Jonathan Tremblay, Pedro Andrade Torres and Clark Verbrugge. An Algorithmic Approach to Analyzing Combat and Stealth Games
Abstract: Combat and stealth games give players the option of engaging or avoiding enemy agents at different points. Level-design in this context is complex, however, requiring a designer to understand how different design choices impact difficulty under multiple play-styles. In this work we describe a unified algorithmic approach that can perform abstract analysis of both combat and stealth behaviours. Our proposed solution builds on an existing stealth-level analysis tool, incorporating combat activities into the abstraction in order to allow exploration of level feasibility and difficulty. We demonstrate our approach on a non-trivial example level, showing how such a tool can be used to evaluate and control the player experience.
Jiao Jian Wang and Olana Missura. Racing Tracks Improvisation
Abstract: Procedural content generation is a popular technique in game development. One of its typical applications is the generation of game levels. This paper presents a method to generate tracks for racing games by viewing racing track generation as a discrete sequence prediction problem. To solve it we combine two techniques from music improvisation. We show that this method is capable of generating new racing tracks which appear to be interesting to players.
Niels Justesen, Bálint Tillman, Julian Togelius and Sebastian Risi. Script- and Cluster-based UCT for StarCraft
Abstract: Controlling units in real-time strategy (RTS) games is a challenging problem in Artificial Intelligence (AI) as these games are fast-paced with simultaneous moves and massive branching factors. This paper presents two extensions to the algorithm UCT Considering Durations (UCTCD) for finding optimal sequences of actions for units engaged in combat, using the RTS game StarCraft as a test bed. The first extension uses a script-based approach inspired by Portfolio Greedy Search and searches for sequences of scripts instead of actions. The second extension uses a cluster-based approach, assigning scripts to clusters of units based on their type and position. Our results show that both our script-based and cluster-based UCTCD extensions outperform the original UCTCD, winning 100% of games with 32 units or more. Additionally, our results show that unit clustering gives some improvement in large scenarios while it is less effective in small-scale combat. We suggest further research on the behavior and possible variants of the cluster-based approach, which can be applied to many other algorithms similarly to UCT. The algorithms were tested in our StarCraft combat simulator called JarCraft, a complete Java translation of the original C++ package SparCraft, made in hopes of making this research area more accessible.
Pier Luca Lanzi, Daniele Loiacono and Riccardo Stucchi. Evolving Maps for Match Balancing in First Person Shooters
Abstract: Match balancing is one of the most important design issues in the development of an adversarial multiplayer shooter. Therefore, matchmaking algorithms are generally used to build teams such that players can have fun with each other and enjoy the game experience. In this work we approach this problem from a completely different angle: we show that the design of the game content has a large impact on match balancing and that procedural content generation might be a promising approach to improve it. In particular, we present a methodology to evolve maps for Cube 2: Sauerbraten, an open source first person shooter, and to improve the game balancing for specific combinations of player skills and strategies. We tested our approach on three different scenarios that involve players with different skill levels as well as players using completely different weapons. Our results are very promising and show that, in all the scenarios considered, our approach is able to evolve maps that result in a balanced game experience.
Jose-Maria Pena, Javier Viedma Ortiz-Canavate, Santiago Muelas, Antonio Latorre and Luis Pena. Designer-driven 3D Buildings Generated Using Variable Neighborhood Search
Abstract: This paper presents a mechanism to generate virtual buildings for action or RPG games, taking designer constraints and guidelines into account. This mechanism is implemented as a pipeline of different Variable Neighborhood Search (VNS) optimization processes in which several subproblems are tackled: (1) room locations, (2) the connectivity graph, and (3) element placement. The core VNS algorithm includes some variants to improve its performance, such as constraint handling and biased operator selection. The optimization process uses a toolkit of construction primitives implemented as “smart objects” providing basic elements such as rooms, doors, staircases and other connectors. The paper also shows experimental results of applying different designer constraints to a wide range of buildings, from small houses to a large castle with several underground levels.
Marc Lanctot, Mark H. M. Winands, Tom Pepels and Nathan R. Sturtevant. Monte Carlo Tree Search with Heuristic Evaluations using Implicit Minimax Backups
Abstract: Monte Carlo Tree Search (MCTS) has improved the performance of game-playing engines in domains such as Go, Hex, and general game playing. MCTS has been shown to outperform classic alpha-beta search in games where good heuristic evaluations are difficult to obtain. In recent years, combining ideas from traditional minimax search with MCTS has been shown to be advantageous in some domains, such as Lines of Action, Amazons, and Breakthrough. In this paper, we propose a new way to use heuristic evaluations to guide the MCTS search by storing the two sources of information, estimated win rates and heuristic evaluations, separately. Rather than using the heuristic evaluations to replace the playouts, our technique backs them up implicitly during its MCTS simulations. These learned evaluation values are then used to guide future simulations. Compared to current techniques, we show that using implicit minimax backups leads to stronger play performance in Breakthrough, Lines of Action, and Kalah.
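A minimal sketch of the implicit minimax idea described above — each node carrying both a Monte Carlo win rate and a separately backed-up heuristic minimax value, combined during selection — under assumed heuristic values and a fixed mixing weight (both our assumptions, not the paper's):

```python
# Illustrative implicit minimax backups: win-rate statistics and heuristic
# minimax values are stored separately and mixed only when selecting moves.

class Node:
    def __init__(self, heuristic):
        self.wins = 0.0
        self.visits = 0
        self.minimax = heuristic   # implicitly backed-up heuristic value
        self.children = []

def backup(path, outcome):
    """Update both sources of information along a simulation path."""
    for node in reversed(path):
        node.visits += 1
        node.wins += outcome
        if node.children:
            # Negamax backup of the heuristic values (alternating players).
            node.minimax = max(-c.minimax for c in node.children)

def combined_value(node, alpha=0.4):
    """Selection value: (1 - alpha) * win rate + alpha * minimax value."""
    q = node.wins / node.visits if node.visits else 0.0
    return (1.0 - alpha) * q + alpha * node.minimax

root = Node(heuristic=0.0)
root.children = [Node(-0.2), Node(0.5)]   # assumed child heuristic values
backup([root], outcome=1.0)
value = combined_value(root)
```

Keeping the two estimates separate is the point: the heuristic sharpens selection without being drowned out by (or overwriting) the playout statistics.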
Siming Liu, Sushil Louis and Christopher Ballinger. Evolving Effective Micro Behaviors in RTS Game
Abstract: We investigate using genetic algorithms to generate high quality micro management in combat scenarios for real-time strategy games. Macro and micro management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers and types of opponent units, or win even when outnumbered. In this paper, we use influence maps and potential fields to generate micro management positioning and movement tactics. Micro behaviors are compactly encoded into fourteen parameters, and we use genetic algorithms to search for effective micro management tactics for the given units. We tested the performance of our ECSLBot (the evolved player) obtained in this way against the default StarCraft AI and two other state-of-the-art bots, UAlbertaBot and Nova, on several skirmish scenarios. The results show that the ECSLBot tuned by genetic algorithms outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and knowing when to flee to survive. We believe our approach is easy to extend to other types of units and can be easily adopted by other AI bots.
Christopher Ballinger and Sushil Louis. Learning Robust Build-Orders from Previous Opponents with Coevolution
Abstract: Learning robust, winning strategies from previous opponents in Real-Time Strategy games presents a challenging problem. In this paper, we investigate this problem by using case-injection into the teachset and population of a coevolutionary algorithm. Specifically, we take several winning build-orders we created through hand-tuning or coevolution and periodically inject them into the coevolutionary population and/or teachset. We compare the robustness of build-orders produced by three different case-injection methods to that of build-orders produced without case-injection, and measure their similarity to all the injected cases. Our results show that case injection works well with a coevolutionary algorithm. Case injection into the population quickly influences the strategies to play like some of the injected cases, without losing robustness. This work informs our ongoing research on finding robust build-orders for real-time strategy games.
Rafet Sifa, Christian Bauckhage and Anders Drachen. The Playtime Principle: Large-scale Cross-games Interest Modeling
Abstract: The collection and analysis of behavioral telemetry in digital games has in the past five years become an integral part of game development. One of the key challenges in game analytics is the development of methods for characterizing and predicting player behavior as it evolves over time. Characterizing behavior is necessary for monitoring player populations and gradually improving game design and the playing experience. Predicting behavior is necessary to describe player engagement and prevent future player churn. In this paper, methods and theory from kernel archetype analysis and random process models are utilized to evaluate the playtime behavior, i.e. time spent playing specific games as a function of time, of over 6 million players, across more than 3000 PC and console games from the Steam platform, covering a combined playtime of more than 5 billion hours. A number of conclusions can be derived from this large-scale analysis, notably that playtime as a function of time, across the thousands of games in the dataset, and irrespective of local differences in the playtime frequency distribution, can be modeled using the same model: the Weibull distribution. This suggests that there are fundamental properties governing player engagement as it evolves over time, which we here refer to as the Playtime Principle. Additionally, the analysis shows that there are distinct clusters, or archetypes, in the playtime frequency distributions of the investigated games. These archetypal groups correspond to specific playtime distributions. Finally, the analysis reveals information about player behavior across a very large dataset, showing for example that the vast majority of games are played for less than 10 hours, and very few players spend more than 30-35 hours on any specific game.
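The Weibull model of playtime can be illustrated with its closed-form CDF; the shape and scale parameters below are invented for the example and were not fitted to the Steam dataset:

```python
import math

# Illustrative only: modeling playtime with a Weibull distribution, as the
# abstract above proposes. Parameter values are assumptions for the sketch.

def weibull_cdf(t, shape, scale):
    """P(playtime <= t hours) under a Weibull(shape, scale) model."""
    return 1.0 - math.exp(-((t / scale) ** shape))

shape, scale = 0.5, 4.0   # shape < 1: most players quit early (assumed)
frac_under_10h = weibull_cdf(10.0, shape, scale)
```

With a shape parameter below 1 the hazard of quitting is highest at the start of play, which is consistent with the abstract's observation that most games are played for under 10 hours.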
Marcin Szubert and Wojciech Jaśkowski. Temporal Difference Learning of N-Tuple Networks for the Game 2048
Abstract: The highly addictive stochastic puzzle game 2048 has recently invaded the Internet and mobile devices, stealing countless hours of players’ lives. In this study we investigate the possibility of creating a game-playing agent capable of winning this game without incorporating human expertise or performing game tree search. For this purpose we employ three variants of temporal difference learning to acquire i) action value, ii) state value, or iii) afterstate value functions, which are used to evaluate moves at 1-ply. To represent these functions we adopt n-tuple networks, which have recently been successfully applied to Othello and Connect 4. The conducted experiments demonstrate that the learning algorithm using afterstate value functions is able to consistently produce players winning over 97% of games. These results show that n-tuple networks combined with an appropriate learning algorithm have large potential, which could be exploited in other board games.
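An n-tuple network value function of the kind applied above to 2048 can be sketched as lookup tables indexed by the tiles each tuple covers, trained by a TD update; the board encoding, tuple shapes, and learning rate are illustrative assumptions:

```python
# Illustrative n-tuple network: the position's value is the sum of lookup
# table entries, one table per tuple, indexed by the covered tile values.

def tuple_index(board, cells, base=16):
    """Encode the tile exponents at `cells` as one lookup-table index."""
    index = 0
    for r, c in cells:
        index = index * base + board[r][c]
    return index

def evaluate(board, tuples, tables):
    """Value estimate: sum of each tuple's table entry for this board."""
    return sum(tables[i].get(tuple_index(board, cells), 0.0)
               for i, cells in enumerate(tuples))

def td_update(board, tuples, tables, target, alpha=0.01):
    """TD step: share the value error equally among contributing weights."""
    error = target - evaluate(board, tuples, tables)
    for i, cells in enumerate(tuples):
        idx = tuple_index(board, cells)
        tables[i][idx] = tables[i].get(idx, 0.0) + alpha * error / len(tuples)

board = [[1, 2], [0, 3]]                       # tile exponents: 2, 4, empty, 8
tuples = [[(0, 0), (0, 1)], [(1, 0), (1, 1)]]  # assumed row tuples
tables = [dict(), dict()]
td_update(board, tuples, tables, target=1.0, alpha=0.5)
```

In the afterstate variant the paper finds strongest, the target for this update is the reward of a move plus the value of the board after the move but before the random tile spawn.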
Mike Preuss, Antonios Liapis and Julian Togelius. Searching for Good and Diverse Game Levels
Abstract: In procedural content generation, one is often interested in generating a large number of game content artefacts that are not only good but also diverse, in terms of gameplay, visual impression or some other criterion. We investigate several search-based approaches to creating good and diverse game content, in particular approaches based on evolution strategies with or without diversity preservation mechanisms, novelty search and random search. The content domain is game levels, in particular map sketches for strategy games, which are meant to be used as suggestions in the Sentient Sketchbook design tool. Several diversity metrics are possible for this type of content: we investigate tile-based, objective-based and visual impression distance. We find that evolution with diversity preservation mechanisms can produce both good and diverse content, but only when using appropriate distance measures.
Steve Dahlskog and Julian Togelius. A Multi-level Level Generator
Abstract: Generating content at multiple levels of abstraction simultaneously is an open challenge in procedural content generation, as is representing and automatically replicating the style of a human designer. This paper addresses both of these challenges through extending a previously devised methodology for pattern-based level generation. Concretely, we analyse Super Mario Bros levels into three abstraction levels: micro-, meso- and macro-patterns. Micro-patterns are used as building blocks in a search-based PCG approach that searches for macro-patterns, which are defined as combinations of meso-patterns. Results show that we can successfully generate levels that replicate the macro-patterns of selected input levels, and we argue that this constitutes an approach to automatically analysing and replicating style in level design.
Jeffrey Tsang. Comparing the Structure of Probabilistic 4- and 8-state Finite Transducer Representations for Prisoner’s Dilemma
Abstract: The fingerprint is a mathematical tool that turns a game-playing strategy into a representation-independent functional summary of its behaviour; several studies have used this to enable massive computational analyses of entire spaces of strategies. This study extends further by directly comparing two large (on the order of 10^96) state spaces, grids over the probabilistic 4- and 8-state finite transducers, as representations for playing iterated Prisoner’s Dilemma. We take uniformly random samples of size 65,536 from each, fingerprint each strategy in both samples, and compute all pairwise distances within each sample. Hierarchical clustering reduces each distance matrix to size 16,384 for embedding into Euclidean space with multidimensional scaling. Results indicate that several important dimensions are strongly preserved between the representations, and we can quantify them. We additionally find an important theoretical construct: two strategies that are not identical in behaviour but share the same fingerprint, Random and periodic (CDDC).
Christoffer Holmgård, Antonios Liapis, Julian Togelius and Georgios N. Yannakakis. Evolving Personas for Player Decision Modeling
Abstract: This paper explores how evolved game playing agents can be used to represent a priori defined archetypical ways of playing a test-bed game, as procedural personas. The end goal of such procedural personas is to substitute for players when authoring game content manually, procedurally, or both (in a mixed-initiative setting). Building on previous work, we compare the performance of newly evolved agents to agents trained via Q-learning as well as a number of baseline agents. Comparisons are performed on the grounds of game playing ability, generalizability, and conformity among agents. Finally, all agents' decision making styles are matched to the decision making styles of human players in order to investigate whether the different methods can yield agents who mimic or differ from human decision making in similar ways. The experiments performed in this paper indicate that agents developed from a priori defined objectives can express human decision making styles and that they are more generalizable and versatile than Q-learning and hand-crafted agents.
Lucas Ferreira and Claudio Toledo. A Search-based Approach for Generating Angry Birds Levels.
Abstract: This paper presents a genetic algorithm (GA) for procedural generation of levels in the Angry Birds game. The GA evaluates levels with a simulation that measures the movement of level elements over a period of time; the algorithm minimizes this metric to generate stable structures. Restrictions integrated into the evaluation also lead levels to exhibit particular characteristics. Since no open source code of the game is available, we developed a game clone that is independent of our algorithm and can be used to support experiments with PCG methods for this type of game. We performed experiments to evaluate the expressivity of the level generator, and the results show that the proposed algorithm generates levels with interesting stable structures.
Jeffrey Tsang. Applying Fingerprint Multilateration to Population Dynamics in Prisoner’s Dilemma Simulations
Abstract: By combining two techniques: fingerprinting, which turns a game-playing strategy into a representation-independent functional summary of its behaviour, and multilateration, which finds the location of a point in space using measured distances to a known set of anchor points, we develop a tool for automatically analyzing evolved Prisoner’s Dilemma agents. Using as anchor points the space of all deterministic 2-state transducers, we can place arbitrary strategies into 7-dimensional real space by numerical integration and solving linear equations, which is fast enough to be usable online. We present several proof-of-concept visualizations, including individual strategies in populations, and population trajectories over time, which are now accessible.
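The multilateration step the abstract mentions, locating a point from measured distances to known anchor points by solving linear equations, can be sketched as follows. This is a generic illustration under our own assumptions (made-up anchor coordinates, our own function name); the paper's anchors are fingerprints of deterministic 2-state transducers and its distances are functional, not the toy Euclidean ones used here.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Recover a point's coordinates from distances to known anchors.

    Subtracting the first equation ||x - a_0||^2 = d_0^2 from each
    ||x - a_i||^2 = d_i^2 cancels the quadratic term, leaving the
    linear system 2(a_i - a_0).x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2,
    solved here in a least-squares sense.
    """
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With more anchors than dimensions, as in the paper's 7-dimensional placement, the least-squares solution also absorbs small errors in the measured distances.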
Luca Galli, Pier Luca Lanzi and Daniele Loiacono. Applying Data Mining to Extract Design Patterns from Unreal Tournament Levels
Abstract: We present a study on the application of data mining to extract design patterns from Unreal Tournament III levels used in the online gaming scene for Duel and Team Deathmatch games. The maps' topological structure and morphology were extracted using ad-hoc bots we developed, and several statistics were computed using typical graph algorithms. The process resulted in datasets containing information about all the relevant positions in the maps (the nodes in the game navigation mesh used in the Unreal game engine) and their role in the game (i.e., whether they are navigation points, ammo pickups, weapon pickups, or powerup pickups). We applied four data mining algorithms to the data to characterize both (i) the maps' type (Duel or Team Deathmatch) based on the features of the nodes they contain and (ii) the node types (ammo, weapon, powerup, or navigation) based on their features. Our results suggest that the map type can be characterized in terms of the nodes it contains, but it is difficult to characterize the role of nodes based solely on their features.