PR Jordan, LJ Schvartzman, and MP Wellman

Ninth International Conference on Autonomous Agents and Multiagent Systems, pages 1131–1138, May 2010.

Copyright (c) 2010, IFAAMAS.

Abstract

Empirical analyses of complex games necessarily focus on a restricted set of strategies, and thus the value of empirical game models depends on effective methods for selectively exploring a space of strategies. We formulate an iterative framework for strategy exploration, and experimentally evaluate an array of generic exploration policies on three games: one infinite game with known analytic solution, and two relatively large empirical games generated by simulation. Policies based on iteratively finding a beneficial deviation or best response to the minimum-regret profile among previously explored strategies perform generally well on the profile-regret measure, although we find that some stochastic introduction of suboptimal responses can often lead to more effective exploration in early stages of the process. A novel formation-based policy performs well on all measures by producing low-regret approximate formations earlier than the deviation-based policies.
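The exploration loop the abstract describes can be illustrated in a few lines. The following is a minimal sketch under stated assumptions, not the paper's implementation: the toy first-price-auction payoff, the restriction to pure profiles of a symmetric two-player game, and the names `payoff`, `regret`, `min_regret_profile`, and `explore` are all illustrative inventions; the paper's analysis solves for mixed-strategy solutions of simulation-based games.

```python
import itertools
import random


def payoff(s, t):
    """Toy symmetric two-player payoff (an assumption, not one of the
    paper's games): strategies are bids in [0, 1]; the higher bid wins
    a unit prize at its own bid, and ties split the surplus."""
    if s == t:
        return (1.0 - s) / 2.0
    return 1.0 - s if s > t else 0.0


def regret(profile, strategies):
    """Regret of pure profile (s, t) in the restricted game: the most
    either player could gain by deviating to an explored strategy."""
    s, t = profile
    gain_row = max(payoff(d, t) for d in strategies) - payoff(s, t)
    gain_col = max(payoff(d, s) for d in strategies) - payoff(t, s)
    return max(gain_row, gain_col)


def min_regret_profile(strategies):
    """Profile of explored strategies with minimum regret."""
    return min(itertools.product(strategies, repeat=2),
               key=lambda p: regret(p, strategies))


def explore(initial, candidates, rounds, epsilon=0.1):
    """Iteratively add a best response to the current minimum-regret
    profile; with probability epsilon inject a random candidate,
    echoing the abstract's stochastic suboptimal responses."""
    explored = set(initial)
    for _ in range(rounds):
        s, t = min_regret_profile(explored)
        if random.random() < epsilon:
            new = random.choice(candidates)
        else:
            new = max(candidates, key=lambda d: payoff(d, t))
        explored.add(new)
    return explored, min_regret_profile(explored)
```

For example, `explore(initial=[0.0, 1.0], candidates=[i / 10 for i in range(11)], rounds=5)` grows the restricted strategy set by one (mostly best) response per iteration; the returned minimum-regret profile is the restricted game's current solution estimate.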

This substantially extends (and supersedes) a previous version.
