Y Wang, Q Ma and MP Wellman
21st International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022 (Forthcoming).
In empirical game-theoretic analysis (EGTA), game models are extended iteratively through a process of generating new strategies based on experience with prior strategies. The strategy exploration problem in EGTA is how to direct this process so as to construct effective models with minimal iteration. A variety of approaches have been proposed in the literature, including methods based on classic techniques and novel concepts. Comparisons among these alternatives can depend sensitively on the criteria adopted and the measures employed. We investigate some of the methodological considerations in evaluating strategy exploration, proposing and justifying new evaluation methods based on examples and experimental observations. In particular, we emphasize that an empirical game defines a space of strategies, and that evaluation should reflect how well this space covers the strategically relevant portion of the full game. On this basis, we argue that the minimum regret constrained profile (MRCP) provides a particularly robust measure for evaluating a space of strategies, and propose a local search method for computing MRCP. However, MRCP computation is not always feasible, especially in large games. To evaluate strategy exploration in large games, we propose a new evaluation scheme that measures the strategic coverage of an empirical game. Specifically, we highlight consistency considerations for comparing across different approaches, and show that violating these considerations can yield misleading conclusions about their relative performance. In accord with the consistency considerations, we propose a profile-selection method that identifies a profile whose regret reflects the strategic coverage of the empirical game. We show that our evaluation scheme reveals the true learning performance of different approaches more faithfully than previous evaluation methods.
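The two quantities the abstract leans on, the regret of a profile and the minimum regret constrained profile (MRCP), can be illustrated concretely. Below is a minimal sketch for a 2-player symmetric normal-form game: `regret` computes how much a player could gain by deviating from a symmetric mixed profile, and `mrcp_grid` approximates the MRCP over a restricted two-strategy set by crude grid search. All names here are illustrative, and the grid search is only a stand-in for the paper's local search method.

```python
import numpy as np

def regret(U, sigma):
    """Regret of symmetric mixed profile sigma in a 2-player symmetric
    game, where U[i, j] is the row player's payoff for i against j."""
    dev = U @ sigma            # payoff of each pure-strategy deviation
    cur = sigma @ U @ sigma    # payoff obtained under sigma itself
    return float(dev.max() - cur)

def mrcp_grid(U, restricted, steps=300):
    """Approximate the minimum-regret constrained profile over a
    restricted set of two strategies {i, j}: the profile supported on
    the restricted set whose regret in the FULL game is smallest.
    (Illustrative grid search; the paper proposes local search.)"""
    i, j = restricted
    best_r, best_sigma = None, None
    for k in range(steps + 1):
        sigma = np.zeros(U.shape[0])
        sigma[i], sigma[j] = k / steps, 1 - k / steps
        r = regret(U, sigma)
        if best_r is None or r < best_r:
            best_r, best_sigma = r, sigma
    return best_r, best_sigma

# Rock-paper-scissors: the uniform profile is the Nash equilibrium,
# so its regret is zero; any restricted space misses some coverage.
rps = np.array([[ 0., -1.,  1.],
                [ 1.,  0., -1.],
                [-1.,  1.,  0.]])
print(regret(rps, np.ones(3) / 3))    # 0.0
print(mrcp_grid(rps, (0, 1))[0])      # minimum regret over {rock, paper}
```

With only {rock, paper} available, the MRCP has positive regret, which is the sense in which MRCP regret measures how well an empirical game's strategy space covers the strategically relevant space: it shrinks to zero as the restricted space comes to support an equilibrium of the full game.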