M Gatchel and B Wiedenbeck
22nd International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 1044–1052, June 2023.
Abstract
To understand the impact of parameters in strategic environments, typical game-theoretic analysis involves selecting a small set of representative values, then constructing and analyzing a separate game model for each value. We introduce a novel technique to learn a single model representing a family of closely related games that differ in the number of symmetric players or other ordinal environment parameters. Prior work trains a multi-headed neural network to output mixed-strategy deviation payoffs, which can be used to compute symmetric ε-Nash equilibria. We extend this work by treating environment parameters as additional input dimensions of the regressor, enabling a single model to learn patterns that generalize across the parameter space. For both continuous and discrete parameters, our results show that these generalized models outperform existing approaches, achieving better accuracy with roughly half as much data. This technique makes thorough analysis of the parameter space more tractable and promotes analyses that capture relationships between parameters and incentives.
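To illustrate the parameterized-regressor idea, the sketch below shows a deviation-payoff network whose input is a symmetric mixed-strategy profile concatenated with environment parameters such as the number of players, with one output head per strategy. It is written in PyTorch purely for concreteness; the layer sizes, parameter encoding, and class names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a deviation-payoff regressor whose inputs include
# environment parameters (e.g., number of players), so one model can cover
# a whole family of closely related symmetric games.
import torch
import torch.nn as nn

NUM_STRATEGIES = 5   # assumed strategy count for the symmetric game family
NUM_ENV_PARAMS = 1   # e.g., the number of players, as a single extra input

class DeviationPayoffNet(nn.Module):
    def __init__(self, num_strategies=NUM_STRATEGIES, num_env_params=NUM_ENV_PARAMS):
        super().__init__()
        # Input: mixed-strategy profile (a point on the simplex) plus the
        # environment parameters appended as extra input dimensions.
        self.net = nn.Sequential(
            nn.Linear(num_strategies + num_env_params, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            # One output per strategy: the payoff for deviating to that
            # strategy when all opponents play the input mixture.
            nn.Linear(64, num_strategies),
        )

    def forward(self, mixture, env_params):
        return self.net(torch.cat([mixture, env_params], dim=-1))

# Example query: deviation payoffs under a uniform mixture in a 10-player game.
model = DeviationPayoffNet()
mixture = torch.full((1, NUM_STRATEGIES), 1.0 / NUM_STRATEGIES)
env_params = torch.tensor([[10.0]])      # assumed encoding of the player count
dev_payoffs = model(mixture, env_params)  # shape: (1, NUM_STRATEGIES)
```

Once such a model is trained on simulated payoff data drawn across parameter settings, symmetric ε-Nash equilibrium candidates at any queried parameter value can be checked by comparing each strategy's predicted deviation payoff against the expected payoff of the candidate mixture.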