TH Nguyen, Y Wang, A Sinha, and MP Wellman

33rd AAAI Conference on Artificial Intelligence, Jan/Feb 2019.

Abstract

Allocating resources to defend targets from attack is often complicated by uncertainty about the attacker’s capabilities, objectives, or other underlying characteristics. In a repeated interaction setting, the defender can collect attack data over time to reduce this uncertainty and learn an effective defense. However, a clever attacker can manipulate the attack data to mislead the defender, influencing the learning process toward its own benefit. We investigate strategic deception on the part of an attacker with private type information who interacts repeatedly with a defender. We present a detailed computation and analysis of both players’ optimal strategies given that the attacker may play deceptively. Computational experiments illuminate the conditions conducive to strategic deception and quantify the benefits to the attacker. By taking into account the attacker’s deception capacity, the defender can significantly mitigate its loss from misleading attack actions.
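The following is a minimal illustrative sketch, not the paper’s computation: it shows how a defender’s Bayesian belief over attacker types can be steered when a deceptive attacker mimics another type’s attack pattern. The two types, their target preferences, and the mimicry rule are all assumptions introduced for illustration.

```python
"""
Illustrative sketch (assumed setup, not the paper's algorithm):
a defender maintains a Bayesian posterior over two hypothetical attacker
types and covers the target the inferred attacker values most. A deceptive
type-A attacker samples attacks from type B's signature to steer the
defender's belief toward type B.
"""
import numpy as np

rng = np.random.default_rng(0)

N_TARGETS = 3
# Hypothetical per-type attack distributions over targets (the type "signature").
TYPE_ATTACK_DIST = {
    "A": np.array([0.7, 0.2, 0.1]),   # type A strongly prefers target 0
    "B": np.array([0.1, 0.2, 0.7]),   # type B strongly prefers target 2
}

def defender_allocation(posterior):
    """Cover the target with the highest posterior-weighted attack probability."""
    expected_attack = sum(p * TYPE_ATTACK_DIST[t] for t, p in posterior.items())
    return int(np.argmax(expected_attack))

def update_posterior(posterior, attacked_target):
    """Bayes update of the defender's belief given the observed attack."""
    unnorm = {t: posterior[t] * TYPE_ATTACK_DIST[t][attacked_target] for t in posterior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

def attacker_move(true_type, deceive):
    """A deceptive type-A attacker attacks as if it were type B."""
    dist = TYPE_ATTACK_DIST["B"] if (true_type == "A" and deceive) else TYPE_ATTACK_DIST[true_type]
    return int(rng.choice(N_TARGETS, p=dist))

posterior = {"A": 0.5, "B": 0.5}   # uniform prior over the two attacker types
for round_ in range(10):
    covered = defender_allocation(posterior)
    attacked = attacker_move(true_type="A", deceive=True)
    posterior = update_posterior(posterior, attacked)
    print(f"round {round_}: covered target {covered}, attack on {attacked}, "
          f"P(type=A)={posterior['A']:.2f}")
```

Running the loop, the defender’s posterior on type A drops toward zero and its coverage shifts away from target 0, leaving the deceptive attacker’s preferred target undefended; the paper’s analysis of optimal play under possible deception addresses exactly this kind of manipulation.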

Downloads