In the security domain, we are concerned with protecting resources from known or potential adversaries who, we assume, may themselves possess sophisticated reasoning capacity. Viewed this way, the problem is to develop strategies for security games: choosing policies based on assessments of adversaries' capabilities, knowledge, and objectives, while recognizing that they may in turn be reasoning about ours.
Our work addresses scenarios in cyber-security, where both attacker and defender may employ adaptive strategies in complex networked environments.
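The commit-then-best-respond structure underlying much of this work (as in Stackelberg security games) can be illustrated with a minimal sketch. Below, a defender allocates one resource across two targets and the attacker best-responds to the observed coverage, with ties broken in the defender's favor (a strong Stackelberg equilibrium). The payoff matrices and grid-search solver here are hypothetical illustrations, not the models or algorithms from the papers listed; realistic solvers use linear programming rather than grid search.

```python
def solve_sse(def_cov, def_unc, atk_cov, atk_unc, steps=1000):
    """Grid-search a strong Stackelberg equilibrium for a toy 2-target
    security game with a single defender resource.

    def_cov[t] / def_unc[t]: defender payoff if target t is attacked
        while covered / uncovered; atk_cov / atk_unc likewise for the
        attacker. Returns (defender expected utility, coverage on target 0).
    """
    best = None
    for i in range(steps + 1):
        c = i / steps
        x = [c, 1.0 - c]  # coverage probabilities on targets 0 and 1
        # Attacker's expected utility for attacking each target
        atk_eu = [x[t] * atk_cov[t] + (1 - x[t]) * atk_unc[t] for t in (0, 1)]
        # Attacker best-responds; ties broken in the defender's favor (SSE)
        top = max(atk_eu)
        candidates = [t for t in (0, 1) if atk_eu[t] == top]
        def_eu = max(x[t] * def_cov[t] + (1 - x[t]) * def_unc[t]
                     for t in candidates)
        if best is None or def_eu > best[0]:
            best = (def_eu, c)
    return best
```

For instance, with defender payoffs `def_cov=[1, 1]`, `def_unc=[-5, -1]` and attacker payoffs `atk_cov=[-1, -1]`, `atk_unc=[5, 1]` (made-up numbers), the defender's optimal commitment covers the high-value target with probability 0.75, exactly the point where the attacker is indifferent between targets.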
Related Projects and Publications:
- Empirical game-theoretic methods for adaptive cyber-defense
- Deception in Finitely Repeated Security Games
- A Learning and Masking Approach to Secure Learning
- Stackelberg Security Games: Looking Beyond a Decade of Success
- Multi-stage attack graph security games: Heuristic strategies, with empirical game-theoretic analysis
- A Stackelberg game model for botnet data exfiltration
- Adversarial and Uncertain Reasoning for Adaptive Cyber Defense: Building the Scientific Foundation
- SoK: Security and Privacy in Machine Learning
- A Moving Target Defense Approach to Mitigate DDoS Attacks against Proxy-Based Architectures
- Moving Target Defense against DDoS Attacks: An Empirical Game-Theoretic Analysis
- Gradient Methods for Stackelberg Security Games
- Empirical Game-Theoretic Analysis for Moving Target Defense
- Empirical Game-Theoretic Analysis of an Adaptive Cyber-Defense Scenario (Preliminary Report)
- Analyzing Incentives for Protocol Compliance in Complex Domains: A Case Study of Introduction-Based Routing
- Incentivizing responsible networking via introduction-based routing
- Strategic Modeling of Information Sharing Among Data Privacy Attackers