Explanations for Multi-Agent Reinforcement Learning

 
Abstract: 

In recent years, research on multi-agent reinforcement learning (MARL) has grown significantly, enabling sequential decision making for a range of exciting multi-agent applications such as cooperative AI and autonomous driving. However, most of these systems function as black boxes, which can lead to user misunderstanding and misuse. Generating explanations of agent decisions is crucial for improving system transparency, increasing user satisfaction, and facilitating human-agent collaboration. Yet existing work on explainable reinforcement learning (xRL) focuses mostly on the single-agent setting. This dissertation focuses on generating explanations for MARL. Our contributions include methods for policy summarization, query-based explanations, and temporal explanations for centralized MARL, whose effectiveness we evaluate through computational experiments and user studies. For decentralized MARL, we likewise develop methods for policy summarization and query-based explanations, evaluating them with computational experiments and user studies. Additionally, we study the presentation of explanations, exploring augmented reality and condensing techniques to better support users. Lastly, we examine the intersection of explainable AI and law by defining foreseeability for autonomous systems and identifying potential legal and technical challenges.

Committee:  

  • Seongkook Heo, Committee Chair (CS/SEAS/UVA)
  • Lu Feng, Advisor (CS, SIE/SEAS/UVA)
  • Tariq Iqbal (SYS, CS/SEAS/UVA)
  • Nicola Bezzo (SIE, ECE/SEAS/UVA)
  • Sarit Kraus (Bar-Ilan University)