Samuel Barrett's Publications


Ad Hoc Teamwork Modeled with Multi-armed Bandits: An Extension to Discounted Infinite Rewards

Samuel Barrett and Peter Stone. Ad Hoc Teamwork Modeled with Multi-armed Bandits: An Extension to Discounted Infinite Rewards. In Tenth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (ALA), May 2011.

Download

[PDF]136.1kB  

Abstract

Before deployment, agents designed for multiagent team settings are commonly developed together or are given standardized communication and coordination protocols. However, in many cases this pre-coordination is not possible because the agents do not know what agents they will encounter, resulting in ad hoc team settings. In these problems, the agents must learn to adapt and cooperate with each other on the fly. We extend existing research on ad hoc teams, providing theoretical results for handling cooperative multi-armed bandit problems with infinite discounted rewards.
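The setting the abstract describes, a known "teacher" teammate and a learning agent jointly pulling arms whose payoffs are discounted over an infinite horizon, can be illustrated with a minimal sketch. This is not the paper's algorithm or analysis; the `GreedyLearner`, Gaussian arm model, and truncated horizon are all illustrative assumptions.

```python
import random

def discounted_return(rewards, gamma=0.95):
    """Discounted sum: sum over t of gamma^t * r_t (truncated horizon)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

class GreedyLearner:
    """Hypothetical learner: tracks empirical arm means, and also learns
    from observing its teammate's pulls (the ad hoc teamwork element)."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms

    def observe(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

    def choose(self):
        # Try each arm once, then act greedily on empirical means.
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm
        means = [s / c for s, c in zip(self.sums, self.counts)]
        return means.index(max(means))

def run_episode(arm_means, horizon=200, gamma=0.95, seed=0):
    rng = random.Random(seed)
    teacher_best = arm_means.index(max(arm_means))  # teacher knows the arms
    learner = GreedyLearner(len(arm_means))
    rewards = []
    for _ in range(horizon):
        # The teacher pulls its known-best arm; the learner observes the
        # outcome, which speeds up its own estimates.
        tr = rng.gauss(arm_means[teacher_best], 1.0)
        learner.observe(teacher_best, tr)
        # The learner then pulls the arm it currently believes is best.
        arm = learner.choose()
        lr = rng.gauss(arm_means[arm], 1.0)
        learner.observe(arm, lr)
        rewards.append(tr + lr)
    return discounted_return(rewards, gamma)
```

Because rewards are discounted by `gamma**t`, early pulls dominate the team's return, which is why observing a knowledgeable teammate's pulls is valuable: it shortens the learner's exploration phase precisely when mistakes are most costly.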

BibTeX

@InProceedings{AAMAS11-ALA-Barrett,
  author = {Samuel Barrett and Peter Stone},
  title = {Ad Hoc Teamwork Modeled with Multi-armed Bandits: An Extension to Discounted Infinite Rewards},
  booktitle = {Tenth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (ALA)},
  location = {Taipei, Taiwan},
  month = {May},
  year = {2011},
  abstract={
    Before deployment, agents designed for multiagent team settings are commonly
    developed together or are given standardized communication and coordination
    protocols. However, in many cases this pre-coordination is not possible
    because the agents do not know what agents they will encounter, resulting in
    ad hoc team settings. In these problems, the agents must learn to adapt and
    cooperate with each other on the fly. We extend existing research on ad hoc
    teams, providing theoretical results for handling cooperative multi-armed
    bandit problems with infinite discounted rewards.
  }
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Nov 10, 2022 23:47:08