ResearchSpace

A Bayesian approach for learning and tracking switching, non-stationary opponents

dc.contributor.author Hernandez-Leal, P
dc.contributor.author Rosman, Benjamin S
dc.contributor.author Taylor, ME
dc.contributor.author Sucar, LE
dc.contributor.author de Cote, EM
dc.date.accessioned 2016-07-20T10:59:56Z
dc.date.available 2016-07-20T10:59:56Z
dc.date.issued 2016-02
dc.identifier.citation Hernandez-Leal, P., Rosman, B.S., Taylor, M.E., Sucar, L.E. and de Cote, E.M. 2016. A Bayesian approach for learning and tracking switching, non-stationary opponents. In: Autonomous Agents and Multiagent Systems, 9-13 May 2016, Singapore en_US
dc.identifier.isbn 978-1-4503-4239-1
dc.identifier.uri http://dl.acm.org/citation.cfm?id=2937137
dc.identifier.uri http://hdl.handle.net/10204/8650
dc.description Autonomous Agents and Multiagent Systems, 9-13 May 2016, Singapore. Due to copyright restrictions, the attached PDF file contains only the abstract of the full-text item. For access to the full-text item, please consult the publisher's website. en_US
dc.description.abstract In many situations, agents are required to use a set of strategies (behaviors) and switch among them during the course of an interaction. This work focuses on the problem of recognizing the strategy used by an agent within a small number of interactions. We propose using a Bayesian framework to address this problem. Bayesian policy reuse (BPR) has been empirically shown to be efficient at correctly detecting the best policy to use from a library in sequential decision tasks. In this paper we extend BPR to adversarial settings, in particular, to opponents that switch from one stationary strategy to another. Our proposed extension enables learning new models in an online fashion when the learning agent detects that the current policies are not performing optimally. Experiments presented in repeated games show that our approach is capable of efficiently detecting opponent strategies and reacting quickly to behavior switches, thereby yielding better performance than state-of-the-art approaches in terms of average rewards. en_US
dc.language.iso en en_US
dc.publisher ACM en_US
dc.relation.ispartofseries Workflow;16651
dc.subject Policy reuse en_US
dc.subject Non-stationary opponents en_US
dc.subject Repeated games en_US
dc.title A Bayesian approach for learning and tracking switching, non-stationary opponents en_US
dc.type Conference Presentation en_US
dc.identifier.apacitation Hernandez-Leal, P., Rosman, B. S., Taylor, M., Sucar, L., & de Cote, E. (2016). A Bayesian approach for learning and tracking switching, non-stationary opponents. ACM. http://hdl.handle.net/10204/8650 en_ZA
dc.identifier.chicagocitation Hernandez-Leal, P, Benjamin S Rosman, ME Taylor, LE Sucar, and EM de Cote. "A Bayesian approach for learning and tracking switching, non-stationary opponents." (2016): http://hdl.handle.net/10204/8650 en_ZA
dc.identifier.vancouvercitation Hernandez-Leal P, Rosman BS, Taylor M, Sucar L, de Cote E. A Bayesian approach for learning and tracking switching, non-stationary opponents; ACM; 2016. http://hdl.handle.net/10204/8650 en_ZA
dc.identifier.ris TY - Conference Presentation AU - Hernandez-Leal, P AU - Rosman, Benjamin S AU - Taylor, ME AU - Sucar, LE AU - de Cote, EM AB - In many situations, agents are required to use a set of strategies (behaviors) and switch among them during the course of an interaction. This work focuses on the problem of recognizing the strategy used by an agent within a small number of interactions. We propose using a Bayesian framework to address this problem. Bayesian policy reuse (BPR) has been empirically shown to be efficient at correctly detecting the best policy to use from a library in sequential decision tasks. In this paper we extend BPR to adversarial settings, in particular, to opponents that switch from one stationary strategy to another. Our proposed extension enables learning new models in an online fashion when the learning agent detects that the current policies are not performing optimally. Experiments presented in repeated games show that our approach is capable of efficiently detecting opponent strategies and reacting quickly to behavior switches, thereby yielding better performance than state-of-the-art approaches in terms of average rewards. DA - 2016-02 DB - ResearchSpace DP - CSIR KW - Policy reuse KW - Non-stationary opponents KW - Repeated games LK - https://researchspace.csir.co.za PY - 2016 SM - 978-1-4503-4239-1 T1 - A Bayesian approach for learning and tracking switching, non-stationary opponents TI - A Bayesian approach for learning and tracking switching, non-stationary opponents UR - http://hdl.handle.net/10204/8650 ER - en_ZA
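
For readers who want a concrete feel for the Bayesian policy reuse idea summarised in the abstract above, the Python sketch below keeps a belief over a few opponent models in a repeated game, updates it from observed rewards, and selects the library policy with the best expected return. This is a minimal, hypothetical illustration only: the opponent names, policy names, reward table, noise level, and the "no model fits" reset are all invented assumptions, not the authors' algorithm or the paper's experimental setup.

import numpy as np

# Toy repeated-game setup. Every name and number here is an assumption made
# for illustration; it is not the reward model from the paper.
opponents = ["tit-for-tat", "always-defect", "bully"]      # candidate opponent strategies
policies = ["cooperate-first", "always-defect", "probe"]   # our policy library

# Assumed expected reward E[r | opponent o, policy pi]; rows = opponents, cols = policies.
mean_reward = np.array([
    [3.0, 1.0, 2.0],
    [0.0, 1.0, 0.5],
    [1.0, 2.5, 1.5],
])
reward_std = 0.5  # assumed Gaussian noise on observed rewards


def update_belief(belief, policy_idx, observed_reward):
    """BPR-style Bayes update: reweight each opponent hypothesis by how well it
    explains the reward just observed under the policy we played."""
    likelihood = np.exp(
        -0.5 * ((observed_reward - mean_reward[:, policy_idx]) / reward_std) ** 2
    )
    posterior = belief * likelihood
    total = posterior.sum()
    if total < 1e-12:
        # No known model explains the data: in the spirit of the paper this is
        # where a new opponent model would be learned online; here we only
        # reset to a uniform belief and flag the event.
        return np.ones_like(belief) / belief.size, True
    return posterior / total, False


def select_policy(belief):
    """Pick the library policy with the highest expected reward under the belief."""
    return int(np.argmax(belief @ mean_reward))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    belief = np.ones(len(opponents)) / len(opponents)  # uniform prior over opponent models
    true_opponent = 0
    for t in range(20):
        if t == 10:
            true_opponent = 2  # the opponent switches to another stationary strategy
        pi = select_policy(belief)
        r = rng.normal(mean_reward[true_opponent, pi], reward_std)
        belief, need_new_model = update_belief(belief, pi, r)
        note = "  (no model fits; would learn a new one here)" if need_new_model else ""
        print(f"t={t:2d}  policy={policies[pi]:16s}  belief={np.round(belief, 2)}{note}")

Running the sketch shows the belief drifting toward the new opponent model after the switch at t=10; the paper's contribution additionally learns new opponent models online when no existing policy performs well, which the uniform reset above only stands in for.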

