An agent continuously performing different tasks in the same domain has the opportunity to learn, over the course of its operational lifetime, about the behavioural regularities afforded by that domain. This paper addresses the problem of learning a task-independent behaviour model based on the underlying structure of a domain, which is common across the multiple tasks presented to an autonomous agent. Our approach involves learning action priors: a behavioural model that encodes a notion of local common-sense behaviour in the domain, conditioned on either the state or the observations of the agent. This knowledge is accumulated and transferred as an exploration behaviour whenever a new task is presented to the agent. The effect is that as the agent encounters more tasks, it learns them faster and achieves greater overall performance. This approach is illustrated in experiments in a simulated extended navigation domain.
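As a rough illustration of the idea described above, the sketch below maintains state-conditioned action priors as Dirichlet-style counts of which actions previous task policies preferred in each state, and samples exploration actions in proportion to those counts. All names and the counting scheme here are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict
import random

class ActionPriors:
    """Sketch of state-conditioned action priors: counts of how often
    each action was preferred in a state, accumulated across tasks.
    (Hypothetical interface; not the paper's implementation.)"""

    def __init__(self, actions, pseudocount=1.0):
        self.actions = list(actions)
        # Pseudocounts give every action nonzero prior mass, so no
        # action is ever ruled out of exploration entirely.
        self.counts = defaultdict(
            lambda: {a: pseudocount for a in self.actions})

    def update(self, policy):
        # After solving a task, record the action each visited state
        # preferred under that task's learned policy.
        for state, action in policy.items():
            self.counts[state][action] += 1.0

    def sample(self, state, rng=random):
        # Exploration in a new task: draw an action in proportion to
        # its accumulated prior mass in this state.
        weights = self.counts[state]
        total = sum(weights.values())
        r, acc = rng.random() * total, 0.0
        for action, w in weights.items():
            acc += w
            if r <= acc:
                return action
        return self.actions[-1]
```

After several tasks, sampling from these priors biases a new task's exploration toward locally sensible actions while still allowing all of them, matching the transfer-as-exploration behaviour the abstract describes.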
Reference:
Rosman, B.S. 2014. Behavioural domain knowledge transfer for autonomous agents. Knowledge, Skill, and Behavior Transfer in Autonomous Robots, AAAI 2014 Fall Symposium Series, 13-15 November 2014, pp. 1-8. http://hdl.handle.net/10204/7979