dc.contributor.author | Boloka, Tlou J |
dc.contributor.author | Makondo, N |
dc.contributor.author | Rosman, B |
dc.date.accessioned | 2021-12-15T07:02:52Z |
dc.date.available | 2021-12-15T07:02:52Z |
dc.date.issued | 2021-01 |
dc.identifier.citation | Boloka, T.J., Makondo, N. & Rosman, B. 2021. Knowledge transfer using model-based deep reinforcement learning. http://hdl.handle.net/10204/12199. | en_ZA
dc.identifier.isbn | 978-1-6654-0345-0 |
dc.identifier.isbn | 978-1-6654-4788-1 |
dc.identifier.uri | https://doi.org/10.1109/SAUPEC/RobMech/PRASA52254.2021.9377247 |
dc.identifier.uri | http://hdl.handle.net/10204/12199 |
dc.description.abstract | Deep reinforcement learning has recently been adopted for robot behavior learning, where robot skills are acquired and adapted from data generated by the robot while interacting with its environment through a trial-and-error process. Despite this success, most model-free deep reinforcement learning algorithms learn a task-specific policy from a clean slate and thus suffer from high sample complexity (i.e., they require a significant amount of interaction with the environment to learn reasonable policies, and even more to reach convergence). They also suffer from poor initial performance, since a randomly initialized policy must be executed in the early stages of learning to gather the experience used to train the policy or value function. Model-based deep reinforcement learning mitigates these shortcomings, but suffers from poorer asymptotic performance than model-free approaches. In this work, we investigate knowledge transfer from a model-based teacher to a task-specific model-free learner, to avoid executing a randomly initialized policy in the early stages of learning. Our experiments show that this approach results in better asymptotic performance, enhanced initial performance, improved safety, better action effectiveness, and reduced sample complexity. | en_US
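The abstract describes warm-starting a model-free learner from a model-based teacher so that early interaction does not rely on a randomly initialized policy. The toy sketch below is not the paper's method or code; it only illustrates the general idea under simplified assumptions: a hand-coded stand-in for a model-based teacher on a 1-D task, and a linear learner policy fitted to the teacher's actions by supervised imitation before any model-free fine-tuning.

```python
# Hypothetical sketch (not the paper's implementation): warm-start a
# model-free learner by imitating a model-based teacher on a 1-D toy task.
import random

random.seed(0)

def teacher_action(state):
    # Stand-in for a model-based teacher (e.g., planning with a learned
    # dynamics model): here it simply drives the state toward zero.
    return -0.5 * state

# Learner: linear policy a = w * s, initialized randomly, then trained by
# supervised imitation on teacher demonstrations instead of acting randomly.
w = random.uniform(-1.0, 1.0)
lr = 0.1
for _ in range(200):
    s = random.uniform(-2.0, 2.0)
    a_teacher = teacher_action(s)
    a_learner = w * s
    # Gradient step on the squared imitation error 0.5 * (w*s - a_teacher)^2
    w -= lr * (a_learner - a_teacher) * s

# After imitation, w is close to the teacher's gain (-0.5), so subsequent
# model-free fine-tuning starts from sensible rather than random actions.
```

In this simplified setting the learner recovers the teacher's gain almost exactly; in the paper's setting the same role is played by transferring knowledge from a model-based teacher into a deep model-free learner, which then continues learning on its own task.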
dc.format | Abstract | en_US
dc.language.iso | en | en_US
dc.relation.uri | https://ieeexplore.ieee.org/document/9377247 | en_US
dc.relation.uri | https://ieeexplore.ieee.org/xpl/conhome/9376875/proceeding?searchWithin=knowledge%20transfer | en_US
dc.source | 2021 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa (SAUPEC/RobMech/PRASA), Potchefstroom, South Africa, 27-29 January 2021 | en_US
dc.subject | Deep reinforcement learning | en_US
dc.subject | Robot behaviour learning | en_US
dc.subject | Artificial intelligence | en_US
dc.title | Knowledge transfer using model-based deep reinforcement learning | en_US
dc.type | Conference Presentation | en_US
dc.description.pages | 6 | en_US
dc.description.note | ©2021 IEEE. Due to copyright restrictions, the attached PDF file only contains the abstract of the full text item. For access to the full text item, please consult the publisher's website: https://ieeexplore.ieee.org/document/9377247 | en_US
dc.description.cluster | Manufacturing | en_US
dc.description.impactarea | Industrial AI | en_US
dc.identifier.apacitation | Boloka, T. J., Makondo, N., & Rosman, B. (2021). Knowledge transfer using model-based deep reinforcement learning. http://hdl.handle.net/10204/12199 | en_ZA
dc.identifier.chicagocitation | Boloka, Tlou J, N Makondo, and B Rosman. "Knowledge transfer using model-based deep reinforcement learning." <i>2021 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa (SAUPEC/RobMech/PRASA), Potchefstroom, South Africa, 27-29 January 2021</i> (2021): http://hdl.handle.net/10204/12199 | en_ZA
dc.identifier.vancouvercitation | Boloka TJ, Makondo N, Rosman B. Knowledge transfer using model-based deep reinforcement learning; 2021. http://hdl.handle.net/10204/12199. | en_ZA
dc.identifier.ris |
TY - CONF
AU - Boloka, Tlou J
AU - Makondo, N
AU - Rosman, B
AB - Deep reinforcement learning has recently been adopted for robot behavior learning, where robot skills are acquired and adapted from data generated by the robot while interacting with its environment through a trial-and-error process. Despite this success, most model-free deep reinforcement learning algorithms learn a task-specific policy from a clean slate and thus suffer from high sample complexity (i.e., they require a significant amount of interaction with the environment to learn reasonable policies, and even more to reach convergence). They also suffer from poor initial performance, since a randomly initialized policy must be executed in the early stages of learning to gather the experience used to train the policy or value function. Model-based deep reinforcement learning mitigates these shortcomings, but suffers from poorer asymptotic performance than model-free approaches. In this work, we investigate knowledge transfer from a model-based teacher to a task-specific model-free learner, to avoid executing a randomly initialized policy in the early stages of learning. Our experiments show that this approach results in better asymptotic performance, enhanced initial performance, improved safety, better action effectiveness, and reduced sample complexity.
DA - 2021-01
DB - ResearchSpace
DP - CSIR
J1 - 2021 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa (SAUPEC/RobMech/PRASA), Potchefstroom, South Africa, 27-29 January 2021
KW - Deep reinforcement learning
KW - Robot behaviour learning
KW - Artificial intelligence
LK - https://researchspace.csir.co.za
PY - 2021
SM - 978-1-6654-0345-0
SM - 978-1-6654-4788-1
T1 - Knowledge transfer using model-based deep reinforcement learning
TI - Knowledge transfer using model-based deep reinforcement learning
UR - http://hdl.handle.net/10204/12199
ER -
| en_ZA
dc.identifier.worklist | 25230 | en_US