DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
Authors: Irina Higgins *, Arka Pal *, Andrei Rusu, Loic Matthey, Chris Burgess, Alexander Pritzel, Matt Botvinick, Charles Blundell, Alexander Lerchner
Modern deep reinforcement learning agents rely on large amounts of data to learn how to act. In some settings, such as robotics, it may be impossible to obtain large amounts of training data. Such agents are therefore often trained on a related task where data is easy to obtain (e.g., simulation), in the hope that the learned knowledge will generalise to the task of interest (e.g., reality). We propose DARLA (DisentAngled Representation Learning Agent), which leverages its interpretable and structured vision to learn policies that are robust to a variety of environmental changes, including simulation-to-reality transfer in robotics. We show that DARLA significantly outperforms all baselines, and that its performance depends crucially on the quality of its vision.
For more information and related work, see the paper.
See it at ICML:
Monday, August 7, 16:42–17:00 @ C4.5 (Talk)
Monday, August 7, 6:30–10:00 pm @ Gallery #123 (Poster)