Disentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World

We introduce a method to disentangle controllable and uncontrollable factors of variation by interacting with the world. Disentanglement yields useful representations and is important when applying deep neural networks (DNNs) in fields where explanations are required. This study improves an existing reinforcement learning (RL) approach to disentangling controllable and uncontrollable factors of variation, which lacks a mechanism to represent uncontrollable obstacles. To address this problem, we train two DNNs simultaneously: one that represents the controllable object and another that represents uncontrollable obstacles. For stable training, we apply a pretraining approach using a model that is robust to uncontrollable obstacles. Simulation experiments demonstrate that the proposed model can disentangle independently controllable and uncontrollable factors without annotated data.
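The page does not include the authors' code. As an illustration only, the sketch below shows one common way to train two networks jointly on environment transitions so that one branch captures action-dependent (controllable) structure and the other captures action-independent (uncontrollable) structure. All module names, network sizes, losses, and the training loop here are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): two encoders trained jointly,
# one whose latent dynamics are conditioned on the agent's action (controllable
# factors) and one whose dynamics ignore the action (uncontrollable factors).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an observation vector to a latent vector."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class LatentPredictor(nn.Module):
    """Predicts the next latent state, optionally conditioned on the action."""
    def __init__(self, latent_dim: int, action_dim: int = 0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z, action=None):
        x = z if action is None else torch.cat([z, action], dim=-1)
        return self.net(x)

obs_dim, action_dim, latent_dim = 64, 4, 16
enc_ctrl = Encoder(obs_dim, latent_dim)               # controllable-factor encoder
enc_unctrl = Encoder(obs_dim, latent_dim)             # uncontrollable-factor encoder
pred_ctrl = LatentPredictor(latent_dim, action_dim)   # action-conditioned dynamics
pred_unctrl = LatentPredictor(latent_dim)             # action-free dynamics

params = (list(enc_ctrl.parameters()) + list(enc_unctrl.parameters())
          + list(pred_ctrl.parameters()) + list(pred_unctrl.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def training_step(obs, action, next_obs):
    """One joint update: each branch predicts its own next latent state.
    The controllable branch sees the action; the uncontrollable branch does not."""
    z_c, z_u = enc_ctrl(obs), enc_unctrl(obs)
    z_c_next, z_u_next = enc_ctrl(next_obs), enc_unctrl(next_obs)
    loss_c = ((pred_ctrl(z_c, action) - z_c_next.detach()) ** 2).mean()
    loss_u = ((pred_unctrl(z_u) - z_u_next.detach()) ** 2).mean()
    loss = loss_c + loss_u
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with random tensors standing in for environment transitions.
obs = torch.randn(32, obs_dim)
action = torch.randn(32, action_dim)
next_obs = torch.randn(32, obs_dim)
print(training_step(obs, action, next_obs))
```

In a setup like this, the pretraining step described in the abstract could correspond to first training the controllable branch alone before the joint update above; that ordering is an assumption, as the paper's exact pretraining procedure is not given on this page.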
