Abstract:
In the quest to make artificial agents more autonomous and intelligent, equipping them with the ability to learn skills on their own plays a crucial role. In this thesis, we focus on intrinsically motivated exploration to enable the efficient acquisition of skills by artificial agents. During exploration, the agent uses an intrinsic motivation signal to self-select the regions in which to explore. This signal drives the agent toward regions that are neither too easy nor too difficult for it. First, we propose a method that continuously partitions the sensorimotor space, using the predictability principle, into specialized learning regions, in order to better exploit an existing intrinsic motivation framework. Our next study utilizes a latent space that facilitates the self-organization of exploratory behaviors driven by intrinsic motivation to learn a set of skills. To make this space reflect the dynamics of the interaction between the robot and the environment, we propose blending outcome, action, and object information. The latent space is then clustered into regions, each of which is learned by a separate predictor. The proposed approach is validated with a simulated robot that manipulates different objects using parameterized actions in a table-top environment. Our approach allows the robot to organize its own curriculum, enabling it to progress from easier skills to more complex ones. Analysis of the curriculum shows that grasping emerges before pushing, which is consistent with skill emergence in infants. Furthermore, the results show that the proposed method yields significantly lower prediction errors than its counterparts in various settings.