Abstract:
Deep generative models aim to learn the processes that are assumed to generate the data. To this end, deep latent variable models use probabilistic frameworks to learn a joint probability distribution over the data and its low-dimensional hidden variables. A challenging task for deep generative models is learning complex probability distributions over sequential data in an unsupervised setting. The Ordinary Differential Equation Variational Auto-Encoder (ODE2VAE) is a deep generative model that aims to learn complex generative distributions of high-dimensional sequential data. The ODE2VAE model uses variational auto-encoders (VAEs) and neural ordinary differential equations (Neural ODEs) to model low-dimensional latent representations and the continuous latent dynamics of those representations, respectively. In this thesis, we explore the effects of the inductive bias in the ODE2VAE model by analyzing the learned dynamic latent representations over three different physical motion datasets. We then reformulate the model for flexible regularization and extend the model architecture to facilitate the learning of varying static features in sequential data. Through the experiments, we uncover the effects of the inductive bias of the ODE2VAE model on the learned dynamical representations and demonstrate the ODE2VAE model's shortcomings when it is used to model sequences with varying static features.