Abstract:
Generative adversarial networks (GANs) are deep neural networks that are designed to model complex data distributions. The idea is to create a discriminator network that learns the borders of the data distribution and a generator network trained to maximize the discriminator's loss, so that it learns to generate samples from the data distribution. Instead of learning a single global generator, one variant trains multiple generators, each responsible for one local mode of the data distribution. In this thesis, we review such approaches and propose the hierarchical mixture of generators, which learns a hierarchical division of the data in a tree structure together with local generators at the leaves. Since these generators are combined softly, the whole model is continuous and can be trained using gradient-based optimization. Our experiments on five image data sets, namely MNIST, FashionMNIST, CelebA, UTZap50K, and Oxford Flowers, show that our proposed model is as successful as the fully connected neural network. The learned hierarchical structure also allows for knowledge extraction.
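The following is a minimal illustrative sketch, not the thesis code, of the idea described above: a binary tree of soft gating nodes whose leaves are local generators, with the final sample produced as the convex combination of leaf outputs weighted by the path probabilities. The depth, layer sizes, and gating form are assumptions made for the example.

```python
# Hedged sketch of a hierarchical mixture of generators in PyTorch.
# All architectural details (depth, hidden sizes, sigmoid gating) are
# illustrative assumptions, not the exact formulation used in the thesis.
import torch
import torch.nn as nn

class HierarchicalMixtureOfGenerators(nn.Module):
    def __init__(self, latent_dim=100, out_dim=784, depth=2, hidden=256):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1   # soft gating nodes (internal nodes of the tree)
        n_leaves = 2 ** depth         # one local generator per leaf
        self.gates = nn.Linear(latent_dim, n_internal)
        self.leaves = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.Tanh(),
            )
            for _ in range(n_leaves)
        )

    def forward(self, z):
        gate = torch.sigmoid(self.gates(z))        # (batch, n_internal)
        outputs, weights = [], []
        for leaf in range(len(self.leaves)):
            # Path probability: product of gating decisions from root to this leaf.
            w = torch.ones(z.size(0), device=z.device)
            node = 0
            for level in range(self.depth):
                bit = (leaf >> (self.depth - 1 - level)) & 1
                g = gate[:, node]
                w = w * (g if bit == 0 else (1.0 - g))
                node = 2 * node + 1 + bit          # descend to the chosen child
            outputs.append(self.leaves[leaf](z))
            weights.append(w)
        weights = torch.stack(weights, dim=1)       # (batch, n_leaves), rows sum to 1
        outputs = torch.stack(outputs, dim=1)       # (batch, n_leaves, out_dim)
        # Soft combination keeps the model continuous and end-to-end differentiable.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)

# Example: draw a batch of samples from the soft tree mixture.
G = HierarchicalMixtureOfGenerators()
fake = G(torch.randn(16, 100))                      # (16, 784)
```

Because every gating decision is soft, gradients flow through all leaves, so the tree and the local generators can be trained jointly with standard gradient-based optimization.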