Abstract:
Classical data augmentation techniques are widely used in image classification applications when adequate training data is unavailable. These techniques include, but are not limited to, reflection, random cropping, re-scaling of existing images, and other transformations. They are commonly applied in practice to train classifiers on extended versions of real-world datasets. Increasing the dataset size with realistic synthetic data improves classification accuracy by introducing additional realistic variety. Thanks to the great representational power of GANs, learning the distribution of real data with a consistent level of variety allows us to generate samples with nearly unobserved discriminative features. In our approach, we exploit this generative capability of GANs by utilizing the state-of-the-art GAN augmentation framework StyleGAN2-ADA. After training StyleGAN2-ADA in a class-conditional setting, we extended the dataset with varying numbers of additional generated samples in order to observe the correlation between accuracy and augmentation strength. We further extended our approach by using StyleCLIP to experiment with disentangled feature augmentations, which is a novel approach in the field of GAN augmentation. To make use of StyleCLIP more effectively, we fine-tuned CLIP with X-ray images and modified entities extracted from the corresponding medical reports. To test the performance of the GAN augmentation, we used the DeepAUC framework, which has proven effective for multi-disease labelled X-ray classification tasks. We observed that classification accuracies improved compared to the setting without text-manipulated GAN augmentation.