Abstract:
Deep Learning (DL) has been the driving force behind the success of many complicated tasks in recent years. With the use of DL systems in safety-critical applications, it has become of great importance to make these systems robust against adversarial attacks. Adversarial data generation, combined with adversarial training, is an effective tool for making DL systems robust against such attacks. Recent studies focus on gradient-based adversarial attacks. Although these can successfully generate adversarial samples, their high computation cost and lack of flexibility over input generation give rise to the need for an efficient and flexible adversarial attack methodology. In this thesis, we present a fast and customizable adversarial data generation framework towards bridging this gap. Convolutional autoencoders with custom loss functions enable user-configurable data generation within a much shorter time compared to the state-of-the-art attack method PGD (Projected Gradient Descent). We integrate a suspiciousness metric from traditional software engineering and a feature importance metric into our custom loss functions. Experiments show that our technique produces adversarial samples faster than PGD, and that using these samples in adversarial training yields comparable robustness against adversarial attacks.