Abstract:
Deep Neural Networks (DNNs) are used extensively to solve challenging problems in computer vision, natural language processing, and speech recognition. However, recent studies on adversarial attacks show that high accuracy alone is not enough to guarantee the reliable performance of DNNs. Additionally, deploying DNN models on edge devices requires high resilience against bit errors in the model. Therefore, methods that improve robustness and resilience are necessary; however, no existing study examines these methods together. In this thesis, we compare and analyze the effect of robustness improvement methods on resilience and of resilience improvement methods on robustness. We use adversarial training and bit error training as representatives of robustness and resilience improvement methods, respectively. We also introduce adversarial and bit error training, a training method that combines the two. For robustness, we compare the test accuracy and robust accuracy of the four trained DNN models. For resilience, we compare the performance of the four trained DNN models against random bit errors at different bit error rates. The results show that the resilience improvement methods also improve robustness, whereas the robustness improvement method can decrease resilience because adversarial training lowers the test accuracy of the trained models. We further propose multiple bit error training (MBET), which uses more than one bit error rate inside the loss function during training. We evaluate MBET with four different DNN models on two datasets. The results show that MBET improves both resilience and robustness compared to normal training.
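To make the idea of using more than one bit error rate inside the loss function concrete, one possible formulation is sketched below. This is an assumption about the general shape of such an objective, not the exact form used in the thesis; the rate set $\mathcal{R}$, the weighting coefficients $\lambda_r$, and the bit-error injection operator $\oplus$ are illustrative symbols introduced here.

\[
\mathcal{L}_{\mathrm{MBET}}(\theta) \;=\; \sum_{r \in \mathcal{R}} \lambda_r \,
\mathbb{E}_{(x,y) \sim \mathcal{D},\; \epsilon \sim \mathrm{BitErr}(r)}
\Big[\, \ell\big(f(x;\, \theta \oplus \epsilon),\, y\big) \Big],
\]

where $\mathcal{R}$ is a set of bit error rates, $\theta \oplus \epsilon$ denotes the model weights after injecting random bit errors drawn at rate $r$, $\ell$ is the task loss, and the $\lambda_r$ are hypothetical coefficients balancing the contribution of each rate during training.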