Abstract:
Blur impairs the sharpness of visual features and the clarity of details. While it may sometimes be desired for artistic effect, it is generally regarded as a defect. Several problems related to blur have been studied, such as blur detection, segmentation, estimation, and deblurring; yet despite its abundance in visual media such as photographs and videos, annotated data about blur is scarce. This lack of data inhibits the use of deep learning models, which require large amounts of annotated data, and annotating that much data is expensive and cumbersome. In this thesis, we investigate blur-vs-sharp classification using deep learning, and we experiment with weak supervision as a remedy for the lack of data for blur assessment and localization. We compare our results with classical approaches found in the literature. We use data we annotated from four different datasets: three sign language datasets and one action recognition dataset. We focus our research on sign language videos, where motion blur is frequently encountered. Sign languages are the primary communication method of the Deaf community, and for that reason sign language recognition (SLR) is an important task. Determining the intensity of blur and its location may be beneficial for SLR research.