Abstract:
In this thesis, we propose a novel method that learns to deblur hand images in the presence of spatially-varying object motion blur, using unpaired blurry/sharp training data. While some success has been achieved in image deblurring by learning disentangled representations from synthetically blurred data, these methods do not perform well when objects in the frame move rapidly, resulting in inferior pose estimation performance. This commonly occurs when the hands of a signer move abruptly in a sign language setting. We propose to solve these problems by disentangling blur information from image content (hand texture, background). The lack of corresponding training pairs is addressed with cross-cycle consistency losses in the blurring/deblurring branches built on the disentangled representations, and spatially-variant blur is extracted from blur-degraded regions using partial convolutions. We evaluate our method both qualitatively and quantitatively on a novel hand blur dataset consisting of real blurry images and sharp frames, as well as on a reference synthetically blurred dataset.
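To make the partial-convolution mechanism concrete: a partial convolution aggregates only pixels marked valid by a binary mask and renormalizes by the number of valid pixels under the window, which is what allows features to be drawn selectively from blur-degraded regions. The sketch below is a minimal single-channel NumPy illustration of this general formulation, not the implementation used in the thesis; the function name `partial_conv2d` and its loop-based structure are illustrative assumptions.

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """Illustrative single-channel partial convolution (not the thesis code).

    The kernel aggregates only pixels where mask == 1, and the result is
    renormalized by the fraction of valid pixels under the window.
    Returns the convolved image and the updated validity mask.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Zero-pad both the masked image and the mask so output size matches input.
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    new_mask = np.zeros(mask.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win_x = xp[i:i + kh, j:j + kw]
            win_m = mp[i:i + kh, j:j + kw]
            valid = win_m.sum()
            if valid > 0:
                # Renormalize: scale the response by (window size / valid count)
                # so outputs are comparable regardless of how many pixels
                # under the window were valid.
                out[i, j] = (kernel * win_x).sum() * (kh * kw) / valid
                new_mask[i, j] = 1.0  # at least one valid input pixel seen
    return out, new_mask
```

With a fully valid mask and a 3x3 mean kernel, a constant image is reproduced exactly, since the renormalization cancels the zero padding at the borders; where the mask window contains no valid pixels, the output and updated mask stay zero.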