Abstract:
Current sign language recognition systems rely on supervision to train successful models. However, in order to utilize the large amount of unlabeled sign language resources, unsupervised learning methods are needed. Motivated by the successful results of unsupervised term discovery in spoken languages, this work explores how to apply similar methods to sign term discovery. The goal is to find the repeating terms in continuous sign videos without any supervision. Using visual features extracted from RGB videos, it is shown that discovery algorithms designed for speech can also discover sign terms. The experiments are run on a large-scale continuous sign corpus, and the performance is evaluated using gloss-level annotations for which time boundaries are given. The evaluation metrics are also inherited from spoken term discovery. This work unveils the potential of unsupervised term discovery algorithms for continuous sign languages.