Abstract:
Transformer language models have paved the way for outstanding achievements on a wide variety of natural language processing tasks. The first step in transformer models is dividing the input into tokens. Over the years, various tokenization approaches have emerged, evolving from character- and word-level representations to subword-level representations. However, the impact of tokenization on model performance has not been thoroughly investigated, especially for morphologically rich languages. In this thesis, we comprehensively analyze subword tokenizers for Turkish, a highly inflected and morphologically rich language. We define several metrics to evaluate how well tokenizers encode Turkish morphology. We also examine how tokenizer parameters, such as vocabulary size and training corpus size, change the characteristics of tokenizers. Additionally, we propose a new tokenizer for agglutinative, morphologically rich languages. We demonstrate that our tokenizer reduces overall perplexity and enables better generalization performance. Downstream task experiments show that morphology supervision in tokenization improves model performance.