Abstract:
In this study, we analyze the effect of different word embedding methods, namely word2vec, fastText, and ELMo, in representing Turkish texts. Word embeddings represent words in a high-dimensional vector space such that similar words are placed nearby. Such representations are useful in different tasks, such as document classification and machine translation. We conduct experiments on Turkish corpora of different sizes using word2vec, fastText, and ELMo, and compare them with bag-of-words (BOW). Word2vec works at the word level; fastText works at the character (subword) level, and the representation of a word is calculated by combining the representations of its subwords. ELMo is context-dependent, that is, the representation of a word depends on the other words in the sentence, whereas word2vec and fastText are context-independent. The learned word embeddings are evaluated on noun and verb inflections and semantic analogy tests, as well as on topic classification of news documents. Our experiments indicate that fastText vectors perform better on classification tasks, whereas word2vec vectors are more useful on semantic analogies.
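As a minimal sketch of the word-level versus subword-level contrast described above, the snippet below uses gensim's Word2Vec and FastText implementations; gensim itself, the toy corpus, the hyperparameters, and the example Turkish words are illustrative assumptions, not the tools or settings used in the study.

```python
# Sketch contrasting word-level (word2vec) and subword-level (fastText)
# embeddings with gensim. The toy corpus, hyperparameters, and example
# Turkish words are placeholders, not the study's actual setup.
from gensim.models import Word2Vec, FastText

# Tiny tokenized Turkish corpus (placeholder data).
sentences = [
    ["ev", "evde", "evler"],        # "house", "in the house", "houses"
    ["kitap", "kitaplar", "okul"],  # "book", "books", "school"
]

w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
ft = FastText(sentences, vector_size=100, window=5, min_count=1,
              sg=1, min_n=3, max_n=6)  # character n-grams of length 3-6

# word2vec has no vector for an unseen inflection such as "evlerde"
# ("in the houses"); fastText composes one from shared character n-grams.
print("evlerde" in w2v.wv.key_to_index)   # False: out of vocabulary
print(ft.wv["evlerde"][:5])               # built from subword vectors
print(ft.wv.similarity("ev", "evlerde"))  # nonzero subword similarity
```

This is one reason subword models suit morphologically rich languages such as Turkish: inflected forms share character n-grams with their stems, so related surface forms receive related vectors even when a form never occurs in the training corpus.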