Archive and Documentation Center
Digital Archive

Analyzing the generalizability of deep contextualized language representations for text classification

Show simple item record

dc.contributor Graduate Program in Computer Engineering.
dc.contributor.advisor Özgür, Arzucan.
dc.contributor.advisor Hürriyetoğlu, Ali.
dc.contributor.author Büyüköz, Berfu.
dc.date.accessioned 2023-03-16T10:04:50Z
dc.date.available 2023-03-16T10:04:50Z
dc.date.issued 2020
dc.identifier.other CMPE 2020 B88
dc.identifier.uri http://digitalarchive.boun.edu.tr/handle/123456789/12437
dc.description.abstract This study evaluates the robustness of two state-of-the-art deep contextual language representations, ELMo and DistilBERT, on supervised learning of binary protest news classification and sentiment analysis of product reviews. A "cross-context" setting is enabled using test sets that are distinct from the training data. Specifically, in the news classification task, the models are developed on local news from India and tested on local news from China. In the sentiment analysis task, the models are trained on movie reviews and tested on customer reviews. This comparison is aimed at exploring the limits of the representative power of today's Natural Language Processing systems on the path to systems that generalize to real-life scenarios. The models are fine-tuned and fed into a Feed-Forward Neural Network and a Bidirectional Long Short-Term Memory network. Multinomial Naive Bayes and a Linear Support Vector Machine are used as traditional baselines. The results show that, in binary text classification, DistilBERT generalizes to the cross-context setting significantly better than ELMo. ELMo, in turn, is observed to be significantly more robust to the cross-context test data than both baselines. On the other hand, the baselines perform comparably to ELMo when the training and test data are subsets of the same corpus (no cross-context). DistilBERT is also found to be 30% smaller and 83% faster than ELMo. The results suggest that DistilBERT can transfer generic semantic knowledge to other domains better than ELMo. DistilBERT is also better suited to incorporation into real-life systems, since it requires a smaller computational training budget. When generalization is not the utmost priority and the test domain is similar to the training domain, traditional machine learning algorithms can still be considered more economical alternatives to deep language representations.
dc.format.extent 30 cm.
dc.publisher Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2020.
dc.subject.lcsh Natural language processing (Computer science)
dc.subject.lcsh Text editors (Computer programs)
dc.title Analyzing the generalizability of deep contextualized language representations for text classification
dc.format.pages xii, 74 leaves
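
The abstract above contrasts the deep contextual representations with two traditional baselines, Multinomial Naive Bayes and a Linear Support Vector Machine, evaluated in a cross-context setting (training on local news from India, testing on local news from China). As a minimal sketch of that baseline protocol, not the thesis code, the following assumes scikit-learn and hypothetical load_india_news / load_china_news helpers:

# Minimal sketch of the cross-context baseline evaluation described in the
# abstract; an illustration under stated assumptions, not the thesis code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

def cross_context_f1(train_texts, train_labels, test_texts, test_labels, clf):
    """Train a bag-of-words classifier on one context, evaluate on another."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(train_texts, train_labels)
    return f1_score(test_labels, model.predict(test_texts))

# Hypothetical corpus loaders (placeholders, not real APIs):
# india_texts, india_labels = load_india_news()
# china_texts, china_labels = load_china_news()
# for clf in (MultinomialNB(), LinearSVC()):
#     print(type(clf).__name__,
#           cross_context_f1(india_texts, india_labels,
#                            china_texts, china_labels, clf))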


