Abstract:
In this master’s thesis, we started with a baseline response retrieval and re-ranking system composed of two steps: BM25 retrieval and BERT re-ranking. After investigating the effects of several parameters and of BERT model size on the baseline approach, a novel retrieval and re-ranking system with TF-IDF retrieval and Cross-Encoder re-ranking steps was designed and implemented. With the application of deep learning models to the re-ranking step, consistent ranking performance improvements were observed. The research focus of this thesis is a comparative performance study of different Transformer models. In the experiments carried out in this thesis, we showed that smaller Transformer models can outperform larger ones. Additionally, the designed re-ranking system was repurposed for a Question Answering task in which the answer to a given question is searched for as a subset of a passage. Even though the re-ranking system was used directly, without any modifications for the QA task, promising results that merit further research were attained.
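To illustrate the two-stage retrieve-and-re-rank design summarised above, the following is a minimal sketch of such a pipeline, assuming scikit-learn's TfidfVectorizer for the retrieval stage and the sentence-transformers CrossEncoder class for the re-ranking stage. The model name and toy corpus are illustrative placeholders, not the thesis's actual configuration.

```python
# Minimal retrieve-and-re-rank sketch (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import CrossEncoder

corpus = [
    "BM25 is a classical lexical retrieval function.",
    "Cross-encoders jointly encode a query and a candidate response.",
    "TF-IDF weights terms by their frequency and rarity across documents.",
]
query = "How does a cross-encoder re-rank candidate responses?"

# Stage 1: TF-IDF retrieval - score every document against the query
# and keep the top-k candidates.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
top_k = np.argsort(scores)[::-1][:2]

# Stage 2: Cross-Encoder re-ranking - jointly score (query, candidate)
# pairs and sort the shortlist by the model's relevance score.
# The model name below is a placeholder assumption.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, corpus[i]) for i in top_k]
rerank_scores = reranker.predict(pairs)
ranked = [corpus[i] for i, _ in sorted(zip(top_k, rerank_scores),
                                       key=lambda t: t[1], reverse=True)]
print(ranked)
```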