Abstract:
The purpose of this study is to investigate the use of expert judgments and automated textual analysis tools as instruments for generating context validity evidence for reading test texts used in EAP proficiency exams. Based on the linguistic task demands outlined in Khalifa and Weir’s (2009) validation framework for reading tests, the study explored the text features that influenced experts’ judgments of the difficulty and suitability of texts for an EAP reading test. Results from the analysis of 10 texts with 24 automated textual analysis indices were correlated with expert judgments of different textual features in order to identify the automated indices that could readily replace expert judgments. Textual analysis results for 120 texts from four corpora (a corpus of IELTS reading test texts and three corpora of course book texts from İstanbul Şehir University, Boğaziçi University, and the University of Bedfordshire) were then compared to identify similarities and differences among the corpora. Finally, through a descriptive analysis of the three university corpora (90 texts of about 800 words each), optimal ranges of textual analysis index scores representing the majority of the university texts were proposed. The findings provide guidance to test developers in their efforts to generate context validity evidence by means of expert judgments and automated textual analysis tools.