Andrew Blaker
Testing and Evaluation: Assessing the validity of essay marking rubrics

Sat, Jul 9, 11:45-12:10 Asia/Tokyo

As English high school curricula become increasingly communication-oriented, it is becoming more necessary to develop university entrance tests which assess students' ability to produce target language based on communicative goals rather than to translate between languages or select correct answers. A potential problem with these more communication-oriented test questions is that they may sacrifice reliability for validity; however, the use of rubrics can ensure that both reliability and validity remain high (Jonsson and Svingby, 2007). This presentation reports the results of a preliminary study to determine whether a university entrance exam rubric results in high inter-rater reliability. The study examines the test scores of three types of markers: 1) those trained to apply the rubric; 2) those who have seen the rubric but have not been trained to apply it; and 3) those who have not seen the rubric. It aims to answer the following question: Does the rubric achieve a Cohen's kappa value greater than 0.7 for inter-rater reliability 1) between trained markers? 2) between trained and untrained markers? 3) between trained markers and markers who have not seen the rubric? The findings of this study will interest educators involved in test and assessment design.
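For readers unfamiliar with the statistic, the kappa > 0.7 criterion above compares observed rater agreement against the agreement expected by chance. A minimal sketch of the standard Cohen's kappa calculation (the rater scores shown are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of essays.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal score distribution.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of essays given the same score.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all score categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores (1-3 band) from two markers on six essays:
kappa = cohens_kappa([3, 3, 2, 1, 3, 2], [3, 2, 2, 1, 3, 3])
print(round(kappa, 3))  # 0.455 -- below the 0.7 threshold
```

A value of 1.0 indicates perfect agreement, 0 indicates chance-level agreement, and the study's 0.7 threshold is a commonly cited benchmark for substantial agreement.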

Claire Murray, Andrew Blaker, Paul Mathieson, Francesco Bolstad